2026-03-10T10:10:32.248 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T10:10:32.256 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T10:10:32.282 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/997
branch: squid
description: orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}}
email: null
first_in_suite: false
flavor: default
job_id: '997'
last_in_suite: false
machine_type: vps
meta:
- desc: 'setup ceph/v18.2.1 '
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    cluster-conf:
      mgr:
        client mount timeout: 30
        debug client: 20
        debug mgr: 20
        debug ms: 1
        mon warn on pool no app: false
    conf:
      client:
        client mount timeout: 600
        debug client: 20
        debug ms: 1
        rados mon op timeout: 900
        rados osd op timeout: 900
      global:
        mon pg warn min per osd: 0
      mds:
        debug mds: 20
        debug mds balancer: 20
        debug ms: 1
        mds debug frag: true
        mds debug scatterstat: true
        mds op complaint time: 180
        mds verify scatter: true
        osd op complaint time: 180
        rados mon op timeout: 900
        rados osd op timeout: 900
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon down mkfs grace: 300
        mon op complaint time: 120
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore allocator: bitmap
        bluestore block size: 96636764160
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug ms: 1
        debug osd: 20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd mclock iops capacity threshold hdd: 49000
        osd objectstore: bluestore
        osd op complaint time: 180
    flavor: default
    fs: xfs
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - FS_DEGRADED
    - filesystem is degraded
    - FS_INLINE_DATA_DEPRECATED
    - FS_WITH_FAILED_MDS
    - MDS_ALL_DOWN
    - filesystem is offline
    - is offline because no MDS
    - MDS_DAMAGE
    - MDS_DEGRADED
    - MDS_FAILED
    - MDS_INSUFFICIENT_STANDBY
    - MDS_UP_LESS_THAN_MAX
    - online, but wants
    - filesystem is online with fewer MDS than max_mds
    - POOL_APP_NOT_ENABLED
    - do not have an application enabled
    - overall HEALTH_
    - Replacing daemon
    - deprecated feature inline_data
    - MGR_MODULE_ERROR
    - OSD_DOWN
    - osds down
    - overall HEALTH_
    - \(OSD_DOWN\)
    - \(OSD_
    - but it is still running
    - is not responding
    - MON_DOWN
    - PG_AVAILABILITY
    - PG_DEGRADED
    - Reduced data availability
    - Degraded data redundancy
    - pg .* is stuck inactive
    - pg .* is .*degraded
    - pg .* is stuck peering
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    bluestore: true
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore block size: 96636764160
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd objectstore: bluestore
    fs: xfs
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  kclient:
    syntax: v1
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  thrashosds:
    bdev_inject_crash: 2
    bdev_inject_crash_probability: 0.5
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - client.0
  - osd.0
  - osd.1
  - osd.2
- - host.b
  - client.1
  - osd.3
  - osd.4
  - osd.5
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm02.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJWKwTIosR/u4YTRR3Dyh4QZzcI1qXmhdMQH3YlwnWghSUgGgoRNjygykWYie/tVMbSC2jEhHKv37uLRcy40QXY=
  vm05.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBnAfpibPT77IAFmBIDmoPvc+8jPrdYJGGxsPyDlOvfbg1SIm7z62ef7qo40Iq0+ke5AVuu6yd7O9/5/f8C+BKo=
tasks:
- install:
    exclude_packages:
    - ceph-volume
    tag: v18.2.1
- print: '**** done install task...'
- cephadm:
    compiled_cephadm_branch: reef
    conf:
      osd:
        osd_class_default_list: '*'
        osd_class_load_list: '*'
    image: quay.io/ceph/ceph:v18.2.1
    roleless: true
- print: '**** done end installing v18.2.1 cephadm ...'
- cephadm.shell:
    host.a:
    - ceph config set mgr mgr/cephadm/use_repo_digest true --force
- print: '**** done cephadm.shell ceph config set mgr...'
- cephadm.shell:
    host.a:
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
- cephadm.shell:
    host.a:
    - ceph fs volume create cephfs --placement=4
    - ceph fs dump
- cephadm.shell:
    host.a:
    - ceph fs set cephfs max_mds 1
- cephadm.shell:
    host.a:
    - ceph fs set cephfs allow_standby_replay true
- cephadm.shell:
    host.a:
    - ceph fs set cephfs inline_data false
- cephadm.shell:
    host.a:
    - ceph fs dump
    - ceph --format=json fs dump | jq -e ".filesystems | length == 1"
    - while ! ceph --format=json mds versions | jq -e ". | add == 4"; do sleep 1; done
- fs.pre_upgrade_save: null
- ceph-fuse: null
- print: '**** done client'
- parallel:
  - upgrade-tasks
  - workload-tasks
- cephadm.shell:
    host.a:
    - ceph fs dump
- fs.post_upgrade_checks: null
teuthology:
  fragments_dropped:
  - /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/suites/orch/cephadm/mds_upgrade_sequence/tasks/3-upgrade-mgr-staggered.yaml
  meta: {}
  postmerge:
  - "local kernel = py_attrgetter(yaml).get('kernel')\nif kernel ~= nil then\n  local\
    \ branch = py_attrgetter(kernel).get('branch')\n  if branch and not kernel.branch:find\
    \ \"-all$\" then\n    log.debug(\"removing default kernel specification: %s\"\
    , kernel)\n    py_attrgetter(kernel).pop('branch', nil)\n    py_attrgetter(kernel).pop('deb',\
    \ nil)\n    py_attrgetter(kernel).pop('flavor', nil)\n    py_attrgetter(kernel).pop('kdb',\
    \ nil)\n    py_attrgetter(kernel).pop('koji', nil)\n    py_attrgetter(kernel).pop('koji_task',\
    \ nil)\n    py_attrgetter(kernel).pop('rpm', nil)\n    py_attrgetter(kernel).pop('sha1',\
    \ nil)\n    py_attrgetter(kernel).pop('tag', nil)\n  end\nend\n"
  variables:
    fail_fs: false
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
upgrade-tasks:
  sequential:
  - cephadm.shell:
      env:
      - sha1
      host.a:
      - ceph config set mgr mgr/orchestrator/fail_fs false || true
  - cephadm.shell:
      env:
      - sha1
      host.a:
      - ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force
      - ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force
      - ceph config set global log_to_journald false --force
      - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1
  - cephadm.shell:
      env:
      - sha1
      host.a:
      - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph fs dump; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done
      - ceph orch ps
      - ceph orch upgrade status
      - ceph health detail
      - ceph versions
      - echo "wait for servicemap items w/ changing names to refresh"
      - sleep 60
      - ceph orch ps
      - ceph versions
      - ceph versions | jq -e '.overall | length == 1'
      - ceph versions | jq -e '.overall | keys' | grep $sha1
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
workload-tasks:
  sequential:
  - workunit:
      clients:
        all:
        - suites/fsstress.sh
2026-03-10T10:10:32.282 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T10:10:32.283 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T10:10:32.283 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T10:10:32.283 INFO:teuthology.task.internal:Checking packages...
2026-03-10T10:10:32.283 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T10:10:32.283 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T10:10:32.283 INFO:teuthology.packaging:ref: None
2026-03-10T10:10:32.283 INFO:teuthology.packaging:tag: None
2026-03-10T10:10:32.283 INFO:teuthology.packaging:branch: squid
2026-03-10T10:10:32.283 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T10:10:32.283 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-10T10:10:33.075 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-10T10:10:33.076 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T10:10:33.077 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T10:10:33.077 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T10:10:33.077 INFO:teuthology.task.internal:Saving configuration
2026-03-10T10:10:33.085 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T10:10:33.086 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T10:10:33.092 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm02.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/997', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 10:09:18.540606', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:02', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJWKwTIosR/u4YTRR3Dyh4QZzcI1qXmhdMQH3YlwnWghSUgGgoRNjygykWYie/tVMbSC2jEhHKv37uLRcy40QXY='}
2026-03-10T10:10:33.097 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm05.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/997', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 10:09:18.541001', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:05', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBnAfpibPT77IAFmBIDmoPvc+8jPrdYJGGxsPyDlOvfbg1SIm7z62ef7qo40Iq0+ke5AVuu6yd7O9/5/f8C+BKo='}
2026-03-10T10:10:33.097 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T10:10:33.098 INFO:teuthology.task.internal:roles: ubuntu@vm02.local - ['host.a', 'client.0', 'osd.0', 'osd.1', 'osd.2']
2026-03-10T10:10:33.098 INFO:teuthology.task.internal:roles: ubuntu@vm05.local - ['host.b', 'client.1', 'osd.3', 'osd.4', 'osd.5']
2026-03-10T10:10:33.098 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T10:10:33.103 DEBUG:teuthology.task.console_log:vm02 does not support IPMI; excluding
2026-03-10T10:10:33.108 DEBUG:teuthology.task.console_log:vm05 does not support IPMI; excluding
2026-03-10T10:10:33.108 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f0ba8e86170>, signals=[15])
2026-03-10T10:10:33.108 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T10:10:33.109 INFO:teuthology.task.internal:Opening connections...
2026-03-10T10:10:33.109 DEBUG:teuthology.task.internal:connecting to ubuntu@vm02.local
2026-03-10T10:10:33.109 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T10:10:33.166 DEBUG:teuthology.task.internal:connecting to ubuntu@vm05.local
2026-03-10T10:10:33.167 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T10:10:33.226 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T10:10:33.227 DEBUG:teuthology.orchestra.run.vm02:> uname -m
2026-03-10T10:10:33.277 INFO:teuthology.orchestra.run.vm02.stdout:x86_64
2026-03-10T10:10:33.277 DEBUG:teuthology.orchestra.run.vm02:> cat /etc/os-release
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:NAME="CentOS Stream"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:VERSION="9"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:ID="centos"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:ID_LIKE="rhel fedora"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:VERSION_ID="9"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:PLATFORM_ID="platform:el9"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:ANSI_COLOR="0;31"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:LOGO="fedora-logo-icon"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:HOME_URL="https://centos.org/"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T10:10:33.334 INFO:teuthology.orchestra.run.vm02.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T10:10:33.335 INFO:teuthology.lock.ops:Updating vm02.local on lock server
2026-03-10T10:10:33.339 DEBUG:teuthology.orchestra.run.vm05:> uname -m
2026-03-10T10:10:33.353 INFO:teuthology.orchestra.run.vm05.stdout:x86_64
2026-03-10T10:10:33.353 DEBUG:teuthology.orchestra.run.vm05:> cat /etc/os-release
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:NAME="CentOS Stream"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:VERSION="9"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:ID="centos"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:ID_LIKE="rhel fedora"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_ID="9"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:PLATFORM_ID="platform:el9"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:ANSI_COLOR="0;31"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:LOGO="fedora-logo-icon"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:HOME_URL="https://centos.org/"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T10:10:33.407 INFO:teuthology.orchestra.run.vm05.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T10:10:33.408 INFO:teuthology.lock.ops:Updating vm05.local on lock server
2026-03-10T10:10:33.412 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T10:10:33.415 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T10:10:33.415 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T10:10:33.415 DEBUG:teuthology.orchestra.run.vm02:> test '!' -e /home/ubuntu/cephtest
2026-03-10T10:10:33.417 DEBUG:teuthology.orchestra.run.vm05:> test '!' -e /home/ubuntu/cephtest
2026-03-10T10:10:33.461 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T10:10:33.463 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T10:10:33.463 DEBUG:teuthology.orchestra.run.vm02:> test -z $(ls -A /var/lib/ceph)
2026-03-10T10:10:33.473 DEBUG:teuthology.orchestra.run.vm05:> test -z $(ls -A /var/lib/ceph)
2026-03-10T10:10:33.486 INFO:teuthology.orchestra.run.vm02.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T10:10:33.518 INFO:teuthology.orchestra.run.vm05.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T10:10:33.518 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T10:10:33.526 DEBUG:teuthology.orchestra.run.vm02:> test -e /ceph-qa-ready
2026-03-10T10:10:33.540 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:10:33.731 DEBUG:teuthology.orchestra.run.vm05:> test -e /ceph-qa-ready
2026-03-10T10:10:33.745 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:10:33.939 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T10:10:33.940 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T10:10:33.941 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T10:10:33.943 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T10:10:33.958 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T10:10:33.960 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T10:10:33.962 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T10:10:33.962 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T10:10:33.998 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T10:10:34.018 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T10:10:34.019 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T10:10:34.019 DEBUG:teuthology.orchestra.run.vm02:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T10:10:34.065 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:10:34.065 DEBUG:teuthology.orchestra.run.vm05:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T10:10:34.081 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:10:34.081 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T10:10:34.107 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T10:10:34.128 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T10:10:34.138 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T10:10:34.147 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T10:10:34.156 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T10:10:34.157 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T10:10:34.159 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T10:10:34.159 DEBUG:teuthology.orchestra.run.vm02:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T10:10:34.181 DEBUG:teuthology.orchestra.run.vm05:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T10:10:34.221 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T10:10:34.223 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T10:10:34.224 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T10:10:34.246 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T10:10:34.278 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T10:10:34.323 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T10:10:34.380 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T10:10:34.381 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T10:10:34.439 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T10:10:34.462 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T10:10:34.519 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T10:10:34.519 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T10:10:34.578 DEBUG:teuthology.orchestra.run.vm02:> sudo service rsyslog restart
2026-03-10T10:10:34.580 DEBUG:teuthology.orchestra.run.vm05:> sudo service rsyslog restart
2026-03-10T10:10:34.605 INFO:teuthology.orchestra.run.vm02.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T10:10:34.649 INFO:teuthology.orchestra.run.vm05.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T10:10:35.100 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T10:10:35.102 INFO:teuthology.task.internal:Starting timer...
2026-03-10T10:10:35.102 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T10:10:35.105 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T10:10:35.108 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-10T10:10:35.108 INFO:teuthology.task.selinux:Excluding vm02: VMs are not yet supported
2026-03-10T10:10:35.108 INFO:teuthology.task.selinux:Excluding vm05: VMs are not yet supported
2026-03-10T10:10:35.108 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T10:10:35.108 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T10:10:35.108 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T10:10:35.108 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T10:10:35.110 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T10:10:35.111 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T10:10:35.112 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T10:10:35.610 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T10:10:35.616 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T10:10:35.616 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventory0q9cyr3a --limit vm02.local,vm05.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T10:12:23.088 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm02.local'), Remote(name='ubuntu@vm05.local')]
2026-03-10T10:12:23.088 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm02.local'
2026-03-10T10:12:23.089 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T10:12:23.150 DEBUG:teuthology.orchestra.run.vm02:> true
2026-03-10T10:12:23.229 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm02.local'
2026-03-10T10:12:23.229 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm05.local'
2026-03-10T10:12:23.229 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T10:12:23.297 DEBUG:teuthology.orchestra.run.vm05:> true
2026-03-10T10:12:23.372 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm05.local'
2026-03-10T10:12:23.372 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T10:12:23.374 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T10:12:23.375 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T10:12:23.375 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T10:12:23.376 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T10:12:23.376 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T10:12:23.411 INFO:teuthology.orchestra.run.vm02.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T10:12:23.429 INFO:teuthology.orchestra.run.vm02.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T10:12:23.453 INFO:teuthology.orchestra.run.vm05.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T10:12:23.462 INFO:teuthology.orchestra.run.vm02.stderr:sudo: ntpd: command not found
2026-03-10T10:12:23.474 INFO:teuthology.orchestra.run.vm05.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T10:12:23.478 INFO:teuthology.orchestra.run.vm02.stdout:506 Cannot talk to daemon
2026-03-10T10:12:23.498 INFO:teuthology.orchestra.run.vm02.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T10:12:23.504 INFO:teuthology.orchestra.run.vm05.stderr:sudo: ntpd: command not found
2026-03-10T10:12:23.515 INFO:teuthology.orchestra.run.vm02.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T10:12:23.516 INFO:teuthology.orchestra.run.vm05.stdout:506 Cannot talk to daemon
2026-03-10T10:12:23.535 INFO:teuthology.orchestra.run.vm05.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T10:12:23.552 INFO:teuthology.orchestra.run.vm05.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T10:12:23.567 INFO:teuthology.orchestra.run.vm02.stderr:bash: line 1: ntpq: command not found
2026-03-10T10:12:23.571 INFO:teuthology.orchestra.run.vm02.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T10:12:23.571 INFO:teuthology.orchestra.run.vm02.stdout:===============================================================================
2026-03-10T10:12:23.603 INFO:teuthology.orchestra.run.vm05.stderr:bash: line 1: ntpq: command not found
2026-03-10T10:12:23.607 INFO:teuthology.orchestra.run.vm05.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T10:12:23.607 INFO:teuthology.orchestra.run.vm05.stdout:===============================================================================
2026-03-10T10:12:23.607 INFO:teuthology.run_tasks:Running task install...
2026-03-10T10:12:23.609 DEBUG:teuthology.task.install:project ceph
2026-03-10T10:12:23.609 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T10:12:23.609 DEBUG:teuthology.task.install:config {'exclude_packages': ['ceph-volume'], 'tag': 'v18.2.1', 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T10:12:23.609 INFO:teuthology.task.install:Using flavor: default
2026-03-10T10:12:23.611 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T10:12:23.611 INFO:teuthology.task.install:extra packages: []
2026-03-10T10:12:23.611 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': ['ceph-volume'], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': 'v18.2.1', 'wait_for_package': False}
2026-03-10T10:12:23.611 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using tag
2026-03-10T10:12:23.611 INFO:teuthology.packaging:ref: None
2026-03-10T10:12:23.611 INFO:teuthology.packaging:tag: v18.2.1
2026-03-10T10:12:23.611 INFO:teuthology.packaging:branch: None
2026-03-10T10:12:23.611 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T10:12:24.556 DEBUG:teuthology.repo_utils:git ls-remote https://github.com/ceph/ceph v18.2.1^{} -> 7fe91d5d5842e04be3b4f514d6dd990c54b29c76
2026-03-10T10:12:24.557 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=7fe91d5d5842e04be3b4f514d6dd990c54b29c76
2026-03-10T10:12:24.557 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': ['ceph-volume'], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': 'v18.2.1', 'wait_for_package': False}
2026-03-10T10:12:24.558 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using tag
2026-03-10T10:12:24.558 INFO:teuthology.packaging:ref: None
2026-03-10T10:12:24.558 INFO:teuthology.packaging:tag: v18.2.1
2026-03-10T10:12:24.558 INFO:teuthology.packaging:branch: None
2026-03-10T10:12:24.558 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T10:12:24.558 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=7fe91d5d5842e04be3b4f514d6dd990c54b29c76
2026-03-10T10:12:25.201 INFO:teuthology.task.install.rpm:Pulling from https://chacra.ceph.com/r/ceph/reef/7fe91d5d5842e04be3b4f514d6dd990c54b29c76/centos/9/flavors/default/
2026-03-10T10:12:25.201 INFO:teuthology.task.install.rpm:Package version is 18.2.1-0
2026-03-10T10:12:25.234 INFO:teuthology.task.install.rpm:Pulling from https://chacra.ceph.com/r/ceph/reef/7fe91d5d5842e04be3b4f514d6dd990c54b29c76/centos/9/flavors/default/
2026-03-10T10:12:25.234 INFO:teuthology.task.install.rpm:Package version is 18.2.1-0
2026-03-10T10:12:25.547 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://chacra.ceph.com/r/ceph/reef/7fe91d5d5842e04be3b4f514d6dd990c54b29c76/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://chacra.ceph.com/r/ceph/reef/7fe91d5d5842e04be3b4f514d6dd990c54b29c76/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://chacra.ceph.com/r/ceph/reef/7fe91d5d5842e04be3b4f514d6dd990c54b29c76/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T10:12:25.548 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T10:12:25.548 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T10:12:25.550 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://chacra.ceph.com/r/ceph/reef/7fe91d5d5842e04be3b4f514d6dd990c54b29c76/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://chacra.ceph.com/r/ceph/reef/7fe91d5d5842e04be3b4f514d6dd990c54b29c76/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://chacra.ceph.com/r/ceph/reef/7fe91d5d5842e04be3b4f514d6dd990c54b29c76/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T10:12:25.550 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T10:12:25.550 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T10:12:25.583 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T10:12:25.583 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using tag
2026-03-10T10:12:25.583 INFO:teuthology.packaging:ref: None
2026-03-10T10:12:25.583 INFO:teuthology.packaging:tag: v18.2.1
2026-03-10T10:12:25.583 INFO:teuthology.packaging:branch: None
2026-03-10T10:12:25.583 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T10:12:25.583 DEBUG:teuthology.orchestra.run.vm02:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;ref/v18.2.1/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T10:12:25.585 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T10:12:25.585 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using tag
2026-03-10T10:12:25.586 INFO:teuthology.packaging:ref: None
2026-03-10T10:12:25.586 INFO:teuthology.packaging:tag: v18.2.1
2026-03-10T10:12:25.586 INFO:teuthology.packaging:branch: None
2026-03-10T10:12:25.586 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T10:12:25.586 DEBUG:teuthology.orchestra.run.vm05:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;ref/v18.2.1/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T10:12:25.658 DEBUG:teuthology.orchestra.run.vm05:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-10T10:12:25.676 DEBUG:teuthology.orchestra.run.vm02:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-10T10:12:25.725 DEBUG:teuthology.orchestra.run.vm02:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-10T10:12:25.743 DEBUG:teuthology.orchestra.run.vm05:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-10T10:12:25.758 INFO:teuthology.orchestra.run.vm02.stdout:check_obsoletes = 1
2026-03-10T10:12:25.759 DEBUG:teuthology.orchestra.run.vm02:> sudo yum clean all
2026-03-10T10:12:25.775 INFO:teuthology.orchestra.run.vm05.stdout:check_obsoletes = 1
2026-03-10T10:12:25.776 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean all
2026-03-10T10:12:25.970 INFO:teuthology.orchestra.run.vm02.stdout:41 files removed
2026-03-10T10:12:25.978 INFO:teuthology.orchestra.run.vm05.stdout:41 files removed
2026-03-10T10:12:25.995 DEBUG:teuthology.orchestra.run.vm02:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-10T10:12:25.997 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-10T10:12:27.431 INFO:teuthology.orchestra.run.vm02.stdout:ceph packages for x86_64 62 kB/s | 76 kB 00:01
2026-03-10T10:12:27.534 INFO:teuthology.orchestra.run.vm05.stdout:ceph packages for x86_64 57 kB/s | 76 kB 00:01
2026-03-10T10:12:28.080 INFO:teuthology.orchestra.run.vm02.stdout:ceph noarch packages 15 kB/s | 9.4 kB 00:00
2026-03-10T10:12:28.173 INFO:teuthology.orchestra.run.vm05.stdout:ceph noarch packages 15 kB/s | 9.4 kB 00:00
2026-03-10T10:12:28.724 INFO:teuthology.orchestra.run.vm02.stdout:ceph source packages 3.5 kB/s | 2.2 kB 00:00
2026-03-10T10:12:28.815 INFO:teuthology.orchestra.run.vm05.stdout:ceph source packages 3.5 kB/s | 2.2 kB 00:00
2026-03-10T10:12:30.346 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - BaseOS 5.9 MB/s | 8.9 MB 00:01
2026-03-10T10:12:30.576 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - BaseOS 4.9 MB/s | 8.9 MB 00:01
2026-03-10T10:12:33.112 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - AppStream 16 MB/s | 27 MB 00:01
2026-03-10T10:12:33.552 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - AppStream 11 MB/s | 27 MB 00:02
2026-03-10T10:12:37.371 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - CRB 8.3 MB/s | 8.0 MB 00:00
2026-03-10T10:12:37.596 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - CRB 11 MB/s | 8.0 MB 00:00
2026-03-10T10:12:38.869 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - Extras packages 82 kB/s | 20 kB 00:00
2026-03-10T10:12:39.105 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - Extras packages 29 kB/s | 20 kB 00:00
2026-03-10T10:12:39.344 INFO:teuthology.orchestra.run.vm05.stdout:Extra Packages for Enterprise Linux 53 MB/s | 20 MB 00:00
2026-03-10T10:12:39.622 INFO:teuthology.orchestra.run.vm02.stdout:Extra Packages for Enterprise Linux 48 MB/s | 20 MB 00:00
2026-03-10T10:12:44.585 INFO:teuthology.orchestra.run.vm02.stdout:lab-extras 64 kB/s | 50 kB 00:00
2026-03-10T10:12:44.676 INFO:teuthology.orchestra.run.vm05.stdout:lab-extras 64 kB/s | 50 kB 00:00
2026-03-10T10:12:45.978 INFO:teuthology.orchestra.run.vm02.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T10:12:45.978 INFO:teuthology.orchestra.run.vm02.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T10:12:45.983 INFO:teuthology.orchestra.run.vm02.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-10T10:12:45.983 INFO:teuthology.orchestra.run.vm02.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-10T10:12:46.011 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:12:46.014 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:12:46.014 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size
2026-03-10T10:12:46.014 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:12:46.014 INFO:teuthology.orchestra.run.vm02.stdout:Installing:
2026-03-10T10:12:46.014 INFO:teuthology.orchestra.run.vm02.stdout: ceph x86_64 2:18.2.1-0.el9 ceph 6.4 k
2026-03-10T10:12:46.014 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base x86_64 2:18.2.1-0.el9 ceph 5.2 M
2026-03-10T10:12:46.014 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse x86_64 2:18.2.1-0.el9 ceph 839 k
2026-03-10T10:12:46.014 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache x86_64 2:18.2.1-0.el9 ceph 142 k
2026-03-10T10:12:46.014 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr x86_64 2:18.2.1-0.el9 ceph 1.4 M
2026-03-10T10:12:46.014 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm noarch 2:18.2.1-0.el9 ceph-noarch 132 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard noarch 2:18.2.1-0.el9 ceph-noarch 1.8 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local noarch 2:18.2.1-0.el9 ceph-noarch 7.4 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook noarch 2:18.2.1-0.el9 ceph-noarch 50 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw x86_64 2:18.2.1-0.el9 ceph 7.7 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test x86_64 2:18.2.1-0.el9 ceph 40 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: cephadm noarch 2:18.2.1-0.el9 ceph-noarch 221 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel x86_64 2:18.2.1-0.el9 ceph 31 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2 x86_64 2:18.2.1-0.el9 ceph 658 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel x86_64 2:18.2.1-0.el9 ceph 127 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs x86_64 2:18.2.1-0.el9 ceph 161 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados x86_64 2:18.2.1-0.el9 ceph 321 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd x86_64 2:18.2.1-0.el9 ceph 297 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw x86_64 2:18.2.1-0.el9 ceph 99 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse x86_64 2:18.2.1-0.el9 ceph 86 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror x86_64 2:18.2.1-0.el9 ceph 3.0 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd x86_64 2:18.2.1-0.el9 ceph 171 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout:Upgrading:
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: librados2 x86_64 2:18.2.1-0.el9 ceph 3.3 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: librbd1 x86_64 2:18.2.1-0.el9 ceph 3.0 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout:Installing dependencies:
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common x86_64 2:18.2.1-0.el9 ceph 18 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards noarch 2:18.2.1-0.el9 ceph-noarch 23 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds x86_64 2:18.2.1-0.el9 ceph 2.1 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core noarch 2:18.2.1-0.el9 ceph-noarch 242 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon x86_64 2:18.2.1-0.el9 ceph 4.4 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd x86_64 2:18.2.1-0.el9 ceph 18 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts noarch 2:18.2.1-0.el9 ceph-noarch 15 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux x86_64 2:18.2.1-0.el9 ceph 24 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: fmt x86_64 8.1.1-5.el9 epel 111 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite x86_64 2:18.2.1-0.el9 ceph 165 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 x86_64 2:18.2.1-0.el9 ceph 474 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: librgw2 x86_64 2:18.2.1-0.el9 ceph 4.5 M
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-10T10:12:46.015 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse x86_64 2:18.2.1-0.el9 ceph 45 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common x86_64 2:18.2.1-0.el9 ceph 124 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-jwt noarch 2.4.0-1.el9 epel 41 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout:Installing weak dependencies:
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout: python3-jwt+crypto noarch 2.4.0-1.el9 epel 9.0 k
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:12:46.016 INFO:teuthology.orchestra.run.vm02.stdout:Install 117 Packages
2026-03-10T10:12:46.017 INFO:teuthology.orchestra.run.vm02.stdout:Upgrade 2 Packages
2026-03-10T10:12:46.017 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:12:46.017 INFO:teuthology.orchestra.run.vm02.stdout:Total download size: 182 M
2026-03-10T10:12:46.017 INFO:teuthology.orchestra.run.vm02.stdout:Downloading Packages:
2026-03-10T10:12:46.287 INFO:teuthology.orchestra.run.vm05.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T10:12:46.287 INFO:teuthology.orchestra.run.vm05.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T10:12:46.291 INFO:teuthology.orchestra.run.vm05.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-10T10:12:46.291 INFO:teuthology.orchestra.run.vm05.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-10T10:12:46.322 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout:Installing:
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: ceph x86_64 2:18.2.1-0.el9 ceph 6.4 k
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base x86_64 2:18.2.1-0.el9 ceph 5.2 M
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse x86_64 2:18.2.1-0.el9 ceph 839 k
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache x86_64 2:18.2.1-0.el9 ceph 142 k
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr x86_64 2:18.2.1-0.el9 ceph 1.4 M
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm noarch 2:18.2.1-0.el9 ceph-noarch 132 k
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard noarch 2:18.2.1-0.el9 ceph-noarch 1.8 M
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local noarch 2:18.2.1-0.el9 ceph-noarch 7.4 M
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook noarch 2:18.2.1-0.el9 ceph-noarch 50 k
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw x86_64 2:18.2.1-0.el9 ceph 7.7 M
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test x86_64 2:18.2.1-0.el9 ceph 40 M
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: cephadm noarch 2:18.2.1-0.el9 ceph-noarch 221 k
2026-03-10T10:12:46.328 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel x86_64 2:18.2.1-0.el9 ceph 31 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2 x86_64 2:18.2.1-0.el9 ceph 658 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel x86_64 2:18.2.1-0.el9 ceph 127 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs x86_64 2:18.2.1-0.el9 ceph 161 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados x86_64 2:18.2.1-0.el9 ceph 321 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd x86_64 2:18.2.1-0.el9 ceph 297 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw x86_64 2:18.2.1-0.el9 ceph 99 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse x86_64 2:18.2.1-0.el9 ceph 86 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror x86_64 2:18.2.1-0.el9 ceph 3.0 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd x86_64 2:18.2.1-0.el9 ceph 171 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout:Upgrading:
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: librados2 x86_64 2:18.2.1-0.el9 ceph 3.3 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: librbd1 x86_64 2:18.2.1-0.el9 ceph 3.0 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout:Installing dependencies:
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common x86_64 2:18.2.1-0.el9 ceph 18 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards noarch 2:18.2.1-0.el9 ceph-noarch 23 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds x86_64 2:18.2.1-0.el9 ceph 2.1 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core noarch 2:18.2.1-0.el9 ceph-noarch 242 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon x86_64 2:18.2.1-0.el9 ceph 4.4 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd x86_64 2:18.2.1-0.el9 ceph 18 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts noarch 2:18.2.1-0.el9 ceph-noarch 15 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux x86_64 2:18.2.1-0.el9 ceph 24 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: fmt x86_64 8.1.1-5.el9 epel 111 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite x86_64 2:18.2.1-0.el9 ceph 165 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 x86_64 2:18.2.1-0.el9 ceph 474 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: librgw2 x86_64 2:18.2.1-0.el9 ceph 4.5 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse x86_64 2:18.2.1-0.el9 ceph 45 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common x86_64 2:18.2.1-0.el9 ceph 124 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-10T10:12:46.329 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-10T10:12:46.330
INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt noarch 2.4.0-1.el9 epel 41 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-10T10:12:46.330 
INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-10T10:12:46.330 
INFO:teuthology.orchestra.run.vm05.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout:Installing weak dependencies: 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt+crypto noarch 2.4.0-1.el9 epel 9.0 k 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout:Install 117 Packages 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout:Upgrade 2 Packages 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout:Total download size: 182 M 2026-03-10T10:12:46.330 INFO:teuthology.orchestra.run.vm05.stdout:Downloading Packages: 2026-03-10T10:12:47.107 INFO:teuthology.orchestra.run.vm02.stdout:(1/119): ceph-18.2.1-0.el9.x86_64.rpm 20 kB/s | 6.4 kB 00:00 2026-03-10T10:12:48.085 INFO:teuthology.orchestra.run.vm05.stdout:(1/119): ceph-18.2.1-0.el9.x86_64.rpm 21 kB/s | 6.4 kB 00:00 2026-03-10T10:12:49.127 INFO:teuthology.orchestra.run.vm05.stdout:(2/119): ceph-base-18.2.1-0.el9.x86_64.rpm 3.8 MB/s | 5.2 MB 00:01 2026-03-10T10:12:49.231 INFO:teuthology.orchestra.run.vm05.stdout:(3/119): ceph-immutable-object-cache-18.2.1-0.e 1.3 MB/s | 142 kB 00:00 2026-03-10T10:12:49.546 INFO:teuthology.orchestra.run.vm05.stdout:(4/119): ceph-mds-18.2.1-0.el9.x86_64.rpm 6.7 MB/s | 2.1 MB 00:00 2026-03-10T10:12:49.861 INFO:teuthology.orchestra.run.vm05.stdout:(5/119): 
ceph-mgr-18.2.1-0.el9.x86_64.rpm 4.6 MB/s | 1.4 MB 00:00 2026-03-10T10:12:50.010 INFO:teuthology.orchestra.run.vm02.stdout:(2/119): ceph-fuse-18.2.1-0.el9.x86_64.rpm 289 kB/s | 839 kB 00:02 2026-03-10T10:12:50.303 INFO:teuthology.orchestra.run.vm02.stdout:(3/119): ceph-common-18.2.1-0.el9.x86_64.rpm 5.2 MB/s | 18 MB 00:03 2026-03-10T10:12:50.317 INFO:teuthology.orchestra.run.vm02.stdout:(4/119): ceph-immutable-object-cache-18.2.1-0.e 462 kB/s | 142 kB 00:00 2026-03-10T10:12:50.457 INFO:teuthology.orchestra.run.vm05.stdout:(6/119): ceph-fuse-18.2.1-0.el9.x86_64.rpm 354 kB/s | 839 kB 00:02 2026-03-10T10:12:50.609 INFO:teuthology.orchestra.run.vm02.stdout:(5/119): ceph-mds-18.2.1-0.el9.x86_64.rpm 6.9 MB/s | 2.1 MB 00:00 2026-03-10T10:12:50.692 INFO:teuthology.orchestra.run.vm05.stdout:(7/119): ceph-mon-18.2.1-0.el9.x86_64.rpm 5.3 MB/s | 4.4 MB 00:00 2026-03-10T10:12:51.125 INFO:teuthology.orchestra.run.vm02.stdout:(6/119): ceph-mon-18.2.1-0.el9.x86_64.rpm 8.5 MB/s | 4.4 MB 00:00 2026-03-10T10:12:51.959 INFO:teuthology.orchestra.run.vm02.stdout:(7/119): ceph-mgr-18.2.1-0.el9.x86_64.rpm 903 kB/s | 1.4 MB 00:01 2026-03-10T10:12:52.050 INFO:teuthology.orchestra.run.vm05.stdout:(8/119): ceph-radosgw-18.2.1-0.el9.x86_64.rpm 5.7 MB/s | 7.7 MB 00:01 2026-03-10T10:12:52.153 INFO:teuthology.orchestra.run.vm05.stdout:(9/119): ceph-selinux-18.2.1-0.el9.x86_64.rpm 234 kB/s | 24 kB 00:00 2026-03-10T10:12:52.952 INFO:teuthology.orchestra.run.vm02.stdout:(8/119): ceph-osd-18.2.1-0.el9.x86_64.rpm 9.6 MB/s | 18 MB 00:01 2026-03-10T10:12:53.052 INFO:teuthology.orchestra.run.vm02.stdout:(9/119): ceph-selinux-18.2.1-0.el9.x86_64.rpm 240 kB/s | 24 kB 00:00 2026-03-10T10:12:54.853 INFO:teuthology.orchestra.run.vm02.stdout:(10/119): ceph-radosgw-18.2.1-0.el9.x86_64.rpm 2.7 MB/s | 7.7 MB 00:02 2026-03-10T10:12:54.961 INFO:teuthology.orchestra.run.vm02.stdout:(11/119): libcephfs-devel-18.2.1-0.el9.x86_64.r 286 kB/s | 31 kB 00:00 2026-03-10T10:12:55.170 
INFO:teuthology.orchestra.run.vm02.stdout:(12/119): libcephfs2-18.2.1-0.el9.x86_64.rpm 3.1 MB/s | 658 kB 00:00 2026-03-10T10:12:55.276 INFO:teuthology.orchestra.run.vm02.stdout:(13/119): libcephsqlite-18.2.1-0.el9.x86_64.rpm 1.5 MB/s | 165 kB 00:00 2026-03-10T10:12:55.380 INFO:teuthology.orchestra.run.vm02.stdout:(14/119): librados-devel-18.2.1-0.el9.x86_64.rp 1.2 MB/s | 127 kB 00:00 2026-03-10T10:12:55.488 INFO:teuthology.orchestra.run.vm02.stdout:(15/119): libradosstriper1-18.2.1-0.el9.x86_64. 4.3 MB/s | 474 kB 00:00 2026-03-10T10:12:56.036 INFO:teuthology.orchestra.run.vm05.stdout:(10/119): ceph-common-18.2.1-0.el9.x86_64.rpm 2.2 MB/s | 18 MB 00:08 2026-03-10T10:12:56.183 INFO:teuthology.orchestra.run.vm05.stdout:(11/119): ceph-osd-18.2.1-0.el9.x86_64.rpm 3.1 MB/s | 18 MB 00:05 2026-03-10T10:12:56.183 INFO:teuthology.orchestra.run.vm05.stdout:(12/119): libcephfs-devel-18.2.1-0.el9.x86_64.r 210 kB/s | 31 kB 00:00 2026-03-10T10:12:56.228 INFO:teuthology.orchestra.run.vm02.stdout:(16/119): librgw2-18.2.1-0.el9.x86_64.rpm 6.0 MB/s | 4.5 MB 00:00 2026-03-10T10:12:56.291 INFO:teuthology.orchestra.run.vm05.stdout:(13/119): libcephfs2-18.2.1-0.el9.x86_64.rpm 6.0 MB/s | 658 kB 00:00 2026-03-10T10:12:56.292 INFO:teuthology.orchestra.run.vm05.stdout:(14/119): libcephsqlite-18.2.1-0.el9.x86_64.rpm 1.5 MB/s | 165 kB 00:00 2026-03-10T10:12:56.331 INFO:teuthology.orchestra.run.vm02.stdout:(17/119): python3-ceph-argparse-18.2.1-0.el9.x8 433 kB/s | 45 kB 00:00 2026-03-10T10:12:56.399 INFO:teuthology.orchestra.run.vm05.stdout:(15/119): librados-devel-18.2.1-0.el9.x86_64.rp 1.1 MB/s | 127 kB 00:00 2026-03-10T10:12:56.402 INFO:teuthology.orchestra.run.vm05.stdout:(16/119): libradosstriper1-18.2.1-0.el9.x86_64. 
4.2 MB/s | 474 kB 00:00 2026-03-10T10:12:56.417 INFO:teuthology.orchestra.run.vm02.stdout:(18/119): ceph-base-18.2.1-0.el9.x86_64.rpm 552 kB/s | 5.2 MB 00:09 2026-03-10T10:12:56.436 INFO:teuthology.orchestra.run.vm02.stdout:(19/119): python3-ceph-common-18.2.1-0.el9.x86_ 1.2 MB/s | 124 kB 00:00 2026-03-10T10:12:56.507 INFO:teuthology.orchestra.run.vm05.stdout:(17/119): python3-ceph-argparse-18.2.1-0.el9.x8 427 kB/s | 45 kB 00:00 2026-03-10T10:12:56.542 INFO:teuthology.orchestra.run.vm02.stdout:(20/119): python3-rados-18.2.1-0.el9.x86_64.rpm 3.0 MB/s | 321 kB 00:00 2026-03-10T10:12:56.607 INFO:teuthology.orchestra.run.vm05.stdout:(18/119): python3-ceph-common-18.2.1-0.el9.x86_ 1.2 MB/s | 124 kB 00:00 2026-03-10T10:12:56.616 INFO:teuthology.orchestra.run.vm02.stdout:(21/119): python3-cephfs-18.2.1-0.el9.x86_64.rp 811 kB/s | 161 kB 00:00 2026-03-10T10:12:56.649 INFO:teuthology.orchestra.run.vm02.stdout:(22/119): python3-rbd-18.2.1-0.el9.x86_64.rpm 2.7 MB/s | 297 kB 00:00 2026-03-10T10:12:56.708 INFO:teuthology.orchestra.run.vm05.stdout:(19/119): python3-cephfs-18.2.1-0.el9.x86_64.rp 1.6 MB/s | 161 kB 00:00 2026-03-10T10:12:56.752 INFO:teuthology.orchestra.run.vm02.stdout:(23/119): rbd-fuse-18.2.1-0.el9.x86_64.rpm 832 kB/s | 86 kB 00:00 2026-03-10T10:12:56.811 INFO:teuthology.orchestra.run.vm05.stdout:(20/119): python3-rados-18.2.1-0.el9.x86_64.rpm 3.0 MB/s | 321 kB 00:00 2026-03-10T10:12:56.814 INFO:teuthology.orchestra.run.vm02.stdout:(24/119): python3-rgw-18.2.1-0.el9.x86_64.rpm 500 kB/s | 99 kB 00:00 2026-03-10T10:12:56.913 INFO:teuthology.orchestra.run.vm05.stdout:(21/119): python3-rbd-18.2.1-0.el9.x86_64.rpm 2.8 MB/s | 297 kB 00:00 2026-03-10T10:12:57.014 INFO:teuthology.orchestra.run.vm05.stdout:(22/119): python3-rgw-18.2.1-0.el9.x86_64.rpm 990 kB/s | 99 kB 00:00 2026-03-10T10:12:57.014 INFO:teuthology.orchestra.run.vm02.stdout:(25/119): rbd-nbd-18.2.1-0.el9.x86_64.rpm 855 kB/s | 171 kB 00:00 2026-03-10T10:12:57.104 
INFO:teuthology.orchestra.run.vm05.stdout:(23/119): librgw2-18.2.1-0.el9.x86_64.rpm 6.3 MB/s | 4.5 MB 00:00 2026-03-10T10:12:57.113 INFO:teuthology.orchestra.run.vm05.stdout:(24/119): rbd-fuse-18.2.1-0.el9.x86_64.rpm 865 kB/s | 86 kB 00:00 2026-03-10T10:12:57.114 INFO:teuthology.orchestra.run.vm02.stdout:(26/119): ceph-grafana-dashboards-18.2.1-0.el9. 233 kB/s | 23 kB 00:00 2026-03-10T10:12:57.175 INFO:teuthology.orchestra.run.vm02.stdout:(27/119): rbd-mirror-18.2.1-0.el9.x86_64.rpm 7.1 MB/s | 3.0 MB 00:00 2026-03-10T10:12:57.215 INFO:teuthology.orchestra.run.vm05.stdout:(25/119): rbd-nbd-18.2.1-0.el9.x86_64.rpm 1.7 MB/s | 171 kB 00:00 2026-03-10T10:12:57.314 INFO:teuthology.orchestra.run.vm02.stdout:(28/119): ceph-mgr-cephadm-18.2.1-0.el9.noarch. 660 kB/s | 132 kB 00:00 2026-03-10T10:12:57.314 INFO:teuthology.orchestra.run.vm05.stdout:(26/119): ceph-grafana-dashboards-18.2.1-0.el9. 233 kB/s | 23 kB 00:00 2026-03-10T10:12:57.395 INFO:teuthology.orchestra.run.vm02.stdout:(29/119): ceph-mgr-dashboard-18.2.1-0.el9.noarc 8.0 MB/s | 1.8 MB 00:00 2026-03-10T10:12:57.417 INFO:teuthology.orchestra.run.vm05.stdout:(27/119): ceph-mgr-cephadm-18.2.1-0.el9.noarch. 
1.3 MB/s | 132 kB 00:00 2026-03-10T10:12:57.501 INFO:teuthology.orchestra.run.vm02.stdout:(30/119): ceph-mgr-modules-core-18.2.1-0.el9.no 2.2 MB/s | 242 kB 00:00 2026-03-10T10:12:57.608 INFO:teuthology.orchestra.run.vm02.stdout:(31/119): ceph-mgr-rook-18.2.1-0.el9.noarch.rpm 470 kB/s | 50 kB 00:00 2026-03-10T10:12:57.608 INFO:teuthology.orchestra.run.vm05.stdout:(28/119): rbd-mirror-18.2.1-0.el9.x86_64.rpm 5.9 MB/s | 3.0 MB 00:00 2026-03-10T10:12:57.716 INFO:teuthology.orchestra.run.vm02.stdout:(32/119): ceph-prometheus-alerts-18.2.1-0.el9.n 135 kB/s | 15 kB 00:00 2026-03-10T10:12:57.729 INFO:teuthology.orchestra.run.vm05.stdout:(29/119): ceph-mgr-dashboard-18.2.1-0.el9.noarc 5.6 MB/s | 1.8 MB 00:00 2026-03-10T10:12:57.823 INFO:teuthology.orchestra.run.vm02.stdout:(33/119): cephadm-18.2.1-0.el9.noarch.rpm 2.0 MB/s | 221 kB 00:00 2026-03-10T10:12:57.831 INFO:teuthology.orchestra.run.vm05.stdout:(30/119): ceph-mgr-modules-core-18.2.1-0.el9.no 2.3 MB/s | 242 kB 00:00 2026-03-10T10:12:57.893 INFO:teuthology.orchestra.run.vm02.stdout:(34/119): ledmon-libs-1.1.0-3.el9.x86_64.rpm 583 kB/s | 40 kB 00:00 2026-03-10T10:12:57.931 INFO:teuthology.orchestra.run.vm05.stdout:(31/119): ceph-mgr-rook-18.2.1-0.el9.noarch.rpm 502 kB/s | 50 kB 00:00 2026-03-10T10:12:57.964 INFO:teuthology.orchestra.run.vm02.stdout:(35/119): libconfig-1.7.2-9.el9.x86_64.rpm 1.0 MB/s | 72 kB 00:00 2026-03-10T10:12:58.031 INFO:teuthology.orchestra.run.vm05.stdout:(32/119): ceph-prometheus-alerts-18.2.1-0.el9.n 147 kB/s | 15 kB 00:00 2026-03-10T10:12:58.134 INFO:teuthology.orchestra.run.vm05.stdout:(33/119): cephadm-18.2.1-0.el9.noarch.rpm 2.1 MB/s | 221 kB 00:00 2026-03-10T10:12:58.141 INFO:teuthology.orchestra.run.vm02.stdout:(36/119): libgfortran-11.5.0-14.el9.x86_64.rpm 4.4 MB/s | 794 kB 00:00 2026-03-10T10:12:58.185 INFO:teuthology.orchestra.run.vm05.stdout:(34/119): ledmon-libs-1.1.0-3.el9.x86_64.rpm 789 kB/s | 40 kB 00:00 2026-03-10T10:12:58.219 INFO:teuthology.orchestra.run.vm05.stdout:(35/119): 
libconfig-1.7.2-9.el9.x86_64.rpm 2.1 MB/s | 72 kB 00:00 2026-03-10T10:12:58.324 INFO:teuthology.orchestra.run.vm05.stdout:(36/119): libgfortran-11.5.0-14.el9.x86_64.rpm 7.4 MB/s | 794 kB 00:00 2026-03-10T10:12:58.324 INFO:teuthology.orchestra.run.vm02.stdout:(37/119): ceph-test-18.2.1-0.el9.x86_64.rpm 7.5 MB/s | 40 MB 00:05 2026-03-10T10:12:58.326 INFO:teuthology.orchestra.run.vm02.stdout:(38/119): libquadmath-11.5.0-14.el9.x86_64.rpm 1.0 MB/s | 184 kB 00:00 2026-03-10T10:12:58.341 INFO:teuthology.orchestra.run.vm05.stdout:(37/119): libquadmath-11.5.0-14.el9.x86_64.rpm 11 MB/s | 184 kB 00:00 2026-03-10T10:12:58.355 INFO:teuthology.orchestra.run.vm05.stdout:(38/119): mailcap-2.1.49-5.el9.noarch.rpm 2.2 MB/s | 33 kB 00:00 2026-03-10T10:12:58.374 INFO:teuthology.orchestra.run.vm05.stdout:(39/119): python3-cffi-1.14.5-5.el9.x86_64.rpm 14 MB/s | 253 kB 00:00 2026-03-10T10:12:58.376 INFO:teuthology.orchestra.run.vm02.stdout:(39/119): mailcap-2.1.49-5.el9.noarch.rpm 649 kB/s | 33 kB 00:00 2026-03-10T10:12:58.382 INFO:teuthology.orchestra.run.vm02.stdout:(40/119): python3-cffi-1.14.5-5.el9.x86_64.rpm 4.4 MB/s | 253 kB 00:00 2026-03-10T10:12:58.413 INFO:teuthology.orchestra.run.vm05.stdout:(40/119): python3-cryptography-36.0.1-5.el9.x86 32 MB/s | 1.2 MB 00:00 2026-03-10T10:12:58.421 INFO:teuthology.orchestra.run.vm02.stdout:(41/119): python3-ply-3.11-14.el9.noarch.rpm 2.7 MB/s | 106 kB 00:00 2026-03-10T10:12:58.434 INFO:teuthology.orchestra.run.vm05.stdout:(41/119): python3-ply-3.11-14.el9.noarch.rpm 5.1 MB/s | 106 kB 00:00 2026-03-10T10:12:58.451 INFO:teuthology.orchestra.run.vm05.stdout:(42/119): python3-pycparser-2.20-6.el9.noarch.r 8.1 MB/s | 135 kB 00:00 2026-03-10T10:12:58.467 INFO:teuthology.orchestra.run.vm05.stdout:(43/119): python3-requests-2.25.1-10.el9.noarch 7.8 MB/s | 126 kB 00:00 2026-03-10T10:12:58.478 INFO:teuthology.orchestra.run.vm02.stdout:(42/119): python3-pycparser-2.20-6.el9.noarch.r 2.3 MB/s | 135 kB 00:00 2026-03-10T10:12:58.483 
INFO:teuthology.orchestra.run.vm05.stdout:(44/119): python3-urllib3-1.26.5-7.el9.noarch.r 13 MB/s | 218 kB 00:00 2026-03-10T10:12:58.513 INFO:teuthology.orchestra.run.vm02.stdout:(43/119): python3-requests-2.25.1-10.el9.noarch 3.5 MB/s | 126 kB 00:00 2026-03-10T10:12:58.519 INFO:teuthology.orchestra.run.vm02.stdout:(44/119): python3-cryptography-36.0.1-5.el9.x86 8.7 MB/s | 1.2 MB 00:00 2026-03-10T10:12:58.551 INFO:teuthology.orchestra.run.vm02.stdout:(45/119): python3-urllib3-1.26.5-7.el9.noarch.r 5.6 MB/s | 218 kB 00:00 2026-03-10T10:12:58.630 INFO:teuthology.orchestra.run.vm05.stdout:(45/119): ceph-mgr-diskprediction-local-18.2.1- 7.3 MB/s | 7.4 MB 00:01 2026-03-10T10:12:58.640 INFO:teuthology.orchestra.run.vm02.stdout:(46/119): flexiblas-3.0.4-9.el9.x86_64.rpm 333 kB/s | 30 kB 00:00 2026-03-10T10:12:58.657 INFO:teuthology.orchestra.run.vm05.stdout:(46/119): boost-program-options-1.75.0-13.el9.x 602 kB/s | 104 kB 00:00 2026-03-10T10:12:58.698 INFO:teuthology.orchestra.run.vm02.stdout:(47/119): boost-program-options-1.75.0-13.el9.x 583 kB/s | 104 kB 00:00 2026-03-10T10:12:58.730 INFO:teuthology.orchestra.run.vm02.stdout:(48/119): flexiblas-openblas-openmp-3.0.4-9.el9 471 kB/s | 15 kB 00:00 2026-03-10T10:12:58.749 INFO:teuthology.orchestra.run.vm05.stdout:(47/119): flexiblas-3.0.4-9.el9.x86_64.rpm 249 kB/s | 30 kB 00:00 2026-03-10T10:12:58.784 INFO:teuthology.orchestra.run.vm05.stdout:(48/119): flexiblas-openblas-openmp-3.0.4-9.el9 431 kB/s | 15 kB 00:00 2026-03-10T10:12:58.832 INFO:teuthology.orchestra.run.vm02.stdout:(49/119): libpmemobj-1.12.1-1.el9.x86_64.rpm 1.5 MB/s | 160 kB 00:00 2026-03-10T10:12:58.864 INFO:teuthology.orchestra.run.vm02.stdout:(50/119): librabbitmq-0.11.0-7.el9.x86_64.rpm 1.4 MB/s | 45 kB 00:00 2026-03-10T10:12:58.873 INFO:teuthology.orchestra.run.vm05.stdout:(49/119): flexiblas-netlib-3.0.4-9.el9.x86_64.r 14 MB/s | 3.0 MB 00:00 2026-03-10T10:12:58.890 INFO:teuthology.orchestra.run.vm05.stdout:(50/119): libpmemobj-1.12.1-1.el9.x86_64.rpm 
1.5 MB/s | 160 kB 00:00 2026-03-10T10:12:58.891 INFO:teuthology.orchestra.run.vm02.stdout:(51/119): flexiblas-netlib-3.0.4-9.el9.x86_64.r 12 MB/s | 3.0 MB 00:00 2026-03-10T10:12:58.925 INFO:teuthology.orchestra.run.vm02.stdout:(52/119): libstoragemgmt-1.10.1-1.el9.x86_64.rp 7.1 MB/s | 246 kB 00:00 2026-03-10T10:12:58.957 INFO:teuthology.orchestra.run.vm02.stdout:(53/119): librdkafka-1.6.1-102.el9.x86_64.rpm 7.0 MB/s | 662 kB 00:00 2026-03-10T10:12:58.959 INFO:teuthology.orchestra.run.vm02.stdout:(54/119): libxslt-1.1.34-12.el9.x86_64.rpm 6.7 MB/s | 233 kB 00:00 2026-03-10T10:12:58.982 INFO:teuthology.orchestra.run.vm05.stdout:(51/119): librabbitmq-0.11.0-7.el9.x86_64.rpm 414 kB/s | 45 kB 00:00 2026-03-10T10:12:58.991 INFO:teuthology.orchestra.run.vm02.stdout:(55/119): lttng-ust-2.12.0-6.el9.x86_64.rpm 8.3 MB/s | 292 kB 00:00 2026-03-10T10:12:58.992 INFO:teuthology.orchestra.run.vm02.stdout:(56/119): openblas-0.3.29-1.el9.x86_64.rpm 1.2 MB/s | 42 kB 00:00 2026-03-10T10:12:59.021 INFO:teuthology.orchestra.run.vm05.stdout:(52/119): libstoragemgmt-1.10.1-1.el9.x86_64.rp 6.2 MB/s | 246 kB 00:00 2026-03-10T10:12:59.053 INFO:teuthology.orchestra.run.vm05.stdout:(53/119): libxslt-1.1.34-12.el9.x86_64.rpm 7.2 MB/s | 233 kB 00:00 2026-03-10T10:12:59.056 INFO:teuthology.orchestra.run.vm05.stdout:(54/119): librdkafka-1.6.1-102.el9.x86_64.rpm 3.9 MB/s | 662 kB 00:00 2026-03-10T10:12:59.083 INFO:teuthology.orchestra.run.vm05.stdout:(55/119): lttng-ust-2.12.0-6.el9.x86_64.rpm 9.7 MB/s | 292 kB 00:00 2026-03-10T10:12:59.103 INFO:teuthology.orchestra.run.vm05.stdout:(56/119): openblas-0.3.29-1.el9.x86_64.rpm 897 kB/s | 42 kB 00:00 2026-03-10T10:12:59.322 INFO:teuthology.orchestra.run.vm02.stdout:(57/119): python3-babel-2.9.1-2.el9.noarch.rpm 18 MB/s | 6.0 MB 00:00 2026-03-10T10:12:59.421 INFO:teuthology.orchestra.run.vm05.stdout:(57/119): ceph-test-18.2.1-0.el9.x86_64.rpm 5.5 MB/s | 40 MB 00:07 2026-03-10T10:12:59.436 INFO:teuthology.orchestra.run.vm05.stdout:(58/119): 
openblas-openmp-0.3.29-1.el9.x86_64.r 15 MB/s | 5.3 MB 00:00 2026-03-10T10:12:59.454 INFO:teuthology.orchestra.run.vm02.stdout:(58/119): python3-devel-3.9.25-3.el9.x86_64.rpm 1.8 MB/s | 244 kB 00:00 2026-03-10T10:12:59.488 INFO:teuthology.orchestra.run.vm02.stdout:(59/119): python3-jinja2-2.11.3-8.el9.noarch.rp 7.2 MB/s | 249 kB 00:00 2026-03-10T10:12:59.497 INFO:teuthology.orchestra.run.vm05.stdout:(59/119): python3-babel-2.9.1-2.el9.noarch.rpm 15 MB/s | 6.0 MB 00:00 2026-03-10T10:12:59.513 INFO:teuthology.orchestra.run.vm05.stdout:(60/119): python3-jinja2-2.11.3-8.el9.noarch.rp 3.2 MB/s | 249 kB 00:00 2026-03-10T10:12:59.519 INFO:teuthology.orchestra.run.vm02.stdout:(60/119): python3-jmespath-1.0.1-1.el9.noarch.r 1.5 MB/s | 48 kB 00:00 2026-03-10T10:12:59.550 INFO:teuthology.orchestra.run.vm02.stdout:(61/119): openblas-openmp-0.3.29-1.el9.x86_64.r 9.5 MB/s | 5.3 MB 00:00 2026-03-10T10:12:59.552 INFO:teuthology.orchestra.run.vm02.stdout:(62/119): python3-libstoragemgmt-1.10.1-1.el9.x 5.3 MB/s | 177 kB 00:00 2026-03-10T10:12:59.583 INFO:teuthology.orchestra.run.vm02.stdout:(63/119): python3-mako-1.1.4-6.el9.noarch.rpm 5.2 MB/s | 172 kB 00:00 2026-03-10T10:12:59.583 INFO:teuthology.orchestra.run.vm02.stdout:(64/119): python3-markupsafe-1.1.1-12.el9.x86_6 1.1 MB/s | 35 kB 00:00 2026-03-10T10:12:59.591 INFO:teuthology.orchestra.run.vm05.stdout:(61/119): python3-devel-3.9.25-3.el9.x86_64.rpm 1.4 MB/s | 244 kB 00:00 2026-03-10T10:12:59.597 INFO:teuthology.orchestra.run.vm05.stdout:(62/119): python3-jmespath-1.0.1-1.el9.noarch.r 475 kB/s | 48 kB 00:00 2026-03-10T10:12:59.628 INFO:teuthology.orchestra.run.vm05.stdout:(63/119): python3-libstoragemgmt-1.10.1-1.el9.x 1.5 MB/s | 177 kB 00:00 2026-03-10T10:12:59.636 INFO:teuthology.orchestra.run.vm05.stdout:(64/119): python3-markupsafe-1.1.1-12.el9.x86_6 906 kB/s | 35 kB 00:00 2026-03-10T10:12:59.638 INFO:teuthology.orchestra.run.vm05.stdout:(65/119): python3-mako-1.1.4-6.el9.noarch.rpm 3.6 MB/s | 172 kB 00:00 
2026-03-10T10:12:59.678 INFO:teuthology.orchestra.run.vm05.stdout:(66/119): python3-pyasn1-0.4.8-7.el9.noarch.rpm 3.8 MB/s | 157 kB 00:00 2026-03-10T10:12:59.680 INFO:teuthology.orchestra.run.vm02.stdout:(65/119): python3-numpy-f2py-1.23.5-2.el9.x86_6 4.5 MB/s | 442 kB 00:00 2026-03-10T10:12:59.714 INFO:teuthology.orchestra.run.vm02.stdout:(66/119): python3-pyasn1-0.4.8-7.el9.noarch.rpm 4.6 MB/s | 157 kB 00:00 2026-03-10T10:12:59.717 INFO:teuthology.orchestra.run.vm05.stdout:(67/119): python3-numpy-f2py-1.23.5-2.el9.x86_6 5.3 MB/s | 442 kB 00:00 2026-03-10T10:12:59.723 INFO:teuthology.orchestra.run.vm05.stdout:(68/119): python3-pyasn1-modules-0.4.8-7.el9.no 6.0 MB/s | 277 kB 00:00 2026-03-10T10:12:59.772 INFO:teuthology.orchestra.run.vm05.stdout:(69/119): python3-requests-oauthlib-1.3.0-12.el 973 kB/s | 54 kB 00:00 2026-03-10T10:12:59.777 INFO:teuthology.orchestra.run.vm02.stdout:(67/119): python3-pyasn1-modules-0.4.8-7.el9.no 4.3 MB/s | 277 kB 00:00 2026-03-10T10:12:59.809 INFO:teuthology.orchestra.run.vm02.stdout:(68/119): python3-requests-oauthlib-1.3.0-12.el 1.7 MB/s | 54 kB 00:00 2026-03-10T10:12:59.809 INFO:teuthology.orchestra.run.vm05.stdout:(70/119): python3-toml-0.10.2-6.el9.noarch.rpm 1.1 MB/s | 42 kB 00:00 2026-03-10T10:12:59.920 INFO:teuthology.orchestra.run.vm05.stdout:(71/119): socat-1.7.4.1-8.el9.x86_64.rpm 2.7 MB/s | 303 kB 00:00 2026-03-10T10:12:59.942 INFO:teuthology.orchestra.run.vm05.stdout:(72/119): python3-numpy-1.23.5-2.el9.x86_64.rpm 20 MB/s | 6.1 MB 00:00 2026-03-10T10:12:59.972 INFO:teuthology.orchestra.run.vm05.stdout:(73/119): xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.2 MB/s | 64 kB 00:00 2026-03-10T10:12:59.974 INFO:teuthology.orchestra.run.vm05.stdout:(74/119): fmt-8.1.1-5.el9.x86_64.rpm 3.3 MB/s | 111 kB 00:00 2026-03-10T10:13:00.015 INFO:teuthology.orchestra.run.vm05.stdout:(75/119): gperftools-libs-2.9.1-3.el9.x86_64.rp 7.1 MB/s | 308 kB 00:00 2026-03-10T10:13:00.122 INFO:teuthology.orchestra.run.vm05.stdout:(76/119): 
libarrow-9.0.0-15.el9.x86_64.rpm 30 MB/s | 4.4 MB 00:00 2026-03-10T10:13:00.125 INFO:teuthology.orchestra.run.vm05.stdout:(77/119): liboath-2.6.12-1.el9.x86_64.rpm 17 MB/s | 49 kB 00:00 2026-03-10T10:13:00.128 INFO:teuthology.orchestra.run.vm05.stdout:(78/119): libunwind-1.6.2-1.el9.x86_64.rpm 26 MB/s | 67 kB 00:00 2026-03-10T10:13:00.143 INFO:teuthology.orchestra.run.vm05.stdout:(79/119): parquet-libs-9.0.0-15.el9.x86_64.rpm 54 MB/s | 838 kB 00:00 2026-03-10T10:13:00.168 INFO:teuthology.orchestra.run.vm05.stdout:(80/119): python3-asyncssh-2.13.2-5.el9.noarch. 22 MB/s | 548 kB 00:00 2026-03-10T10:13:00.190 INFO:teuthology.orchestra.run.vm05.stdout:(81/119): python3-autocommand-2.2.2-8.el9.noarc 1.4 MB/s | 29 kB 00:00 2026-03-10T10:13:00.193 INFO:teuthology.orchestra.run.vm05.stdout:(82/119): python3-backports-tarfile-1.2.0-1.el9 20 MB/s | 60 kB 00:00 2026-03-10T10:13:00.196 INFO:teuthology.orchestra.run.vm05.stdout:(83/119): python3-bcrypt-3.2.2-1.el9.x86_64.rpm 18 MB/s | 43 kB 00:00 2026-03-10T10:13:00.199 INFO:teuthology.orchestra.run.vm05.stdout:(84/119): python3-cachetools-4.2.4-1.el9.noarch 10 MB/s | 32 kB 00:00 2026-03-10T10:13:00.201 INFO:teuthology.orchestra.run.vm05.stdout:(85/119): python3-certifi-2023.05.07-4.el9.noar 6.2 MB/s | 14 kB 00:00 2026-03-10T10:13:00.207 INFO:teuthology.orchestra.run.vm05.stdout:(86/119): python3-cheroot-10.0.1-4.el9.noarch.r 34 MB/s | 173 kB 00:00 2026-03-10T10:13:00.215 INFO:teuthology.orchestra.run.vm05.stdout:(87/119): python3-cherrypy-18.6.1-2.el9.noarch. 
43 MB/s | 358 kB 00:00
2026-03-10T10:13:00.222 INFO:teuthology.orchestra.run.vm05.stdout:(88/119): python3-google-auth-2.45.0-1.el9.noar 38 MB/s | 254 kB 00:00
2026-03-10T10:13:00.224 INFO:teuthology.orchestra.run.vm05.stdout:(89/119): python3-jaraco-8.2.1-3.el9.noarch.rpm 4.7 MB/s | 11 kB 00:00
2026-03-10T10:13:00.227 INFO:teuthology.orchestra.run.vm05.stdout:(90/119): python3-jaraco-classes-3.2.1-5.el9.no 7.1 MB/s | 18 kB 00:00
2026-03-10T10:13:00.229 INFO:teuthology.orchestra.run.vm05.stdout:(91/119): python3-jaraco-collections-3.0.0-8.el 10 MB/s | 23 kB 00:00
2026-03-10T10:13:00.232 INFO:teuthology.orchestra.run.vm05.stdout:(92/119): python3-jaraco-context-6.0.1-3.el9.no 8.3 MB/s | 20 kB 00:00
2026-03-10T10:13:00.235 INFO:teuthology.orchestra.run.vm05.stdout:(93/119): python3-jaraco-functools-3.5.0-2.el9. 6.5 MB/s | 19 kB 00:00
2026-03-10T10:13:00.238 INFO:teuthology.orchestra.run.vm05.stdout:(94/119): python3-jaraco-text-4.0.0-2.el9.noarc 9.8 MB/s | 26 kB 00:00
2026-03-10T10:13:00.240 INFO:teuthology.orchestra.run.vm05.stdout:(95/119): python3-jwt+crypto-2.4.0-1.el9.noarch 4.4 MB/s | 9.0 kB 00:00
2026-03-10T10:13:00.243 INFO:teuthology.orchestra.run.vm05.stdout:(96/119): python3-jwt-2.4.0-1.el9.noarch.rpm 15 MB/s | 41 kB 00:00
2026-03-10T10:13:00.254 INFO:teuthology.orchestra.run.vm05.stdout:(97/119): libarrow-doc-9.0.0-15.el9.noarch.rpm 104 kB/s | 25 kB 00:00
2026-03-10T10:13:00.259 INFO:teuthology.orchestra.run.vm05.stdout:(98/119): python3-logutils-0.3.5-21.el9.noarch. 11 MB/s | 46 kB 00:00
2026-03-10T10:13:00.265 INFO:teuthology.orchestra.run.vm05.stdout:(99/119): python3-kubernetes-26.1.0-3.el9.noarc 47 MB/s | 1.0 MB 00:00
2026-03-10T10:13:00.265 INFO:teuthology.orchestra.run.vm05.stdout:(100/119): python3-more-itertools-8.12.0-2.el9. 12 MB/s | 79 kB 00:00
2026-03-10T10:13:00.268 INFO:teuthology.orchestra.run.vm05.stdout:(101/119): python3-natsort-7.1.1-5.el9.noarch.r 17 MB/s | 58 kB 00:00
2026-03-10T10:13:00.271 INFO:teuthology.orchestra.run.vm05.stdout:(102/119): python3-portend-3.1.0-2.el9.noarch.r 7.4 MB/s | 16 kB 00:00
2026-03-10T10:13:00.275 INFO:teuthology.orchestra.run.vm05.stdout:(103/119): python3-pecan-1.4.2-3.el9.noarch.rpm 29 MB/s | 272 kB 00:00
2026-03-10T10:13:00.276 INFO:teuthology.orchestra.run.vm05.stdout:(104/119): python3-pyOpenSSL-21.0.0-1.el9.noarc 18 MB/s | 90 kB 00:00
2026-03-10T10:13:00.278 INFO:teuthology.orchestra.run.vm05.stdout:(105/119): python3-repoze-lru-0.7-16.el9.noarch 12 MB/s | 31 kB 00:00
2026-03-10T10:13:00.282 INFO:teuthology.orchestra.run.vm05.stdout:(106/119): python3-rsa-4.9-2.el9.noarch.rpm 18 MB/s | 59 kB 00:00
2026-03-10T10:13:00.283 INFO:teuthology.orchestra.run.vm05.stdout:(107/119): python3-routes-2.5.1-5.el9.noarch.rp 27 MB/s | 188 kB 00:00
2026-03-10T10:13:00.284 INFO:teuthology.orchestra.run.vm05.stdout:(108/119): python3-tempora-5.0.0-2.el9.noarch.r 16 MB/s | 36 kB 00:00
2026-03-10T10:13:00.287 INFO:teuthology.orchestra.run.vm05.stdout:(109/119): python3-typing-extensions-4.15.0-1.e 21 MB/s | 86 kB 00:00
2026-03-10T10:13:00.291 INFO:teuthology.orchestra.run.vm05.stdout:(110/119): python3-webob-1.8.8-2.el9.noarch.rpm 34 MB/s | 230 kB 00:00
2026-03-10T10:13:00.292 INFO:teuthology.orchestra.run.vm05.stdout:(111/119): python3-websocket-client-1.2.3-2.el9 19 MB/s | 90 kB 00:00
2026-03-10T10:13:00.295 INFO:teuthology.orchestra.run.vm05.stdout:(112/119): python3-xmltodict-0.12.0-15.el9.noar 7.9 MB/s | 22 kB 00:00
2026-03-10T10:13:00.297 INFO:teuthology.orchestra.run.vm05.stdout:(113/119): python3-zc-lockfile-2.0-10.el9.noarc 8.5 MB/s | 20 kB 00:00
2026-03-10T10:13:00.300 INFO:teuthology.orchestra.run.vm05.stdout:(114/119): python3-werkzeug-2.0.3-3.el9.1.noarc 48 MB/s | 427 kB 00:00
2026-03-10T10:13:00.306 INFO:teuthology.orchestra.run.vm05.stdout:(115/119): re2-20211101-20.el9.x86_64.rpm 21 MB/s | 191 kB 00:00
2026-03-10T10:13:00.358 INFO:teuthology.orchestra.run.vm05.stdout:(116/119): python3-scipy-1.9.3-2.el9.x86_64.rpm 30 MB/s | 19 MB 00:00
2026-03-10T10:13:00.384 INFO:teuthology.orchestra.run.vm05.stdout:(117/119): thrift-0.15.0-4.el9.x86_64.rpm 19 MB/s | 1.6 MB 00:00
2026-03-10T10:13:00.679 INFO:teuthology.orchestra.run.vm02.stdout:(69/119): python3-numpy-1.23.5-2.el9.x86_64.rpm 5.6 MB/s | 6.1 MB 00:01
2026-03-10T10:13:00.711 INFO:teuthology.orchestra.run.vm02.stdout:(70/119): python3-toml-0.10.2-6.el9.noarch.rpm 1.3 MB/s | 42 kB 00:00
2026-03-10T10:13:00.802 INFO:teuthology.orchestra.run.vm02.stdout:(71/119): socat-1.7.4.1-8.el9.x86_64.rpm 3.2 MB/s | 303 kB 00:00
2026-03-10T10:13:00.834 INFO:teuthology.orchestra.run.vm02.stdout:(72/119): xmlstarlet-1.6.1-20.el9.x86_64.rpm 2.0 MB/s | 64 kB 00:00
2026-03-10T10:13:00.841 INFO:teuthology.orchestra.run.vm02.stdout:(73/119): fmt-8.1.1-5.el9.x86_64.rpm 16 MB/s | 111 kB 00:00
2026-03-10T10:13:00.848 INFO:teuthology.orchestra.run.vm02.stdout:(74/119): gperftools-libs-2.9.1-3.el9.x86_64.rp 46 MB/s | 308 kB 00:00
2026-03-10T10:13:00.908 INFO:teuthology.orchestra.run.vm02.stdout:(75/119): libarrow-9.0.0-15.el9.x86_64.rpm 73 MB/s | 4.4 MB 00:00
2026-03-10T10:13:00.911 INFO:teuthology.orchestra.run.vm02.stdout:(76/119): libarrow-doc-9.0.0-15.el9.noarch.rpm 8.5 MB/s | 25 kB 00:00
2026-03-10T10:13:00.914 INFO:teuthology.orchestra.run.vm02.stdout:(77/119): liboath-2.6.12-1.el9.x86_64.rpm 20 MB/s | 49 kB 00:00
2026-03-10T10:13:00.917 INFO:teuthology.orchestra.run.vm02.stdout:(78/119): libunwind-1.6.2-1.el9.x86_64.rpm 25 MB/s | 67 kB 00:00
2026-03-10T10:13:00.929 INFO:teuthology.orchestra.run.vm02.stdout:(79/119): parquet-libs-9.0.0-15.el9.x86_64.rpm 70 MB/s | 838 kB 00:00
2026-03-10T10:13:00.950 INFO:teuthology.orchestra.run.vm02.stdout:(80/119): python3-asyncssh-2.13.2-5.el9.noarch. 26 MB/s | 548 kB 00:00
2026-03-10T10:13:00.952 INFO:teuthology.orchestra.run.vm02.stdout:(81/119): python3-autocommand-2.2.2-8.el9.noarc 14 MB/s | 29 kB 00:00
2026-03-10T10:13:00.954 INFO:teuthology.orchestra.run.vm02.stdout:(82/119): python3-backports-tarfile-1.2.0-1.el9 23 MB/s | 60 kB 00:00
2026-03-10T10:13:00.957 INFO:teuthology.orchestra.run.vm02.stdout:(83/119): python3-bcrypt-3.2.2-1.el9.x86_64.rpm 19 MB/s | 43 kB 00:00
2026-03-10T10:13:00.959 INFO:teuthology.orchestra.run.vm02.stdout:(84/119): python3-cachetools-4.2.4-1.el9.noarch 14 MB/s | 32 kB 00:00
2026-03-10T10:13:00.961 INFO:teuthology.orchestra.run.vm02.stdout:(85/119): python3-certifi-2023.05.07-4.el9.noar 7.1 MB/s | 14 kB 00:00
2026-03-10T10:13:00.966 INFO:teuthology.orchestra.run.vm02.stdout:(86/119): python3-cheroot-10.0.1-4.el9.noarch.r 41 MB/s | 173 kB 00:00
2026-03-10T10:13:00.972 INFO:teuthology.orchestra.run.vm02.stdout:(87/119): python3-cherrypy-18.6.1-2.el9.noarch. 61 MB/s | 358 kB 00:00
2026-03-10T10:13:00.977 INFO:teuthology.orchestra.run.vm02.stdout:(88/119): python3-google-auth-2.45.0-1.el9.noar 45 MB/s | 254 kB 00:00
2026-03-10T10:13:00.991 INFO:teuthology.orchestra.run.vm02.stdout:(89/119): python3-jaraco-8.2.1-3.el9.noarch.rpm 803 kB/s | 11 kB 00:00
2026-03-10T10:13:01.002 INFO:teuthology.orchestra.run.vm02.stdout:(90/119): python3-jaraco-classes-3.2.1-5.el9.no 1.6 MB/s | 18 kB 00:00
2026-03-10T10:13:01.005 INFO:teuthology.orchestra.run.vm02.stdout:(91/119): python3-jaraco-collections-3.0.0-8.el 7.5 MB/s | 23 kB 00:00
2026-03-10T10:13:01.007 INFO:teuthology.orchestra.run.vm02.stdout:(92/119): python3-jaraco-context-6.0.1-3.el9.no 10 MB/s | 20 kB 00:00
2026-03-10T10:13:01.009 INFO:teuthology.orchestra.run.vm02.stdout:(93/119): python3-jaraco-functools-3.5.0-2.el9. 10 MB/s | 19 kB 00:00
2026-03-10T10:13:01.011 INFO:teuthology.orchestra.run.vm02.stdout:(94/119): python3-jaraco-text-4.0.0-2.el9.noarc 13 MB/s | 26 kB 00:00
2026-03-10T10:13:01.013 INFO:teuthology.orchestra.run.vm02.stdout:(95/119): python3-jwt+crypto-2.4.0-1.el9.noarch 4.6 MB/s | 9.0 kB 00:00
2026-03-10T10:13:01.015 INFO:teuthology.orchestra.run.vm02.stdout:(96/119): python3-jwt-2.4.0-1.el9.noarch.rpm 18 MB/s | 41 kB 00:00
2026-03-10T10:13:01.029 INFO:teuthology.orchestra.run.vm02.stdout:(97/119): python3-kubernetes-26.1.0-3.el9.noarc 75 MB/s | 1.0 MB 00:00
2026-03-10T10:13:01.032 INFO:teuthology.orchestra.run.vm02.stdout:(98/119): python3-logutils-0.3.5-21.el9.noarch. 18 MB/s | 46 kB 00:00
2026-03-10T10:13:01.039 INFO:teuthology.orchestra.run.vm02.stdout:(99/119): python3-more-itertools-8.12.0-2.el9.n 11 MB/s | 79 kB 00:00
2026-03-10T10:13:01.041 INFO:teuthology.orchestra.run.vm02.stdout:(100/119): python3-natsort-7.1.1-5.el9.noarch.r 23 MB/s | 58 kB 00:00
2026-03-10T10:13:01.046 INFO:teuthology.orchestra.run.vm02.stdout:(101/119): python3-pecan-1.4.2-3.el9.noarch.rpm 55 MB/s | 272 kB 00:00
2026-03-10T10:13:01.048 INFO:teuthology.orchestra.run.vm02.stdout:(102/119): python3-portend-3.1.0-2.el9.noarch.r 8.4 MB/s | 16 kB 00:00
2026-03-10T10:13:01.051 INFO:teuthology.orchestra.run.vm02.stdout:(103/119): python3-pyOpenSSL-21.0.0-1.el9.noarc 31 MB/s | 90 kB 00:00
2026-03-10T10:13:01.055 INFO:teuthology.orchestra.run.vm02.stdout:(104/119): python3-repoze-lru-0.7-16.el9.noarch 7.6 MB/s | 31 kB 00:00
2026-03-10T10:13:01.061 INFO:teuthology.orchestra.run.vm02.stdout:(105/119): python3-routes-2.5.1-5.el9.noarch.rp 35 MB/s | 188 kB 00:00
2026-03-10T10:13:01.063 INFO:teuthology.orchestra.run.vm02.stdout:(106/119): python3-rsa-4.9-2.el9.noarch.rpm 25 MB/s | 59 kB 00:00
2026-03-10T10:13:01.066 INFO:teuthology.orchestra.run.vm02.stdout:(107/119): python3-tempora-5.0.0-2.el9.noarch.r 15 MB/s | 36 kB 00:00
2026-03-10T10:13:01.069 INFO:teuthology.orchestra.run.vm02.stdout:(108/119): python3-typing-extensions-4.15.0-1.e 29 MB/s | 86 kB 00:00
2026-03-10T10:13:01.074 INFO:teuthology.orchestra.run.vm02.stdout:(109/119): python3-webob-1.8.8-2.el9.noarch.rpm 45 MB/s | 230 kB 00:00
2026-03-10T10:13:01.077 INFO:teuthology.orchestra.run.vm02.stdout:(110/119): python3-websocket-client-1.2.3-2.el9 29 MB/s | 90 kB 00:00
2026-03-10T10:13:01.084 INFO:teuthology.orchestra.run.vm02.stdout:(111/119): python3-werkzeug-2.0.3-3.el9.1.noarc 62 MB/s | 427 kB 00:00
2026-03-10T10:13:01.086 INFO:teuthology.orchestra.run.vm02.stdout:(112/119): python3-xmltodict-0.12.0-15.el9.noar 11 MB/s | 22 kB 00:00
2026-03-10T10:13:01.088 INFO:teuthology.orchestra.run.vm02.stdout:(113/119): python3-zc-lockfile-2.0-10.el9.noarc 10 MB/s | 20 kB 00:00
2026-03-10T10:13:01.093 INFO:teuthology.orchestra.run.vm02.stdout:(114/119): re2-20211101-20.el9.x86_64.rpm 40 MB/s | 191 kB 00:00
2026-03-10T10:13:01.124 INFO:teuthology.orchestra.run.vm02.stdout:(115/119): thrift-0.15.0-4.el9.x86_64.rpm 52 MB/s | 1.6 MB 00:00
2026-03-10T10:13:01.303 INFO:teuthology.orchestra.run.vm05.stdout:(118/119): librados2-18.2.1-0.el9.x86_64.rpm 3.3 MB/s | 3.3 MB 00:00
2026-03-10T10:13:01.592 INFO:teuthology.orchestra.run.vm05.stdout:(119/119): librbd1-18.2.1-0.el9.x86_64.rpm 2.4 MB/s | 3.0 MB 00:01
2026-03-10T10:13:01.594 INFO:teuthology.orchestra.run.vm05.stdout:--------------------------------------------------------------------------------
2026-03-10T10:13:01.594 INFO:teuthology.orchestra.run.vm05.stdout:Total 12 MB/s | 182 MB 00:15
2026-03-10T10:13:01.734 INFO:teuthology.orchestra.run.vm02.stdout:(116/119): ceph-mgr-diskprediction-local-18.2.1 1.7 MB/s | 7.4 MB 00:04
2026-03-10T10:13:02.050 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-10T10:13:02.096 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-10T10:13:02.096 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-10T10:13:02.209 INFO:teuthology.orchestra.run.vm02.stdout:(117/119): librados2-18.2.1-0.el9.x86_64.rpm 3.0 MB/s | 3.3 MB 00:01
2026-03-10T10:13:02.858 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-10T10:13:02.858 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-10T10:13:03.561 INFO:teuthology.orchestra.run.vm02.stdout:(118/119): librbd1-18.2.1-0.el9.x86_64.rpm 1.6 MB/s | 3.0 MB 00:01
2026-03-10T10:13:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-10T10:13:03.707 INFO:teuthology.orchestra.run.vm05.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/121
2026-03-10T10:13:03.718 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/121
2026-03-10T10:13:03.874 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/121
2026-03-10T10:13:03.876 INFO:teuthology.orchestra.run.vm05.stdout: Upgrading : librados2-2:18.2.1-0.el9.x86_64 4/121
2026-03-10T10:13:03.939 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:18.2.1-0.el9.x86_64 4/121
2026-03-10T10:13:03.940 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs2-2:18.2.1-0.el9.x86_64 5/121
2026-03-10T10:13:03.969 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephfs2-2:18.2.1-0.el9.x86_64 5/121
2026-03-10T10:13:03.977 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rados-2:18.2.1-0.el9.x86_64 6/121
2026-03-10T10:13:03.981 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/121
2026-03-10T10:13:03.983 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/121
2026-03-10T10:13:03.992 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/121
2026-03-10T10:13:03.994 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephsqlite-2:18.2.1-0.el9.x86_64 10/121
2026-03-10T10:13:04.026 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephsqlite-2:18.2.1-0.el9.x86_64 10/121
2026-03-10T10:13:04.028 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libradosstriper1-2:18.2.1-0.el9.x86_64 11/121
2026-03-10T10:13:04.073 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libradosstriper1-2:18.2.1-0.el9.x86_64 11/121
2026-03-10T10:13:04.079 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 12/121
2026-03-10T10:13:04.102 INFO:teuthology.orchestra.run.vm05.stdout: Installing : liboath-2.6.12-1.el9.x86_64 13/121
2026-03-10T10:13:04.111 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 14/121
2026-03-10T10:13:04.115 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 15/121
2026-03-10T10:13:04.141 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 16/121
2026-03-10T10:13:04.157 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 17/121
2026-03-10T10:13:04.162 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 18/121
2026-03-10T10:13:04.169 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 19/121
2026-03-10T10:13:04.172 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 20/121
2026-03-10T10:13:04.176 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 21/121
2026-03-10T10:13:04.186 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ceph-argparse-2:18.2.1-0.el9.x86_64 22/121
2026-03-10T10:13:04.199 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cephfs-2:18.2.1-0.el9.x86_64 23/121
2026-03-10T10:13:04.227 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 24/121
2026-03-10T10:13:04.286 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 25/121
2026-03-10T10:13:04.302 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 26/121
2026-03-10T10:13:04.309 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rsa-4.9-2.el9.noarch 27/121
2026-03-10T10:13:04.318 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 28/121
2026-03-10T10:13:04.323 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librados-devel-2:18.2.1-0.el9.x86_64 29/121
2026-03-10T10:13:04.355 INFO:teuthology.orchestra.run.vm05.stdout: Installing : re2-1:20211101-20.el9.x86_64 30/121
2026-03-10T10:13:04.361 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 31/121
2026-03-10T10:13:04.378 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 32/121
2026-03-10T10:13:04.403 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 33/121
2026-03-10T10:13:04.409 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 34/121
2026-03-10T10:13:04.418 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 35/121
2026-03-10T10:13:04.432 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 36/121
2026-03-10T10:13:04.443 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 37/121
2026-03-10T10:13:04.454 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 38/121
2026-03-10T10:13:04.514 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 39/121
2026-03-10T10:13:04.523 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 40/121
2026-03-10T10:13:04.532 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 41/121
2026-03-10T10:13:04.577 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 42/121
2026-03-10T10:13:04.947 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 43/121
2026-03-10T10:13:04.963 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 44/121
2026-03-10T10:13:04.969 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 45/121
2026-03-10T10:13:04.977 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 46/121
2026-03-10T10:13:04.982 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 47/121
2026-03-10T10:13:04.989 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 48/121
2026-03-10T10:13:04.993 INFO:teuthology.orchestra.run.vm05.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 49/121
2026-03-10T10:13:04.996 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 50/121
2026-03-10T10:13:05.007 INFO:teuthology.orchestra.run.vm05.stdout: Installing : fmt-8.1.1-5.el9.x86_64 51/121
2026-03-10T10:13:05.015 INFO:teuthology.orchestra.run.vm05.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 52/121
2026-03-10T10:13:05.021 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 53/121
2026-03-10T10:13:05.029 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 54/121
2026-03-10T10:13:05.035 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 55/121
2026-03-10T10:13:05.043 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 56/121
2026-03-10T10:13:05.049 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 57/121
2026-03-10T10:13:05.090 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 58/121
2026-03-10T10:13:05.364 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 59/121
2026-03-10T10:13:05.394 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 60/121
2026-03-10T10:13:05.400 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 61/121
2026-03-10T10:13:05.464 INFO:teuthology.orchestra.run.vm05.stdout: Installing : openblas-0.3.29-1.el9.x86_64 62/121
2026-03-10T10:13:05.468 INFO:teuthology.orchestra.run.vm05.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 63/121
2026-03-10T10:13:05.494 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 64/121
2026-03-10T10:13:05.896 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 65/121
2026-03-10T10:13:05.986 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 66/121
2026-03-10T10:13:06.765 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 67/121
2026-03-10T10:13:06.793 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 68/121
2026-03-10T10:13:06.799 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 69/121
2026-03-10T10:13:06.804 INFO:teuthology.orchestra.run.vm05.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 70/121
2026-03-10T10:13:06.948 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 71/121
2026-03-10T10:13:06.951 INFO:teuthology.orchestra.run.vm05.stdout: Upgrading : librbd1-2:18.2.1-0.el9.x86_64 72/121
2026-03-10T10:13:06.991 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:18.2.1-0.el9.x86_64 72/121
2026-03-10T10:13:06.994 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rbd-2:18.2.1-0.el9.x86_64 73/121
2026-03-10T10:13:07.004 INFO:teuthology.orchestra.run.vm05.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 74/121
2026-03-10T10:13:07.204 INFO:teuthology.orchestra.run.vm05.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 75/121
2026-03-10T10:13:07.207 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librgw2-2:18.2.1-0.el9.x86_64 76/121
2026-03-10T10:13:07.227 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librgw2-2:18.2.1-0.el9.x86_64 76/121
2026-03-10T10:13:07.235 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rgw-2:18.2.1-0.el9.x86_64 77/121
2026-03-10T10:13:07.256 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ply-3.11-14.el9.noarch 78/121
2026-03-10T10:13:07.277 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 79/121
2026-03-10T10:13:07.365 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 80/121
2026-03-10T10:13:07.379 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 81/121
2026-03-10T10:13:07.406 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 82/121
2026-03-10T10:13:07.447 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 83/121
2026-03-10T10:13:07.509 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 84/121
2026-03-10T10:13:07.522 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 85/121
2026-03-10T10:13:07.525 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jwt-2.4.0-1.el9.noarch 86/121
2026-03-10T10:13:07.532 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jwt+crypto-2.4.0-1.el9.noarch 87/121
2026-03-10T10:13:07.536 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 88/121
2026-03-10T10:13:07.541 INFO:teuthology.orchestra.run.vm05.stdout: Installing : mailcap-2.1.49-5.el9.noarch 89/121
2026-03-10T10:13:07.544 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 90/121
2026-03-10T10:13:07.562 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 91/121
2026-03-10T10:13:07.562 INFO:teuthology.orchestra.run.vm05.stdout:Creating group 'libstoragemgmt' with GID 994.
2026-03-10T10:13:07.562 INFO:teuthology.orchestra.run.vm05.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 994 and GID 994.
2026-03-10T10:13:07.562 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:13:07.574 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 91/121
2026-03-10T10:13:07.600 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 91/121
2026-03-10T10:13:07.600 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T10:13:07.600 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:13:07.617 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 92/121
2026-03-10T10:13:07.670 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: cephadm-2:18.2.1-0.el9.noarch 93/121
2026-03-10T10:13:07.673 INFO:teuthology.orchestra.run.vm05.stdout: Installing : cephadm-2:18.2.1-0.el9.noarch 93/121
2026-03-10T10:13:07.678 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-prometheus-alerts-2:18.2.1-0.el9.noarch 94/121
2026-03-10T10:13:07.704 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-grafana-dashboards-2:18.2.1-0.el9.noarch 95/121
2026-03-10T10:13:07.708 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ceph-common-2:18.2.1-0.el9.x86_64 96/121
2026-03-10T10:13:08.703 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:18.2.1-0.el9.x86_64 97/121
2026-03-10T10:13:08.709 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-common-2:18.2.1-0.el9.x86_64 97/121
2026-03-10T10:13:09.031 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:18.2.1-0.el9.x86_64 97/121
2026-03-10T10:13:09.038 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-base-2:18.2.1-0.el9.x86_64 98/121
2026-03-10T10:13:09.083 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:18.2.1-0.el9.x86_64 98/121
2026-03-10T10:13:09.083 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-10T10:13:09.083 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-10T10:13:09.083 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:13:09.089 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-selinux-2:18.2.1-0.el9.x86_64 99/121
2026-03-10T10:13:15.823 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-selinux-2:18.2.1-0.el9.x86_64 99/121
2026-03-10T10:13:15.823 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /sys
2026-03-10T10:13:15.823 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /proc
2026-03-10T10:13:15.823 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /mnt
2026-03-10T10:13:15.823 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /var/tmp
2026-03-10T10:13:15.823 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /home
2026-03-10T10:13:15.823 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /root
2026-03-10T10:13:15.823 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /tmp
2026-03-10T10:13:15.823 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:13:15.858 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 100/121
2026-03-10T10:13:16.011 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 100/121
2026-03-10T10:13:16.017 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 101/121
2026-03-10T10:13:16.634 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 101/121
2026-03-10T10:13:16.636 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noa 102/121
2026-03-10T10:13:16.707 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noa 102/121
2026-03-10T10:13:16.789 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-modules-core-2:18.2.1-0.el9.noarch 103/121
2026-03-10T10:13:16.792 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-2:18.2.1-0.el9.x86_64 104/121
2026-03-10T10:13:16.822 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:18.2.1-0.el9.x86_64 104/121
2026-03-10T10:13:16.822 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T10:13:16.822 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T10:13:16.822 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T10:13:16.822 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T10:13:16.822 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:13:16.840 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-rook-2:18.2.1-0.el9.noarch 105/121
2026-03-10T10:13:16.963 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-rook-2:18.2.1-0.el9.noarch 105/121
2026-03-10T10:13:17.001 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mds-2:18.2.1-0.el9.x86_64 106/121
2026-03-10T10:13:17.029 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:18.2.1-0.el9.x86_64 106/121
2026-03-10T10:13:17.029 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T10:13:17.029 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T10:13:17.029 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T10:13:17.029 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T10:13:17.029 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:13:17.304 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mon-2:18.2.1-0.el9.x86_64 107/121
2026-03-10T10:13:17.330 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:18.2.1-0.el9.x86_64 107/121
2026-03-10T10:13:17.330 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T10:13:17.330 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T10:13:17.330 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T10:13:17.330 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T10:13:17.330 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:13:18.199 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-osd-2:18.2.1-0.el9.x86_64 108/121
2026-03-10T10:13:18.224 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:18.2.1-0.el9.x86_64 108/121
2026-03-10T10:13:18.224 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T10:13:18.224 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T10:13:18.224 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T10:13:18.224 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T10:13:18.224 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:13:18.607 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-2:18.2.1-0.el9.x86_64 109/121
2026-03-10T10:13:18.611 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-radosgw-2:18.2.1-0.el9.x86_64 110/121
2026-03-10T10:13:18.632 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:18.2.1-0.el9.x86_64 110/121
2026-03-10T10:13:18.632 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T10:13:18.632 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T10:13:18.632 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T10:13:18.632 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T10:13:18.633 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:13:18.643 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-immutable-object-cache-2:18.2.1-0.el9.x86_6 111/121
2026-03-10T10:13:18.667 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.1-0.el9.x86_6 111/121
2026-03-10T10:13:18.667 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T10:13:18.667 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T10:13:18.667 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:13:18.818 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-mirror-2:18.2.1-0.el9.x86_64 112/121
2026-03-10T10:13:18.840 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:18.2.1-0.el9.x86_64 112/121
2026-03-10T10:13:18.840 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T10:13:18.840 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T10:13:18.840 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T10:13:18.840 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T10:13:18.840 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:13:20.090 INFO:teuthology.orchestra.run.vm02.stdout:(119/119): python3-scipy-1.9.3-2.el9.x86_64.rpm 973 kB/s | 19 MB 00:20
2026-03-10T10:13:20.092 INFO:teuthology.orchestra.run.vm02.stdout:--------------------------------------------------------------------------------
2026-03-10T10:13:20.092 INFO:teuthology.orchestra.run.vm02.stdout:Total 5.4 MB/s | 182 MB 00:34
2026-03-10T10:13:20.493 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T10:13:20.544 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T10:13:20.544 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T10:13:20.905 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-test-2:18.2.1-0.el9.x86_64 113/121
2026-03-10T10:13:20.916 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-fuse-2:18.2.1-0.el9.x86_64 114/121
2026-03-10T10:13:20.922 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-nbd-2:18.2.1-0.el9.x86_64 115/121
2026-03-10T10:13:20.962 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs-devel-2:18.2.1-0.el9.x86_64 116/121
2026-03-10T10:13:20.968 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-fuse-2:18.2.1-0.el9.x86_64 117/121
2026-03-10T10:13:20.976 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 118/121
2026-03-10T10:13:20.980 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 119/121
2026-03-10T10:13:20.980 INFO:teuthology.orchestra.run.vm05.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 120/121
2026-03-10T10:13:20.995 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 120/121
2026-03-10T10:13:20.995 INFO:teuthology.orchestra.run.vm05.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 121/121
2026-03-10T10:13:21.344 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T10:13:21.344 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T10:13:22.173 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T10:13:22.183 INFO:teuthology.orchestra.run.vm02.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/121
2026-03-10T10:13:22.196 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/121
2026-03-10T10:13:22.241 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 121/121
2026-03-10T10:13:22.242 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-2:18.2.1-0.el9.x86_64 1/121
2026-03-10T10:13:22.242 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-base-2:18.2.1-0.el9.x86_64 2/121
2026-03-10T10:13:22.242 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-common-2:18.2.1-0.el9.x86_64 3/121
2026-03-10T10:13:22.242 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-fuse-2:18.2.1-0.el9.x86_64 4/121
2026-03-10T10:13:22.242 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-immutable-object-cache-2:18.2.1-0.el9.x86_6 5/121
2026-03-10T10:13:22.242 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mds-2:18.2.1-0.el9.x86_64 6/121
2026-03-10T10:13:22.242 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-2:18.2.1-0.el9.x86_64 7/121
2026-03-10T10:13:22.243 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mon-2:18.2.1-0.el9.x86_64 8/121
2026-03-10T10:13:22.243 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-osd-2:18.2.1-0.el9.x86_64 9/121
2026-03-10T10:13:22.243 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-radosgw-2:18.2.1-0.el9.x86_64 10/121
2026-03-10T10:13:22.243 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-selinux-2:18.2.1-0.el9.x86_64 11/121
2026-03-10T10:13:22.243 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-test-2:18.2.1-0.el9.x86_64 12/121
2026-03-10T10:13:22.243
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs-devel-2:18.2.1-0.el9.x86_64 13/121 2026-03-10T10:13:22.243 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs2-2:18.2.1-0.el9.x86_64 14/121 2026-03-10T10:13:22.243 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephsqlite-2:18.2.1-0.el9.x86_64 15/121 2026-03-10T10:13:22.243 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados-devel-2:18.2.1-0.el9.x86_64 16/121 2026-03-10T10:13:22.243 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libradosstriper1-2:18.2.1-0.el9.x86_64 17/121 2026-03-10T10:13:22.243 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librgw2-2:18.2.1-0.el9.x86_64 18/121 2026-03-10T10:13:22.243 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-argparse-2:18.2.1-0.el9.x86_64 19/121 2026-03-10T10:13:22.243 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-common-2:18.2.1-0.el9.x86_64 20/121 2026-03-10T10:13:22.244 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cephfs-2:18.2.1-0.el9.x86_64 21/121 2026-03-10T10:13:22.244 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rados-2:18.2.1-0.el9.x86_64 22/121 2026-03-10T10:13:22.244 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rbd-2:18.2.1-0.el9.x86_64 23/121 2026-03-10T10:13:22.244 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rgw-2:18.2.1-0.el9.x86_64 24/121 2026-03-10T10:13:22.245 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-fuse-2:18.2.1-0.el9.x86_64 25/121 2026-03-10T10:13:22.245 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-mirror-2:18.2.1-0.el9.x86_64 26/121 2026-03-10T10:13:22.245 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-nbd-2:18.2.1-0.el9.x86_64 27/121 2026-03-10T10:13:22.245 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-grafana-dashboards-2:18.2.1-0.el9.noarch 28/121 2026-03-10T10:13:22.245 INFO:teuthology.orchestra.run.vm05.stdout: 
Verifying : ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 29/121 2026-03-10T10:13:22.245 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 30/121 2026-03-10T10:13:22.245 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noa 31/121 2026-03-10T10:13:22.245 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-modules-core-2:18.2.1-0.el9.noarch 32/121 2026-03-10T10:13:22.245 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-rook-2:18.2.1-0.el9.noarch 33/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-prometheus-alerts-2:18.2.1-0.el9.noarch 34/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cephadm-2:18.2.1-0.el9.noarch 35/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 36/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 37/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 38/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 39/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 40/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 41/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 42/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ply-3.11-14.el9.noarch 43/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 44/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-requests-2.25.1-10.el9.noarch 45/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 46/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 47/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 48/121 2026-03-10T10:13:22.246 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 49/121 2026-03-10T10:13:22.247 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 50/121 2026-03-10T10:13:22.247 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 51/121 2026-03-10T10:13:22.247 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 52/121 2026-03-10T10:13:22.247 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 53/121 2026-03-10T10:13:22.247 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 54/121 2026-03-10T10:13:22.247 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 55/121 2026-03-10T10:13:22.247 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 56/121 2026-03-10T10:13:22.247 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 57/121 2026-03-10T10:13:22.247 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 58/121 2026-03-10T10:13:22.247 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 59/121 2026-03-10T10:13:22.248 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 60/121 2026-03-10T10:13:22.248 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 61/121 2026-03-10T10:13:22.248 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 62/121 2026-03-10T10:13:22.248 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 63/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 64/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 65/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 66/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 67/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 68/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 69/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 70/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 71/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 72/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 73/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 74/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : fmt-8.1.1-5.el9.x86_64 75/121 2026-03-10T10:13:22.249 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 76/121 2026-03-10T10:13:22.250 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 77/121 2026-03-10T10:13:22.250 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
libarrow-doc-9.0.0-15.el9.noarch 78/121 2026-03-10T10:13:22.250 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 79/121 2026-03-10T10:13:22.250 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 80/121 2026-03-10T10:13:22.250 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 81/121 2026-03-10T10:13:22.250 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 82/121 2026-03-10T10:13:22.250 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 83/121 2026-03-10T10:13:22.250 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 84/121 2026-03-10T10:13:22.250 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 85/121 2026-03-10T10:13:22.250 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 86/121 2026-03-10T10:13:22.250 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 87/121 2026-03-10T10:13:22.250 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 88/121 2026-03-10T10:13:22.251 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 89/121 2026-03-10T10:13:22.251 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 90/121 2026-03-10T10:13:22.251 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 91/121 2026-03-10T10:13:22.251 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 92/121 2026-03-10T10:13:22.251 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 93/121 2026-03-10T10:13:22.251 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-jaraco-context-6.0.1-3.el9.noarch 94/121 2026-03-10T10:13:22.251 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 95/121 2026-03-10T10:13:22.251 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 96/121 2026-03-10T10:13:22.251 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jwt+crypto-2.4.0-1.el9.noarch 97/121 2026-03-10T10:13:22.252 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jwt-2.4.0-1.el9.noarch 98/121 2026-03-10T10:13:22.252 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 99/121 2026-03-10T10:13:22.252 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 100/121 2026-03-10T10:13:22.252 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 101/121 2026-03-10T10:13:22.252 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 102/121 2026-03-10T10:13:22.252 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 103/121 2026-03-10T10:13:22.252 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 104/121 2026-03-10T10:13:22.252 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 105/121 2026-03-10T10:13:22.252 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 106/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 107/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 108/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 109/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-typing-extensions-4.15.0-1.el9.noarch 110/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 111/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 112/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 113/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 114/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 115/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : re2-1:20211101-20.el9.x86_64 116/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 117/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:18.2.1-0.el9.x86_64 118/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 119/121 2026-03-10T10:13:22.253 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:18.2.1-0.el9.x86_64 120/121 2026-03-10T10:13:22.368 INFO:teuthology.orchestra.run.vm02.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/121 2026-03-10T10:13:22.370 INFO:teuthology.orchestra.run.vm02.stdout: Upgrading : librados2-2:18.2.1-0.el9.x86_64 4/121 2026-03-10T10:13:22.400 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 121/121 2026-03-10T10:13:22.400 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout:Upgraded: 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: librados2-2:18.2.1-0.el9.x86_64 librbd1-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout:Installed: 2026-03-10T10:13:22.401 
INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards-2:18.2.1-0.el9.noarch 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noarch 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core-2:18.2.1-0.el9.noarch 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook-2:18.2.1-0.el9.noarch 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts-2:18.2.1-0.el9.noarch 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 
INFO:teuthology.orchestra.run.vm05.stdout: ceph-test-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: cephadm-2:18.2.1-0.el9.noarch 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: fmt-8.1.1-5.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T10:13:22.401 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.402 
INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: librgw2-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: 
python3-cephfs-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt-2.4.0-1.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt+crypto-2.4.0-1.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T10:13:22.402 
INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T10:13:22.402 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T10:13:22.403 
INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: 
thrift-0.15.0-4.el9.x86_64 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:13:22.403 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-10T10:13:22.415 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados2-2:18.2.1-0.el9.x86_64 4/121 2026-03-10T10:13:22.416 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephfs2-2:18.2.1-0.el9.x86_64 5/121 2026-03-10T10:13:22.446 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephfs2-2:18.2.1-0.el9.x86_64 5/121 2026-03-10T10:13:22.455 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rados-2:18.2.1-0.el9.x86_64 6/121 2026-03-10T10:13:22.458 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/121 2026-03-10T10:13:22.470 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/121 2026-03-10T10:13:22.480 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/121 2026-03-10T10:13:22.481 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephsqlite-2:18.2.1-0.el9.x86_64 10/121 2026-03-10T10:13:22.513 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephsqlite-2:18.2.1-0.el9.x86_64 10/121 2026-03-10T10:13:22.515 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libradosstriper1-2:18.2.1-0.el9.x86_64 11/121 2026-03-10T10:13:22.535 DEBUG:teuthology.parallel:result is None 2026-03-10T10:13:22.560 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libradosstriper1-2:18.2.1-0.el9.x86_64 11/121 2026-03-10T10:13:22.565 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 12/121 2026-03-10T10:13:22.592 INFO:teuthology.orchestra.run.vm02.stdout: Installing : liboath-2.6.12-1.el9.x86_64 13/121 2026-03-10T10:13:22.600 INFO:teuthology.orchestra.run.vm02.stdout: 
Installing : python3-pyasn1-0.4.8-7.el9.noarch 14/121 2026-03-10T10:13:22.604 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 15/121 2026-03-10T10:13:22.630 INFO:teuthology.orchestra.run.vm02.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 16/121 2026-03-10T10:13:22.646 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 17/121 2026-03-10T10:13:22.650 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 18/121 2026-03-10T10:13:22.658 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 19/121 2026-03-10T10:13:22.660 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 20/121 2026-03-10T10:13:22.665 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 21/121 2026-03-10T10:13:22.675 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-ceph-argparse-2:18.2.1-0.el9.x86_64 22/121 2026-03-10T10:13:22.689 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cephfs-2:18.2.1-0.el9.x86_64 23/121 2026-03-10T10:13:22.719 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 24/121 2026-03-10T10:13:22.778 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 25/121 2026-03-10T10:13:22.795 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 26/121 2026-03-10T10:13:22.804 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rsa-4.9-2.el9.noarch 27/121 2026-03-10T10:13:22.813 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 28/121 2026-03-10T10:13:22.818 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librados-devel-2:18.2.1-0.el9.x86_64 29/121 2026-03-10T10:13:22.851 INFO:teuthology.orchestra.run.vm02.stdout: Installing : 
re2-1:20211101-20.el9.x86_64 30/121 2026-03-10T10:13:22.857 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 31/121 2026-03-10T10:13:22.875 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 32/121 2026-03-10T10:13:22.903 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 33/121 2026-03-10T10:13:22.909 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 34/121 2026-03-10T10:13:22.917 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 35/121 2026-03-10T10:13:22.931 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 36/121 2026-03-10T10:13:22.943 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 37/121 2026-03-10T10:13:22.955 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 38/121 2026-03-10T10:13:23.018 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 39/121 2026-03-10T10:13:23.027 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 40/121 2026-03-10T10:13:23.036 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 41/121 2026-03-10T10:13:23.083 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 42/121 2026-03-10T10:13:23.475 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 43/121 2026-03-10T10:13:23.502 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 44/121 2026-03-10T10:13:23.506 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 45/121 2026-03-10T10:13:23.514 INFO:teuthology.orchestra.run.vm02.stdout: Installing : 
python3-jaraco-context-6.0.1-3.el9.noarch 46/121 2026-03-10T10:13:23.518 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 47/121 2026-03-10T10:13:23.526 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 48/121 2026-03-10T10:13:23.529 INFO:teuthology.orchestra.run.vm02.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 49/121 2026-03-10T10:13:23.532 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 50/121 2026-03-10T10:13:23.542 INFO:teuthology.orchestra.run.vm02.stdout: Installing : fmt-8.1.1-5.el9.x86_64 51/121 2026-03-10T10:13:23.549 INFO:teuthology.orchestra.run.vm02.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 52/121 2026-03-10T10:13:23.554 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 53/121 2026-03-10T10:13:23.562 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 54/121 2026-03-10T10:13:23.567 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 55/121 2026-03-10T10:13:23.576 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 56/121 2026-03-10T10:13:23.581 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 57/121 2026-03-10T10:13:23.623 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 58/121 2026-03-10T10:13:23.889 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 59/121 2026-03-10T10:13:23.921 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 60/121 2026-03-10T10:13:23.927 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 61/121 2026-03-10T10:13:23.989 INFO:teuthology.orchestra.run.vm02.stdout: Installing : openblas-0.3.29-1.el9.x86_64 62/121 
2026-03-10T10:13:23.992 INFO:teuthology.orchestra.run.vm02.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 63/121 2026-03-10T10:13:24.015 INFO:teuthology.orchestra.run.vm02.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 64/121 2026-03-10T10:13:24.388 INFO:teuthology.orchestra.run.vm02.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 65/121 2026-03-10T10:13:24.473 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 66/121 2026-03-10T10:13:25.246 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 67/121 2026-03-10T10:13:25.275 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 68/121 2026-03-10T10:13:25.282 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 69/121 2026-03-10T10:13:25.287 INFO:teuthology.orchestra.run.vm02.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 70/121 2026-03-10T10:13:25.445 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 71/121 2026-03-10T10:13:25.447 INFO:teuthology.orchestra.run.vm02.stdout: Upgrading : librbd1-2:18.2.1-0.el9.x86_64 72/121 2026-03-10T10:13:25.478 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librbd1-2:18.2.1-0.el9.x86_64 72/121 2026-03-10T10:13:25.481 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rbd-2:18.2.1-0.el9.x86_64 73/121 2026-03-10T10:13:25.488 INFO:teuthology.orchestra.run.vm02.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 74/121 2026-03-10T10:13:25.710 INFO:teuthology.orchestra.run.vm02.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 75/121 2026-03-10T10:13:25.713 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librgw2-2:18.2.1-0.el9.x86_64 76/121 2026-03-10T10:13:25.730 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librgw2-2:18.2.1-0.el9.x86_64 76/121 2026-03-10T10:13:25.738 
INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rgw-2:18.2.1-0.el9.x86_64 77/121 2026-03-10T10:13:25.754 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-ply-3.11-14.el9.noarch 78/121 2026-03-10T10:13:25.773 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 79/121 2026-03-10T10:13:25.859 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 80/121 2026-03-10T10:13:25.873 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 81/121 2026-03-10T10:13:25.901 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 82/121 2026-03-10T10:13:25.939 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 83/121 2026-03-10T10:13:26.000 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 84/121 2026-03-10T10:13:26.014 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 85/121 2026-03-10T10:13:26.017 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jwt-2.4.0-1.el9.noarch 86/121 2026-03-10T10:13:26.024 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jwt+crypto-2.4.0-1.el9.noarch 87/121 2026-03-10T10:13:26.028 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 88/121 2026-03-10T10:13:26.033 INFO:teuthology.orchestra.run.vm02.stdout: Installing : mailcap-2.1.49-5.el9.noarch 89/121 2026-03-10T10:13:26.036 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 90/121 2026-03-10T10:13:26.056 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 91/121 2026-03-10T10:13:26.056 INFO:teuthology.orchestra.run.vm02.stdout:Creating group 'libstoragemgmt' with GID 994. 
2026-03-10T10:13:26.056 INFO:teuthology.orchestra.run.vm02.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 994 and GID 994. 2026-03-10T10:13:26.056 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:26.069 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 91/121 2026-03-10T10:13:26.096 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 91/121 2026-03-10T10:13:26.096 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-10T10:13:26.096 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:26.113 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 92/121 2026-03-10T10:13:26.160 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: cephadm-2:18.2.1-0.el9.noarch 93/121 2026-03-10T10:13:26.163 INFO:teuthology.orchestra.run.vm02.stdout: Installing : cephadm-2:18.2.1-0.el9.noarch 93/121 2026-03-10T10:13:26.169 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-prometheus-alerts-2:18.2.1-0.el9.noarch 94/121 2026-03-10T10:13:26.197 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-grafana-dashboards-2:18.2.1-0.el9.noarch 95/121 2026-03-10T10:13:26.201 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-ceph-common-2:18.2.1-0.el9.x86_64 96/121 2026-03-10T10:13:27.180 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-common-2:18.2.1-0.el9.x86_64 97/121 2026-03-10T10:13:27.210 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-common-2:18.2.1-0.el9.x86_64 97/121 2026-03-10T10:13:27.533 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-common-2:18.2.1-0.el9.x86_64 97/121 2026-03-10T10:13:27.540 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-base-2:18.2.1-0.el9.x86_64 98/121 
2026-03-10T10:13:27.584 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-base-2:18.2.1-0.el9.x86_64 98/121 2026-03-10T10:13:27.584 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-10T10:13:27.584 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 2026-03-10T10:13:27.584 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:27.589 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-selinux-2:18.2.1-0.el9.x86_64 99/121 2026-03-10T10:13:34.153 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-selinux-2:18.2.1-0.el9.x86_64 99/121 2026-03-10T10:13:34.153 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /sys 2026-03-10T10:13:34.153 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /proc 2026-03-10T10:13:34.153 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /mnt 2026-03-10T10:13:34.153 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /var/tmp 2026-03-10T10:13:34.153 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /home 2026-03-10T10:13:34.153 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /root 2026-03-10T10:13:34.153 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /tmp 2026-03-10T10:13:34.153 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:34.183 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 100/121 2026-03-10T10:13:34.309 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 100/121 2026-03-10T10:13:34.314 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 101/121 2026-03-10T10:13:34.828 INFO:teuthology.orchestra.run.vm02.stdout: Running 
scriptlet: ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 101/121 2026-03-10T10:13:34.831 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noa 102/121 2026-03-10T10:13:34.890 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noa 102/121 2026-03-10T10:13:34.966 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-modules-core-2:18.2.1-0.el9.noarch 103/121 2026-03-10T10:13:34.968 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-2:18.2.1-0.el9.x86_64 104/121 2026-03-10T10:13:34.988 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-2:18.2.1-0.el9.x86_64 104/121 2026-03-10T10:13:34.988 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:13:34.988 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T10:13:34.988 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-10T10:13:34.988 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 
2026-03-10T10:13:34.988 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:35.001 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-rook-2:18.2.1-0.el9.noarch 105/121 2026-03-10T10:13:35.108 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-rook-2:18.2.1-0.el9.noarch 105/121 2026-03-10T10:13:35.111 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mds-2:18.2.1-0.el9.x86_64 106/121 2026-03-10T10:13:35.131 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mds-2:18.2.1-0.el9.x86_64 106/121 2026-03-10T10:13:35.131 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:13:35.131 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-10T10:13:35.131 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T10:13:35.131 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T10:13:35.131 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:35.360 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mon-2:18.2.1-0.el9.x86_64 107/121 2026-03-10T10:13:35.379 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mon-2:18.2.1-0.el9.x86_64 107/121 2026-03-10T10:13:35.379 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:13:35.379 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-10T10:13:35.379 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 
2026-03-10T10:13:35.379 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-10T10:13:35.379 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:36.218 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-osd-2:18.2.1-0.el9.x86_64 108/121 2026-03-10T10:13:36.238 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-osd-2:18.2.1-0.el9.x86_64 108/121 2026-03-10T10:13:36.238 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:13:36.238 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T10:13:36.238 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-10T10:13:36.238 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-10T10:13:36.239 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:36.617 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-2:18.2.1-0.el9.x86_64 109/121 2026-03-10T10:13:36.621 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-radosgw-2:18.2.1-0.el9.x86_64 110/121 2026-03-10T10:13:36.639 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-radosgw-2:18.2.1-0.el9.x86_64 110/121 2026-03-10T10:13:36.639 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:13:36.640 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 
2026-03-10T10:13:36.640 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-10T10:13:36.640 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-10T10:13:36.640 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:36.651 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-immutable-object-cache-2:18.2.1-0.el9.x86_6 111/121 2026-03-10T10:13:36.667 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.1-0.el9.x86_6 111/121 2026-03-10T10:13:36.667 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:13:36.667 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-10T10:13:36.667 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:36.811 INFO:teuthology.orchestra.run.vm02.stdout: Installing : rbd-mirror-2:18.2.1-0.el9.x86_64 112/121 2026-03-10T10:13:36.829 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: rbd-mirror-2:18.2.1-0.el9.x86_64 112/121 2026-03-10T10:13:36.830 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:13:36.830 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-10T10:13:36.830 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-10T10:13:36.830 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-10T10:13:36.830 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:38.839 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-test-2:18.2.1-0.el9.x86_64 113/121 2026-03-10T10:13:38.850 INFO:teuthology.orchestra.run.vm02.stdout: Installing : rbd-fuse-2:18.2.1-0.el9.x86_64 114/121 2026-03-10T10:13:38.856 INFO:teuthology.orchestra.run.vm02.stdout: Installing : rbd-nbd-2:18.2.1-0.el9.x86_64 115/121 2026-03-10T10:13:38.896 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephfs-devel-2:18.2.1-0.el9.x86_64 116/121 2026-03-10T10:13:38.902 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-fuse-2:18.2.1-0.el9.x86_64 117/121 2026-03-10T10:13:38.910 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 118/121 2026-03-10T10:13:38.914 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 119/121 2026-03-10T10:13:38.914 INFO:teuthology.orchestra.run.vm02.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 120/121 2026-03-10T10:13:38.929 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 120/121 2026-03-10T10:13:38.929 INFO:teuthology.orchestra.run.vm02.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 121/121 2026-03-10T10:13:40.111 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 121/121 2026-03-10T10:13:40.111 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-2:18.2.1-0.el9.x86_64 1/121 2026-03-10T10:13:40.111 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-base-2:18.2.1-0.el9.x86_64 2/121 2026-03-10T10:13:40.111 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-common-2:18.2.1-0.el9.x86_64 3/121 2026-03-10T10:13:40.111 
INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-fuse-2:18.2.1-0.el9.x86_64 4/121 2026-03-10T10:13:40.111 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-immutable-object-cache-2:18.2.1-0.el9.x86_6 5/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mds-2:18.2.1-0.el9.x86_64 6/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-2:18.2.1-0.el9.x86_64 7/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mon-2:18.2.1-0.el9.x86_64 8/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-osd-2:18.2.1-0.el9.x86_64 9/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-radosgw-2:18.2.1-0.el9.x86_64 10/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-selinux-2:18.2.1-0.el9.x86_64 11/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-test-2:18.2.1-0.el9.x86_64 12/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs-devel-2:18.2.1-0.el9.x86_64 13/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs2-2:18.2.1-0.el9.x86_64 14/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephsqlite-2:18.2.1-0.el9.x86_64 15/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados-devel-2:18.2.1-0.el9.x86_64 16/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libradosstriper1-2:18.2.1-0.el9.x86_64 17/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librgw2-2:18.2.1-0.el9.x86_64 18/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ceph-argparse-2:18.2.1-0.el9.x86_64 19/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : 
python3-ceph-common-2:18.2.1-0.el9.x86_64 20/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cephfs-2:18.2.1-0.el9.x86_64 21/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rados-2:18.2.1-0.el9.x86_64 22/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rbd-2:18.2.1-0.el9.x86_64 23/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rgw-2:18.2.1-0.el9.x86_64 24/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-fuse-2:18.2.1-0.el9.x86_64 25/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-mirror-2:18.2.1-0.el9.x86_64 26/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-nbd-2:18.2.1-0.el9.x86_64 27/121 2026-03-10T10:13:40.112 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-grafana-dashboards-2:18.2.1-0.el9.noarch 28/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 29/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 30/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noa 31/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-modules-core-2:18.2.1-0.el9.noarch 32/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-rook-2:18.2.1-0.el9.noarch 33/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-prometheus-alerts-2:18.2.1-0.el9.noarch 34/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : cephadm-2:18.2.1-0.el9.noarch 35/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : 
ledmon-libs-1.1.0-3.el9.x86_64 36/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 37/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 38/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 39/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 40/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 41/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 42/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ply-3.11-14.el9.noarch 43/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 44/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 45/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 46/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 47/121 2026-03-10T10:13:40.113 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 48/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 49/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 50/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 51/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 52/121 2026-03-10T10:13:40.114 
INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 53/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 54/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 55/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 56/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 57/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 58/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 59/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 60/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 61/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 62/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 63/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 64/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 65/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 66/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 67/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 68/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : 
python3-pyasn1-modules-0.4.8-7.el9.noarch 69/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 70/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 71/121 2026-03-10T10:13:40.114 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 72/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 73/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 74/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : fmt-8.1.1-5.el9.x86_64 75/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 76/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 77/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 78/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 79/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 80/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 81/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 82/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 83/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 84/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 85/121 2026-03-10T10:13:40.115 
INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 86/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 87/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 88/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 89/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 90/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 91/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 92/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 93/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 94/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 95/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 96/121 2026-03-10T10:13:40.115 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jwt+crypto-2.4.0-1.el9.noarch 97/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jwt-2.4.0-1.el9.noarch 98/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 99/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 100/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 101/121 
2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 102/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 103/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 104/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 105/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 106/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 107/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 108/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 109/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 110/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 111/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 112/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 113/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 114/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 115/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : re2-1:20211101-20.el9.x86_64 116/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 117/121 2026-03-10T10:13:40.116 
INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados2-2:18.2.1-0.el9.x86_64 118/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 119/121 2026-03-10T10:13:40.116 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librbd1-2:18.2.1-0.el9.x86_64 120/121 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 121/121 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout:Upgraded: 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: librados2-2:18.2.1-0.el9.x86_64 librbd1-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout:Installed: 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards-2:18.2.1-0.el9.noarch 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 2026-03-10T10:13:40.215 
INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noarch 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core-2:18.2.1-0.el9.noarch 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook-2:18.2.1-0.el9.noarch 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts-2:18.2.1-0.el9.noarch 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: cephadm-2:18.2.1-0.el9.noarch 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: fmt-8.1.1-5.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: 
libcephfs2-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: librgw2-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: 
python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T10:13:40.215 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T10:13:40.216 
INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-jwt-2.4.0-1.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-jwt+crypto-2.4.0-1.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T10:13:40.216 
INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T10:13:40.216 
INFO:teuthology.orchestra.run.vm02.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd-2:18.2.1-0.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T10:13:40.216 INFO:teuthology.orchestra.run.vm02.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T10:13:40.217 INFO:teuthology.orchestra.run.vm02.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T10:13:40.217 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T10:13:40.217 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:13:40.217 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 
2026-03-10T10:13:40.297 DEBUG:teuthology.parallel:result is None 2026-03-10T10:13:40.297 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using tag 2026-03-10T10:13:40.297 INFO:teuthology.packaging:ref: None 2026-03-10T10:13:40.297 INFO:teuthology.packaging:tag: v18.2.1 2026-03-10T10:13:40.297 INFO:teuthology.packaging:branch: None 2026-03-10T10:13:40.297 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:13:40.297 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=7fe91d5d5842e04be3b4f514d6dd990c54b29c76 2026-03-10T10:13:40.901 DEBUG:teuthology.orchestra.run.vm02:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T10:13:40.920 INFO:teuthology.orchestra.run.vm02.stdout:18.2.1-0.el9 2026-03-10T10:13:40.920 INFO:teuthology.packaging:The installed version of ceph is 18.2.1-0.el9 2026-03-10T10:13:40.920 INFO:teuthology.task.install:The correct ceph version 18.2.1-0 is installed. 
2026-03-10T10:13:40.921 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using tag 2026-03-10T10:13:40.921 INFO:teuthology.packaging:ref: None 2026-03-10T10:13:40.921 INFO:teuthology.packaging:tag: v18.2.1 2026-03-10T10:13:40.921 INFO:teuthology.packaging:branch: None 2026-03-10T10:13:40.921 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:13:40.921 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=7fe91d5d5842e04be3b4f514d6dd990c54b29c76 2026-03-10T10:13:41.611 DEBUG:teuthology.orchestra.run.vm05:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T10:13:41.633 INFO:teuthology.orchestra.run.vm05.stdout:18.2.1-0.el9 2026-03-10T10:13:41.633 INFO:teuthology.packaging:The installed version of ceph is 18.2.1-0.el9 2026-03-10T10:13:41.633 INFO:teuthology.task.install:The correct ceph version 18.2.1-0 is installed. 2026-03-10T10:13:41.633 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-10T10:13:41.634 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:13:41.634 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T10:13:41.659 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T10:13:41.659 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T10:13:41.702 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 
2026-03-10T10:13:41.702 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:13:41.702 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T10:13:41.726 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T10:13:41.790 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T10:13:41.790 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T10:13:41.813 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T10:13:41.876 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-10T10:13:41.876 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:13:41.876 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T10:13:41.901 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T10:13:41.967 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T10:13:42.006 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T10:13:42.029 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T10:13:42.095 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-10T10:13:42.096 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:13:42.096 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T10:13:42.121 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T10:13:42.188 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T10:13:42.188 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T10:13:42.212 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T10:13:42.276 INFO:teuthology.run_tasks:Running task print... 2026-03-10T10:13:42.278 INFO:teuthology.task.print:**** done install task... 
2026-03-10T10:13:42.278 INFO:teuthology.run_tasks:Running task cephadm... 2026-03-10T10:13:42.322 INFO:tasks.cephadm:Config: {'compiled_cephadm_branch': 'reef', 'conf': {'osd': {'osd_class_default_list': '*', 'osd_class_load_list': '*', 'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'bitmap', 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd op complaint time': 180}, 'client': {'client mount timeout': 600, 'debug client': 20, 'debug ms': 1, 'rados mon op timeout': 900, 'rados osd op timeout': 900}, 'global': {'mon pg warn min per osd': 0}, 'mds': {'debug mds': 20, 'debug mds balancer': 20, 'debug ms': 1, 'mds debug frag': True, 'mds debug scatterstat': True, 'mds op complaint time': 180, 'mds verify scatter': True, 'osd op complaint time': 180, 'rados mon op timeout': 900, 'rados osd op timeout': 900}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20, 'mon down mkfs grace': 300, 'mon op complaint time': 120}}, 'image': 'quay.io/ceph/ceph:v18.2.1', 'roleless': True, 'cluster-conf': {'mgr': {'client mount timeout': 30, 'debug client': 20, 'debug mgr': 20, 'debug ms': 1, 'mon warn on pool no app': False}}, 'flavor': 'default', 'fs': 'xfs', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'FS_DEGRADED', 'filesystem is degraded', 'FS_INLINE_DATA_DEPRECATED', 'FS_WITH_FAILED_MDS', 'MDS_ALL_DOWN', 'filesystem is offline', 'is offline because no MDS', 'MDS_DAMAGE', 'MDS_DEGRADED', 'MDS_FAILED', 'MDS_INSUFFICIENT_STANDBY', 'MDS_UP_LESS_THAN_MAX', 'online, but wants', 'filesystem is online with fewer MDS than max_mds', 
'POOL_APP_NOT_ENABLED', 'do not have an application enabled', 'overall HEALTH_', 'Replacing daemon', 'deprecated feature inline_data', 'MGR_MODULE_ERROR', 'OSD_DOWN', 'osds down', 'overall HEALTH_', '\\(OSD_DOWN\\)', '\\(OSD_', 'but it is still running', 'is not responding', 'MON_DOWN', 'PG_AVAILABILITY', 'PG_DEGRADED', 'Reduced data availability', 'Degraded data redundancy', 'pg .* is stuck inactive', 'pg .* is .*degraded', 'pg .* is stuck peering'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'} 2026-03-10T10:13:42.323 INFO:tasks.cephadm:Cluster image is quay.io/ceph/ceph:v18.2.1 2026-03-10T10:13:42.323 INFO:tasks.cephadm:Cluster fsid is d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:13:42.323 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-10T10:13:42.323 INFO:tasks.cephadm:No mon roles; fabricating mons 2026-03-10T10:13:42.323 INFO:tasks.cephadm:Monitor IPs: {'mon.vm02': '192.168.123.102', 'mon.vm05': '192.168.123.105'} 2026-03-10T10:13:42.323 INFO:tasks.cephadm:Normalizing hostnames... 
2026-03-10T10:13:42.323 DEBUG:teuthology.orchestra.run.vm02:> sudo hostname $(hostname -s) 2026-03-10T10:13:42.349 DEBUG:teuthology.orchestra.run.vm05:> sudo hostname $(hostname -s) 2026-03-10T10:13:42.377 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra for reef 2026-03-10T10:13:42.377 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:13:42.978 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}] 2026-03-10T10:13:43.697 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref reef, sha1 ab47f43c099b2cbae6e21342fe673ce251da54d6 from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&ref=reef 2026-03-10T10:13:43.698 INFO:tasks.cephadm:Discovered cachra url: https://3.chacra.ceph.com/binaries/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/x86_64/flavors/default/cephadm 2026-03-10T10:13:43.698 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/x86_64/flavors/default/cephadm 2026-03-10T10:13:43.699 
DEBUG:teuthology.orchestra.run.vm02:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T10:13:45.382 INFO:teuthology.orchestra.run.vm02.stdout:-rw-r--r--. 1 ubuntu ubuntu 217462 Mar 10 10:13 /home/ubuntu/cephtest/cephadm 2026-03-10T10:13:45.382 DEBUG:teuthology.orchestra.run.vm05:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T10:13:46.855 INFO:teuthology.orchestra.run.vm05.stdout:-rw-r--r--. 1 ubuntu ubuntu 217462 Mar 10 10:13 /home/ubuntu/cephtest/cephadm 2026-03-10T10:13:46.855 DEBUG:teuthology.orchestra.run.vm02:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T10:13:46.871 DEBUG:teuthology.orchestra.run.vm05:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T10:13:46.893 INFO:tasks.cephadm:Pulling image quay.io/ceph/ceph:v18.2.1 on all hosts... 2026-03-10T10:13:46.893 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 pull 2026-03-10T10:13:46.912 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 pull 2026-03-10T10:13:47.038 INFO:teuthology.orchestra.run.vm02.stderr:Pulling container image quay.io/ceph/ceph:v18.2.1... 2026-03-10T10:13:47.060 INFO:teuthology.orchestra.run.vm05.stderr:Pulling container image quay.io/ceph/ceph:v18.2.1... 
2026-03-10T10:14:04.368 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:14:04.369 INFO:teuthology.orchestra.run.vm02.stdout: "ceph_version": "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)", 2026-03-10T10:14:04.369 INFO:teuthology.orchestra.run.vm02.stdout: "image_id": "5be31c24972a920012b90a9769e8313e2490c82aee752aefd49af9cf4c0f3fcf", 2026-03-10T10:14:04.369 INFO:teuthology.orchestra.run.vm02.stdout: "repo_digests": [ 2026-03-10T10:14:04.369 INFO:teuthology.orchestra.run.vm02.stdout: "quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3", 2026-03-10T10:14:04.369 INFO:teuthology.orchestra.run.vm02.stdout: "quay.io/ceph/ceph@sha256:e8e55db8b4fd270dbec25bc764437a2a3abb707971c4dba5f559fb83018049dc" 2026-03-10T10:14:04.369 INFO:teuthology.orchestra.run.vm02.stdout: ] 2026-03-10T10:14:04.369 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:14:05.414 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T10:14:05.414 INFO:teuthology.orchestra.run.vm05.stdout: "ceph_version": "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)", 2026-03-10T10:14:05.414 INFO:teuthology.orchestra.run.vm05.stdout: "image_id": "5be31c24972a920012b90a9769e8313e2490c82aee752aefd49af9cf4c0f3fcf", 2026-03-10T10:14:05.414 INFO:teuthology.orchestra.run.vm05.stdout: "repo_digests": [ 2026-03-10T10:14:05.414 INFO:teuthology.orchestra.run.vm05.stdout: "quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3", 2026-03-10T10:14:05.414 INFO:teuthology.orchestra.run.vm05.stdout: "quay.io/ceph/ceph@sha256:e8e55db8b4fd270dbec25bc764437a2a3abb707971c4dba5f559fb83018049dc" 2026-03-10T10:14:05.414 INFO:teuthology.orchestra.run.vm05.stdout: ] 2026-03-10T10:14:05.414 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T10:14:05.424 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /etc/ceph 2026-03-10T10:14:05.448 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p 
/etc/ceph 2026-03-10T10:14:05.471 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 777 /etc/ceph 2026-03-10T10:14:05.509 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 777 /etc/ceph 2026-03-10T10:14:05.533 INFO:tasks.cephadm:Writing seed config... 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] osd_class_default_list = * 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] osd_class_load_list = * 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] bdev async discard = True 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] bdev enable discard = True 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] bluestore allocator = bitmap 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] bluestore block size = 96636764160 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] bluestore fsck on mount = True 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] debug bluefs = 1/20 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] debug bluestore = 1/20 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] debug rocksdb = 4/10 2026-03-10T10:14:05.533 INFO:tasks.cephadm: override: [osd] mon osd backfillfull_ratio = 0.85 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [osd] mon osd full ratio = 0.9 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [osd] mon osd nearfull ratio = 0.8 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [osd] osd failsafe full ratio = 0.95 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [osd] osd objectstore = bluestore 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [osd] osd op complaint time = 180 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [client] client mount timeout = 600 
2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [client] debug client = 20 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [client] debug ms = 1 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [client] rados mon op timeout = 900 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [client] rados osd op timeout = 900 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [global] mon pg warn min per osd = 0 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mds] debug mds = 20 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mds] debug mds balancer = 20 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mds] debug ms = 1 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mds] mds debug frag = True 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mds] mds debug scatterstat = True 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mds] mds op complaint time = 180 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mds] mds verify scatter = True 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mds] osd op complaint time = 180 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mds] rados mon op timeout = 900 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mds] rados osd op timeout = 900 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mon] mon down mkfs grace = 300 2026-03-10T10:14:05.534 INFO:tasks.cephadm: override: [mon] mon op complaint time = 120 2026-03-10T10:14:05.534 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:14:05.534 DEBUG:teuthology.orchestra.run.vm02:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 
2026-03-10T10:14:05.564 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = d0ab5dc6-1c69-11f1-8798-3b5e87c3385d
mon pg warn min per osd = 0

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
osd_class_default_list = *
osd_class_load_list = *
bdev async discard = True
bdev enable discard = True
bluestore allocator = bitmap
bluestore block size = 96636764160
bluestore fsck on mount = True
debug bluefs = 1/20
debug bluestore = 1/20
debug ms = 1
debug osd = 20
debug rocksdb = 4/10
mon osd backfillfull_ratio = 0.85
mon osd full ratio = 0.9
mon osd nearfull ratio = 0.8
osd failsafe full ratio = 0.95
osd mclock iops capacity threshold hdd = 49000
osd objectstore = bluestore
osd op complaint time = 180

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
mon down mkfs grace = 300
mon op complaint time = 120

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true

[client]
client mount timeout = 600
debug client = 20
debug ms = 1
rados mon op timeout = 900
rados osd op timeout = 900

[mds]
debug mds = 20
debug mds balancer = 20
debug ms = 1
mds debug frag = True
mds debug scatterstat = True
mds op complaint time = 180
mds verify scatter = True
osd op complaint time = 180
rados mon op timeout = 900
rados osd op timeout = 900
2026-03-10T10:14:05.564 DEBUG:teuthology.orchestra.run.vm02:mon.vm02> sudo journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm02.service
2026-03-10T10:14:05.605 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T10:14:05.605 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 -v bootstrap --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 192.168.123.102 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T10:14:05.713 INFO:teuthology.orchestra.run.vm02.stdout:--------------------------------------------------------------------------------
2026-03-10T10:14:05.713 INFO:teuthology.orchestra.run.vm02.stdout:cephadm ['--image', 'quay.io/ceph/ceph:v18.2.1', '-v', 'bootstrap', '--fsid', 'd0ab5dc6-1c69-11f1-8798-3b5e87c3385d', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-ip', '192.168.123.102', '--skip-admin-label']
2026-03-10T10:14:05.732 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stdout 5.8.0
2026-03-10T10:14:05.732 INFO:teuthology.orchestra.run.vm02.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-10T10:14:05.732 INFO:teuthology.orchestra.run.vm02.stdout:Verifying podman|docker is present...
2026-03-10T10:14:05.749 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stdout 5.8.0
2026-03-10T10:14:05.750 INFO:teuthology.orchestra.run.vm02.stdout:Verifying lvm2 is present...
2026-03-10T10:14:05.750 INFO:teuthology.orchestra.run.vm02.stdout:Verifying time synchronization is in place...
2026-03-10T10:14:05.755 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T10:14:05.756 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T10:14:05.760 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T10:14:05.760 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-10T10:14:05.765 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout enabled
2026-03-10T10:14:05.769 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout active
2026-03-10T10:14:05.769 INFO:teuthology.orchestra.run.vm02.stdout:Unit chronyd.service is enabled and running
2026-03-10T10:14:05.769 INFO:teuthology.orchestra.run.vm02.stdout:Repeating the final host check...
2026-03-10T10:14:05.786 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stdout 5.8.0
2026-03-10T10:14:05.786 INFO:teuthology.orchestra.run.vm02.stdout:podman (/bin/podman) version 5.8.0 is present
2026-03-10T10:14:05.786 INFO:teuthology.orchestra.run.vm02.stdout:systemctl is present
2026-03-10T10:14:05.786 INFO:teuthology.orchestra.run.vm02.stdout:lvcreate is present
2026-03-10T10:14:05.791 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T10:14:05.791 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T10:14:05.796 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T10:14:05.796 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-10T10:14:05.800 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout enabled
2026-03-10T10:14:05.804 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout active
2026-03-10T10:14:05.805 INFO:teuthology.orchestra.run.vm02.stdout:Unit chronyd.service is enabled and running
2026-03-10T10:14:05.805 INFO:teuthology.orchestra.run.vm02.stdout:Host looks OK
2026-03-10T10:14:05.805 INFO:teuthology.orchestra.run.vm02.stdout:Cluster fsid: d0ab5dc6-1c69-11f1-8798-3b5e87c3385d
2026-03-10T10:14:05.805 INFO:teuthology.orchestra.run.vm02.stdout:Acquiring lock 139949305164752 on /run/cephadm/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.lock
2026-03-10T10:14:05.805 INFO:teuthology.orchestra.run.vm02.stdout:Lock 139949305164752 acquired on /run/cephadm/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.lock
2026-03-10T10:14:05.805 INFO:teuthology.orchestra.run.vm02.stdout:Verifying IP 192.168.123.102 port 3300 ...
2026-03-10T10:14:05.806 INFO:teuthology.orchestra.run.vm02.stdout:Verifying IP 192.168.123.102 port 6789 ...
2026-03-10T10:14:05.806 INFO:teuthology.orchestra.run.vm02.stdout:Base mon IP(s) is [192.168.123.102:3300, 192.168.123.102:6789], mon addrv is [v2:192.168.123.102:3300,v1:192.168.123.102:6789]
2026-03-10T10:14:05.808 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.102 metric 100
2026-03-10T10:14:05.808 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.102 metric 100
2026-03-10T10:14:05.810 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T10:14:05.810 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium
2026-03-10T10:14:05.812 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T10:14:05.812 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-10T10:14:05.812 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T10:14:05.812 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000
2026-03-10T10:14:05.812 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:2/64 scope link noprefixroute
2026-03-10T10:14:05.812 INFO:teuthology.orchestra.run.vm02.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T10:14:05.812 INFO:teuthology.orchestra.run.vm02.stdout:Mon IP `192.168.123.102` is in CIDR network `192.168.123.0/24`
2026-03-10T10:14:05.812 INFO:teuthology.orchestra.run.vm02.stdout:Mon IP `192.168.123.102` is in CIDR network `192.168.123.0/24`
2026-03-10T10:14:05.812 INFO:teuthology.orchestra.run.vm02.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24']
2026-03-10T10:14:05.813 INFO:teuthology.orchestra.run.vm02.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T10:14:05.813 INFO:teuthology.orchestra.run.vm02.stdout:Pulling container image quay.io/ceph/ceph:v18.2.1...
2026-03-10T10:14:07.045 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stdout 5be31c24972a920012b90a9769e8313e2490c82aee752aefd49af9cf4c0f3fcf
2026-03-10T10:14:07.045 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stderr Trying to pull quay.io/ceph/ceph:v18.2.1...
2026-03-10T10:14:07.045 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stderr Getting image source signatures
2026-03-10T10:14:07.045 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stderr Copying blob sha256:a733d3c618b71f19c168ebecd1953429dce2c1631835ca182e9551c36dce5989
2026-03-10T10:14:07.045 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stderr Copying blob sha256:7feca07754707458c3945cf0062cf4dabc512f6d90fe1a9a1370b362b6011124
2026-03-10T10:14:07.045 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stderr Copying config sha256:5be31c24972a920012b90a9769e8313e2490c82aee752aefd49af9cf4c0f3fcf
2026-03-10T10:14:07.045 INFO:teuthology.orchestra.run.vm02.stdout:/bin/podman: stderr Writing manifest to image destination
2026-03-10T10:14:07.186 INFO:teuthology.orchestra.run.vm02.stdout:ceph: stdout ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)
2026-03-10T10:14:07.186 INFO:teuthology.orchestra.run.vm02.stdout:Ceph version: ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)
2026-03-10T10:14:07.186 INFO:teuthology.orchestra.run.vm02.stdout:Extracting ceph user uid/gid from container image...
2026-03-10T10:14:07.263 INFO:teuthology.orchestra.run.vm02.stdout:stat: stdout 167 167
2026-03-10T10:14:07.263 INFO:teuthology.orchestra.run.vm02.stdout:Creating initial keys...
2026-03-10T10:14:07.372 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-authtool: stdout AQDv7q9pyQIpFBAASs31tqOZDWvELhhwjUBqsQ==
2026-03-10T10:14:07.454 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-authtool: stdout AQDv7q9puaniGRAAGSh21X3SfEZEv9tSn/xb9g==
2026-03-10T10:14:07.549 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-authtool: stdout AQDv7q9pN3/IHxAAhGepdY8xCPDi5rjAT11VzQ==
2026-03-10T10:14:07.549 INFO:teuthology.orchestra.run.vm02.stdout:Creating initial monmap...
2026-03-10T10:14:07.648 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T10:14:07.648 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = pacific
2026-03-10T10:14:07.648 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to d0ab5dc6-1c69-11f1-8798-3b5e87c3385d
2026-03-10T10:14:07.648 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T10:14:07.648 INFO:teuthology.orchestra.run.vm02.stdout:monmaptool for vm02 [v2:192.168.123.102:3300,v1:192.168.123.102:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T10:14:07.648 INFO:teuthology.orchestra.run.vm02.stdout:setting min_mon_release = pacific
2026-03-10T10:14:07.648 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: set fsid to d0ab5dc6-1c69-11f1-8798-3b5e87c3385d
2026-03-10T10:14:07.648 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T10:14:07.648 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:14:07.648 INFO:teuthology.orchestra.run.vm02.stdout:Creating mon...
2026-03-10T10:14:07.775 INFO:teuthology.orchestra.run.vm02.stdout:create mon.vm02 on
2026-03-10T10:14:07.915 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target".
2026-03-10T10:14:08.024 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-10T10:14:08.149 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.target → /etc/systemd/system/ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.target.
2026-03-10T10:14:08.150 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.target → /etc/systemd/system/ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.target.
2026-03-10T10:14:08.284 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm02
2026-03-10T10:14:08.284 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to reset failed state of unit ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm02.service: Unit ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm02.service not loaded.
2026-03-10T10:14:08.415 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.target.wants/ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm02.service → /etc/systemd/system/ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@.service.
2026-03-10T10:14:08.564 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present
2026-03-10T10:14:08.565 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T10:14:08.565 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mon to start...
2026-03-10T10:14:08.565 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mon...
2026-03-10T10:14:08.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout cluster:
2026-03-10T10:14:08.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout id: d0ab5dc6-1c69-11f1-8798-3b5e87c3385d
2026-03-10T10:14:08.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-10T10:14:08.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
2026-03-10T10:14:08.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout services:
2026-03-10T10:14:08.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum vm02 (age 0.149766s)
2026-03-10T10:14:08.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-10T10:14:08.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-10T10:14:08.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
2026-03-10T10:14:08.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout data:
2026-03-10T10:14:08.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout pgs:
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.684+0000 7fea5e32e700 1 Processor -- start
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.685+0000 7fea5e32e700 1 -- start start
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.685+0000 7fea5e32e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58104fb0 0x7fea581073e0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.685+0000 7fea5e32e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fea58074720 con 0x7fea58104fb0
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.685+0000 7fea57fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58104fb0 0x7fea581073e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.685+0000 7fea57fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58104fb0 0x7fea581073e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54086/0 (socket says 192.168.123.102:54086)
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.685+0000 7fea57fff700 1 -- 192.168.123.102:0/2943405247 learned_addr learned my addr 192.168.123.102:0/2943405247 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.686+0000 7fea57fff700 1 -- 192.168.123.102:0/2943405247 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fea58107920 con 0x7fea58104fb0
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.686+0000 7fea57fff700 1 --2- 192.168.123.102:0/2943405247 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58104fb0 0x7fea581073e0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7fea40009cf0 tx=0x7fea4000b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=19c8cc6f34a882d5 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.686+0000 7fea56ffd700 1 -- 192.168.123.102:0/2943405247 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fea40004030 con 0x7fea58104fb0
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.686+0000 7fea56ffd700 1 -- 192.168.123.102:0/2943405247 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (secure 0 0 0) 0x7fea40004190 con 0x7fea58104fb0
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.686+0000 7fea56ffd700 1 -- 192.168.123.102:0/2943405247 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fea40004320 con 0x7fea58104fb0
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.687+0000 7fea5e32e700 1 -- 192.168.123.102:0/2943405247 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58104fb0 msgr2=0x7fea581073e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.687+0000 7fea5e32e700 1 --2- 192.168.123.102:0/2943405247 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58104fb0 0x7fea581073e0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7fea40009cf0 tx=0x7fea4000b0e0 comp rx=0 tx=0).stop
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.687+0000 7fea5e32e700 1 -- 192.168.123.102:0/2943405247 shutdown_connections
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.687+0000 7fea5e32e700 1 --2- 192.168.123.102:0/2943405247 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58104fb0 0x7fea581073e0 secure :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7fea40009cf0 tx=0x7fea4000b0e0 comp rx=0 tx=0).stop
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.687+0000 7fea5e32e700 1 -- 192.168.123.102:0/2943405247 >> 192.168.123.102:0/2943405247 conn(0x7fea58100bd0 msgr2=0x7fea58103030 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.687+0000 7fea5e32e700 1 -- 192.168.123.102:0/2943405247 shutdown_connections
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.687+0000 7fea5e32e700 1 -- 192.168.123.102:0/2943405247 wait complete.
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.688+0000 7fea5e32e700 1 Processor -- start
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.688+0000 7fea5e32e700 1 -- start start
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.688+0000 7fea5e32e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58072010 0x7fea58072430 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.688+0000 7fea5e32e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fea58072970 con 0x7fea58072010
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.689+0000 7fea57fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58072010 0x7fea58072430 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:08.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.689+0000 7fea57fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58072010 0x7fea58072430 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54090/0 (socket says 192.168.123.102:54090)
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.689+0000 7fea57fff700 1 -- 192.168.123.102:0/1022963344 learned_addr learned my addr 192.168.123.102:0/1022963344 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.689+0000 7fea57fff700 1 -- 192.168.123.102:0/1022963344 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fea40009740 con 0x7fea58072010
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.689+0000 7fea57fff700 1 --2- 192.168.123.102:0/1022963344 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58072010 0x7fea58072430 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7fea4000b120 tx=0x7fea40004750 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.690+0000 7fea557fa700 1 -- 192.168.123.102:0/1022963344 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fea40004030 con 0x7fea58072010
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.690+0000 7fea557fa700 1 -- 192.168.123.102:0/1022963344 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (secure 0 0 0) 0x7fea400036a0 con 0x7fea58072010
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.690+0000 7fea557fa700 1 -- 192.168.123.102:0/1022963344 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fea40003830 con 0x7fea58072010
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.690+0000 7fea5e32e700 1 -- 192.168.123.102:0/1022963344 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fea58072b70 con 0x7fea58072010
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.690+0000 7fea5e32e700 1 -- 192.168.123.102:0/1022963344 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fea581ac300 con 0x7fea58072010
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.690+0000 7fea557fa700 1 -- 192.168.123.102:0/1022963344 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7fea40003c90 con 0x7fea58072010
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.690+0000 7fea557fa700 1 -- 192.168.123.102:0/1022963344 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7fea40025780 con 0x7fea58072010
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.691+0000 7fea5e32e700 1 -- 192.168.123.102:0/1022963344 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fea5806bf50 con 0x7fea58072010
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.692+0000 7fea557fa700 1 -- 192.168.123.102:0/1022963344 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72446 (secure 0 0 0) 0x7fea4001cd30 con 0x7fea58072010
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.730+0000 7fea5e32e700 1 -- 192.168.123.102:0/1022963344 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "status"} v 0) v1 -- 0x7fea580619a0 con 0x7fea58072010
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.731+0000 7fea557fa700 1 -- 192.168.123.102:0/1022963344 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "status"}]=0 v0) v1 ==== 54+0+320 (secure 0 0 0) 0x7fea4001c460 con 0x7fea58072010
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.732+0000 7fea5e32e700 1 -- 192.168.123.102:0/1022963344 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58072010 msgr2=0x7fea58072430 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.732+0000 7fea5e32e700 1 --2- 192.168.123.102:0/1022963344 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58072010 0x7fea58072430 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7fea4000b120 tx=0x7fea40004750 comp rx=0 tx=0).stop
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.733+0000 7fea5e32e700 1 -- 192.168.123.102:0/1022963344 shutdown_connections
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.733+0000 7fea5e32e700 1 --2- 192.168.123.102:0/1022963344 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fea58072010 0x7fea58072430 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.733+0000 7fea5e32e700 1 -- 192.168.123.102:0/1022963344 >> 192.168.123.102:0/1022963344 conn(0x7fea58100bd0 msgr2=0x7fea58102490 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.733+0000 7fea5e32e700 1 -- 192.168.123.102:0/1022963344 shutdown_connections
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.733+0000 7fea5e32e700 1 -- 192.168.123.102:0/1022963344 wait complete.
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:mon is available
2026-03-10T10:14:08.776 INFO:teuthology.orchestra.run.vm02.stdout:Assimilating anything we can from ceph.conf...
2026-03-10T10:14:08.988 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout fsid = d0ab5dc6-1c69-11f1-8798-3b5e87c3385d
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.102:3300,v1:192.168.123.102:6789]
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_backfillfull_ratio = 0.85
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_full_ratio = 0.9
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_nearfull_ratio = 0.8
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.893+0000 7ff7ed5f6700 1 Processor -- start 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.894+0000 7ff7ed5f6700 1 -- start start 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.894+0000 7ff7ed5f6700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 0x7ff7e81095b0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.894+0000 7ff7ed5f6700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff7e8074720 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.894+0000 7ff7e6ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 0x7ff7e81095b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.894+0000 7ff7e6ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 0x7ff7e81095b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54092/0 (socket says 192.168.123.102:54092) 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:08.894+0000 7ff7e6ffd700 1 -- 192.168.123.102:0/513229670 learned_addr learned my addr 192.168.123.102:0/513229670 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.894+0000 7ff7e6ffd700 1 -- 192.168.123.102:0/513229670 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff7e8109af0 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.894+0000 7ff7e6ffd700 1 --2- 192.168.123.102:0/513229670 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 0x7ff7e81095b0 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7ff7d0009a90 tx=0x7ff7d0009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=45f2a15be0b5c54b server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.895+0000 7ff7e5ffb700 1 -- 192.168.123.102:0/513229670 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff7d000fbf0 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.895+0000 7ff7e5ffb700 1 -- 192.168.123.102:0/513229670 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (secure 0 0 0) 0x7ff7d000fd50 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.895+0000 7ff7e5ffb700 1 -- 192.168.123.102:0/513229670 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff7d0004030 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.895+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/513229670 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7ff7e81071c0 msgr2=0x7ff7e81095b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.895+0000 7ff7ed5f6700 1 --2- 192.168.123.102:0/513229670 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 0x7ff7e81095b0 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7ff7d0009a90 tx=0x7ff7d0009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.895+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/513229670 shutdown_connections 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.895+0000 7ff7ed5f6700 1 --2- 192.168.123.102:0/513229670 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 0x7ff7e81095b0 unknown :-1 s=CLOSED pgs=3 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.895+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/513229670 >> 192.168.123.102:0/513229670 conn(0x7ff7e8100bd0 msgr2=0x7ff7e8103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.896+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/513229670 shutdown_connections 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.896+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/513229670 wait complete. 
2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.896+0000 7ff7ed5f6700 1 Processor -- start 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.896+0000 7ff7ed5f6700 1 -- start start 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.896+0000 7ff7ed5f6700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 0x7ff7e8197920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.896+0000 7ff7ed5f6700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff7e8197e60 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.896+0000 7ff7e6ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 0x7ff7e8197920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.896+0000 7ff7e6ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 0x7ff7e8197920 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54104/0 (socket says 192.168.123.102:54104) 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.896+0000 7ff7e6ffd700 1 -- 192.168.123.102:0/1297826081 learned_addr learned my addr 192.168.123.102:0/1297826081 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:08.989 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.897+0000 7ff7e6ffd700 1 -- 192.168.123.102:0/1297826081 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff7d0009740 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.897+0000 7ff7e6ffd700 1 --2- 192.168.123.102:0/1297826081 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 0x7ff7e8197920 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7ff7e8073b60 tx=0x7ff7d0004750 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.897+0000 7ff7dffff700 1 -- 192.168.123.102:0/1297826081 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff7d000fbf0 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.897+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/1297826081 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff7e8198060 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.897+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/1297826081 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff7e8198480 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.897+0000 7ff7dffff700 1 -- 192.168.123.102:0/1297826081 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (secure 0 0 0) 0x7ff7d0003d40 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.897+0000 7ff7dffff700 1 -- 
192.168.123.102:0/1297826081 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff7d0017450 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.897+0000 7ff7dffff700 1 -- 192.168.123.102:0/1297826081 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7ff7d00178b0 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.897+0000 7ff7dffff700 1 -- 192.168.123.102:0/1297826081 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7ff7d002c9f0 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.899+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/1297826081 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff7c8005320 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.900+0000 7ff7dffff700 1 -- 192.168.123.102:0/1297826081 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72446 (secure 0 0 0) 0x7ff7d0031050 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.935+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/1297826081 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) v1 -- 0x7ff7c8002430 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.942+0000 7ff7dffff700 1 -- 192.168.123.102:0/1297826081 <== mon.0 v2:192.168.123.102:3300/0 7 ==== config(27 keys) v1 ==== 1037+0+0 (secure 0 0 0) 0x7ff7d002c450 con 
0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.942+0000 7ff7dffff700 1 -- 192.168.123.102:0/1297826081 <== mon.0 v2:192.168.123.102:3300/0 8 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v2) v1 ==== 70+0+435 (secure 0 0 0) 0x7ff7d0027070 con 0x7ff7e81071c0 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.943+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/1297826081 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 msgr2=0x7ff7e8197920 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:08.989 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.943+0000 7ff7ed5f6700 1 --2- 192.168.123.102:0/1297826081 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 0x7ff7e8197920 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7ff7e8073b60 tx=0x7ff7d0004750 comp rx=0 tx=0).stop 2026-03-10T10:14:08.990 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.944+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/1297826081 shutdown_connections 2026-03-10T10:14:08.990 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.944+0000 7ff7ed5f6700 1 --2- 192.168.123.102:0/1297826081 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7e81071c0 0x7ff7e8197920 unknown :-1 s=CLOSED pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:08.990 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.944+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/1297826081 >> 192.168.123.102:0/1297826081 conn(0x7ff7e8100bd0 msgr2=0x7ff7e8109280 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:08.990 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.944+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/1297826081 
shutdown_connections 2026-03-10T10:14:08.990 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:08.944+0000 7ff7ed5f6700 1 -- 192.168.123.102:0/1297826081 wait complete. 2026-03-10T10:14:08.990 INFO:teuthology.orchestra.run.vm02.stdout:Generating new minimal ceph.conf... 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.107+0000 7f4392a34700 1 Processor -- start 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.108+0000 7f4392a34700 1 -- start start 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.108+0000 7f4392a34700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 0x7f438c108d90 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.108+0000 7f4392a34700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f438c109360 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.108+0000 7f438bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 0x7f438c108d90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.108+0000 7f438bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 0x7f438c108d90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54120/0 (socket says 192.168.123.102:54120) 2026-03-10T10:14:10.415 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.108+0000 7f438bfff700 1 -- 192.168.123.102:0/3711907908 learned_addr learned my addr 192.168.123.102:0/3711907908 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.109+0000 7f438bfff700 1 -- 192.168.123.102:0/3711907908 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f438c109b70 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.109+0000 7f438bfff700 1 --2- 192.168.123.102:0/3711907908 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 0x7f438c108d90 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f4374009a90 tx=0x7f4374009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=9395aaac3f9bfe4b server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.109+0000 7f438b7fe700 1 -- 192.168.123.102:0/3711907908 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f4374004030 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.109+0000 7f438b7fe700 1 -- 192.168.123.102:0/3711907908 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(27 keys) v1 ==== 1037+0+0 (secure 0 0 0) 0x7f437400b7e0 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.109+0000 7f438b7fe700 1 -- 192.168.123.102:0/3711907908 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f4374003970 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.109+0000 7f4392a34700 1 -- 
192.168.123.102:0/3711907908 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 msgr2=0x7f438c108d90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.109+0000 7f4392a34700 1 --2- 192.168.123.102:0/3711907908 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 0x7f438c108d90 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f4374009a90 tx=0x7f4374009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.110+0000 7f4392a34700 1 -- 192.168.123.102:0/3711907908 shutdown_connections 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.110+0000 7f4392a34700 1 --2- 192.168.123.102:0/3711907908 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 0x7f438c108d90 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.110+0000 7f4392a34700 1 -- 192.168.123.102:0/3711907908 >> 192.168.123.102:0/3711907908 conn(0x7f438c07be30 msgr2=0x7f438c1064e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.110+0000 7f4392a34700 1 -- 192.168.123.102:0/3711907908 shutdown_connections 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.110+0000 7f4392a34700 1 -- 192.168.123.102:0/3711907908 wait complete. 
2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.110+0000 7f4392a34700 1 Processor -- start 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.110+0000 7f4392a34700 1 -- start start 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.110+0000 7f4392a34700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 0x7f438c19c490 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.110+0000 7f4392a34700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f438c19c9d0 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.111+0000 7f438bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 0x7f438c19c490 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.111+0000 7f438bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 0x7f438c19c490 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54126/0 (socket says 192.168.123.102:54126) 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.111+0000 7f438bfff700 1 -- 192.168.123.102:0/3207040280 learned_addr learned my addr 192.168.123.102:0/3207040280 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:10.415 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.111+0000 7f438bfff700 1 -- 192.168.123.102:0/3207040280 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4374009740 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.111+0000 7f438bfff700 1 --2- 192.168.123.102:0/3207040280 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 0x7f438c19c490 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f437400bfb0 tx=0x7f4374003be0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.111+0000 7f4389ffb700 1 -- 192.168.123.102:0/3207040280 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f43740040c0 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.111+0000 7f4389ffb700 1 -- 192.168.123.102:0/3207040280 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(27 keys) v1 ==== 1037+0+0 (secure 0 0 0) 0x7f4374004220 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.111+0000 7f4389ffb700 1 -- 192.168.123.102:0/3207040280 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f4374011420 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.111+0000 7f4392a34700 1 -- 192.168.123.102:0/3207040280 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f438c19cbd0 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.111+0000 7f4392a34700 1 -- 
192.168.123.102:0/3207040280 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f438c19cf30 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.112+0000 7f4389ffb700 1 -- 192.168.123.102:0/3207040280 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7f4374011580 con 0x7f438c108970 2026-03-10T10:14:10.415 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.112+0000 7f4389ffb700 1 -- 192.168.123.102:0/3207040280 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f4374023a90 con 0x7f438c108970 2026-03-10T10:14:10.416 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.112+0000 7f4392a34700 1 -- 192.168.123.102:0/3207040280 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f438c04fa20 con 0x7f438c108970 2026-03-10T10:14:10.416 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.114+0000 7f4389ffb700 1 -- 192.168.123.102:0/3207040280 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72446 (secure 0 0 0) 0x7f4374028050 con 0x7f438c108970 2026-03-10T10:14:10.416 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.149+0000 7f4392a34700 1 -- 192.168.123.102:0/3207040280 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) v1 -- 0x7f438c10cd60 con 0x7f438c108970 2026-03-10T10:14:10.416 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.150+0000 7f4389ffb700 1 -- 192.168.123.102:0/3207040280 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v2) v1 
==== 76+0+181 (secure 0 0 0) 0x7f438c10cd60 con 0x7f438c108970 2026-03-10T10:14:10.416 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.377+0000 7f4392a34700 1 -- 192.168.123.102:0/3207040280 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 msgr2=0x7f438c19c490 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:10.416 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.379+0000 7f4392a34700 1 --2- 192.168.123.102:0/3207040280 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 0x7f438c19c490 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f437400bfb0 tx=0x7f4374003be0 comp rx=0 tx=0).stop 2026-03-10T10:14:10.416 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.379+0000 7f4392a34700 1 -- 192.168.123.102:0/3207040280 shutdown_connections 2026-03-10T10:14:10.416 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.379+0000 7f4392a34700 1 --2- 192.168.123.102:0/3207040280 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f438c108970 0x7f438c19c490 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:10.416 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.379+0000 7f4392a34700 1 -- 192.168.123.102:0/3207040280 >> 192.168.123.102:0/3207040280 conn(0x7f438c07be30 msgr2=0x7f438c105d50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:10.416 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.379+0000 7f4392a34700 1 -- 192.168.123.102:0/3207040280 shutdown_connections 2026-03-10T10:14:10.416 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:09.379+0000 7f4392a34700 1 -- 192.168.123.102:0/3207040280 wait complete. 
2026-03-10T10:14:10.416 INFO:teuthology.orchestra.run.vm02.stdout:Restarting the monitor... 2026-03-10T10:14:10.495 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 systemd[1]: Stopping Ceph mon.vm02 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 2026-03-10T10:14:10.749 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02[49904]: 2026-03-10T10:14:10.493+0000 7fa88a9c4700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm02 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T10:14:10.749 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02[49904]: 2026-03-10T10:14:10.493+0000 7fa88a9c4700 -1 mon.vm02@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T10:14:10.957 INFO:teuthology.orchestra.run.vm02.stdout:Setting public_network to 192.168.123.0/24 in global config section 2026-03-10T10:14:11.009 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 podman[50114]: 2026-03-10 10:14:10.748461782 +0000 UTC m=+0.269418690 container died 5dbcc93c26f9d399a45aba70f4b58860772377c43bf332de8fc9110a41e847e9 (image=quay.io/ceph/ceph:v18.2.1, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 8 Base Image, GIT_BRANCH=HEAD, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=HEAD, maintainer=Guillaume Abrioux , GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, io.buildah.version=1.29.1, org.label-schema.build-date=20240222, org.label-schema.schema-version=1.0, CEPH_POINT_RELEASE=-18.2.1, GIT_CLEAN=True, org.label-schema.vendor=CentOS) 2026-03-10T10:14:11.009 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 podman[50114]: 2026-03-10 10:14:10.765033207 +0000 UTC m=+0.285990115 container remove 5dbcc93c26f9d399a45aba70f4b58860772377c43bf332de8fc9110a41e847e9 (image=quay.io/ceph/ceph:v18.2.1, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.29.1, org.label-schema.license=GPLv2, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_BRANCH=HEAD, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, RELEASE=HEAD, ceph=True, org.label-schema.build-date=20240222, CEPH_POINT_RELEASE=-18.2.1, org.label-schema.name=CentOS Stream 8 Base Image)
2026-03-10T10:14:11.009 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 bash[50114]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02
2026-03-10T10:14:11.009 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm02.service: Deactivated successfully.
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 systemd[1]: Stopped Ceph mon.vm02 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 systemd[1]: Starting Ceph mon.vm02 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 podman[50185]: 2026-03-10 10:14:10.91259848 +0000 UTC m=+0.016424142 container create ab92d831cc1d0e24669aca88ce7ab5f62bbdd2ea45e7f8c4ada2277bd1fd5ffc (image=quay.io/ceph/ceph:v18.2.1, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02, GIT_CLEAN=True, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, org.label-schema.name=CentOS Stream 8 Base Image, RELEASE=HEAD, ceph=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, org.label-schema.license=GPLv2, GIT_BRANCH=HEAD, org.label-schema.build-date=20240222, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.29.1, CEPH_POINT_RELEASE=-18.2.1)
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 podman[50185]: 2026-03-10 10:14:10.947682819 +0000 UTC m=+0.051508481 container init ab92d831cc1d0e24669aca88ce7ab5f62bbdd2ea45e7f8c4ada2277bd1fd5ffc (image=quay.io/ceph/ceph:v18.2.1, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02, org.label-schema.vendor=CentOS, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.29.1, CEPH_POINT_RELEASE=-18.2.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20240222, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, org.label-schema.name=CentOS Stream 8 Base Image, GIT_BRANCH=HEAD, RELEASE=HEAD, org.label-schema.schema-version=1.0)
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 podman[50185]: 2026-03-10 10:14:10.950596536 +0000 UTC m=+0.054422198 container start ab92d831cc1d0e24669aca88ce7ab5f62bbdd2ea45e7f8c4ada2277bd1fd5ffc (image=quay.io/ceph/ceph:v18.2.1, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02, GIT_CLEAN=True, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, org.label-schema.license=GPLv2, GIT_BRANCH=HEAD, ceph=True, org.label-schema.build-date=20240222, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.29.1, org.label-schema.name=CentOS Stream 8 Base Image, RELEASE=HEAD, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=-18.2.1)
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 bash[50185]: ab92d831cc1d0e24669aca88ce7ab5f62bbdd2ea45e7f8c4ada2277bd1fd5ffc
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 podman[50185]: 2026-03-10 10:14:10.905418853 +0000 UTC m=+0.009244515 image pull 5be31c24972a920012b90a9769e8313e2490c82aee752aefd49af9cf4c0f3fcf quay.io/ceph/ceph:v18.2.1
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 systemd[1]: Started Ceph mon.vm02 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: set uid:gid to 167:167 (ceph:ceph)
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable), process ceph-mon, pid 2
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: pidfile_write: ignore empty --pid-file
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: load: jerasure load: lrc
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: RocksDB version: 7.9.2
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Git sha 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Compile date 2023-12-11 22:07:34
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: DB SUMMARY
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: DB Session ID: H73SIPTPEV2CXT55GQOR
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: CURRENT file: CURRENT
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: IDENTITY file: IDENTITY
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm02/store.db dir, Total Num: 1, files: 000008.sst
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm02/store.db: 000009.log size: 97594 ;
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.error_if_exists: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.create_if_missing: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.paranoid_checks: 1
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.env: 0x561caafc2720
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.fs: PosixFileSystem
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.info_log: 0x561cad585360
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_file_opening_threads: 16
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.statistics: (nil)
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.use_fsync: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_log_file_size: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.keep_log_file_num: 1000
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.recycle_log_file_num: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.allow_fallocate: 1
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.allow_mmap_reads: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.allow_mmap_writes: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.use_direct_reads: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.create_missing_column_families: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.db_log_dir:
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.wal_dir:
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.advise_random_on_open: 1
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.db_write_buffer_size: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.write_buffer_manager: 0x561cac814320
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.rate_limiter: (nil)
2026-03-10T10:14:11.010 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.wal_recovery_mode: 2
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.enable_thread_tracking: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.enable_pipelined_write: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.unordered_write: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.row_cache: None
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.wal_filter: None
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.allow_ingest_behind: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.two_write_queues: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.manual_wal_flush: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.wal_compression: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.atomic_flush: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.log_readahead_size: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.best_efforts_recovery: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.allow_data_in_errors: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.db_host_id: __hostname__
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_background_jobs: 2
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_background_compactions: -1
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_subcompactions: 1
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_total_wal_size: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_open_files: -1
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bytes_per_sync: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_readahead_size: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_background_flushes: -1
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Compression algorithms supported:
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: kZSTD supported: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: kXpressCompression supported: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: kLZ4HCCompression supported: 1
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: kZlibCompression supported: 1
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: kSnappyCompression supported: 1
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: kLZ4Compression supported: 1
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: kBZip2Compression supported: 0
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm02/store.db/MANIFEST-000010
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.merge_operator:
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_filter: None
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_filter_factory: None
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.sst_partitioner_factory: None
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T10:14:11.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561cad585480)
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: cache_index_and_filter_blocks: 1
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: pin_top_level_index_and_filter: 1
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: index_type: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: data_block_index_type: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: index_shortening: 1
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: data_block_hash_table_util_ratio: 0.750000
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: checksum: 4
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: no_block_cache: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_cache: 0x561cac897350
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_cache_name: BinnedLRUCache
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_cache_options:
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: capacity : 536870912
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: num_shard_bits : 4
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: strict_capacity_limit : 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: high_pri_pool_ratio: 0.000
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_cache_compressed: (nil)
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: persistent_cache: (nil)
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_size: 4096
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_size_deviation: 10
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_restart_interval: 16
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: index_block_restart_interval: 1
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: metadata_block_size: 4096
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: partition_filters: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: use_delta_encoding: 1
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: filter_policy: bloomfilter
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: whole_key_filtering: 1
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: verify_compression: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: read_amp_bytes_per_bit: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: format_version: 5
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: enable_index_compression: 1
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_align: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: max_auto_readahead_size: 262144
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: prepopulate_block_cache: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: initial_auto_readahead_size: 8192
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout: num_file_reads_for_auto_readahead: 2
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.write_buffer_size: 33554432
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_write_buffer_number: 2
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compression: NoCompression
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bottommost_compression: Disabled
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.prefix_extractor: nullptr
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.num_levels: 7
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compression_opts.level: 32767
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compression_opts.strategy: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compression_opts.enabled: false
2026-03-10T10:14:11.012 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.target_file_size_base: 67108864
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.arena_block_size: 1048576
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.disable_auto_compactions: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.table_properties_collectors:
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.inplace_update_support: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.bloom_locality: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.max_successive_merges: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.paranoid_file_checks: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.force_consistency_checks: 1
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.report_bg_io_stats: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.ttl: 2592000
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.enable_blob_files: false
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.min_blob_size: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.blob_file_size: 268435456
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.blob_file_starting_level: 0
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar
10 10:14:10 vm02 ceph-mon[50200]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm02/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 10a06b0e-254c-477a-90be-fd62a43f94b6 2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773137650975171, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773137650977107, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 93125, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 302, "table_properties": {"data_size": 91153, "index_size": 259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 13570, "raw_average_key_size": 50, "raw_value_size": 83770, "raw_average_value_size": 311, "num_data_blocks": 11, "num_entries": 269, "num_filter_entries": 269, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", 
"column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773137650, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "10a06b0e-254c-477a-90be-fd62a43f94b6", "db_session_id": "H73SIPTPEV2CXT55GQOR", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T10:14:11.013 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773137650977149, "job": 1, "event": "recovery_finished"} 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm02/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561cac934000 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: DB pointer 0x561cac920000 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T10:14:11.014 
INFO:journalctl@ceph.mon.vm02.vm02.stdout: ** DB Stats ** 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: ** Compaction Stats [default] ** 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: L0 2/0 92.79 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 51.6 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Sum 2/0 92.79 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 
51.6 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 51.6 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: ** Compaction Stats [default] ** 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 51.6 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-10T10:14:11.014 
INFO:journalctl@ceph.mon.vm02.vm02.stdout: Cumulative compaction: 0.00 GB write, 18.18 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Interval compaction: 0.00 GB write, 18.18 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Block cache BinnedLRUCache@0x561cac897350#2 capacity: 512.00 MB usage: 1.36 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-06 secs_since: 0 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Block cache entry stats(count,size,portion): FilterBlock(2,0.89 KB,0.000169873%) IndexBlock(2,0.47 KB,8.9407e-05%) Misc(1,0.00 KB,0%) 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: starting mon.vm02 rank 0 at public addrs [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] at bind addrs [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon_data /var/lib/ceph/mon/ceph-vm02 fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: mon.vm02@-1(???) 
e1 preinit fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: mon.vm02@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: mon.vm02@-1(???).mds e1 new map 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: mon.vm02@-1(???).mds e1 print_map 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: e1 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: legacy client fscid: -1 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout: No filesystems configured 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: mon.vm02@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: mon.vm02@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: mon.vm02@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: mon.vm02@-1(???).osd e1 crush map has features 
288514050185494528, adjusting msgr requires 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: mon.vm02@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: mon.vm02@-1(???).mgr e0 loading version 1 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: mon.vm02@-1(???).mgr e1 active server: (0) 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: mon.vm02@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: expand_channel_meta expand map: {default=false} 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: expand_channel_meta from 'false' to 'false' 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: expand_channel_meta expanded map: {default=false} 2026-03-10T10:14:11.014 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: expand_channel_meta expand map: {default=info} 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: expand_channel_meta from 'info' to 'info' 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: expand_channel_meta expanded map: {default=info} 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: expand_channel_meta expand map: {default=daemon} 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: expand_channel_meta from 'daemon' to 'daemon' 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 
ceph-mon[50200]: expand_channel_meta expanded map: {default=daemon} 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: expand_channel_meta expand map: {default=debug} 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: expand_channel_meta from 'debug' to 'debug' 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:10 vm02 ceph-mon[50200]: expand_channel_meta expanded map: {default=debug} 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.095+0000 7f6eedead700 1 Processor -- start 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.096+0000 7f6eedead700 1 -- start start 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.096+0000 7f6eedead700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 0x7f6ee81054a0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.096+0000 7f6eedead700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6ee8079e00 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.096+0000 7f6ee77fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 0x7f6ee81054a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.096+0000 7f6ee77fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 0x7f6ee81054a0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54136/0 (socket says 192.168.123.102:54136) 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.096+0000 7f6ee77fe700 1 -- 192.168.123.102:0/4201050061 learned_addr learned my addr 192.168.123.102:0/4201050061 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.097+0000 7f6ee77fe700 1 -- 192.168.123.102:0/4201050061 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6ee8079f40 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.097+0000 7f6ee77fe700 1 --2- 192.168.123.102:0/4201050061 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 0x7f6ee81054a0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f6ed0009a90 tx=0x7f6ed0009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=492107e30a96c23a server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.097+0000 7f6ee67fc700 1 -- 192.168.123.102:0/4201050061 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f6ed0004030 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.097+0000 7f6ee67fc700 1 -- 192.168.123.102:0/4201050061 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(27 keys) v1 ==== 1037+0+0 (secure 0 0 0) 0x7f6ed000b7e0 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.097+0000 7f6eedead700 1 -- 192.168.123.102:0/4201050061 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7f6ee8105080 msgr2=0x7f6ee81054a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.097+0000 7f6eedead700 1 --2- 192.168.123.102:0/4201050061 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 0x7f6ee81054a0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f6ed0009a90 tx=0x7f6ed0009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.098+0000 7f6eedead700 1 -- 192.168.123.102:0/4201050061 shutdown_connections 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.098+0000 7f6eedead700 1 --2- 192.168.123.102:0/4201050061 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 0x7f6ee81054a0 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.098+0000 7f6eedead700 1 -- 192.168.123.102:0/4201050061 >> 192.168.123.102:0/4201050061 conn(0x7f6ee8100be0 msgr2=0x7f6ee8103040 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.098+0000 7f6eedead700 1 -- 192.168.123.102:0/4201050061 shutdown_connections 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.098+0000 7f6eedead700 1 -- 192.168.123.102:0/4201050061 wait complete. 
2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.098+0000 7f6eedead700 1 Processor -- start 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.098+0000 7f6eedead700 1 -- start start 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.098+0000 7f6eedead700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 0x7f6ee810ac00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.098+0000 7f6eedead700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6ee8079e00 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.098+0000 7f6ee77fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 0x7f6ee810ac00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.098+0000 7f6ee77fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 0x7f6ee810ac00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54152/0 (socket says 192.168.123.102:54152) 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.098+0000 7f6ee77fe700 1 -- 192.168.123.102:0/1048565193 learned_addr learned my addr 192.168.123.102:0/1048565193 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:11.185 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.099+0000 7f6ee77fe700 1 -- 192.168.123.102:0/1048565193 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6ed0009740 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.099+0000 7f6ee77fe700 1 --2- 192.168.123.102:0/1048565193 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 0x7f6ee810ac00 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f6ed0009710 tx=0x7f6ed0003e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.099+0000 7f6ee4ff9700 1 -- 192.168.123.102:0/1048565193 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f6ed00042d0 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.099+0000 7f6ee4ff9700 1 -- 192.168.123.102:0/1048565193 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(27 keys) v1 ==== 1037+0+0 (secure 0 0 0) 0x7f6ed0004430 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.099+0000 7f6eedead700 1 -- 192.168.123.102:0/1048565193 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6ee810b140 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.100+0000 7f6ee4ff9700 1 -- 192.168.123.102:0/1048565193 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f6ed0011460 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.100+0000 7f6ee4ff9700 1 -- 
192.168.123.102:0/1048565193 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7f6ed0011a50 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.100+0000 7f6eedead700 1 -- 192.168.123.102:0/1048565193 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6ee8107660 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.100+0000 7f6ee4ff9700 1 -- 192.168.123.102:0/1048565193 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f6ed001a770 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.100+0000 7f6eedead700 1 -- 192.168.123.102:0/1048565193 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6ee804fa20 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.102+0000 7f6ee4ff9700 1 -- 192.168.123.102:0/1048565193 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72446 (secure 0 0 0) 0x7f6ed0040b10 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.136+0000 7f6eedead700 1 -- 192.168.123.102:0/1048565193 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command([{prefix=config set, name=public_network}] v 0) v1 -- 0x7f6ee80623c0 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.139+0000 7f6ee4ff9700 1 -- 192.168.123.102:0/1048565193 <== mon.0 v2:192.168.123.102:3300/0 7 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f6ed00228a0 con 
0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.139+0000 7f6ee4ff9700 1 -- 192.168.123.102:0/1048565193 <== mon.0 v2:192.168.123.102:3300/0 8 ==== mon_command_ack([{prefix=config set, name=public_network}]=0 v3) v1 ==== 130+0+0 (secure 0 0 0) 0x7f6ed002f030 con 0x7f6ee8105080 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.140+0000 7f6eedead700 1 -- 192.168.123.102:0/1048565193 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 msgr2=0x7f6ee810ac00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.140+0000 7f6eedead700 1 --2- 192.168.123.102:0/1048565193 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 0x7f6ee810ac00 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f6ed0009710 tx=0x7f6ed0003e70 comp rx=0 tx=0).stop 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.141+0000 7f6eedead700 1 -- 192.168.123.102:0/1048565193 shutdown_connections 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.141+0000 7f6eedead700 1 --2- 192.168.123.102:0/1048565193 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6ee8105080 0x7f6ee810ac00 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.141+0000 7f6eedead700 1 -- 192.168.123.102:0/1048565193 >> 192.168.123.102:0/1048565193 conn(0x7f6ee8100be0 msgr2=0x7f6ee8190c40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.141+0000 7f6eedead700 1 -- 
192.168.123.102:0/1048565193 shutdown_connections 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.141+0000 7f6eedead700 1 -- 192.168.123.102:0/1048565193 wait complete. 2026-03-10T10:14:11.185 INFO:teuthology.orchestra.run.vm02.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T10:14:11.186 INFO:teuthology.orchestra.run.vm02.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T10:14:11.186 INFO:teuthology.orchestra.run.vm02.stdout:Creating mgr... 2026-03-10T10:14:11.186 INFO:teuthology.orchestra.run.vm02.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T10:14:11.186 INFO:teuthology.orchestra.run.vm02.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T10:14:11.186 INFO:teuthology.orchestra.run.vm02.stdout:Verifying port 0.0.0.0:8443 ... 2026-03-10T10:14:11.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:11 vm02 ceph-mon[50200]: mon.vm02 is new leader, mons vm02 in quorum (ranks 0) 2026-03-10T10:14:11.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:11 vm02 ceph-mon[50200]: monmap e1: 1 mons at {vm02=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0]} removed_ranks: {} disallowed_leaders: {} 2026-03-10T10:14:11.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:11 vm02 ceph-mon[50200]: fsmap 2026-03-10T10:14:11.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:11 vm02 ceph-mon[50200]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T10:14:11.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:11 vm02 ceph-mon[50200]: mgrmap e1: no daemons active 2026-03-10T10:14:11.325 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mgr.vm02.zmavgl 2026-03-10T10:14:11.325 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to reset failed state of unit ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mgr.vm02.zmavgl.service: Unit 
ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mgr.vm02.zmavgl.service not loaded. 2026-03-10T10:14:11.444 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.target.wants/ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mgr.vm02.zmavgl.service → /etc/systemd/system/ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@.service. 2026-03-10T10:14:11.606 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-10T10:14:11.606 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T10:14:11.606 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-10T10:14:11.606 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to open ports <[9283, 8765, 8443]>. firewalld.service is not available 2026-03-10T10:14:11.606 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr to start... 2026-03-10T10:14:11.606 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr... 
2026-03-10T10:14:11.833 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "d0ab5dc6-1c69-11f1-8798-3b5e87c3385d", 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "vm02" 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:11.834 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "reef", 2026-03-10T10:14:11.834 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T10:14:11.835 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:11.835 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:11.837 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T10:14:08.583831+0000", 
2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.752+0000 7f26e2cde700 1 Processor -- start 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.752+0000 7f26e2cde700 1 -- start start 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.752+0000 7f26e2cde700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 0x7f26dc072220 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.752+0000 7f26e2cde700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f26dc0727f0 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.752+0000 7f26e1cdc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 0x7f26dc072220 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.752+0000 7f26e1cdc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 0x7f26dc072220 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:54178/0 (socket says 192.168.123.102:54178) 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.752+0000 7f26e1cdc700 1 -- 192.168.123.102:0/389038328 learned_addr learned my addr 192.168.123.102:0/389038328 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.753+0000 7f26e1cdc700 1 -- 192.168.123.102:0/389038328 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f26dc10ddb0 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.753+0000 7f26e1cdc700 1 --2- 192.168.123.102:0/389038328 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 0x7f26dc072220 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f26d8009a90 tx=0x7f26d8009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=ed6911bbe4761a01 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.753+0000 7f26e0cda700 1 -- 192.168.123.102:0/389038328 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f26d8004030 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.753+0000 7f26e0cda700 1 -- 192.168.123.102:0/389038328 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f26d800b7e0 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.753+0000 7f26e0cda700 1 -- 192.168.123.102:0/389038328 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f26d80039f0 con 0x7f26dc071e00 2026-03-10T10:14:11.838 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.753+0000 7f26e2cde700 1 -- 192.168.123.102:0/389038328 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 msgr2=0x7f26dc072220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.753+0000 7f26e2cde700 1 --2- 192.168.123.102:0/389038328 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 0x7f26dc072220 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f26d8009a90 tx=0x7f26d8009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.754+0000 7f26e2cde700 1 -- 192.168.123.102:0/389038328 shutdown_connections 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.754+0000 7f26e2cde700 1 --2- 192.168.123.102:0/389038328 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 0x7f26dc072220 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.754+0000 7f26e2cde700 1 -- 192.168.123.102:0/389038328 >> 192.168.123.102:0/389038328 conn(0x7f26dc06d320 msgr2=0x7f26dc06f760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.754+0000 7f26e2cde700 1 -- 192.168.123.102:0/389038328 shutdown_connections 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.754+0000 7f26e2cde700 1 -- 192.168.123.102:0/389038328 wait complete. 
2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.754+0000 7f26e2cde700 1 Processor -- start 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.754+0000 7f26e2cde700 1 -- start start 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.754+0000 7f26e2cde700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 0x7f26dc1a8fd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.754+0000 7f26e2cde700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f26dc1a9510 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.754+0000 7f26e1cdc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 0x7f26dc1a8fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.754+0000 7f26e1cdc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 0x7f26dc1a8fd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54190/0 (socket says 192.168.123.102:54190) 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.754+0000 7f26e1cdc700 1 -- 192.168.123.102:0/4000758758 learned_addr learned my addr 192.168.123.102:0/4000758758 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:11.838 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.755+0000 7f26e1cdc700 1 -- 192.168.123.102:0/4000758758 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f26d8009740 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.755+0000 7f26e1cdc700 1 --2- 192.168.123.102:0/4000758758 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 0x7f26dc1a8fd0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f26d800beb0 tx=0x7f26d8003c60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.755+0000 7f26d2ffd700 1 -- 192.168.123.102:0/4000758758 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f26d8004080 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.755+0000 7f26e2cde700 1 -- 192.168.123.102:0/4000758758 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f26dc1a9710 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.755+0000 7f26e2cde700 1 -- 192.168.123.102:0/4000758758 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f26dc1a9b30 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.756+0000 7f26e2cde700 1 -- 192.168.123.102:0/4000758758 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f26dc0623c0 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:11.756+0000 7f26d2ffd700 1 -- 192.168.123.102:0/4000758758 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f26d802f020 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.756+0000 7f26d2ffd700 1 -- 192.168.123.102:0/4000758758 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f26d802bd50 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.756+0000 7f26d2ffd700 1 -- 192.168.123.102:0/4000758758 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7f26d801a790 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.756+0000 7f26d2ffd700 1 -- 192.168.123.102:0/4000758758 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f26d8030080 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.757+0000 7f26d2ffd700 1 -- 192.168.123.102:0/4000758758 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72446 (secure 0 0 0) 0x7f26d802b430 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.796+0000 7f26e2cde700 1 -- 192.168.123.102:0/4000758758 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1 -- 0x7f26dc07aff0 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.796+0000 7f26d2ffd700 1 -- 192.168.123.102:0/4000758758 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": 
"json-pretty"}]=0 v0) v1 ==== 79+0+1241 (secure 0 0 0) 0x7f26d802b610 con 0x7f26dc071e00 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.799+0000 7f26d0ff9700 1 -- 192.168.123.102:0/4000758758 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 msgr2=0x7f26dc1a8fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.799+0000 7f26d0ff9700 1 --2- 192.168.123.102:0/4000758758 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 0x7f26dc1a8fd0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f26d800beb0 tx=0x7f26d8003c60 comp rx=0 tx=0).stop 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.801+0000 7f26d0ff9700 1 -- 192.168.123.102:0/4000758758 shutdown_connections 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.801+0000 7f26d0ff9700 1 --2- 192.168.123.102:0/4000758758 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f26dc071e00 0x7f26dc1a8fd0 secure :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f26d800beb0 tx=0x7f26d8003c60 comp rx=0 tx=0).stop 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.801+0000 7f26d0ff9700 1 -- 192.168.123.102:0/4000758758 >> 192.168.123.102:0/4000758758 conn(0x7f26dc06d320 msgr2=0x7f26dc06ddd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.803+0000 7f26d0ff9700 1 -- 192.168.123.102:0/4000758758 shutdown_connections 2026-03-10T10:14:11.838 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:11.803+0000 7f26d0ff9700 1 -- 192.168.123.102:0/4000758758 wait complete. 
2026-03-10T10:14:11.839 INFO:teuthology.orchestra.run.vm02.stdout:mgr not available, waiting (1/15)... 2026-03-10T10:14:12.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:12 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/1048565193' entity='client.admin' 2026-03-10T10:14:12.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:12 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/4000758758' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T10:14:14.074 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T10:14:14.074 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:14:14.074 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "d0ab5dc6-1c69-11f1-8798-3b5e87c3385d", 2026-03-10T10:14:14.074 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T10:14:14.074 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T10:14:14.074 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T10:14:14.074 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T10:14:14.074 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:14.074 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T10:14:14.074 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T10:14:14.074 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "vm02" 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: 
stdout ], 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "reef", 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T10:14:14.075 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:14:14.075 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 
2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T10:14:08.583831+0000", 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.984+0000 7fcd0b205700 1 Processor -- start 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.984+0000 7fcd0b205700 1 -- start start 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.984+0000 7fcd0b205700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 0x7fcd04072220 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.984+0000 7fcd0b205700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcd040727f0 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.984+0000 7fcd0a203700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 0x7fcd04072220 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.984+0000 7fcd0a203700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 0x7fcd04072220 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54200/0 (socket says 192.168.123.102:54200) 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.984+0000 7fcd0a203700 1 -- 192.168.123.102:0/4093570832 learned_addr learned my addr 192.168.123.102:0/4093570832 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.985+0000 7fcd0a203700 1 -- 192.168.123.102:0/4093570832 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcd0410ddb0 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.985+0000 7fcd0a203700 1 --2- 192.168.123.102:0/4093570832 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 0x7fcd04072220 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fcd00009a90 tx=0x7fcd00009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=7ba35e2ad66b007f server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.985+0000 7fcd09201700 1 -- 192.168.123.102:0/4093570832 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcd00004030 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.985+0000 7fcd09201700 1 -- 192.168.123.102:0/4093570832 <== mon.0 v2:192.168.123.102:3300/0 2 ==== 
config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fcd0000b7e0 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.985+0000 7fcd09201700 1 -- 192.168.123.102:0/4093570832 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcd000039f0 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.986+0000 7fcd0b205700 1 -- 192.168.123.102:0/4093570832 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 msgr2=0x7fcd04072220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.986+0000 7fcd0b205700 1 --2- 192.168.123.102:0/4093570832 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 0x7fcd04072220 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fcd00009a90 tx=0x7fcd00009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.986+0000 7fcd0b205700 1 -- 192.168.123.102:0/4093570832 shutdown_connections 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.986+0000 7fcd0b205700 1 --2- 192.168.123.102:0/4093570832 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 0x7fcd04072220 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.986+0000 7fcd0b205700 1 -- 192.168.123.102:0/4093570832 >> 192.168.123.102:0/4093570832 conn(0x7fcd0406d320 msgr2=0x7fcd0406f760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.987+0000 7fcd0b205700 1 -- 
192.168.123.102:0/4093570832 shutdown_connections 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.987+0000 7fcd0b205700 1 -- 192.168.123.102:0/4093570832 wait complete. 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.987+0000 7fcd0b205700 1 Processor -- start 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.987+0000 7fcd0b205700 1 -- start start 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.987+0000 7fcd0b205700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 0x7fcd041a8fd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.987+0000 7fcd0b205700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcd041a9510 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.988+0000 7fcd0a203700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 0x7fcd041a8fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.988+0000 7fcd0a203700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 0x7fcd041a8fd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54204/0 (socket says 192.168.123.102:54204) 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:13.988+0000 7fcd0a203700 1 -- 192.168.123.102:0/327089636 learned_addr learned my addr 192.168.123.102:0/327089636 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.988+0000 7fcd0a203700 1 -- 192.168.123.102:0/327089636 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcd00009740 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.988+0000 7fcd0a203700 1 --2- 192.168.123.102:0/327089636 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 0x7fcd041a8fd0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fcd0000beb0 tx=0x7fcd00003c60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.988+0000 7fccfb7fe700 1 -- 192.168.123.102:0/327089636 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcd00004080 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.988+0000 7fcd0b205700 1 -- 192.168.123.102:0/327089636 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcd041a9710 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.988+0000 7fcd0b205700 1 -- 192.168.123.102:0/327089636 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcd041a9b30 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.989+0000 7fccfb7fe700 1 -- 192.168.123.102:0/327089636 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 
==== 1075+0+0 (secure 0 0 0) 0x7fcd0001a430 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.989+0000 7fccfb7fe700 1 -- 192.168.123.102:0/327089636 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcd00011420 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.989+0000 7fccfb7fe700 1 -- 192.168.123.102:0/327089636 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7fcd00011980 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.989+0000 7fccfb7fe700 1 -- 192.168.123.102:0/327089636 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7fcd00022400 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.990+0000 7fcd0b205700 1 -- 192.168.123.102:0/327089636 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcce8005320 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:13.991+0000 7fccfb7fe700 1 -- 192.168.123.102:0/327089636 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72446 (secure 0 0 0) 0x7fcd0001aa30 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:14.028+0000 7fcd0b205700 1 -- 192.168.123.102:0/327089636 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1 -- 0x7fcce8005cc0 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:14:14.029+0000 7fccfb7fe700 1 -- 192.168.123.102:0/327089636 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) v1 ==== 79+0+1241 (secure 0 0 0) 0x7fcd00011d20 con 0x7fcd04071e00 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:14.030+0000 7fcd0b205700 1 -- 192.168.123.102:0/327089636 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 msgr2=0x7fcd041a8fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:14.030+0000 7fcd0b205700 1 --2- 192.168.123.102:0/327089636 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 0x7fcd041a8fd0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fcd0000beb0 tx=0x7fcd00003c60 comp rx=0 tx=0).stop 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:14.030+0000 7fcd0b205700 1 -- 192.168.123.102:0/327089636 shutdown_connections 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:14.030+0000 7fcd0b205700 1 --2- 192.168.123.102:0/327089636 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd04071e00 0x7fcd041a8fd0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:14.030+0000 7fcd0b205700 1 -- 192.168.123.102:0/327089636 >> 192.168.123.102:0/327089636 conn(0x7fcd0406d320 msgr2=0x7fcd0406ddd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:14.076 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:14.030+0000 7fcd0b205700 1 -- 192.168.123.102:0/327089636 shutdown_connections 2026-03-10T10:14:14.077 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:14.031+0000 7fcd0b205700 1 -- 192.168.123.102:0/327089636 wait complete. 2026-03-10T10:14:14.077 INFO:teuthology.orchestra.run.vm02.stdout:mgr not available, waiting (2/15)... 2026-03-10T10:14:14.499 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:14 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/327089636' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T10:14:16.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 ceph-mon[50200]: Activating manager daemon vm02.zmavgl 2026-03-10T10:14:16.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 ceph-mon[50200]: mgrmap e2: vm02.zmavgl(active, starting, since 0.00430776s) 2026-03-10T10:14:16.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 ceph-mon[50200]: from='mgr.14100 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T10:14:16.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 ceph-mon[50200]: from='mgr.14100 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T10:14:16.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 ceph-mon[50200]: from='mgr.14100 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T10:14:16.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 ceph-mon[50200]: from='mgr.14100 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:14:16.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 ceph-mon[50200]: from='mgr.14100 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": "vm02.zmavgl", "id": "vm02.zmavgl"}]: dispatch 2026-03-10T10:14:16.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 
ceph-mon[50200]: Manager daemon vm02.zmavgl is now available 2026-03-10T10:14:16.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 ceph-mon[50200]: from='mgr.14100 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:14:16.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 ceph-mon[50200]: from='mgr.14100 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:16.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 ceph-mon[50200]: from='mgr.14100 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:16.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 ceph-mon[50200]: from='mgr.14100 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:16.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:15 vm02 ceph-mon[50200]: from='mgr.14100 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/trash_purge_schedule"}]: dispatch 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "d0ab5dc6-1c69-11f1-8798-3b5e87c3385d", 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:16.320 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "vm02" 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:16.320 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "reef", 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 
"osd_in_since": 0, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T10:14:16.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T10:14:16.322 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T10:14:08.583831+0000", 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.223+0000 7f49df388700 1 Processor -- start 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.224+0000 7f49df388700 1 -- start start 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.224+0000 7f49df388700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7f49d8104fb0 0x7f49d81053d0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:16.322 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.224+0000 7f49df388700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f49d8074720 con 0x7f49d8104fb0 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.224+0000 7f49dd124700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 0x7f49d81053d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.224+0000 7f49dd124700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 0x7f49d81053d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54284/0 (socket says 192.168.123.102:54284) 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.224+0000 7f49dd124700 1 -- 192.168.123.102:0/2564677646 learned_addr learned my addr 192.168.123.102:0/2564677646 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.225+0000 7f49dd124700 1 -- 192.168.123.102:0/2564677646 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f49d8105910 con 0x7f49d8104fb0 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.225+0000 7f49dd124700 1 --2- 192.168.123.102:0/2564677646 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 0x7f49d81053d0 
secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f49c8009cf0 tx=0x7f49c800b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=9484091efed7e3d8 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.226+0000 7f49cffff700 1 -- 192.168.123.102:0/2564677646 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f49c8004030 con 0x7f49d8104fb0 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.226+0000 7f49cffff700 1 -- 192.168.123.102:0/2564677646 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f49c800b810 con 0x7f49d8104fb0 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.226+0000 7f49cffff700 1 -- 192.168.123.102:0/2564677646 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f49c8003a90 con 0x7f49d8104fb0 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.226+0000 7f49df388700 1 -- 192.168.123.102:0/2564677646 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 msgr2=0x7f49d81053d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.226+0000 7f49df388700 1 --2- 192.168.123.102:0/2564677646 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 0x7f49d81053d0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f49c8009cf0 tx=0x7f49c800b0e0 comp rx=0 tx=0).stop 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.226+0000 7f49df388700 1 -- 192.168.123.102:0/2564677646 shutdown_connections 2026-03-10T10:14:16.323 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.226+0000 7f49df388700 1 --2- 192.168.123.102:0/2564677646 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 0x7f49d81053d0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.226+0000 7f49df388700 1 -- 192.168.123.102:0/2564677646 >> 192.168.123.102:0/2564677646 conn(0x7f49d8100bd0 msgr2=0x7f49d8103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.226+0000 7f49df388700 1 -- 192.168.123.102:0/2564677646 shutdown_connections 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.226+0000 7f49df388700 1 -- 192.168.123.102:0/2564677646 wait complete. 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.227+0000 7f49df388700 1 Processor -- start 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.227+0000 7f49df388700 1 -- start start 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.227+0000 7f49df388700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 0x7f49d819c600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:16.323 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.227+0000 7f49df388700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f49d819cb40 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.227+0000 7f49dd124700 1 --2- >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 0x7f49d819c600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.227+0000 7f49dd124700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 0x7f49d819c600 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54294/0 (socket says 192.168.123.102:54294) 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.227+0000 7f49dd124700 1 -- 192.168.123.102:0/1547414080 learned_addr learned my addr 192.168.123.102:0/1547414080 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.227+0000 7f49dd124700 1 -- 192.168.123.102:0/1547414080 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f49c8009740 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.228+0000 7f49dd124700 1 --2- 192.168.123.102:0/1547414080 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 0x7f49d819c600 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f49c8000c00 tx=0x7f49c8011870 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.228+0000 7f49ce7fc700 1 -- 192.168.123.102:0/1547414080 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f49c8011b10 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.228+0000 7f49ce7fc700 1 -- 192.168.123.102:0/1547414080 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f49c8011c70 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.228+0000 7f49df388700 1 -- 192.168.123.102:0/1547414080 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f49d819cd40 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.228+0000 7f49df388700 1 -- 192.168.123.102:0/1547414080 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f49d819d160 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.229+0000 7f49ce7fc700 1 -- 192.168.123.102:0/1547414080 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f49c801a5b0 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.229+0000 7f49ce7fc700 1 -- 192.168.123.102:0/1547414080 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 2) v1 ==== 45034+0+0 (secure 0 0 0) 0x7f49c8023480 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.229+0000 7f49df388700 1 -- 192.168.123.102:0/1547414080 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f49bc005320 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.230+0000 7f49ce7fc700 1 -- 192.168.123.102:0/1547414080 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 
(secure 0 0 0) 0x7f49c801e070 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.231+0000 7f49ce7fc700 1 -- 192.168.123.102:0/1547414080 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72446 (secure 0 0 0) 0x7f49c80217f0 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.269+0000 7f49df388700 1 -- 192.168.123.102:0/1547414080 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1 -- 0x7f49bc0059f0 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.270+0000 7f49ce7fc700 1 -- 192.168.123.102:0/1547414080 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) v1 ==== 79+0+1241 (secure 0 0 0) 0x7f49c80217f0 con 0x7f49d8104fb0 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.272+0000 7f49df388700 1 -- 192.168.123.102:0/1547414080 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 msgr2=0x7f49d819c600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.273+0000 7f49df388700 1 --2- 192.168.123.102:0/1547414080 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 0x7f49d819c600 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f49c8000c00 tx=0x7f49c8011870 comp rx=0 tx=0).stop 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.273+0000 7f49df388700 1 -- 192.168.123.102:0/1547414080 shutdown_connections 2026-03-10T10:14:16.324 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.273+0000 7f49df388700 1 --2- 192.168.123.102:0/1547414080 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f49d8104fb0 0x7f49d819c600 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.273+0000 7f49df388700 1 -- 192.168.123.102:0/1547414080 >> 192.168.123.102:0/1547414080 conn(0x7f49d8100bd0 msgr2=0x7f49d8190b70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.273+0000 7f49df388700 1 -- 192.168.123.102:0/1547414080 shutdown_connections 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:16.273+0000 7f49df388700 1 -- 192.168.123.102:0/1547414080 wait complete. 2026-03-10T10:14:16.324 INFO:teuthology.orchestra.run.vm02.stdout:mgr not available, waiting (3/15)... 2026-03-10T10:14:17.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:16 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/1547414080' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T10:14:17.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:16 vm02 ceph-mon[50200]: mgrmap e3: vm02.zmavgl(active, since 1.00847s) 2026-03-10T10:14:18.638 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "d0ab5dc6-1c69-11f1-8798-3b5e87c3385d", 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "vm02" 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 7, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T10:14:18.639 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "reef", 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T10:14:18.639 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T10:14:18.639 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T10:14:08.583831+0000", 2026-03-10T10:14:18.639 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.437+0000 7ff74a753700 1 Processor -- start 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.437+0000 7ff74a753700 1 -- start start 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.438+0000 7ff74a753700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff744106790 0x7ff744106bb0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.438+0000 7ff74a753700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff744107180 con 0x7ff744106790 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.438+0000 7ff743fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff744106790 0x7ff744106bb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.438+0000 7ff743fff700 1 --2- >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff744106790 0x7ff744106bb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34200/0 (socket says 192.168.123.102:34200) 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.438+0000 7ff743fff700 1 -- 192.168.123.102:0/4011506183 learned_addr learned my addr 192.168.123.102:0/4011506183 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.438+0000 7ff743fff700 1 -- 192.168.123.102:0/4011506183 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff744107990 con 0x7ff744106790 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.438+0000 7ff743fff700 1 --2- 192.168.123.102:0/4011506183 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff744106790 0x7ff744106bb0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7ff72c009a90 tx=0x7ff72c009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=2202bde9bbfa082e server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.439+0000 7ff742ffd700 1 -- 192.168.123.102:0/4011506183 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff72c004030 con 0x7ff744106790 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.439+0000 7ff742ffd700 1 -- 192.168.123.102:0/4011506183 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7ff72c00b7e0 con 0x7ff744106790 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.439+0000 
7ff74a753700 1 -- 192.168.123.102:0/4011506183 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff744106790 msgr2=0x7ff744106bb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.439+0000 7ff74a753700 1 --2- 192.168.123.102:0/4011506183 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff744106790 0x7ff744106bb0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7ff72c009a90 tx=0x7ff72c009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.439+0000 7ff74a753700 1 -- 192.168.123.102:0/4011506183 shutdown_connections 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.439+0000 7ff74a753700 1 --2- 192.168.123.102:0/4011506183 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff744106790 0x7ff744106bb0 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.439+0000 7ff74a753700 1 -- 192.168.123.102:0/4011506183 >> 192.168.123.102:0/4011506183 conn(0x7ff744101d30 msgr2=0x7ff744104170 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.439+0000 7ff74a753700 1 -- 192.168.123.102:0/4011506183 shutdown_connections 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.440+0000 7ff74a753700 1 -- 192.168.123.102:0/4011506183 wait complete. 
2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.440+0000 7ff74a753700 1 Processor -- start 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.440+0000 7ff74a753700 1 -- start start 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.440+0000 7ff74a753700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff74419c9a0 0x7ff74419cdc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.440+0000 7ff74a753700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff744107180 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.440+0000 7ff743fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff74419c9a0 0x7ff74419cdc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.440+0000 7ff743fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff74419c9a0 0x7ff74419cdc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34204/0 (socket says 192.168.123.102:34204) 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.440+0000 7ff743fff700 1 -- 192.168.123.102:0/723765780 learned_addr learned my addr 192.168.123.102:0/723765780 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:18.640 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.441+0000 7ff743fff700 1 -- 192.168.123.102:0/723765780 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff72c009740 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.441+0000 7ff743fff700 1 --2- 192.168.123.102:0/723765780 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff74419c9a0 0x7ff74419cdc0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7ff72c00bd00 tx=0x7ff72c00bde0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.441+0000 7ff7417fa700 1 -- 192.168.123.102:0/723765780 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff72c01a670 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.441+0000 7ff7417fa700 1 -- 192.168.123.102:0/723765780 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7ff72c01ac70 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.441+0000 7ff7417fa700 1 -- 192.168.123.102:0/723765780 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff72c0044e0 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.441+0000 7ff74a753700 1 -- 192.168.123.102:0/723765780 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff744108810 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.441+0000 7ff74a753700 1 -- 
192.168.123.102:0/723765780 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff74419d590 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.442+0000 7ff7417fa700 1 -- 192.168.123.102:0/723765780 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 4) v1 ==== 45267+0+0 (secure 0 0 0) 0x7ff72c011420 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.442+0000 7ff7417fa700 1 --2- 192.168.123.102:0/723765780 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff730038370 0x7ff73003a830 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.442+0000 7ff7417fa700 1 -- 192.168.123.102:0/723765780 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7ff72c04cab0 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.442+0000 7ff7437fe700 1 --2- 192.168.123.102:0/723765780 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff730038370 0x7ff73003a830 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.443+0000 7ff7437fe700 1 --2- 192.168.123.102:0/723765780 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff730038370 0x7ff73003a830 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7ff734006fd0 tx=0x7ff734006e40 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:18.443+0000 7ff74a753700 1 -- 192.168.123.102:0/723765780 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff744196490 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.446+0000 7ff7417fa700 1 -- 192.168.123.102:0/723765780 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7ff72c010970 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.584+0000 7ff74a753700 1 -- 192.168.123.102:0/723765780 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1 -- 0x7ff74404fa20 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.585+0000 7ff7417fa700 1 -- 192.168.123.102:0/723765780 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) v1 ==== 79+0+1240 (secure 0 0 0) 0x7ff72c02a390 con 0x7ff74419c9a0 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.588+0000 7ff74a753700 1 -- 192.168.123.102:0/723765780 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff730038370 msgr2=0x7ff73003a830 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.588+0000 7ff74a753700 1 --2- 192.168.123.102:0/723765780 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff730038370 0x7ff73003a830 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7ff734006fd0 tx=0x7ff734006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:18.640 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.588+0000 7ff74a753700 1 -- 192.168.123.102:0/723765780 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff74419c9a0 msgr2=0x7ff74419cdc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.588+0000 7ff74a753700 1 --2- 192.168.123.102:0/723765780 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff74419c9a0 0x7ff74419cdc0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7ff72c00bd00 tx=0x7ff72c00bde0 comp rx=0 tx=0).stop 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.589+0000 7ff74a753700 1 -- 192.168.123.102:0/723765780 shutdown_connections 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.589+0000 7ff74a753700 1 --2- 192.168.123.102:0/723765780 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff730038370 0x7ff73003a830 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.589+0000 7ff74a753700 1 --2- 192.168.123.102:0/723765780 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff74419c9a0 0x7ff74419cdc0 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.589+0000 7ff74a753700 1 -- 192.168.123.102:0/723765780 >> 192.168.123.102:0/723765780 conn(0x7ff744101d30 msgr2=0x7ff74410a1e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.589+0000 7ff74a753700 1 -- 192.168.123.102:0/723765780 shutdown_connections 
2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.589+0000 7ff74a753700 1 -- 192.168.123.102:0/723765780 wait complete. 2026-03-10T10:14:18.640 INFO:teuthology.orchestra.run.vm02.stdout:mgr is available 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout fsid = d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_backfillfull_ratio = 0.85 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_full_ratio = 0.9 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_nearfull_ratio = 0.8 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 
osd_map_max_advance = 10 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.755+0000 7f10c15ee700 1 Processor -- start 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.756+0000 7f10c15ee700 1 -- start start 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.756+0000 7f10c15ee700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 0x7f10bc1095b0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.756+0000 7f10c15ee700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f10bc074720 con 0x7f10bc1071c0 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.756+0000 7f10baffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 0x7f10bc1095b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.756+0000 7f10baffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 0x7f10bc1095b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34220/0 (socket says 192.168.123.102:34220) 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.756+0000 7f10baffd700 1 -- 192.168.123.102:0/3418570396 learned_addr learned my addr 
192.168.123.102:0/3418570396 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.756+0000 7f10baffd700 1 -- 192.168.123.102:0/3418570396 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f10bc109af0 con 0x7f10bc1071c0 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.756+0000 7f10baffd700 1 --2- 192.168.123.102:0/3418570396 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 0x7f10bc1095b0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f10a4009cf0 tx=0x7f10a400b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=7b82484e5eaad920 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.757+0000 7f10b9ffb700 1 -- 192.168.123.102:0/3418570396 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f10a4004030 con 0x7f10bc1071c0 2026-03-10T10:14:18.907 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.757+0000 7f10b9ffb700 1 -- 192.168.123.102:0/3418570396 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f10a400b810 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.757+0000 7f10b9ffb700 1 -- 192.168.123.102:0/3418570396 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f10a4003a90 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.757+0000 7f10c15ee700 1 -- 192.168.123.102:0/3418570396 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 msgr2=0x7f10bc1095b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.757+0000 7f10c15ee700 1 --2- 192.168.123.102:0/3418570396 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 0x7f10bc1095b0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f10a4009cf0 tx=0x7f10a400b0e0 comp rx=0 tx=0).stop 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.757+0000 7f10c15ee700 1 -- 192.168.123.102:0/3418570396 shutdown_connections 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.757+0000 7f10c15ee700 1 --2- 192.168.123.102:0/3418570396 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 0x7f10bc1095b0 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.757+0000 7f10c15ee700 1 -- 192.168.123.102:0/3418570396 >> 192.168.123.102:0/3418570396 conn(0x7f10bc100bd0 msgr2=0x7f10bc103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.757+0000 7f10c15ee700 1 -- 192.168.123.102:0/3418570396 shutdown_connections 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.758+0000 7f10c15ee700 1 -- 192.168.123.102:0/3418570396 wait complete. 
2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.758+0000 7f10c15ee700 1 Processor -- start 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.758+0000 7f10c15ee700 1 -- start start 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.758+0000 7f10c15ee700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 0x7f10bc1a0a60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.758+0000 7f10c15ee700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f10bc1a0fa0 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.758+0000 7f10baffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 0x7f10bc1a0a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.758+0000 7f10baffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 0x7f10bc1a0a60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34228/0 (socket says 192.168.123.102:34228) 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.758+0000 7f10baffd700 1 -- 192.168.123.102:0/486931352 learned_addr learned my addr 192.168.123.102:0/486931352 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:18.908 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.758+0000 7f10baffd700 1 -- 192.168.123.102:0/486931352 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f10a4009740 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.759+0000 7f10baffd700 1 --2- 192.168.123.102:0/486931352 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 0x7f10bc1a0a60 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f10a4003d50 tx=0x7f10a4003e30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.759+0000 7f10b3fff700 1 -- 192.168.123.102:0/486931352 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f10a4004150 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.759+0000 7f10c15ee700 1 -- 192.168.123.102:0/486931352 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f10bc1a11a0 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.759+0000 7f10c15ee700 1 -- 192.168.123.102:0/486931352 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f10bc1a15c0 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.760+0000 7f10b3fff700 1 -- 192.168.123.102:0/486931352 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f10a40042b0 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.760+0000 7f10b3fff700 1 -- 
192.168.123.102:0/486931352 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f10a4011550 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.760+0000 7f10c15ee700 1 -- 192.168.123.102:0/486931352 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f10bc192140 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.762+0000 7f10b3fff700 1 -- 192.168.123.102:0/486931352 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 4) v1 ==== 45267+0+0 (secure 0 0 0) 0x7f10a4011770 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.762+0000 7f10b3fff700 1 --2- 192.168.123.102:0/486931352 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f10a8038070 0x7f10a803a530 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.762+0000 7f10ba7fc700 1 --2- 192.168.123.102:0/486931352 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f10a8038070 0x7f10a803a530 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.763+0000 7f10ba7fc700 1 --2- 192.168.123.102:0/486931352 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f10a8038070 0x7f10a803a530 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f10ac006fd0 tx=0x7f10ac006e40 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:18.908 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.763+0000 7f10b3fff700 1 -- 192.168.123.102:0/486931352 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f10a404cb00 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.763+0000 7f10b3fff700 1 -- 192.168.123.102:0/486931352 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f10a4025400 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.872+0000 7f10c15ee700 1 -- 192.168.123.102:0/486931352 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) v1 -- 0x7f10bc02cef0 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.876+0000 7f10b3fff700 1 -- 192.168.123.102:0/486931352 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v3) v1 ==== 70+0+373 (secure 0 0 0) 0x7f10a4020020 con 0x7f10bc1071c0 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.878+0000 7f10c15ee700 1 -- 192.168.123.102:0/486931352 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f10a8038070 msgr2=0x7f10a803a530 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.878+0000 7f10c15ee700 1 --2- 192.168.123.102:0/486931352 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f10a8038070 0x7f10a803a530 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f10ac006fd0 tx=0x7f10ac006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:18.908 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.878+0000 7f10c15ee700 1 -- 192.168.123.102:0/486931352 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 msgr2=0x7f10bc1a0a60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.878+0000 7f10c15ee700 1 --2- 192.168.123.102:0/486931352 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 0x7f10bc1a0a60 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f10a4003d50 tx=0x7f10a4003e30 comp rx=0 tx=0).stop 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.879+0000 7f10c15ee700 1 -- 192.168.123.102:0/486931352 shutdown_connections 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.879+0000 7f10c15ee700 1 --2- 192.168.123.102:0/486931352 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f10a8038070 0x7f10a803a530 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.879+0000 7f10c15ee700 1 --2- 192.168.123.102:0/486931352 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f10bc1071c0 0x7f10bc1a0a60 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.879+0000 7f10c15ee700 1 -- 192.168.123.102:0/486931352 >> 192.168.123.102:0/486931352 conn(0x7f10bc100bd0 msgr2=0x7f10bc1018b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.879+0000 7f10c15ee700 1 -- 192.168.123.102:0/486931352 shutdown_connections 
2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:18.879+0000 7f10c15ee700 1 -- 192.168.123.102:0/486931352 wait complete. 2026-03-10T10:14:18.908 INFO:teuthology.orchestra.run.vm02.stdout:Enabling cephadm module... 2026-03-10T10:14:19.192 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:18 vm02 ceph-mon[50200]: mgrmap e4: vm02.zmavgl(active, since 2s) 2026-03-10T10:14:19.192 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:18 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/723765780' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T10:14:19.192 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:18 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/486931352' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T10:14:20.034 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.053+0000 7f2a0c748700 1 Processor -- start 2026-03-10T10:14:20.034 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.053+0000 7f2a0c748700 1 -- start start 2026-03-10T10:14:20.034 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.054+0000 7f2a0c748700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 0x7f2a041073e0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.054+0000 7f2a0c748700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2a04074720 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.054+0000 7f2a0a4e4700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 0x7f2a041073e0 unknown :-1 s=BANNER_CONNECTING 
pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.054+0000 7f2a0a4e4700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 0x7f2a041073e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34234/0 (socket says 192.168.123.102:34234) 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.054+0000 7f2a0a4e4700 1 -- 192.168.123.102:0/4234848347 learned_addr learned my addr 192.168.123.102:0/4234848347 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.054+0000 7f2a0a4e4700 1 -- 192.168.123.102:0/4234848347 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2a04107920 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.055+0000 7f2a0a4e4700 1 --2- 192.168.123.102:0/4234848347 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 0x7f2a041073e0 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f29f4009a90 tx=0x7f29f4009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=a2ee55f28ee2a3ea server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.055+0000 7f2a094e2700 1 -- 192.168.123.102:0/4234848347 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f29f4004030 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.055+0000 7f2a094e2700 1 -- 
192.168.123.102:0/4234848347 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f29f400b7e0 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.055+0000 7f2a094e2700 1 -- 192.168.123.102:0/4234848347 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f29f40039f0 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.055+0000 7f2a0c748700 1 -- 192.168.123.102:0/4234848347 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 msgr2=0x7f2a041073e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.055+0000 7f2a0c748700 1 --2- 192.168.123.102:0/4234848347 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 0x7f2a041073e0 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f29f4009a90 tx=0x7f29f4009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.056+0000 7f2a0c748700 1 -- 192.168.123.102:0/4234848347 shutdown_connections 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.056+0000 7f2a0c748700 1 --2- 192.168.123.102:0/4234848347 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 0x7f2a041073e0 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.056+0000 7f2a0c748700 1 -- 192.168.123.102:0/4234848347 >> 192.168.123.102:0/4234848347 conn(0x7f2a04100bd0 msgr2=0x7f2a04103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:20.035 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.056+0000 7f2a0c748700 1 -- 192.168.123.102:0/4234848347 shutdown_connections 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.056+0000 7f2a0c748700 1 -- 192.168.123.102:0/4234848347 wait complete. 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.056+0000 7f2a0c748700 1 Processor -- start 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.056+0000 7f2a0c748700 1 -- start start 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.057+0000 7f2a0c748700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 0x7f2a041a0950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.057+0000 7f2a0c748700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2a041a0e90 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.057+0000 7f2a0a4e4700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 0x7f2a041a0950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.057+0000 7f2a0a4e4700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 0x7f2a041a0950 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34236/0 (socket says 
192.168.123.102:34236) 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.057+0000 7f2a0a4e4700 1 -- 192.168.123.102:0/3067773774 learned_addr learned my addr 192.168.123.102:0/3067773774 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.057+0000 7f2a0a4e4700 1 -- 192.168.123.102:0/3067773774 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f29f4009740 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.057+0000 7f2a0a4e4700 1 --2- 192.168.123.102:0/3067773774 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 0x7f2a041a0950 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f29f400bef0 tx=0x7f29f4003c60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.058+0000 7f29fb7fe700 1 -- 192.168.123.102:0/3067773774 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f29f4004080 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.058+0000 7f29fb7fe700 1 -- 192.168.123.102:0/3067773774 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f29f401a430 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.058+0000 7f29fb7fe700 1 -- 192.168.123.102:0/3067773774 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f29f4011590 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:19.058+0000 7f2a0c748700 1 -- 192.168.123.102:0/3067773774 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2a041a1090 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.058+0000 7f2a0c748700 1 -- 192.168.123.102:0/3067773774 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2a041a14b0 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.058+0000 7f29fb7fe700 1 -- 192.168.123.102:0/3067773774 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 4) v1 ==== 45267+0+0 (secure 0 0 0) 0x7f29f40116f0 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.058+0000 7f29fb7fe700 1 --2- 192.168.123.102:0/3067773774 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f29f0038330 0x7f29f003a7f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.058+0000 7f29fb7fe700 1 -- 192.168.123.102:0/3067773774 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f29f404d040 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.058+0000 7f2a09ce3700 1 --2- 192.168.123.102:0/3067773774 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f29f0038330 0x7f29f003a7f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.059+0000 7f2a0c748700 1 -- 192.168.123.102:0/3067773774 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2a0419a290 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.062+0000 7f2a09ce3700 1 --2- 192.168.123.102:0/3067773774 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f29f0038330 0x7f29f003a7f0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f2a00006fd0 tx=0x7f2a00006e40 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.062+0000 7f29fb7fe700 1 -- 192.168.123.102:0/3067773774 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f29f4029330 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.191+0000 7f2a0c748700 1 -- 192.168.123.102:0/3067773774 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1 -- 0x7f2a040623c0 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.994+0000 7f29fb7fe700 1 -- 192.168.123.102:0/3067773774 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mgrmap(e 5) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f29f402b950 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.995+0000 7f29fb7fe700 1 -- 192.168.123.102:0/3067773774 <== mon.0 v2:192.168.123.102:3300/0 8 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "cephadm"}]=0 v5) v1 ==== 86+0+0 (secure 0 0 0) 0x7f29f401aa10 con 0x7f2a04104fb0 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:19.998+0000 7f2a0c748700 1 -- 192.168.123.102:0/3067773774 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f29f0038330 msgr2=0x7f29f003a7f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.998+0000 7f2a0c748700 1 --2- 192.168.123.102:0/3067773774 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f29f0038330 0x7f29f003a7f0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f2a00006fd0 tx=0x7f2a00006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.998+0000 7f2a0c748700 1 -- 192.168.123.102:0/3067773774 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 msgr2=0x7f2a041a0950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.998+0000 7f2a0c748700 1 --2- 192.168.123.102:0/3067773774 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 0x7f2a041a0950 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f29f400bef0 tx=0x7f29f4003c60 comp rx=0 tx=0).stop 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.998+0000 7f2a0c748700 1 -- 192.168.123.102:0/3067773774 shutdown_connections 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.998+0000 7f2a0c748700 1 --2- 192.168.123.102:0/3067773774 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f29f0038330 0x7f29f003a7f0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.998+0000 7f2a0c748700 1 --2- 192.168.123.102:0/3067773774 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a04104fb0 0x7f2a041a0950 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.999+0000 7f2a0c748700 1 -- 192.168.123.102:0/3067773774 >> 192.168.123.102:0/3067773774 conn(0x7f2a04100bd0 msgr2=0x7f2a04103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.999+0000 7f2a0c748700 1 -- 192.168.123.102:0/3067773774 shutdown_connections 2026-03-10T10:14:20.035 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:19.999+0000 7f2a0c748700 1 -- 192.168.123.102:0/3067773774 wait complete. 2026-03-10T10:14:20.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:19 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3067773774' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "active_name": "vm02.zmavgl", 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.178+0000 7f4441312700 1 Processor -- start 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.178+0000 7f4441312700 1 -- start start 2026-03-10T10:14:20.369 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.178+0000 7f4441312700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 0x7f443c0725f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.178+0000 7f4441312700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f443c072bc0 con 0x7f443c0721d0 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.178+0000 7f443bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 0x7f443c0725f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.179+0000 7f443bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 0x7f443c0725f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34268/0 (socket says 192.168.123.102:34268) 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.179+0000 7f443bfff700 1 -- 192.168.123.102:0/3461021641 learned_addr learned my addr 192.168.123.102:0/3461021641 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:20.369 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.179+0000 7f443bfff700 1 -- 192.168.123.102:0/3461021641 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f443c10e1c0 con 0x7f443c0721d0 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:20.179+0000 7f443bfff700 1 --2- 192.168.123.102:0/3461021641 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 0x7f443c0725f0 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f442c00ab30 tx=0x7f442c010730 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c16d6b62e7980a9 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.180+0000 7f443affd700 1 -- 192.168.123.102:0/3461021641 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f442c010e00 con 0x7f443c0721d0 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.180+0000 7f443affd700 1 -- 192.168.123.102:0/3461021641 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f442c0044d0 con 0x7f443c0721d0 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.180+0000 7f443affd700 1 -- 192.168.123.102:0/3461021641 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f442c01a5b0 con 0x7f443c0721d0 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.180+0000 7f4441312700 1 -- 192.168.123.102:0/3461021641 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 msgr2=0x7f443c0725f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.180+0000 7f4441312700 1 --2- 192.168.123.102:0/3461021641 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 0x7f443c0725f0 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f442c00ab30 tx=0x7f442c010730 comp rx=0 tx=0).stop 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:20.181+0000 7f4441312700 1 -- 192.168.123.102:0/3461021641 shutdown_connections 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.181+0000 7f4441312700 1 --2- 192.168.123.102:0/3461021641 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 0x7f443c0725f0 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.181+0000 7f4441312700 1 -- 192.168.123.102:0/3461021641 >> 192.168.123.102:0/3461021641 conn(0x7f443c06d320 msgr2=0x7f443c06f760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.182+0000 7f4441312700 1 -- 192.168.123.102:0/3461021641 shutdown_connections 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.182+0000 7f4441312700 1 -- 192.168.123.102:0/3461021641 wait complete. 
2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.182+0000 7f4441312700 1 Processor -- start 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.182+0000 7f4441312700 1 -- start start 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.182+0000 7f4441312700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 0x7f443c1a90d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.182+0000 7f4441312700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f443c1a9610 con 0x7f443c0721d0 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.183+0000 7f443bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 0x7f443c1a90d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.183+0000 7f443bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 0x7f443c1a90d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34280/0 (socket says 192.168.123.102:34280) 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.183+0000 7f443bfff700 1 -- 192.168.123.102:0/3994596787 learned_addr learned my addr 192.168.123.102:0/3994596787 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:20.370 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.183+0000 7f443bfff700 1 -- 192.168.123.102:0/3994596787 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f442c00a7e0 con 0x7f443c0721d0 2026-03-10T10:14:20.370 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.183+0000 7f443bfff700 1 --2- 192.168.123.102:0/3994596787 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 0x7f443c1a90d0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f442c000c00 tx=0x7f442c0042e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.184+0000 7f44397fa700 1 -- 192.168.123.102:0/3994596787 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f442c0036a0 con 0x7f443c0721d0 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.184+0000 7f44397fa700 1 -- 192.168.123.102:0/3994596787 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f442c00f070 con 0x7f443c0721d0 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.184+0000 7f44397fa700 1 -- 192.168.123.102:0/3994596787 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f442c009760 con 0x7f443c0721d0 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.184+0000 7f4441312700 1 -- 192.168.123.102:0/3994596787 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f443c1a9810 con 0x7f443c0721d0 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.184+0000 7f4441312700 1 
-- 192.168.123.102:0/3994596787 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f443c1a9c90 con 0x7f443c0721d0 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.185+0000 7f4441312700 1 -- 192.168.123.102:0/3994596787 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f443c04f070 con 0x7f443c0721d0 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.190+0000 7f44397fa700 1 -- 192.168.123.102:0/3994596787 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 5) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f442c018070 con 0x7f443c0721d0 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.190+0000 7f44397fa700 1 --2- 192.168.123.102:0/3994596787 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4424038090 0x7f442403a550 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.190+0000 7f44397fa700 1 -- 192.168.123.102:0/3994596787 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f442c04b630 con 0x7f443c0721d0 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.190+0000 7f44397fa700 1 -- 192.168.123.102:0/3994596787 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f442c04db70 con 0x7f443c0721d0 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.190+0000 7f443b7fe700 1 -- 192.168.123.102:0/3994596787 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4424038090 
msgr2=0x7f442403a550 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.190+0000 7f443b7fe700 1 --2- 192.168.123.102:0/3994596787 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4424038090 0x7f442403a550 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.319+0000 7f4441312700 1 -- 192.168.123.102:0/3994596787 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) v1 -- 0x7f443c1acc00 con 0x7f443c0721d0 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.320+0000 7f44397fa700 1 -- 192.168.123.102:0/3994596787 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v5) v1 ==== 56+0+98 (secure 0 0 0) 0x7f442c01a880 con 0x7f443c0721d0 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.325+0000 7f442affd700 1 -- 192.168.123.102:0/3994596787 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4424038090 msgr2=0x7f442403a550 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.325+0000 7f442affd700 1 --2- 192.168.123.102:0/3994596787 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4424038090 0x7f442403a550 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.325+0000 7f442affd700 1 -- 192.168.123.102:0/3994596787 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 
msgr2=0x7f443c1a90d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.325+0000 7f442affd700 1 --2- 192.168.123.102:0/3994596787 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 0x7f443c1a90d0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f442c000c00 tx=0x7f442c0042e0 comp rx=0 tx=0).stop 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.325+0000 7f442affd700 1 -- 192.168.123.102:0/3994596787 shutdown_connections 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.325+0000 7f442affd700 1 --2- 192.168.123.102:0/3994596787 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4424038090 0x7f442403a550 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.325+0000 7f442affd700 1 --2- 192.168.123.102:0/3994596787 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f443c0721d0 0x7f443c1a90d0 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.326+0000 7f442affd700 1 -- 192.168.123.102:0/3994596787 >> 192.168.123.102:0/3994596787 conn(0x7f443c06d320 msgr2=0x7f443c06e020 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.326+0000 7f442affd700 1 -- 192.168.123.102:0/3994596787 shutdown_connections 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.326+0000 7f442affd700 1 -- 192.168.123.102:0/3994596787 wait complete. 
2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for the mgr to restart... 2026-03-10T10:14:20.371 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr epoch 5... 2026-03-10T10:14:21.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:20 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3067773774' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T10:14:21.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:20 vm02 ceph-mon[50200]: mgrmap e5: vm02.zmavgl(active, since 4s) 2026-03-10T10:14:21.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:20 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3994596787' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: Active manager daemon vm02.zmavgl restarted 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: Activating manager daemon vm02.zmavgl 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: osdmap e2: 0 total, 0 up, 0 in 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: mgrmap e6: vm02.zmavgl(active, starting, since 0.00573195s) 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": "vm02.zmavgl", "id": "vm02.zmavgl"}]: dispatch 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' 
entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: Manager daemon vm02.zmavgl is now available 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:24 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:24.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 
10:14:24 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/trash_purge_schedule"}]: dispatch 2026-03-10T10:14:25.205 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.497+0000 7f3209b31700 1 Processor -- start 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.497+0000 7f3209b31700 1 -- start start 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.497+0000 7f3209b31700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 0x7f3204072100 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.497+0000 7f3209b31700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f32040726d0 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.497+0000 7f3208b2f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 0x7f3204072100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.497+0000 7f3208b2f700 1 --2- >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 0x7f3204072100 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34292/0 (socket says 192.168.123.102:34292) 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.497+0000 7f3208b2f700 1 -- 192.168.123.102:0/1444196265 learned_addr learned my addr 192.168.123.102:0/1444196265 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.498+0000 7f3208b2f700 1 -- 192.168.123.102:0/1444196265 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3204072810 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.498+0000 7f3208b2f700 1 --2- 192.168.123.102:0/1444196265 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 0x7f3204072100 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f31f400d0d0 tx=0x7f31f400d3e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=edf98a0a7bc90cc6 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.499+0000 7f32037fe700 1 -- 192.168.123.102:0/1444196265 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f31f4010070 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.499+0000 7f32037fe700 1 -- 192.168.123.102:0/1444196265 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f31f4004030 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.499+0000 
7f32037fe700 1 -- 192.168.123.102:0/1444196265 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f31f4003d00 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.499+0000 7f3209b31700 1 -- 192.168.123.102:0/1444196265 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 msgr2=0x7f3204072100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.499+0000 7f3209b31700 1 --2- 192.168.123.102:0/1444196265 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 0x7f3204072100 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f31f400d0d0 tx=0x7f31f400d3e0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.500+0000 7f3209b31700 1 -- 192.168.123.102:0/1444196265 shutdown_connections 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.500+0000 7f3209b31700 1 --2- 192.168.123.102:0/1444196265 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 0x7f3204072100 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.500+0000 7f3209b31700 1 -- 192.168.123.102:0/1444196265 >> 192.168.123.102:0/1444196265 conn(0x7f320406d320 msgr2=0x7f320406f760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.500+0000 7f3209b31700 1 -- 192.168.123.102:0/1444196265 shutdown_connections 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.500+0000 7f3209b31700 1 -- 
192.168.123.102:0/1444196265 wait complete. 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.501+0000 7f3209b31700 1 Processor -- start 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.501+0000 7f3209b31700 1 -- start start 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.501+0000 7f3209b31700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 0x7f3204086d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.501+0000 7f3209b31700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3204087270 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.501+0000 7f3208b2f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 0x7f3204086d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.501+0000 7f3208b2f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 0x7f3204086d30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34298/0 (socket says 192.168.123.102:34298) 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.501+0000 7f3208b2f700 1 -- 192.168.123.102:0/3555486448 learned_addr learned my addr 192.168.123.102:0/3555486448 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:25.206 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.501+0000 7f3208b2f700 1 -- 192.168.123.102:0/3555486448 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f31f40088c0 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.501+0000 7f3208b2f700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 0x7f3204086d30 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f31f4008890 tx=0x7f31f400ed80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.502+0000 7f3201ffb700 1 -- 192.168.123.102:0/3555486448 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f31f4010040 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.502+0000 7f3209b31700 1 -- 192.168.123.102:0/3555486448 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3204087470 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.502+0000 7f3209b31700 1 -- 192.168.123.102:0/3555486448 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3204087890 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.502+0000 7f3201ffb700 1 -- 192.168.123.102:0/3555486448 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f31f4004510 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.503+0000 7f3201ffb700 1 
-- 192.168.123.102:0/3555486448 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f31f4016830 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.503+0000 7f3201ffb700 1 -- 192.168.123.102:0/3555486448 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 5) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f31f401d070 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.503+0000 7f3201ffb700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 0x7f31ec03a860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.503+0000 7f3203fff700 1 -- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 msgr2=0x7f31ec03a860 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.503+0000 7f3203fff700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 0x7f31ec03a860 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.503+0000 7f3201ffb700 1 -- 192.168.123.102:0/3555486448 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- command(tid 0: {"prefix": "get_command_descriptions"}) v1 -- 0x7f31ec03af70 con 0x7f31ec0383a0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.503+0000 7f3201ffb700 1 -- 192.168.123.102:0/3555486448 <== mon.0 
v2:192.168.123.102:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f31f404e190 con 0x7f3204071ce0 2026-03-10T10:14:25.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.704+0000 7f3203fff700 1 -- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 msgr2=0x7f31ec03a860 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:20.704+0000 7f3203fff700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 0x7f31ec03a860 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:21.104+0000 7f3203fff700 1 -- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 msgr2=0x7f31ec03a860 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:21.104+0000 7f3203fff700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 0x7f31ec03a860 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:21.905+0000 7f3203fff700 1 -- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 msgr2=0x7f31ec03a860 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:14:25.207 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:21.905+0000 7f3203fff700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 0x7f31ec03a860 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:23.507+0000 7f3203fff700 1 -- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 msgr2=0x7f31ec03a860 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:23.507+0000 7f3203fff700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 0x7f31ec03a860 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 3.200000 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:24.168+0000 7f3201ffb700 1 -- 192.168.123.102:0/3555486448 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mgrmap(e 6) v1 ==== 45045+0+0 (secure 0 0 0) 0x7f31f402b460 con 0x7f3204071ce0 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:24.168+0000 7f3201ffb700 1 -- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 msgr2=0x7f31ec03a860 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:24.168+0000 7f3201ffb700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 0x7f31ec03a860 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.171+0000 7f3201ffb700 1 -- 192.168.123.102:0/3555486448 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mgrmap(e 7) v1 ==== 45172+0+0 (secure 0 0 0) 0x7f31f4013ca0 con 0x7f3204071ce0 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.171+0000 7f3201ffb700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 0x7f31ec03a860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.171+0000 7f3201ffb700 1 -- 192.168.123.102:0/3555486448 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- command(tid 0: {"prefix": "get_command_descriptions"}) v1 -- 0x7f31ec03af70 con 0x7f31ec0383a0 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.172+0000 7f3203fff700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 0x7f31ec03a860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.173+0000 7f3203fff700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 0x7f31ec03a860 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f31fc003d90 tx=0x7f31fc0073b0 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.175+0000 7f3201ffb700 1 -- 192.168.123.102:0/3555486448 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== 
command_reply(tid 0: 0 ) v1 ==== 8+0+6910 (secure 0 0 0) 0x7f31ec03af70 con 0x7f31ec0383a0 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.178+0000 7f3209b31700 1 -- 192.168.123.102:0/3555486448 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- command(tid 1: {"prefix": "mgr_status"}) v1 -- 0x7f31f0002630 con 0x7f31ec0383a0 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.178+0000 7f3201ffb700 1 -- 192.168.123.102:0/3555486448 <== mgr.14120 v2:192.168.123.102:6800/2 2 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+51 (secure 0 0 0) 0x7f31f0002630 con 0x7f31ec0383a0 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.179+0000 7f3209b31700 1 -- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 msgr2=0x7f31ec03a860 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.179+0000 7f3209b31700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 0x7f31ec03a860 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f31fc003d90 tx=0x7f31fc0073b0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.179+0000 7f3209b31700 1 -- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 msgr2=0x7f3204086d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.179+0000 7f3209b31700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 0x7f3204086d30 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 
crypto rx=0x7f31f4008890 tx=0x7f31f400ed80 comp rx=0 tx=0).stop 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.179+0000 7f3209b31700 1 -- 192.168.123.102:0/3555486448 shutdown_connections 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.179+0000 7f3209b31700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f31ec0383a0 0x7f31ec03a860 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.179+0000 7f3209b31700 1 --2- 192.168.123.102:0/3555486448 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3204071ce0 0x7f3204086d30 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.179+0000 7f3209b31700 1 -- 192.168.123.102:0/3555486448 >> 192.168.123.102:0/3555486448 conn(0x7f320406d320 msgr2=0x7f320406df20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.179+0000 7f3209b31700 1 -- 192.168.123.102:0/3555486448 shutdown_connections 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.179+0000 7f3209b31700 1 -- 192.168.123.102:0/3555486448 wait complete. 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:mgr epoch 5 is available 2026-03-10T10:14:25.207 INFO:teuthology.orchestra.run.vm02.stdout:Setting orchestrator backend to cephadm... 
2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.308+0000 7fd1682ef700 1 Processor -- start 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.308+0000 7fd1682ef700 1 -- start start 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.308+0000 7fd1682ef700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd160105650 0x7fd160105a70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.308+0000 7fd1682ef700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd160106040 con 0x7fd160105650 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.309+0000 7fd16608b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd160105650 0x7fd160105a70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.309+0000 7fd16608b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd160105650 0x7fd160105a70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34362/0 (socket says 192.168.123.102:34362) 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.309+0000 7fd16608b700 1 -- 192.168.123.102:0/3313029991 learned_addr learned my addr 192.168.123.102:0/3313029991 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:25.482 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.309+0000 7fd16608b700 1 -- 192.168.123.102:0/3313029991 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd160106850 con 0x7fd160105650 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.309+0000 7fd16608b700 1 --2- 192.168.123.102:0/3313029991 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd160105650 0x7fd160105a70 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fd150009a90 tx=0x7fd150009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b845c36b772d49c8 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.310+0000 7fd165089700 1 -- 192.168.123.102:0/3313029991 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd150004030 con 0x7fd160105650 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.310+0000 7fd165089700 1 -- 192.168.123.102:0/3313029991 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fd15000b7e0 con 0x7fd160105650 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.310+0000 7fd1682ef700 1 -- 192.168.123.102:0/3313029991 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd160105650 msgr2=0x7fd160105a70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.310+0000 7fd1682ef700 1 --2- 192.168.123.102:0/3313029991 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd160105650 0x7fd160105a70 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fd150009a90 tx=0x7fd150009da0 comp rx=0 tx=0).stop 
2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.310+0000 7fd1682ef700 1 -- 192.168.123.102:0/3313029991 shutdown_connections 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.310+0000 7fd1682ef700 1 --2- 192.168.123.102:0/3313029991 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd160105650 0x7fd160105a70 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.310+0000 7fd1682ef700 1 -- 192.168.123.102:0/3313029991 >> 192.168.123.102:0/3313029991 conn(0x7fd160100bd0 msgr2=0x7fd160103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.310+0000 7fd1682ef700 1 -- 192.168.123.102:0/3313029991 shutdown_connections 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.310+0000 7fd1682ef700 1 -- 192.168.123.102:0/3313029991 wait complete. 
2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.311+0000 7fd1682ef700 1 Processor -- start 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.311+0000 7fd1682ef700 1 -- start start 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.311+0000 7fd1682ef700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd16019c9a0 0x7fd16019cdc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.311+0000 7fd1682ef700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd160106040 con 0x7fd16019c9a0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.311+0000 7fd16608b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd16019c9a0 0x7fd16019cdc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.311+0000 7fd16608b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd16019c9a0 0x7fd16019cdc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34370/0 (socket says 192.168.123.102:34370) 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.312+0000 7fd16608b700 1 -- 192.168.123.102:0/2604658275 learned_addr learned my addr 192.168.123.102:0/2604658275 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:25.482 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.312+0000 7fd16608b700 1 -- 192.168.123.102:0/2604658275 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd150009740 con 0x7fd16019c9a0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.312+0000 7fd16608b700 1 --2- 192.168.123.102:0/2604658275 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd16019c9a0 0x7fd16019cdc0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fd15000bd00 tx=0x7fd15000bde0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.312+0000 7fd1577fe700 1 -- 192.168.123.102:0/2604658275 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd15001a670 con 0x7fd16019c9a0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.312+0000 7fd1577fe700 1 -- 192.168.123.102:0/2604658275 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fd15001ac70 con 0x7fd16019c9a0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.312+0000 7fd1577fe700 1 -- 192.168.123.102:0/2604658275 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd1500044e0 con 0x7fd16019c9a0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.312+0000 7fd1682ef700 1 -- 192.168.123.102:0/2604658275 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd1601076d0 con 0x7fd16019c9a0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.312+0000 7fd1682ef700 1 
-- 192.168.123.102:0/2604658275 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd16019d590 con 0x7fd16019c9a0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.313+0000 7fd1577fe700 1 -- 192.168.123.102:0/2604658275 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 7) v1 ==== 45172+0+0 (secure 0 0 0) 0x7fd15001a7d0 con 0x7fd16019c9a0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.313+0000 7fd1577fe700 1 --2- 192.168.123.102:0/2604658275 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd14c038250 0x7fd14c03a710 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.313+0000 7fd1577fe700 1 -- 192.168.123.102:0/2604658275 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fd15004b4c0 con 0x7fd16019c9a0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.313+0000 7fd16588a700 1 --2- 192.168.123.102:0/2604658275 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd14c038250 0x7fd14c03a710 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.313+0000 7fd1682ef700 1 -- 192.168.123.102:0/2604658275 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd16004fa90 con 0x7fd16019c9a0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.315+0000 7fd16588a700 1 --2- 192.168.123.102:0/2604658275 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd14c038250 0x7fd14c03a710 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fd15c006fd0 tx=0x7fd15c006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.316+0000 7fd1577fe700 1 -- 192.168.123.102:0/2604658275 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fd150003df0 con 0x7fd16019c9a0 2026-03-10T10:14:25.482 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.423+0000 7fd1682ef700 1 -- 192.168.123.102:0/2604658275 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}) v1 -- 0x7fd16002d070 con 0x7fd14c038250 2026-03-10T10:14:25.483 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.429+0000 7fd1577fe700 1 -- 192.168.123.102:0/2604658275 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+0 (secure 0 0 0) 0x7fd16002d070 con 0x7fd14c038250 2026-03-10T10:14:25.483 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.434+0000 7fd1682ef700 1 -- 192.168.123.102:0/2604658275 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd14c038250 msgr2=0x7fd14c03a710 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:25.483 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.434+0000 7fd1682ef700 1 --2- 192.168.123.102:0/2604658275 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd14c038250 0x7fd14c03a710 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fd15c006fd0 tx=0x7fd15c006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:25.483 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.434+0000 7fd1682ef700 1 -- 192.168.123.102:0/2604658275 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd16019c9a0 msgr2=0x7fd16019cdc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:25.483 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.434+0000 7fd1682ef700 1 --2- 192.168.123.102:0/2604658275 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd16019c9a0 0x7fd16019cdc0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fd15000bd00 tx=0x7fd15000bde0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.483 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.434+0000 7fd1682ef700 1 -- 192.168.123.102:0/2604658275 shutdown_connections 2026-03-10T10:14:25.483 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.434+0000 7fd1682ef700 1 --2- 192.168.123.102:0/2604658275 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd14c038250 0x7fd14c03a710 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.483 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.434+0000 7fd1682ef700 1 --2- 192.168.123.102:0/2604658275 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd16019c9a0 0x7fd16019cdc0 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.483 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.434+0000 7fd1682ef700 1 -- 192.168.123.102:0/2604658275 >> 192.168.123.102:0/2604658275 conn(0x7fd160100bd0 msgr2=0x7fd160103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:25.483 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.434+0000 7fd1682ef700 1 -- 192.168.123.102:0/2604658275 shutdown_connections 
2026-03-10T10:14:25.483 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.435+0000 7fd1682ef700 1 -- 192.168.123.102:0/2604658275 wait complete. 2026-03-10T10:14:25.696 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:25 vm02 ceph-mon[50200]: Found migration_current of "None". Setting to last migration. 2026-03-10T10:14:25.696 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:25 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:25.696 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:25 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:25.696 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:25 vm02 ceph-mon[50200]: [10/Mar/2026:10:14:24] ENGINE Bus STARTING 2026-03-10T10:14:25.696 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:25 vm02 ceph-mon[50200]: [10/Mar/2026:10:14:24] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T10:14:25.696 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:25 vm02 ceph-mon[50200]: [10/Mar/2026:10:14:24] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T10:14:25.696 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:25 vm02 ceph-mon[50200]: [10/Mar/2026:10:14:24] ENGINE Bus STARTED 2026-03-10T10:14:25.696 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:25 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:25.696 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:25 vm02 ceph-mon[50200]: mgrmap e7: vm02.zmavgl(active, since 1.00967s) 2026-03-10T10:14:25.696 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:25 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:25.696 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:25 vm02 ceph-mon[50200]: from='mgr.14120 
192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.586+0000 7facf0af8700 1 Processor -- start 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.586+0000 7facf0af8700 1 -- start start 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.586+0000 7facf0af8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 0x7facec079240 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.587+0000 7facf0af8700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7facec079810 con 0x7facec07ade0 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.587+0000 7facea59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 0x7facec079240 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.587+0000 7facea59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 0x7facec079240 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34380/0 (socket says 192.168.123.102:34380) 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.587+0000 
7facea59c700 1 -- 192.168.123.102:0/1894359878 learned_addr learned my addr 192.168.123.102:0/1894359878 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.587+0000 7facea59c700 1 -- 192.168.123.102:0/1894359878 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7facec079950 con 0x7facec07ade0 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.588+0000 7facea59c700 1 --2- 192.168.123.102:0/1894359878 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 0x7facec079240 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7facd4009cf0 tx=0x7facd400b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=fa408e6fdf904f35 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.588+0000 7face959a700 1 -- 192.168.123.102:0/1894359878 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7facd4004030 con 0x7facec07ade0 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.588+0000 7face959a700 1 -- 192.168.123.102:0/1894359878 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7facd400b810 con 0x7facec07ade0 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.588+0000 7facf0af8700 1 -- 192.168.123.102:0/1894359878 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 msgr2=0x7facec079240 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.588+0000 7facf0af8700 1 --2- 192.168.123.102:0/1894359878 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 0x7facec079240 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7facd4009cf0 tx=0x7facd400b0e0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.588+0000 7facf0af8700 1 -- 192.168.123.102:0/1894359878 shutdown_connections 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.588+0000 7facf0af8700 1 --2- 192.168.123.102:0/1894359878 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 0x7facec079240 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.588+0000 7facf0af8700 1 -- 192.168.123.102:0/1894359878 >> 192.168.123.102:0/1894359878 conn(0x7facec101ce0 msgr2=0x7facec104140 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.589+0000 7facf0af8700 1 -- 192.168.123.102:0/1894359878 shutdown_connections 2026-03-10T10:14:25.731 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.589+0000 7facf0af8700 1 -- 192.168.123.102:0/1894359878 wait complete. 
2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.589+0000 7facf0af8700 1 Processor -- start 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.589+0000 7facf0af8700 1 -- start start 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.590+0000 7facf0af8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 0x7facec1a0e60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.590+0000 7facf0af8700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7facec079810 con 0x7facec07ade0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.590+0000 7facea59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 0x7facec1a0e60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.590+0000 7facea59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 0x7facec1a0e60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34396/0 (socket says 192.168.123.102:34396) 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.590+0000 7facea59c700 1 -- 192.168.123.102:0/1364627091 learned_addr learned my addr 192.168.123.102:0/1364627091 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:25.732 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.590+0000 7facea59c700 1 -- 192.168.123.102:0/1364627091 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7facd4009740 con 0x7facec07ade0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.590+0000 7facea59c700 1 --2- 192.168.123.102:0/1364627091 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 0x7facec1a0e60 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7facd400bdb0 tx=0x7facd400be90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.591+0000 7face37fe700 1 -- 192.168.123.102:0/1364627091 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7facd4003f10 con 0x7facec07ade0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.591+0000 7face37fe700 1 -- 192.168.123.102:0/1364627091 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7facd4004510 con 0x7facec07ade0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.591+0000 7face37fe700 1 -- 192.168.123.102:0/1364627091 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7facd401ad20 con 0x7facec07ade0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.591+0000 7facf0af8700 1 -- 192.168.123.102:0/1364627091 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7facec1a13a0 con 0x7facec07ade0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.591+0000 7facf0af8700 1 
-- 192.168.123.102:0/1364627091 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7facec1a17c0 con 0x7facec07ade0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.591+0000 7face37fe700 1 -- 192.168.123.102:0/1364627091 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 7) v1 ==== 45172+0+0 (secure 0 0 0) 0x7facd4011420 con 0x7facec07ade0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.592+0000 7face37fe700 1 --2- 192.168.123.102:0/1364627091 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7facd8038270 0x7facd803a730 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.592+0000 7face37fe700 1 -- 192.168.123.102:0/1364627091 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7facd404c9f0 con 0x7facec07ade0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.592+0000 7face9d9b700 1 --2- 192.168.123.102:0/1364627091 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7facd8038270 0x7facd803a730 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.592+0000 7facf0af8700 1 -- 192.168.123.102:0/1364627091 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7facec19b200 con 0x7facec07ade0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.595+0000 7face9d9b700 1 --2- 192.168.123.102:0/1364627091 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7facd8038270 0x7facd803a730 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7facdc006fd0 tx=0x7facdc006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.595+0000 7face37fe700 1 -- 192.168.123.102:0/1364627091 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7facd40116d0 con 0x7facec07ade0 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.693+0000 7facf0af8700 1 -- 192.168.123.102:0/1364627091 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}) v1 -- 0x7facec1a1a70 con 0x7facd8038270 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.696+0000 7face37fe700 1 -- 192.168.123.102:0/1364627091 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+16 (secure 0 0 0) 0x7facec1a1a70 con 0x7facd8038270 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.697+0000 7facf0af8700 1 -- 192.168.123.102:0/1364627091 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7facd8038270 msgr2=0x7facd803a730 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.697+0000 7facf0af8700 1 --2- 192.168.123.102:0/1364627091 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7facd8038270 0x7facd803a730 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7facdc006fd0 tx=0x7facdc006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:25.732 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.697+0000 7facf0af8700 1 -- 192.168.123.102:0/1364627091 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 msgr2=0x7facec1a0e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.697+0000 7facf0af8700 1 --2- 192.168.123.102:0/1364627091 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 0x7facec1a0e60 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7facd400bdb0 tx=0x7facd400be90 comp rx=0 tx=0).stop 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.698+0000 7facf0af8700 1 -- 192.168.123.102:0/1364627091 shutdown_connections 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.698+0000 7facf0af8700 1 --2- 192.168.123.102:0/1364627091 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7facd8038270 0x7facd803a730 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.698+0000 7facf0af8700 1 --2- 192.168.123.102:0/1364627091 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facec07ade0 0x7facec1a0e60 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.698+0000 7facf0af8700 1 -- 192.168.123.102:0/1364627091 >> 192.168.123.102:0/1364627091 conn(0x7facec101ce0 msgr2=0x7facec1029c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.698+0000 7facf0af8700 1 -- 192.168.123.102:0/1364627091 shutdown_connections 
2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.698+0000 7facf0af8700 1 -- 192.168.123.102:0/1364627091 wait complete. 2026-03-10T10:14:25.732 INFO:teuthology.orchestra.run.vm02.stdout:Generating ssh key... 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.847+0000 7f409d700700 1 Processor -- start 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.847+0000 7f409d700700 1 -- start start 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.847+0000 7f409d700700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4098105650 0x7f4098105a70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.847+0000 7f409d700700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4098106040 con 0x7f4098105650 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.847+0000 7f4096ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4098105650 0x7f4098105a70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.848+0000 7f4096ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4098105650 0x7f4098105a70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34398/0 (socket says 192.168.123.102:34398) 2026-03-10T10:14:26.135 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.848+0000 7f4096ffd700 1 -- 192.168.123.102:0/1961686547 learned_addr learned my addr 192.168.123.102:0/1961686547 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.848+0000 7f4096ffd700 1 -- 192.168.123.102:0/1961686547 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4098106850 con 0x7f4098105650 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.848+0000 7f4096ffd700 1 --2- 192.168.123.102:0/1961686547 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4098105650 0x7f4098105a70 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f4080009a90 tx=0x7f4080009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=6692832a9f7a5b3e server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.848+0000 7f4095ffb700 1 -- 192.168.123.102:0/1961686547 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f4080004030 con 0x7f4098105650 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.848+0000 7f4095ffb700 1 -- 192.168.123.102:0/1961686547 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f408000b7e0 con 0x7f4098105650 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.849+0000 7f409d700700 1 -- 192.168.123.102:0/1961686547 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4098105650 msgr2=0x7f4098105a70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.849+0000 
7f409d700700 1 --2- 192.168.123.102:0/1961686547 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4098105650 0x7f4098105a70 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f4080009a90 tx=0x7f4080009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.849+0000 7f409d700700 1 -- 192.168.123.102:0/1961686547 shutdown_connections 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.849+0000 7f409d700700 1 --2- 192.168.123.102:0/1961686547 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4098105650 0x7f4098105a70 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.849+0000 7f409d700700 1 -- 192.168.123.102:0/1961686547 >> 192.168.123.102:0/1961686547 conn(0x7f4098100bd0 msgr2=0x7f4098103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.849+0000 7f409d700700 1 -- 192.168.123.102:0/1961686547 shutdown_connections 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.849+0000 7f409d700700 1 -- 192.168.123.102:0/1961686547 wait complete. 
2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.850+0000 7f409d700700 1 Processor -- start 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.850+0000 7f409d700700 1 -- start start 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.850+0000 7f409d700700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f409819c800 0x7f409819cc20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.850+0000 7f4096ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f409819c800 0x7f409819cc20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.850+0000 7f4096ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f409819c800 0x7f409819cc20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34404/0 (socket says 192.168.123.102:34404) 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.850+0000 7f4096ffd700 1 -- 192.168.123.102:0/171666945 learned_addr learned my addr 192.168.123.102:0/171666945 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.850+0000 7f409d700700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4098106040 con 0x7f409819c800 2026-03-10T10:14:26.135 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.850+0000 7f4096ffd700 1 -- 192.168.123.102:0/171666945 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4080009740 con 0x7f409819c800 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.851+0000 7f4096ffd700 1 --2- 192.168.123.102:0/171666945 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f409819c800 0x7f409819cc20 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f408000bd00 tx=0x7f408000bde0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.851+0000 7f408ffff700 1 -- 192.168.123.102:0/171666945 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f4080003ec0 con 0x7f409819c800 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.851+0000 7f408ffff700 1 -- 192.168.123.102:0/171666945 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f40800044c0 con 0x7f409819c800 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.851+0000 7f408ffff700 1 -- 192.168.123.102:0/171666945 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f408001ac80 con 0x7f409819c800 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.851+0000 7f409d700700 1 -- 192.168.123.102:0/171666945 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f409819d160 con 0x7f409819c800 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.851+0000 7f409d700700 1 -- 
192.168.123.102:0/171666945 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f409819fde0 con 0x7f409819c800 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.852+0000 7f408ffff700 1 -- 192.168.123.102:0/171666945 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 7) v1 ==== 45172+0+0 (secure 0 0 0) 0x7f4080011420 con 0x7f409819c800 2026-03-10T10:14:26.135 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.852+0000 7f408ffff700 1 --2- 192.168.123.102:0/171666945 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4084038270 0x7f408403a730 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.852+0000 7f408ffff700 1 -- 192.168.123.102:0/171666945 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f408004c5f0 con 0x7f409819c800 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.852+0000 7f40967fc700 1 --2- 192.168.123.102:0/171666945 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4084038270 0x7f408403a730 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.853+0000 7f409d700700 1 -- 192.168.123.102:0/171666945 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f409804fa90 con 0x7f409819c800 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.853+0000 7f40967fc700 1 --2- 192.168.123.102:0/171666945 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4084038270 0x7f408403a730 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f4088006fd0 tx=0x7f4088006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.855+0000 7f408ffff700 1 -- 192.168.123.102:0/171666945 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f408001ade0 con 0x7f409819c800 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:25.952+0000 7f409d700700 1 -- 192.168.123.102:0/171666945 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}) v1 -- 0x7f4098102ba0 con 0x7f4084038270 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.107+0000 7f408ffff700 1 -- 192.168.123.102:0/171666945 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+0 (secure 0 0 0) 0x7f4098102ba0 con 0x7f4084038270 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.109+0000 7f409d700700 1 -- 192.168.123.102:0/171666945 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4084038270 msgr2=0x7f408403a730 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.109+0000 7f409d700700 1 --2- 192.168.123.102:0/171666945 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4084038270 0x7f408403a730 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f4088006fd0 tx=0x7f4088006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:26.136 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.109+0000 7f409d700700 1 -- 192.168.123.102:0/171666945 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f409819c800 msgr2=0x7f409819cc20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.109+0000 7f409d700700 1 --2- 192.168.123.102:0/171666945 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f409819c800 0x7f409819cc20 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f408000bd00 tx=0x7f408000bde0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.109+0000 7f409d700700 1 -- 192.168.123.102:0/171666945 shutdown_connections 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.109+0000 7f409d700700 1 --2- 192.168.123.102:0/171666945 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4084038270 0x7f408403a730 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.109+0000 7f409d700700 1 --2- 192.168.123.102:0/171666945 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f409819c800 0x7f409819cc20 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.110+0000 7f409d700700 1 -- 192.168.123.102:0/171666945 >> 192.168.123.102:0/171666945 conn(0x7f4098100bd0 msgr2=0x7f40980740f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.110+0000 7f409d700700 1 -- 192.168.123.102:0/171666945 shutdown_connections 
2026-03-10T10:14:26.136 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.110+0000 7f409d700700 1 -- 192.168.123.102:0/171666945 wait complete. 2026-03-10T10:14:26.400 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRVxTDdDyFKO6pXsaJyfSFxLj/hX5KHgxPxcbv7r5nyOeZV/Kesbgr1xq1DsxwRPjKqRhK3EAdfixQIRb43FDaxDCmoO8tVeEejwwdslkgW35yd9ymzGbI2u8vUr+wgW0Mx7wO3kZzLuPRxCezlDRFa8AvO4FBvnQyQWB3v419ZwbdRQkgWGhxON6Uqo2pS+QVmdeFwxf+9RJ9dWBTtUGFQp0avKehk/57Ca/RozqVoqIb39BJXsJBkWH9NVrme1g95fRLcM7KUGx1+zGGLR8mQsFA5Xb7v8wd+bXX8su4TmvObAx8BSeBRHa4M4OV5pYcUb2GRnxdabPYMP2Vb4+QQLHDqWcvieg/LDmNA15wXBEk5cM1yoNp8gefoUYt9oQy2EBs/gIOPn4OSD01RHxNSywUJl9U6aXZiCMj4omdS6fsAF9DqzsvBqsw77AUkPUV44EuxYluImt44gG+AMYMkz0GtcpML7ESn97mm78fkaG5EUgRfl+oj9fmCzfUvyU= ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:14:26.400 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.248+0000 7f1aa8f8d700 1 Processor -- start 2026-03-10T10:14:26.400 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.249+0000 7f1aa8f8d700 1 -- start start 2026-03-10T10:14:26.400 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.249+0000 7f1aa8f8d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa4108980 0x7f1aa4108da0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:26.400 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.249+0000 7f1aa8f8d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1aa4109370 con 0x7f1aa4108980 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.249+0000 7f1aa259c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa4108980 0x7f1aa4108da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 
l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.249+0000 7f1aa259c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa4108980 0x7f1aa4108da0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34412/0 (socket says 192.168.123.102:34412) 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.249+0000 7f1aa259c700 1 -- 192.168.123.102:0/1080131882 learned_addr learned my addr 192.168.123.102:0/1080131882 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.249+0000 7f1aa259c700 1 -- 192.168.123.102:0/1080131882 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1aa4109b80 con 0x7f1aa4108980 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.249+0000 7f1aa259c700 1 --2- 192.168.123.102:0/1080131882 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa4108980 0x7f1aa4108da0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f1a8c01ad90 tx=0x7f1a8c01c3d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=e4ffe532a444d521 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.250+0000 7f1aa159a700 1 -- 192.168.123.102:0/1080131882 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1a8c01c9e0 con 0x7f1aa4108980 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.250+0000 7f1aa159a700 1 -- 192.168.123.102:0/1080131882 <== mon.0 
v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f1a8c004030 con 0x7f1aa4108980 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.251+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/1080131882 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa4108980 msgr2=0x7f1aa4108da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.251+0000 7f1aa8f8d700 1 --2- 192.168.123.102:0/1080131882 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa4108980 0x7f1aa4108da0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f1a8c01ad90 tx=0x7f1a8c01c3d0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.251+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/1080131882 shutdown_connections 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.251+0000 7f1aa8f8d700 1 --2- 192.168.123.102:0/1080131882 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa4108980 0x7f1aa4108da0 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.251+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/1080131882 >> 192.168.123.102:0/1080131882 conn(0x7f1aa41044d0 msgr2=0x7f1aa41068c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.251+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/1080131882 shutdown_connections 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.251+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/1080131882 wait complete. 
2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.252+0000 7f1aa8f8d700 1 Processor -- start 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.252+0000 7f1aa8f8d700 1 -- start start 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.252+0000 7f1aa8f8d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa407c690 0x7f1aa407cab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.252+0000 7f1aa8f8d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1aa4109370 con 0x7f1aa407c690 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.252+0000 7f1aa259c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa407c690 0x7f1aa407cab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.252+0000 7f1aa259c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa407c690 0x7f1aa407cab0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34426/0 (socket says 192.168.123.102:34426) 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.252+0000 7f1aa259c700 1 -- 192.168.123.102:0/2529962679 learned_addr learned my addr 192.168.123.102:0/2529962679 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:26.401 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.253+0000 7f1aa259c700 1 -- 192.168.123.102:0/2529962679 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1a8c01a7e0 con 0x7f1aa407c690 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.253+0000 7f1aa259c700 1 --2- 192.168.123.102:0/2529962679 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa407c690 0x7f1aa407cab0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f1a8c01ad60 tx=0x7f1a8c004070 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.253+0000 7f1a9b7fe700 1 -- 192.168.123.102:0/2529962679 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1a8c0041d0 con 0x7f1aa407c690 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.253+0000 7f1a9b7fe700 1 -- 192.168.123.102:0/2529962679 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f1a8c02c440 con 0x7f1aa407c690 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.253+0000 7f1a9b7fe700 1 -- 192.168.123.102:0/2529962679 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1a8c022430 con 0x7f1aa407c690 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.253+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/2529962679 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1aa410aa00 con 0x7f1aa407c690 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.254+0000 7f1aa8f8d700 1 
-- 192.168.123.102:0/2529962679 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1aa407d220 con 0x7f1aa407c690 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.254+0000 7f1a9b7fe700 1 -- 192.168.123.102:0/2529962679 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 7) v1 ==== 45172+0+0 (secure 0 0 0) 0x7f1a8c0226a0 con 0x7f1aa407c690 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.254+0000 7f1a9b7fe700 1 --2- 192.168.123.102:0/2529962679 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1a90038200 0x7f1a9003a6c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.254+0000 7f1a9b7fe700 1 -- 192.168.123.102:0/2529962679 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f1a8c05dca0 con 0x7f1aa407c690 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.254+0000 7f1aa1d9b700 1 --2- 192.168.123.102:0/2529962679 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1a90038200 0x7f1a9003a6c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.255+0000 7f1aa1d9b700 1 --2- 192.168.123.102:0/2529962679 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1a90038200 0x7f1a9003a6c0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f1a94006fd0 tx=0x7f1a94006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:14:26.255+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/2529962679 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1aa40623c0 con 0x7f1aa407c690 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.258+0000 7f1a9b7fe700 1 -- 192.168.123.102:0/2529962679 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f1a8c03e430 con 0x7f1aa407c690 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.361+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/2529962679 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}) v1 -- 0x7f1aa4070c50 con 0x7f1a90038200 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.362+0000 7f1a9b7fe700 1 -- 192.168.123.102:0/2529962679 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+595 (secure 0 0 0) 0x7f1aa4070c50 con 0x7f1a90038200 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.363+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/2529962679 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1a90038200 msgr2=0x7f1a9003a6c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.363+0000 7f1aa8f8d700 1 --2- 192.168.123.102:0/2529962679 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1a90038200 0x7f1a9003a6c0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f1a94006fd0 tx=0x7f1a94006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:14:26.363+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/2529962679 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa407c690 msgr2=0x7f1aa407cab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.363+0000 7f1aa8f8d700 1 --2- 192.168.123.102:0/2529962679 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa407c690 0x7f1aa407cab0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f1a8c01ad60 tx=0x7f1a8c004070 comp rx=0 tx=0).stop 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.364+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/2529962679 shutdown_connections 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.364+0000 7f1aa8f8d700 1 --2- 192.168.123.102:0/2529962679 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1a90038200 0x7f1a9003a6c0 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.364+0000 7f1aa8f8d700 1 --2- 192.168.123.102:0/2529962679 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1aa407c690 0x7f1aa407cab0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.364+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/2529962679 >> 192.168.123.102:0/2529962679 conn(0x7f1aa41044d0 msgr2=0x7f1aa406b8a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.364+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/2529962679 shutdown_connections 2026-03-10T10:14:26.401 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.364+0000 7f1aa8f8d700 1 -- 192.168.123.102:0/2529962679 wait complete. 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:Adding key to root@localhost authorized_keys... 2026-03-10T10:14:26.401 INFO:teuthology.orchestra.run.vm02.stdout:Adding host vm02... 2026-03-10T10:14:27.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:26 vm02 ceph-mon[50200]: from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T10:14:27.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:26 vm02 ceph-mon[50200]: from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T10:14:27.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:26 vm02 ceph-mon[50200]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:27.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:26 vm02 ceph-mon[50200]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:27.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:26 vm02 ceph-mon[50200]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:27.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:26 vm02 ceph-mon[50200]: Generating ssh key... 
2026-03-10T10:14:27.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:26 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:27.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:26 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:28.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:27 vm02 ceph-mon[50200]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:28.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:27 vm02 ceph-mon[50200]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "addr": "192.168.123.102", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:28.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:27 vm02 ceph-mon[50200]: mgrmap e8: vm02.zmavgl(active, since 2s) 2026-03-10T10:14:28.397 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Added host 'vm02' with addr '192.168.123.102' 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.520+0000 7f601a211700 1 Processor -- start 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.520+0000 7f601a211700 1 -- start start 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.520+0000 7f601a211700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 0x7f60141095b0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.520+0000 7f601a211700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6014074720 con 0x7f60141071c0 2026-03-10T10:14:28.398 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.521+0000 7f60137fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 0x7f60141095b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.521+0000 7f60137fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 0x7f60141095b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34438/0 (socket says 192.168.123.102:34438) 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.521+0000 7f60137fe700 1 -- 192.168.123.102:0/2563185233 learned_addr learned my addr 192.168.123.102:0/2563185233 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.521+0000 7f60137fe700 1 -- 192.168.123.102:0/2563185233 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6014109af0 con 0x7f60141071c0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.521+0000 7f60137fe700 1 --2- 192.168.123.102:0/2563185233 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 0x7f60141095b0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f5ffc009cf0 tx=0x7f5ffc00b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=f32a5520f3d04a1a server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.522+0000 7f60127fc700 1 -- 192.168.123.102:0/2563185233 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map 
magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5ffc004030 con 0x7f60141071c0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.522+0000 7f60127fc700 1 -- 192.168.123.102:0/2563185233 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f5ffc00b810 con 0x7f60141071c0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.522+0000 7f60127fc700 1 -- 192.168.123.102:0/2563185233 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5ffc003a90 con 0x7f60141071c0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.522+0000 7f601a211700 1 -- 192.168.123.102:0/2563185233 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 msgr2=0x7f60141095b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.522+0000 7f601a211700 1 --2- 192.168.123.102:0/2563185233 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 0x7f60141095b0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f5ffc009cf0 tx=0x7f5ffc00b0e0 comp rx=0 tx=0).stop 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.522+0000 7f601a211700 1 -- 192.168.123.102:0/2563185233 shutdown_connections 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.522+0000 7f601a211700 1 --2- 192.168.123.102:0/2563185233 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 0x7f60141095b0 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.522+0000 7f601a211700 1 -- 
192.168.123.102:0/2563185233 >> 192.168.123.102:0/2563185233 conn(0x7f6014100bd0 msgr2=0x7f6014103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.522+0000 7f601a211700 1 -- 192.168.123.102:0/2563185233 shutdown_connections 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.522+0000 7f601a211700 1 -- 192.168.123.102:0/2563185233 wait complete. 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.523+0000 7f601a211700 1 Processor -- start 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.523+0000 7f601a211700 1 -- start start 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.523+0000 7f601a211700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 0x7f601419c650 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.523+0000 7f601a211700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f601419cb90 con 0x7f60141071c0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.524+0000 7f60137fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 0x7f601419c650 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.524+0000 7f60137fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 0x7f601419c650 unknown :-1 s=HELLO_CONNECTING 
pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:34446/0 (socket says 192.168.123.102:34446) 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.524+0000 7f60137fe700 1 -- 192.168.123.102:0/580815073 learned_addr learned my addr 192.168.123.102:0/580815073 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.524+0000 7f60137fe700 1 -- 192.168.123.102:0/580815073 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5ffc009740 con 0x7f60141071c0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.524+0000 7f60137fe700 1 --2- 192.168.123.102:0/580815073 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 0x7f601419c650 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f5ffc003d50 tx=0x7f5ffc003e30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.524+0000 7f6010ff9700 1 -- 192.168.123.102:0/580815073 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5ffc004150 con 0x7f60141071c0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.524+0000 7f601a211700 1 -- 192.168.123.102:0/580815073 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f601419cd90 con 0x7f60141071c0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.525+0000 7f601a211700 1 -- 192.168.123.102:0/580815073 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 
0x7f601419d1b0 con 0x7f60141071c0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.525+0000 7f6010ff9700 1 -- 192.168.123.102:0/580815073 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f5ffc0042b0 con 0x7f60141071c0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.525+0000 7f6010ff9700 1 -- 192.168.123.102:0/580815073 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5ffc011420 con 0x7f60141071c0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.526+0000 7f6010ff9700 1 -- 192.168.123.102:0/580815073 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 7) v1 ==== 45172+0+0 (secure 0 0 0) 0x7f5ffc011690 con 0x7f60141071c0 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.526+0000 7f6010ff9700 1 --2- 192.168.123.102:0/580815073 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6000038220 0x7f600003a6e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:28.398 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.526+0000 7f6010ff9700 1 -- 192.168.123.102:0/580815073 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f5ffc04bbf0 con 0x7f60141071c0 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.526+0000 7f6012ffd700 1 --2- 192.168.123.102:0/580815073 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6000038220 0x7f600003a6e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:28.399 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.526+0000 7f6012ffd700 1 --2- 192.168.123.102:0/580815073 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6000038220 0x7f600003a6e0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f6004006fd0 tx=0x7f6004006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.526+0000 7f601a211700 1 -- 192.168.123.102:0/580815073 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5ff4005320 con 0x7f60141071c0 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.529+0000 7f6010ff9700 1 -- 192.168.123.102:0/580815073 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f5ffc020070 con 0x7f60141071c0 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:26.628+0000 7f601a211700 1 -- 192.168.123.102:0/580815073 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm02", "addr": "192.168.123.102", "target": ["mon-mgr", ""]}) v1 -- 0x7f5ff4000bf0 con 0x7f6000038220 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:27.109+0000 7f6010ff9700 1 -- 192.168.123.102:0/580815073 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mgrmap(e 8) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f5ffc004420 con 0x7f60141071c0 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.362+0000 7f6010ff9700 1 -- 192.168.123.102:0/580815073 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+46 (secure 
0 0 0) 0x7f5ff4000bf0 con 0x7f6000038220 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.365+0000 7f601a211700 1 -- 192.168.123.102:0/580815073 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6000038220 msgr2=0x7f600003a6e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.365+0000 7f601a211700 1 --2- 192.168.123.102:0/580815073 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6000038220 0x7f600003a6e0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f6004006fd0 tx=0x7f6004006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.365+0000 7f601a211700 1 -- 192.168.123.102:0/580815073 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 msgr2=0x7f601419c650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.365+0000 7f601a211700 1 --2- 192.168.123.102:0/580815073 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 0x7f601419c650 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f5ffc003d50 tx=0x7f5ffc003e30 comp rx=0 tx=0).stop 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.365+0000 7f601a211700 1 -- 192.168.123.102:0/580815073 shutdown_connections 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.365+0000 7f601a211700 1 --2- 192.168.123.102:0/580815073 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6000038220 0x7f600003a6e0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:28.399 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.365+0000 7f601a211700 1 --2- 192.168.123.102:0/580815073 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60141071c0 0x7f601419c650 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.365+0000 7f601a211700 1 -- 192.168.123.102:0/580815073 >> 192.168.123.102:0/580815073 conn(0x7f6014100bd0 msgr2=0x7f60141018c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.366+0000 7f601a211700 1 -- 192.168.123.102:0/580815073 shutdown_connections 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.366+0000 7f601a211700 1 -- 192.168.123.102:0/580815073 wait complete. 2026-03-10T10:14:28.399 INFO:teuthology.orchestra.run.vm02.stdout:Deploying mon service with default placement... 2026-03-10T10:14:28.735 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled mon update... 
2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.520+0000 7fb698d25700 1 Processor -- start 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.520+0000 7fb698d25700 1 -- start start 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.520+0000 7fb698d25700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 0x7fb694072410 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.520+0000 7fb698d25700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb6940729e0 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.520+0000 7fb69259c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 0x7fb694072410 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.520+0000 7fb69259c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 0x7fb694072410 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55062/0 (socket says 192.168.123.102:55062) 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.520+0000 7fb69259c700 1 -- 192.168.123.102:0/1839288475 learned_addr learned my addr 192.168.123.102:0/1839288475 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:28.736 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.520+0000 7fb69259c700 1 -- 192.168.123.102:0/1839288475 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb69410df70 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.520+0000 7fb69259c700 1 --2- 192.168.123.102:0/1839288475 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 0x7fb694072410 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fb684009a90 tx=0x7fb684009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=9fbb5241073fbad3 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb69159a700 1 -- 192.168.123.102:0/1839288475 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb684004030 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb69159a700 1 -- 192.168.123.102:0/1839288475 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fb68400b7e0 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb698d25700 1 -- 192.168.123.102:0/1839288475 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 msgr2=0x7fb694072410 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb698d25700 1 --2- 192.168.123.102:0/1839288475 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 0x7fb694072410 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fb684009a90 tx=0x7fb684009da0 comp rx=0 tx=0).stop 
2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb698d25700 1 -- 192.168.123.102:0/1839288475 shutdown_connections 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb698d25700 1 --2- 192.168.123.102:0/1839288475 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 0x7fb694072410 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb698d25700 1 -- 192.168.123.102:0/1839288475 >> 192.168.123.102:0/1839288475 conn(0x7fb69406d660 msgr2=0x7fb69406fac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb698d25700 1 -- 192.168.123.102:0/1839288475 shutdown_connections 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb698d25700 1 -- 192.168.123.102:0/1839288475 wait complete. 
2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb698d25700 1 Processor -- start 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb698d25700 1 -- start start 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb698d25700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 0x7fb69411d560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.521+0000 7fb698d25700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb6940729e0 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.522+0000 7fb69259c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 0x7fb69411d560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.522+0000 7fb69259c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 0x7fb69411d560 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55074/0 (socket says 192.168.123.102:55074) 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.522+0000 7fb69259c700 1 -- 192.168.123.102:0/1015220890 learned_addr learned my addr 192.168.123.102:0/1015220890 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:28.736 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.522+0000 7fb69259c700 1 -- 192.168.123.102:0/1015220890 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb684009740 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.522+0000 7fb69259c700 1 --2- 192.168.123.102:0/1015220890 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 0x7fb69411d560 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7fb684009710 tx=0x7fb684003f40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.522+0000 7fb6837fe700 1 -- 192.168.123.102:0/1015220890 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb6840043a0 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.522+0000 7fb698d25700 1 -- 192.168.123.102:0/1015220890 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb69411db00 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.522+0000 7fb698d25700 1 -- 192.168.123.102:0/1015220890 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb69411be20 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.523+0000 7fb6837fe700 1 -- 192.168.123.102:0/1015220890 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fb684004500 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.523+0000 7fb6837fe700 1 
-- 192.168.123.102:0/1015220890 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb6840115e0 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.524+0000 7fb6837fe700 1 -- 192.168.123.102:0/1015220890 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 8) v1 ==== 45278+0+0 (secure 0 0 0) 0x7fb684011800 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.524+0000 7fb6837fe700 1 --2- 192.168.123.102:0/1015220890 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb67c038410 0x7fb67c03a8d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.524+0000 7fb691d9b700 1 --2- 192.168.123.102:0/1015220890 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb67c038410 0x7fb67c03a8d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.524+0000 7fb6837fe700 1 -- 192.168.123.102:0/1015220890 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fb68404d0f0 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.524+0000 7fb698d25700 1 -- 192.168.123.102:0/1015220890 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb6740052f0 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.527+0000 7fb6837fe700 1 -- 192.168.123.102:0/1015220890 <== mon.0 v2:192.168.123.102:3300/0 
6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fb684011ab0 con 0x7fb694071ff0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.527+0000 7fb691d9b700 1 --2- 192.168.123.102:0/1015220890 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb67c038410 0x7fb67c03a8d0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fb688006fd0 tx=0x7fb688006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.642+0000 7fb698d25700 1 -- 192.168.123.102:0/1015220890 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}) v1 -- 0x7fb674000bc0 con 0x7fb67c038410 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.702+0000 7fb6837fe700 1 -- 192.168.123.102:0/1015220890 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+24 (secure 0 0 0) 0x7fb674000bc0 con 0x7fb67c038410 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.705+0000 7fb6817fa700 1 -- 192.168.123.102:0/1015220890 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb67c038410 msgr2=0x7fb67c03a8d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.705+0000 7fb6817fa700 1 --2- 192.168.123.102:0/1015220890 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb67c038410 0x7fb67c03a8d0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fb688006fd0 tx=0x7fb688006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:14:28.705+0000 7fb6817fa700 1 -- 192.168.123.102:0/1015220890 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 msgr2=0x7fb69411d560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.705+0000 7fb6817fa700 1 --2- 192.168.123.102:0/1015220890 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 0x7fb69411d560 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7fb684009710 tx=0x7fb684003f40 comp rx=0 tx=0).stop 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.706+0000 7fb6817fa700 1 -- 192.168.123.102:0/1015220890 shutdown_connections 2026-03-10T10:14:28.736 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.706+0000 7fb6817fa700 1 --2- 192.168.123.102:0/1015220890 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb67c038410 0x7fb67c03a8d0 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:28.737 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.706+0000 7fb6817fa700 1 --2- 192.168.123.102:0/1015220890 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb694071ff0 0x7fb69411d560 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:28.737 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.706+0000 7fb6817fa700 1 -- 192.168.123.102:0/1015220890 >> 192.168.123.102:0/1015220890 conn(0x7fb69406d660 msgr2=0x7fb69406f2e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:28.737 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.707+0000 7fb6817fa700 1 -- 192.168.123.102:0/1015220890 shutdown_connections 2026-03-10T10:14:28.737 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.707+0000 7fb6817fa700 1 -- 192.168.123.102:0/1015220890 wait complete. 2026-03-10T10:14:28.737 INFO:teuthology.orchestra.run.vm02.stdout:Deploying mgr service with default placement... 2026-03-10T10:14:28.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:28 vm02 ceph-mon[50200]: Deploying cephadm binary to vm02 2026-03-10T10:14:28.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:28 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:28.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:28 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:28.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:28 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:28.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:28 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:29.123 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 
2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.887+0000 7fbb35c57700 1 Processor -- start 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.887+0000 7fbb35c57700 1 -- start start 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.887+0000 7fbb35c57700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 0x7fbb300725f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.887+0000 7fbb35c57700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbb30072bc0 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.887+0000 7fbb34c55700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 0x7fbb300725f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.887+0000 7fbb34c55700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 0x7fbb300725f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55086/0 (socket says 192.168.123.102:55086) 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.887+0000 7fbb34c55700 1 -- 192.168.123.102:0/3688696048 learned_addr learned my addr 192.168.123.102:0/3688696048 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:29.124 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.888+0000 7fbb34c55700 1 -- 192.168.123.102:0/3688696048 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbb3010e1c0 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.888+0000 7fbb34c55700 1 --2- 192.168.123.102:0/3688696048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 0x7fbb300725f0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7fbb24009a90 tx=0x7fbb24009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=1daf5521cd3f61ee server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.888+0000 7fbb2f7fe700 1 -- 192.168.123.102:0/3688696048 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbb24004030 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.888+0000 7fbb2f7fe700 1 -- 192.168.123.102:0/3688696048 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fbb2400b7e0 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.888+0000 7fbb2f7fe700 1 -- 192.168.123.102:0/3688696048 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbb24003ad0 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.891+0000 7fbb35c57700 1 -- 192.168.123.102:0/3688696048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 msgr2=0x7fbb300725f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:28.891+0000 7fbb35c57700 1 --2- 192.168.123.102:0/3688696048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 0x7fbb300725f0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7fbb24009a90 tx=0x7fbb24009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.891+0000 7fbb35c57700 1 -- 192.168.123.102:0/3688696048 shutdown_connections 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.891+0000 7fbb35c57700 1 --2- 192.168.123.102:0/3688696048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 0x7fbb300725f0 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.891+0000 7fbb35c57700 1 -- 192.168.123.102:0/3688696048 >> 192.168.123.102:0/3688696048 conn(0x7fbb3006d320 msgr2=0x7fbb3006f760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.891+0000 7fbb35c57700 1 -- 192.168.123.102:0/3688696048 shutdown_connections 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.891+0000 7fbb35c57700 1 -- 192.168.123.102:0/3688696048 wait complete. 
2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.892+0000 7fbb35c57700 1 Processor -- start 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.892+0000 7fbb35c57700 1 -- start start 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.892+0000 7fbb35c57700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 0x7fbb301a3d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.892+0000 7fbb35c57700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbb301a42c0 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.892+0000 7fbb34c55700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 0x7fbb301a3d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.892+0000 7fbb34c55700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 0x7fbb301a3d80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55098/0 (socket says 192.168.123.102:55098) 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.892+0000 7fbb34c55700 1 -- 192.168.123.102:0/3305725736 learned_addr learned my addr 192.168.123.102:0/3305725736 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:29.124 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.893+0000 7fbb34c55700 1 -- 192.168.123.102:0/3305725736 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbb24009740 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.894+0000 7fbb34c55700 1 --2- 192.168.123.102:0/3305725736 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 0x7fbb301a3d80 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7fbb24006b20 tx=0x7fbb24003fe0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.896+0000 7fbb2dffb700 1 -- 192.168.123.102:0/3305725736 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbb24004490 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.896+0000 7fbb2dffb700 1 -- 192.168.123.102:0/3305725736 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fbb240178c0 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.896+0000 7fbb2dffb700 1 -- 192.168.123.102:0/3305725736 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbb2401f830 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.896+0000 7fbb35c57700 1 -- 192.168.123.102:0/3305725736 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbb301a44c0 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.896+0000 7fbb35c57700 1 
-- 192.168.123.102:0/3305725736 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbb301a09b0 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.897+0000 7fbb2dffb700 1 -- 192.168.123.102:0/3305725736 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 8) v1 ==== 45278+0+0 (secure 0 0 0) 0x7fbb24017420 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.897+0000 7fbb35c57700 1 -- 192.168.123.102:0/3305725736 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbb301a4650 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.898+0000 7fbb2dffb700 1 --2- 192.168.123.102:0/3305725736 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbb20038090 0x7fbb2003a550 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.899+0000 7fbb2ffff700 1 --2- 192.168.123.102:0/3305725736 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbb20038090 0x7fbb2003a550 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.901+0000 7fbb2dffb700 1 -- 192.168.123.102:0/3305725736 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fbb24013070 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.903+0000 7fbb2dffb700 1 -- 192.168.123.102:0/3305725736 <== mon.0 
v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fbb24027540 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.904+0000 7fbb2ffff700 1 --2- 192.168.123.102:0/3305725736 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbb20038090 0x7fbb2003a550 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7fbb1c006fd0 tx=0x7fbb1c006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:28.904+0000 7fbb2dffb700 1 -- 192.168.123.102:0/3305725736 <== mon.0 v2:192.168.123.102:3300/0 7 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fbb2404b880 con 0x7fbb300721d0 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.031+0000 7fbb35c57700 1 -- 192.168.123.102:0/3305725736 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}) v1 -- 0x7fbb30062460 con 0x7fbb20038090 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.034+0000 7fbb2dffb700 1 -- 192.168.123.102:0/3305725736 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+24 (secure 0 0 0) 0x7fbb30062460 con 0x7fbb20038090 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.037+0000 7fbb35c57700 1 -- 192.168.123.102:0/3305725736 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbb20038090 msgr2=0x7fbb2003a550 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.037+0000 7fbb35c57700 
1 --2- 192.168.123.102:0/3305725736 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbb20038090 0x7fbb2003a550 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7fbb1c006fd0 tx=0x7fbb1c006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.037+0000 7fbb35c57700 1 -- 192.168.123.102:0/3305725736 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 msgr2=0x7fbb301a3d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.037+0000 7fbb35c57700 1 --2- 192.168.123.102:0/3305725736 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 0x7fbb301a3d80 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7fbb24006b20 tx=0x7fbb24003fe0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.037+0000 7fbb35c57700 1 -- 192.168.123.102:0/3305725736 shutdown_connections 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.037+0000 7fbb35c57700 1 --2- 192.168.123.102:0/3305725736 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbb20038090 0x7fbb2003a550 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.037+0000 7fbb35c57700 1 --2- 192.168.123.102:0/3305725736 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb300721d0 0x7fbb301a3d80 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.037+0000 7fbb35c57700 1 -- 192.168.123.102:0/3305725736 >> 192.168.123.102:0/3305725736 conn(0x7fbb3006d320 
msgr2=0x7fbb3006df60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.037+0000 7fbb35c57700 1 -- 192.168.123.102:0/3305725736 shutdown_connections 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.037+0000 7fbb35c57700 1 -- 192.168.123.102:0/3305725736 wait complete. 2026-03-10T10:14:29.124 INFO:teuthology.orchestra.run.vm02.stdout:Deploying crash service with default placement... 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled crash update... 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.259+0000 7f6273124700 1 Processor -- start 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.259+0000 7f6273124700 1 -- start start 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.259+0000 7f6273124700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 0x7f626c072220 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.259+0000 7f6273124700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f626c0727f0 con 0x7f626c071e00 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.260+0000 7f6272122700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 0x7f626c072220 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:29.260+0000 7f6272122700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 0x7f626c072220 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55102/0 (socket says 192.168.123.102:55102) 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.260+0000 7f6272122700 1 -- 192.168.123.102:0/2755641379 learned_addr learned my addr 192.168.123.102:0/2755641379 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.260+0000 7f6272122700 1 -- 192.168.123.102:0/2755641379 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f626c10ddb0 con 0x7f626c071e00 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.261+0000 7f6272122700 1 --2- 192.168.123.102:0/2755641379 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 0x7f626c072220 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f6268009a90 tx=0x7f6268009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=22964e7b699c4754 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.261+0000 7f6271120700 1 -- 192.168.123.102:0/2755641379 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f6268004030 con 0x7f626c071e00 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.261+0000 7f6271120700 1 -- 192.168.123.102:0/2755641379 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f626800b7e0 con 0x7f626c071e00 2026-03-10T10:14:29.471 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.261+0000 7f6271120700 1 -- 192.168.123.102:0/2755641379 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f6268003ae0 con 0x7f626c071e00 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.262+0000 7f6273124700 1 -- 192.168.123.102:0/2755641379 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 msgr2=0x7f626c072220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.262+0000 7f6273124700 1 --2- 192.168.123.102:0/2755641379 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 0x7f626c072220 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f6268009a90 tx=0x7f6268009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.262+0000 7f6273124700 1 -- 192.168.123.102:0/2755641379 shutdown_connections 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.262+0000 7f6273124700 1 --2- 192.168.123.102:0/2755641379 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 0x7f626c072220 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.262+0000 7f6273124700 1 -- 192.168.123.102:0/2755641379 >> 192.168.123.102:0/2755641379 conn(0x7f626c06d320 msgr2=0x7f626c06f760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.264+0000 7f6273124700 1 -- 192.168.123.102:0/2755641379 shutdown_connections 2026-03-10T10:14:29.471 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.265+0000 7f6273124700 1 -- 192.168.123.102:0/2755641379 wait complete. 2026-03-10T10:14:29.471 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.266+0000 7f6273124700 1 Processor -- start 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.268+0000 7f6273124700 1 -- start start 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.268+0000 7f6273124700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 0x7f626c1a90a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.268+0000 7f6273124700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f626c1a95e0 con 0x7f626c071e00 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.268+0000 7f6272122700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 0x7f626c1a90a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.268+0000 7f6272122700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 0x7f626c1a90a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55114/0 (socket says 192.168.123.102:55114) 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.268+0000 7f6272122700 1 -- 192.168.123.102:0/1645927475 learned_addr 
learned my addr 192.168.123.102:0/1645927475 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.268+0000 7f6272122700 1 -- 192.168.123.102:0/1645927475 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6268009740 con 0x7f626c071e00 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.268+0000 7f6272122700 1 --2- 192.168.123.102:0/1645927475 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 0x7f626c1a90a0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f6268009710 tx=0x7f6268004080 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.269+0000 7f625f7fe700 1 -- 192.168.123.102:0/1645927475 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f6268004220 con 0x7f626c071e00 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.269+0000 7f625f7fe700 1 -- 192.168.123.102:0/1645927475 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f6268004380 con 0x7f626c071e00 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.270+0000 7f6273124700 1 -- 192.168.123.102:0/1645927475 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f626c1a97e0 con 0x7f626c071e00 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.270+0000 7f6273124700 1 -- 192.168.123.102:0/1645927475 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f626c1a9c00 con 0x7f626c071e00 
2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.271+0000 7f6273124700 1 -- 192.168.123.102:0/1645927475 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f626c0623c0 con 0x7f626c071e00 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.273+0000 7f625f7fe700 1 -- 192.168.123.102:0/1645927475 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f6268011510 con 0x7f626c071e00 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.273+0000 7f625f7fe700 1 -- 192.168.123.102:0/1645927475 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 8) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f62680116f0 con 0x7f626c071e00 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.273+0000 7f625f7fe700 1 --2- 192.168.123.102:0/1645927475 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f62580380e0 0x7f625803a5a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.273+0000 7f625f7fe700 1 -- 192.168.123.102:0/1645927475 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f626805e210 con 0x7f626c071e00 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.274+0000 7f6271921700 1 --2- 192.168.123.102:0/1645927475 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f62580380e0 0x7f625803a5a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:29.472 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.274+0000 7f625f7fe700 1 -- 192.168.123.102:0/1645927475 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f6268060700 con 0x7f626c071e00 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.274+0000 7f6271921700 1 --2- 192.168.123.102:0/1645927475 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f62580380e0 0x7f625803a5a0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f6260006fd0 tx=0x7f6260006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.399+0000 7f6273124700 1 -- 192.168.123.102:0/1645927475 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}) v1 -- 0x7f626c06e5c0 con 0x7f62580380e0 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.403+0000 7f625f7fe700 1 -- 192.168.123.102:0/1645927475 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+26 (secure 0 0 0) 0x7f626c06e5c0 con 0x7f62580380e0 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.405+0000 7f6273124700 1 -- 192.168.123.102:0/1645927475 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f62580380e0 msgr2=0x7f625803a5a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.405+0000 7f6273124700 1 --2- 192.168.123.102:0/1645927475 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f62580380e0 0x7f625803a5a0 secure 
:-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f6260006fd0 tx=0x7f6260006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.406+0000 7f6273124700 1 -- 192.168.123.102:0/1645927475 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 msgr2=0x7f626c1a90a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.406+0000 7f6273124700 1 --2- 192.168.123.102:0/1645927475 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 0x7f626c1a90a0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f6268009710 tx=0x7f6268004080 comp rx=0 tx=0).stop 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.411+0000 7f6273124700 1 -- 192.168.123.102:0/1645927475 shutdown_connections 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.411+0000 7f6273124700 1 --2- 192.168.123.102:0/1645927475 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f62580380e0 0x7f625803a5a0 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.411+0000 7f6273124700 1 --2- 192.168.123.102:0/1645927475 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f626c071e00 0x7f626c1a90a0 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.411+0000 7f6273124700 1 -- 192.168.123.102:0/1645927475 >> 192.168.123.102:0/1645927475 conn(0x7f626c06d320 msgr2=0x7f626c06deb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:29.472 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.412+0000 7f6273124700 1 -- 192.168.123.102:0/1645927475 shutdown_connections 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.412+0000 7f6273124700 1 -- 192.168.123.102:0/1645927475 wait complete. 2026-03-10T10:14:29.472 INFO:teuthology.orchestra.run.vm02.stdout:Deploying ceph-exporter service with default placement... 2026-03-10T10:14:29.824 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled ceph-exporter update... 2026-03-10T10:14:29.824 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.615+0000 7f5bcf398700 1 Processor -- start 2026-03-10T10:14:29.824 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.615+0000 7f5bcf398700 1 -- start start 2026-03-10T10:14:29.824 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.615+0000 7f5bcf398700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc8105490 0x7f5bc81058b0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:29.824 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.615+0000 7f5bcf398700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5bc8105e80 con 0x7f5bc8105490 2026-03-10T10:14:29.824 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.616+0000 7f5bce396700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc8105490 0x7f5bc81058b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:29.824 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.616+0000 7f5bce396700 1 --2- >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc8105490 0x7f5bc81058b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55128/0 (socket says 192.168.123.102:55128) 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.616+0000 7f5bce396700 1 -- 192.168.123.102:0/3456860800 learned_addr learned my addr 192.168.123.102:0/3456860800 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.616+0000 7f5bce396700 1 -- 192.168.123.102:0/3456860800 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5bc8106690 con 0x7f5bc8105490 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.616+0000 7f5bce396700 1 --2- 192.168.123.102:0/3456860800 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc8105490 0x7f5bc81058b0 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f5bc400bd30 tx=0x7f5bc400d5d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=4afdca920c0697b9 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.617+0000 7f5bcd394700 1 -- 192.168.123.102:0/3456860800 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5bc400be80 con 0x7f5bc8105490 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.617+0000 7f5bcd394700 1 -- 192.168.123.102:0/3456860800 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5bc4004510 con 0x7f5bc8105490 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.617+0000 
7f5bcf398700 1 -- 192.168.123.102:0/3456860800 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc8105490 msgr2=0x7f5bc81058b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.617+0000 7f5bcf398700 1 --2- 192.168.123.102:0/3456860800 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc8105490 0x7f5bc81058b0 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f5bc400bd30 tx=0x7f5bc400d5d0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.617+0000 7f5bcf398700 1 -- 192.168.123.102:0/3456860800 shutdown_connections 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.617+0000 7f5bcf398700 1 --2- 192.168.123.102:0/3456860800 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc8105490 0x7f5bc81058b0 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.617+0000 7f5bcf398700 1 -- 192.168.123.102:0/3456860800 >> 192.168.123.102:0/3456860800 conn(0x7f5bc8100a70 msgr2=0x7f5bc8102e90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.617+0000 7f5bcf398700 1 -- 192.168.123.102:0/3456860800 shutdown_connections 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.617+0000 7f5bcf398700 1 -- 192.168.123.102:0/3456860800 wait complete. 
2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.618+0000 7f5bcf398700 1 Processor -- start 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.618+0000 7f5bcf398700 1 -- start start 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.618+0000 7f5bcf398700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc81a0ab0 0x7f5bc81a0ed0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.618+0000 7f5bcf398700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5bc81a1410 con 0x7f5bc81a0ab0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.618+0000 7f5bce396700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc81a0ab0 0x7f5bc81a0ed0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.618+0000 7f5bce396700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc81a0ab0 0x7f5bc81a0ed0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55142/0 (socket says 192.168.123.102:55142) 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.618+0000 7f5bce396700 1 -- 192.168.123.102:0/1447364497 learned_addr learned my addr 192.168.123.102:0/1447364497 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:29.825 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.618+0000 7f5bce396700 1 -- 192.168.123.102:0/1447364497 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5bc400b9e0 con 0x7f5bc81a0ab0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.618+0000 7f5bce396700 1 --2- 192.168.123.102:0/1447364497 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc81a0ab0 0x7f5bc81a0ed0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f5bc400b340 tx=0x7f5bc4004230 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.620+0000 7f5bbf7fe700 1 -- 192.168.123.102:0/1447364497 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5bc4003c20 con 0x7f5bc81a0ab0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.620+0000 7f5bcf398700 1 -- 192.168.123.102:0/1447364497 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5bc81a1610 con 0x7f5bc81a0ab0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.620+0000 7f5bcf398700 1 -- 192.168.123.102:0/1447364497 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5bc818f030 con 0x7f5bc81a0ab0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.621+0000 7f5bbf7fe700 1 -- 192.168.123.102:0/1447364497 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5bc40045e0 con 0x7f5bc81a0ab0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.621+0000 7f5bbf7fe700 1 
-- 192.168.123.102:0/1447364497 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5bc40196b0 con 0x7f5bc81a0ab0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.621+0000 7f5bbf7fe700 1 -- 192.168.123.102:0/1447364497 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 8) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f5bc401f070 con 0x7f5bc81a0ab0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.621+0000 7f5bbf7fe700 1 --2- 192.168.123.102:0/1447364497 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5bb403c7d0 0x7f5bb403ec90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.622+0000 7f5bcdb95700 1 --2- 192.168.123.102:0/1447364497 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5bb403c7d0 0x7f5bb403ec90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.622+0000 7f5bbf7fe700 1 -- 192.168.123.102:0/1447364497 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f5bc404bd80 con 0x7f5bc81a0ab0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.622+0000 7f5bcdb95700 1 --2- 192.168.123.102:0/1447364497 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5bb403c7d0 0x7f5bb403ec90 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f5bc000ad30 tx=0x7f5bc00093f0 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:29.622+0000 7f5bcf398700 1 -- 192.168.123.102:0/1447364497 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5bac005320 con 0x7f5bc81a0ab0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.628+0000 7f5bbf7fe700 1 -- 192.168.123.102:0/1447364497 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f5bc402a390 con 0x7f5bc81a0ab0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.769+0000 7f5bcf398700 1 -- 192.168.123.102:0/1447364497 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}) v1 -- 0x7f5bac000bf0 con 0x7f5bb403c7d0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.792+0000 7f5bbf7fe700 1 -- 192.168.123.102:0/1447364497 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+34 (secure 0 0 0) 0x7f5bac000bf0 con 0x7f5bb403c7d0 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.795+0000 7f5bbd7fa700 1 -- 192.168.123.102:0/1447364497 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5bb403c7d0 msgr2=0x7f5bb403ec90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.795+0000 7f5bbd7fa700 1 --2- 192.168.123.102:0/1447364497 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5bb403c7d0 0x7f5bb403ec90 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f5bc000ad30 tx=0x7f5bc00093f0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.825 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.795+0000 7f5bbd7fa700 1 -- 192.168.123.102:0/1447364497 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc81a0ab0 msgr2=0x7f5bc81a0ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.795+0000 7f5bbd7fa700 1 --2- 192.168.123.102:0/1447364497 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc81a0ab0 0x7f5bc81a0ed0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f5bc400b340 tx=0x7f5bc4004230 comp rx=0 tx=0).stop 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.795+0000 7f5bbd7fa700 1 -- 192.168.123.102:0/1447364497 shutdown_connections 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.795+0000 7f5bbd7fa700 1 --2- 192.168.123.102:0/1447364497 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5bb403c7d0 0x7f5bb403ec90 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.795+0000 7f5bbd7fa700 1 --2- 192.168.123.102:0/1447364497 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5bc81a0ab0 0x7f5bc81a0ed0 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.795+0000 7f5bbd7fa700 1 -- 192.168.123.102:0/1447364497 >> 192.168.123.102:0/1447364497 conn(0x7f5bc8100a70 msgr2=0x7f5bc806c7c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.795+0000 7f5bbd7fa700 1 -- 192.168.123.102:0/1447364497 shutdown_connections 
2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.795+0000 7f5bbd7fa700 1 -- 192.168.123.102:0/1447364497 wait complete. 2026-03-10T10:14:29.825 INFO:teuthology.orchestra.run.vm02.stdout:Deploying prometheus service with default placement... 2026-03-10T10:14:30.087 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:30 vm02 ceph-mon[50200]: Added host vm02 2026-03-10T10:14:30.087 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:30 vm02 ceph-mon[50200]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:30.087 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:30 vm02 ceph-mon[50200]: Saving service mon spec with placement count:5 2026-03-10T10:14:30.087 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:30 vm02 ceph-mon[50200]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:30.087 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:30 vm02 ceph-mon[50200]: Saving service mgr spec with placement count:2 2026-03-10T10:14:30.087 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:30 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:30.087 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:30 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:30.087 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:30 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:30.087 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:30 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:30.087 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:30 vm02 ceph-mon[50200]: from='mgr.14120 
192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled prometheus update... 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.965+0000 7f7789cf8700 1 Processor -- start 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.965+0000 7f7789cf8700 1 -- start start 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.965+0000 7f7789cf8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 0x7f7784108db0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.965+0000 7f7789cf8700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7784109380 con 0x7f7784108990 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.966+0000 7f77837fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 0x7f7784108db0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.966+0000 7f77837fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 0x7f7784108db0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55154/0 (socket says 192.168.123.102:55154) 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.966+0000 7f77837fe700 1 -- 192.168.123.102:0/2186974263 
learned_addr learned my addr 192.168.123.102:0/2186974263 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.966+0000 7f77837fe700 1 -- 192.168.123.102:0/2186974263 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7784109b90 con 0x7f7784108990 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.966+0000 7f77837fe700 1 --2- 192.168.123.102:0/2186974263 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 0x7f7784108db0 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f776c009a90 tx=0x7f776c009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b685d2dcbd98d34c server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.966+0000 7f77827fc700 1 -- 192.168.123.102:0/2186974263 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f776c004030 con 0x7f7784108990 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.967+0000 7f77827fc700 1 -- 192.168.123.102:0/2186974263 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f776c00b7e0 con 0x7f7784108990 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.967+0000 7f7789cf8700 1 -- 192.168.123.102:0/2186974263 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 msgr2=0x7f7784108db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.967+0000 7f7789cf8700 1 --2- 192.168.123.102:0/2186974263 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 
0x7f7784108db0 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f776c009a90 tx=0x7f776c009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.967+0000 7f7789cf8700 1 -- 192.168.123.102:0/2186974263 shutdown_connections 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.967+0000 7f7789cf8700 1 --2- 192.168.123.102:0/2186974263 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 0x7f7784108db0 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.967+0000 7f7789cf8700 1 -- 192.168.123.102:0/2186974263 >> 192.168.123.102:0/2186974263 conn(0x7f7784103f50 msgr2=0x7f7784106370 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:30.127 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.967+0000 7f7789cf8700 1 -- 192.168.123.102:0/2186974263 shutdown_connections 2026-03-10T10:14:30.128 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.967+0000 7f7789cf8700 1 -- 192.168.123.102:0/2186974263 wait complete. 
2026-03-10T10:14:30.128 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.968+0000 7f7789cf8700 1 Processor -- start 2026-03-10T10:14:30.128 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.968+0000 7f7789cf8700 1 -- start start 2026-03-10T10:14:30.128 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.968+0000 7f7789cf8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 0x7f778419cab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:30.128 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.968+0000 7f7789cf8700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7784109380 con 0x7f7784108990 2026-03-10T10:14:30.128 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.968+0000 7f77837fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 0x7f778419cab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:30.128 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.968+0000 7f77837fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 0x7f778419cab0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55164/0 (socket says 192.168.123.102:55164) 2026-03-10T10:14:30.128 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.968+0000 7f77837fe700 1 -- 192.168.123.102:0/3566227633 learned_addr learned my addr 192.168.123.102:0/3566227633 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:30.129 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.968+0000 7f77837fe700 1 -- 192.168.123.102:0/3566227633 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f776c009740 con 0x7f7784108990 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.969+0000 7f77837fe700 1 --2- 192.168.123.102:0/3566227633 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 0x7f778419cab0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f776c00bd80 tx=0x7f776c00be60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.969+0000 7f7780ff9700 1 -- 192.168.123.102:0/3566227633 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f776c003f60 con 0x7f7784108990 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.969+0000 7f7780ff9700 1 -- 192.168.123.102:0/3566227633 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f776c0045a0 con 0x7f7784108990 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.969+0000 7f7789cf8700 1 -- 192.168.123.102:0/3566227633 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f778419cff0 con 0x7f7784108990 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.969+0000 7f7780ff9700 1 -- 192.168.123.102:0/3566227633 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f776c024e00 con 0x7f7784108990 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.969+0000 7f7789cf8700 1 
-- 192.168.123.102:0/3566227633 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f778419d410 con 0x7f7784108990 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.970+0000 7f7780ff9700 1 -- 192.168.123.102:0/3566227633 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 8) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f776c01b480 con 0x7f7784108990 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.970+0000 7f7780ff9700 1 --2- 192.168.123.102:0/3566227633 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f77700382e0 0x7f777003a7a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.970+0000 7f7780ff9700 1 -- 192.168.123.102:0/3566227633 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f776c04d060 con 0x7f7784108990 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.970+0000 7f7782ffd700 1 --2- 192.168.123.102:0/3566227633 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f77700382e0 0x7f777003a7a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.970+0000 7f7789cf8700 1 -- 192.168.123.102:0/3566227633 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7784196960 con 0x7f7784108990 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.971+0000 7f7782ffd700 1 --2- 192.168.123.102:0/3566227633 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f77700382e0 0x7f777003a7a0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f7774006fd0 tx=0x7f7774006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:29.973+0000 7f7780ff9700 1 -- 192.168.123.102:0/3566227633 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f776c01f070 con 0x7f7784108990 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.084+0000 7f7789cf8700 1 -- 192.168.123.102:0/3566227633 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}) v1 -- 0x7f778402ce20 con 0x7f77700382e0 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.089+0000 7f7780ff9700 1 -- 192.168.123.102:0/3566227633 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+31 (secure 0 0 0) 0x7f778402ce20 con 0x7f77700382e0 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.091+0000 7f7789cf8700 1 -- 192.168.123.102:0/3566227633 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f77700382e0 msgr2=0x7f777003a7a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.091+0000 7f7789cf8700 1 --2- 192.168.123.102:0/3566227633 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f77700382e0 0x7f777003a7a0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f7774006fd0 tx=0x7f7774006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:30.129 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.092+0000 7f7789cf8700 1 -- 192.168.123.102:0/3566227633 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 msgr2=0x7f778419cab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.092+0000 7f7789cf8700 1 --2- 192.168.123.102:0/3566227633 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 0x7f778419cab0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f776c00bd80 tx=0x7f776c00be60 comp rx=0 tx=0).stop 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.093+0000 7f7789cf8700 1 -- 192.168.123.102:0/3566227633 shutdown_connections 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.093+0000 7f7789cf8700 1 --2- 192.168.123.102:0/3566227633 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f77700382e0 0x7f777003a7a0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.093+0000 7f7789cf8700 1 --2- 192.168.123.102:0/3566227633 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7784108990 0x7f778419cab0 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.129 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.093+0000 7f7789cf8700 1 -- 192.168.123.102:0/3566227633 >> 192.168.123.102:0/3566227633 conn(0x7f7784103f50 msgr2=0x7f7784106310 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:30.130 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.093+0000 7f7789cf8700 1 -- 192.168.123.102:0/3566227633 shutdown_connections 
2026-03-10T10:14:30.130 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.093+0000 7f7789cf8700 1 -- 192.168.123.102:0/3566227633 wait complete. 2026-03-10T10:14:30.130 INFO:teuthology.orchestra.run.vm02.stdout:Deploying grafana service with default placement... 2026-03-10T10:14:30.446 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled grafana update... 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.258+0000 7f30b0545700 1 Processor -- start 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.259+0000 7f30b0545700 1 -- start start 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.259+0000 7f30b0545700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a8108970 0x7f30a8108d90 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.259+0000 7f30b0545700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f30a8109360 con 0x7f30a8108970 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.260+0000 7f30ae2e1700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a8108970 0x7f30a8108d90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.260+0000 7f30ae2e1700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a8108970 0x7f30a8108d90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer 
v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55174/0 (socket says 192.168.123.102:55174) 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.260+0000 7f30ae2e1700 1 -- 192.168.123.102:0/109296152 learned_addr learned my addr 192.168.123.102:0/109296152 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.260+0000 7f30ae2e1700 1 -- 192.168.123.102:0/109296152 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f30a8109b70 con 0x7f30a8108970 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.260+0000 7f30ae2e1700 1 --2- 192.168.123.102:0/109296152 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a8108970 0x7f30a8108d90 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f3098009a90 tx=0x7f3098009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=e5d9cc5b648b6135 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.260+0000 7f30ad2df700 1 -- 192.168.123.102:0/109296152 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f3098004030 con 0x7f30a8108970 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.260+0000 7f30ad2df700 1 -- 192.168.123.102:0/109296152 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f309800b7e0 con 0x7f30a8108970 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.261+0000 7f30ad2df700 1 -- 192.168.123.102:0/109296152 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f3098003b30 con 0x7f30a8108970 2026-03-10T10:14:30.447 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.261+0000 7f30b0545700 1 -- 192.168.123.102:0/109296152 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a8108970 msgr2=0x7f30a8108d90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.261+0000 7f30b0545700 1 --2- 192.168.123.102:0/109296152 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a8108970 0x7f30a8108d90 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f3098009a90 tx=0x7f3098009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.261+0000 7f30b0545700 1 -- 192.168.123.102:0/109296152 shutdown_connections 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.261+0000 7f30b0545700 1 --2- 192.168.123.102:0/109296152 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a8108970 0x7f30a8108d90 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.261+0000 7f30b0545700 1 -- 192.168.123.102:0/109296152 >> 192.168.123.102:0/109296152 conn(0x7f30a807be30 msgr2=0x7f30a81064e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.261+0000 7f30b0545700 1 -- 192.168.123.102:0/109296152 shutdown_connections 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.261+0000 7f30b0545700 1 -- 192.168.123.102:0/109296152 wait complete. 
2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.262+0000 7f30b0545700 1 Processor -- start 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.262+0000 7f30b0545700 1 -- start start 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.262+0000 7f30b0545700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a807c690 0x7f30a807cab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.262+0000 7f30b0545700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f30a807cff0 con 0x7f30a807c690 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.262+0000 7f30ae2e1700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a807c690 0x7f30a807cab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.262+0000 7f30ae2e1700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a807c690 0x7f30a807cab0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55178/0 (socket says 192.168.123.102:55178) 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.262+0000 7f30ae2e1700 1 -- 192.168.123.102:0/695286786 learned_addr learned my addr 192.168.123.102:0/695286786 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:30.447 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.262+0000 7f30ae2e1700 1 -- 192.168.123.102:0/695286786 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3098009740 con 0x7f30a807c690 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.263+0000 7f30ae2e1700 1 --2- 192.168.123.102:0/695286786 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a807c690 0x7f30a807cab0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f3098003770 tx=0x7f309800bfa0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.263+0000 7f309f7fe700 1 -- 192.168.123.102:0/695286786 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f3098004030 con 0x7f30a807c690 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.263+0000 7f309f7fe700 1 -- 192.168.123.102:0/695286786 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3098024470 con 0x7f30a807c690 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.263+0000 7f309f7fe700 1 -- 192.168.123.102:0/695286786 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f309801a440 con 0x7f30a807c690 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.263+0000 7f30b0545700 1 -- 192.168.123.102:0/695286786 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f30a807d1f0 con 0x7f30a807c690 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.263+0000 7f30b0545700 1 -- 
192.168.123.102:0/695286786 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f30a807fe40 con 0x7f30a807c690 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.264+0000 7f309f7fe700 1 -- 192.168.123.102:0/695286786 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 8) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f3098021070 con 0x7f30a807c690 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.264+0000 7f30b0545700 1 -- 192.168.123.102:0/695286786 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f30a804fa20 con 0x7f30a807c690 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.264+0000 7f309f7fe700 1 --2- 192.168.123.102:0/695286786 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f30940383a0 0x7f309403a860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.264+0000 7f309f7fe700 1 -- 192.168.123.102:0/695286786 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f309804bfb0 con 0x7f30a807c690 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.264+0000 7f30adae0700 1 --2- 192.168.123.102:0/695286786 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f30940383a0 0x7f309403a860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.265+0000 7f30adae0700 1 --2- 192.168.123.102:0/695286786 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f30940383a0 0x7f309403a860 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f30a4006fd0 tx=0x7f30a4006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.267+0000 7f309f7fe700 1 -- 192.168.123.102:0/695286786 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f309801a5a0 con 0x7f30a807c690 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.371+0000 7f30b0545700 1 -- 192.168.123.102:0/695286786 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}) v1 -- 0x7f30a81064d0 con 0x7f30940383a0 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.376+0000 7f309f7fe700 1 -- 192.168.123.102:0/695286786 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+28 (secure 0 0 0) 0x7f30a81064d0 con 0x7f30940383a0 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.378+0000 7f30b0545700 1 -- 192.168.123.102:0/695286786 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f30940383a0 msgr2=0x7f309403a860 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.378+0000 7f30b0545700 1 --2- 192.168.123.102:0/695286786 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f30940383a0 0x7f309403a860 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f30a4006fd0 tx=0x7f30a4006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:30.447 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.378+0000 7f30b0545700 1 -- 192.168.123.102:0/695286786 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a807c690 msgr2=0x7f30a807cab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.378+0000 7f30b0545700 1 --2- 192.168.123.102:0/695286786 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a807c690 0x7f30a807cab0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f3098003770 tx=0x7f309800bfa0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.378+0000 7f30b0545700 1 -- 192.168.123.102:0/695286786 shutdown_connections 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.378+0000 7f30b0545700 1 --2- 192.168.123.102:0/695286786 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f30940383a0 0x7f309403a860 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.378+0000 7f30b0545700 1 --2- 192.168.123.102:0/695286786 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f30a807c690 0x7f30a807cab0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.378+0000 7f30b0545700 1 -- 192.168.123.102:0/695286786 >> 192.168.123.102:0/695286786 conn(0x7f30a807be30 msgr2=0x7f30a8105dc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.378+0000 7f30b0545700 1 -- 192.168.123.102:0/695286786 shutdown_connections 
2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.378+0000 7f30b0545700 1 -- 192.168.123.102:0/695286786 wait complete. 2026-03-10T10:14:30.447 INFO:teuthology.orchestra.run.vm02.stdout:Deploying node-exporter service with default placement... 2026-03-10T10:14:30.754 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled node-exporter update... 2026-03-10T10:14:30.754 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.576+0000 7f71c16be700 1 Processor -- start 2026-03-10T10:14:30.754 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.576+0000 7f71c16be700 1 -- start start 2026-03-10T10:14:30.754 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.576+0000 7f71c16be700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 0x7f71bc0725f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:30.754 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.576+0000 7f71c16be700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f71bc072bc0 con 0x7f71bc0721d0 2026-03-10T10:14:30.754 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.577+0000 7f71bbfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 0x7f71bc0725f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:30.754 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.577+0000 7f71bbfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 0x7f71bc0725f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer 
v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55190/0 (socket says 192.168.123.102:55190) 2026-03-10T10:14:30.754 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.577+0000 7f71bbfff700 1 -- 192.168.123.102:0/1455819160 learned_addr learned my addr 192.168.123.102:0/1455819160 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:30.754 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.577+0000 7f71bbfff700 1 -- 192.168.123.102:0/1455819160 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f71bc10e1c0 con 0x7f71bc0721d0 2026-03-10T10:14:30.755 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.577+0000 7f71bbfff700 1 --2- 192.168.123.102:0/1455819160 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 0x7f71bc0725f0 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f71ac00d180 tx=0x7f71ac00d490 comp rx=0 tx=0).ready entity=mon.0 client_cookie=48dad50e0a491160 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:30.755 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.578+0000 7f71baffd700 1 -- 192.168.123.102:0/1455819160 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f71ac010070 con 0x7f71bc0721d0 2026-03-10T10:14:30.755 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.578+0000 7f71baffd700 1 -- 192.168.123.102:0/1455819160 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f71ac004510 con 0x7f71bc0721d0 2026-03-10T10:14:30.755 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.578+0000 7f71c16be700 1 -- 192.168.123.102:0/1455819160 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 msgr2=0x7f71bc0725f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-10T10:14:30.755 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.578+0000 7f71c16be700 1 --2- 192.168.123.102:0/1455819160 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 0x7f71bc0725f0 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f71ac00d180 tx=0x7f71ac00d490 comp rx=0 tx=0).stop 2026-03-10T10:14:30.755 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.578+0000 7f71c16be700 1 -- 192.168.123.102:0/1455819160 shutdown_connections 2026-03-10T10:14:30.755 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.578+0000 7f71c16be700 1 --2- 192.168.123.102:0/1455819160 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 0x7f71bc0725f0 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.755 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.578+0000 7f71c16be700 1 -- 192.168.123.102:0/1455819160 >> 192.168.123.102:0/1455819160 conn(0x7f71bc06d320 msgr2=0x7f71bc06f760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:30.755 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.578+0000 7f71c16be700 1 -- 192.168.123.102:0/1455819160 shutdown_connections 2026-03-10T10:14:30.755 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.578+0000 7f71c16be700 1 -- 192.168.123.102:0/1455819160 wait complete. 
2026-03-10T10:14:30.755 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.579+0000 7f71c16be700 1 Processor -- start 2026-03-10T10:14:30.755 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.579+0000 7f71c16be700 1 -- start start 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.579+0000 7f71c16be700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 0x7f71bc1a23f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.579+0000 7f71c16be700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f71ac003c20 con 0x7f71bc0721d0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.579+0000 7f71bbfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 0x7f71bc1a23f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.579+0000 7f71bbfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 0x7f71bc1a23f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55200/0 (socket says 192.168.123.102:55200) 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.579+0000 7f71bbfff700 1 -- 192.168.123.102:0/392316631 learned_addr learned my addr 192.168.123.102:0/392316631 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:30.756 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.579+0000 7f71bbfff700 1 -- 192.168.123.102:0/392316631 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f71ac0087c0 con 0x7f71bc0721d0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.579+0000 7f71bbfff700 1 --2- 192.168.123.102:0/392316631 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 0x7f71bc1a23f0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f71ac008c40 tx=0x7f71ac008d20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.580+0000 7f71b97fa700 1 -- 192.168.123.102:0/392316631 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f71ac010050 con 0x7f71bc0721d0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.580+0000 7f71c16be700 1 -- 192.168.123.102:0/392316631 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f71bc1a2930 con 0x7f71bc0721d0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.580+0000 7f71c16be700 1 -- 192.168.123.102:0/392316631 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f71bc1a2d50 con 0x7f71bc0721d0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.581+0000 7f71b97fa700 1 -- 192.168.123.102:0/392316631 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f71ac00b150 con 0x7f71bc0721d0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.581+0000 7f71b97fa700 1 -- 
192.168.123.102:0/392316631 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f71ac0164e0 con 0x7f71bc0721d0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.581+0000 7f71b97fa700 1 -- 192.168.123.102:0/392316631 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 8) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f71ac016700 con 0x7f71bc0721d0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.581+0000 7f71b97fa700 1 --2- 192.168.123.102:0/392316631 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f71a4038410 0x7f71a403a8d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.581+0000 7f71b97fa700 1 -- 192.168.123.102:0/392316631 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f71ac04cab0 con 0x7f71bc0721d0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.581+0000 7f71c16be700 1 -- 192.168.123.102:0/392316631 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f71a8005320 con 0x7f71bc0721d0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.581+0000 7f71bb7fe700 1 --2- 192.168.123.102:0/392316631 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f71a4038410 0x7f71a403a8d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.584+0000 7f71b97fa700 1 -- 192.168.123.102:0/392316631 <== mon.0 v2:192.168.123.102:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f71ac028460 con 0x7f71bc0721d0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.584+0000 7f71bb7fe700 1 --2- 192.168.123.102:0/392316631 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f71a4038410 0x7f71a403a8d0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f71b400ad30 tx=0x7f71b40093f0 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.706+0000 7f71c16be700 1 -- 192.168.123.102:0/392316631 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}) v1 -- 0x7f71a8000bf0 con 0x7f71a4038410 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.711+0000 7f71b97fa700 1 -- 192.168.123.102:0/392316631 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+34 (secure 0 0 0) 0x7f71a8000bf0 con 0x7f71a4038410 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.714+0000 7f71a2ffd700 1 -- 192.168.123.102:0/392316631 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f71a4038410 msgr2=0x7f71a403a8d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.715+0000 7f71a2ffd700 1 --2- 192.168.123.102:0/392316631 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f71a4038410 0x7f71a403a8d0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f71b400ad30 tx=0x7f71b40093f0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:14:30.715+0000 7f71a2ffd700 1 -- 192.168.123.102:0/392316631 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 msgr2=0x7f71bc1a23f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.715+0000 7f71a2ffd700 1 --2- 192.168.123.102:0/392316631 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 0x7f71bc1a23f0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f71ac008c40 tx=0x7f71ac008d20 comp rx=0 tx=0).stop 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.715+0000 7f71a2ffd700 1 -- 192.168.123.102:0/392316631 shutdown_connections 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.715+0000 7f71a2ffd700 1 --2- 192.168.123.102:0/392316631 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f71a4038410 0x7f71a403a8d0 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.715+0000 7f71a2ffd700 1 --2- 192.168.123.102:0/392316631 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f71bc0721d0 0x7f71bc1a23f0 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.715+0000 7f71a2ffd700 1 -- 192.168.123.102:0/392316631 >> 192.168.123.102:0/392316631 conn(0x7f71bc06d320 msgr2=0x7f71bc06e9f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.715+0000 7f71a2ffd700 1 -- 192.168.123.102:0/392316631 shutdown_connections 2026-03-10T10:14:30.756 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.715+0000 7f71a2ffd700 1 -- 192.168.123.102:0/392316631 wait complete. 2026-03-10T10:14:30.756 INFO:teuthology.orchestra.run.vm02.stdout:Deploying alertmanager service with default placement... 2026-03-10T10:14:31.061 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled alertmanager update... 2026-03-10T10:14:31.061 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.899+0000 7f55c889f700 1 Processor -- start 2026-03-10T10:14:31.061 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.899+0000 7f55c889f700 1 -- start start 2026-03-10T10:14:31.061 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.899+0000 7f55c889f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 0x7f55c00722a0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:31.061 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.899+0000 7f55c889f700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f55c0072870 con 0x7f55c0071e80 2026-03-10T10:14:31.061 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.900+0000 7f55c663b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 0x7f55c00722a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:31.061 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.900+0000 7f55c663b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 0x7f55c00722a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:55204/0 (socket says 192.168.123.102:55204) 2026-03-10T10:14:31.061 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.900+0000 7f55c663b700 1 -- 192.168.123.102:0/1158520945 learned_addr learned my addr 192.168.123.102:0/1158520945 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:31.061 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.903+0000 7f55c663b700 1 -- 192.168.123.102:0/1158520945 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f55c00729b0 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.903+0000 7f55c663b700 1 --2- 192.168.123.102:0/1158520945 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 0x7f55c00722a0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f55bc00ac30 tx=0x7f55bc010730 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b1376fda9846aca server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.904+0000 7f55c5639700 1 -- 192.168.123.102:0/1158520945 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f55bc010d40 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.904+0000 7f55c5639700 1 -- 192.168.123.102:0/1158520945 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f55bc004500 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.904+0000 7f55c5639700 1 -- 192.168.123.102:0/1158520945 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f55bc01a5c0 con 0x7f55c0071e80 2026-03-10T10:14:31.062 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.904+0000 7f55c889f700 1 -- 192.168.123.102:0/1158520945 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 msgr2=0x7f55c00722a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.904+0000 7f55c889f700 1 --2- 192.168.123.102:0/1158520945 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 0x7f55c00722a0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f55bc00ac30 tx=0x7f55bc010730 comp rx=0 tx=0).stop 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.904+0000 7f55c889f700 1 -- 192.168.123.102:0/1158520945 shutdown_connections 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.904+0000 7f55c889f700 1 --2- 192.168.123.102:0/1158520945 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 0x7f55c00722a0 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.904+0000 7f55c889f700 1 -- 192.168.123.102:0/1158520945 >> 192.168.123.102:0/1158520945 conn(0x7f55c006d660 msgr2=0x7f55c006fac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.904+0000 7f55c889f700 1 -- 192.168.123.102:0/1158520945 shutdown_connections 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.904+0000 7f55c889f700 1 -- 192.168.123.102:0/1158520945 wait complete. 
2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.905+0000 7f55c889f700 1 Processor -- start 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.905+0000 7f55c889f700 1 -- start start 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.905+0000 7f55c889f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 0x7f55c0086f60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.905+0000 7f55c889f700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f55c00874a0 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.905+0000 7f55c663b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 0x7f55c0086f60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.905+0000 7f55c663b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 0x7f55c0086f60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55220/0 (socket says 192.168.123.102:55220) 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.905+0000 7f55c663b700 1 -- 192.168.123.102:0/2921364069 learned_addr learned my addr 192.168.123.102:0/2921364069 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:31.062 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.905+0000 7f55c663b700 1 -- 192.168.123.102:0/2921364069 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f55bc00a8e0 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.906+0000 7f55c663b700 1 --2- 192.168.123.102:0/2921364069 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 0x7f55c0086f60 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f55bc01c040 tx=0x7f55bc0036a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.906+0000 7f55b77fe700 1 -- 192.168.123.102:0/2921364069 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f55bc0038c0 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.906+0000 7f55b77fe700 1 -- 192.168.123.102:0/2921364069 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f55bc003a20 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.906+0000 7f55b77fe700 1 -- 192.168.123.102:0/2921364069 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f55bc009550 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.906+0000 7f55c889f700 1 -- 192.168.123.102:0/2921364069 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f55c00876a0 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.906+0000 7f55c889f700 1 
-- 192.168.123.102:0/2921364069 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f55c0087a00 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.907+0000 7f55b77fe700 1 -- 192.168.123.102:0/2921364069 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 8) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f55bc018070 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.908+0000 7f55b77fe700 1 --2- 192.168.123.102:0/2921364069 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f55ac038030 0x7f55ac03a4f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.908+0000 7f55b77fe700 1 -- 192.168.123.102:0/2921364069 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f55bc04b6b0 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.908+0000 7f55c5e3a700 1 --2- 192.168.123.102:0/2921364069 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f55ac038030 0x7f55ac03a4f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.908+0000 7f55c5e3a700 1 --2- 192.168.123.102:0/2921364069 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f55ac038030 0x7f55ac03a4f0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f55b800ad30 tx=0x7f55b80093f0 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:14:30.909+0000 7f55c889f700 1 -- 192.168.123.102:0/2921364069 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f55c004fa20 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:30.912+0000 7f55b77fe700 1 -- 192.168.123.102:0/2921364069 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f55bc01e030 con 0x7f55c0071e80 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.022+0000 7f55c889f700 1 -- 192.168.123.102:0/2921364069 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}) v1 -- 0x7f55c006fa40 con 0x7f55ac038030 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.028+0000 7f55b77fe700 1 -- 192.168.123.102:0/2921364069 <== mgr.14120 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+33 (secure 0 0 0) 0x7f55c006fa40 con 0x7f55ac038030 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.031+0000 7f55b57fa700 1 -- 192.168.123.102:0/2921364069 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f55ac038030 msgr2=0x7f55ac03a4f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.031+0000 7f55b57fa700 1 --2- 192.168.123.102:0/2921364069 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f55ac038030 0x7f55ac03a4f0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f55b800ad30 tx=0x7f55b80093f0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.062 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.031+0000 7f55b57fa700 1 -- 192.168.123.102:0/2921364069 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 msgr2=0x7f55c0086f60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.031+0000 7f55b57fa700 1 --2- 192.168.123.102:0/2921364069 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 0x7f55c0086f60 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f55bc01c040 tx=0x7f55bc0036a0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.031+0000 7f55b57fa700 1 -- 192.168.123.102:0/2921364069 shutdown_connections 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.031+0000 7f55b57fa700 1 --2- 192.168.123.102:0/2921364069 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f55ac038030 0x7f55ac03a4f0 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.031+0000 7f55b57fa700 1 --2- 192.168.123.102:0/2921364069 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55c0071e80 0x7f55c0086f60 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.031+0000 7f55b57fa700 1 -- 192.168.123.102:0/2921364069 >> 192.168.123.102:0/2921364069 conn(0x7f55c006d660 msgr2=0x7f55c006f330 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.031+0000 7f55b57fa700 1 -- 192.168.123.102:0/2921364069 shutdown_connections 
2026-03-10T10:14:31.062 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.031+0000 7f55b57fa700 1 -- 192.168.123.102:0/2921364069 wait complete. 2026-03-10T10:14:31.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:31 vm02 ceph-mon[50200]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:31.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:31 vm02 ceph-mon[50200]: Saving service crash spec with placement * 2026-03-10T10:14:31.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:31 vm02 ceph-mon[50200]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:31.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:31 vm02 ceph-mon[50200]: Saving service ceph-exporter spec with placement * 2026-03-10T10:14:31.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:31 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:31.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:31 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:31.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:31 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:31.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:31 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:31.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:31 vm02 ceph-mon[50200]: from='mgr.14120 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:31.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.210+0000 7f523990c700 1 Processor -- start 2026-03-10T10:14:31.401 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.210+0000 7f523990c700 1 -- start start 2026-03-10T10:14:31.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.210+0000 7f523990c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c005a80 0x7f522c005ea0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:31.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.210+0000 7f523990c700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f522c006470 con 0x7f522c005a80 2026-03-10T10:14:31.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.210+0000 7f523890a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c005a80 0x7f522c005ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:31.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.210+0000 7f523890a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c005a80 0x7f522c005ea0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55232/0 (socket says 192.168.123.102:55232) 2026-03-10T10:14:31.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.210+0000 7f523890a700 1 -- 192.168.123.102:0/2542124901 learned_addr learned my addr 192.168.123.102:0/2542124901 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:31.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.211+0000 7f523890a700 1 -- 192.168.123.102:0/2542124901 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f522c006cd0 con 0x7f522c005a80 2026-03-10T10:14:31.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.211+0000 7f523890a700 1 --2- 192.168.123.102:0/2542124901 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c005a80 0x7f522c005ea0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f5228008a90 tx=0x7f5228008da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=9be25d08650ca966 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:31.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.211+0000 7f52337fe700 1 -- 192.168.123.102:0/2542124901 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f522800e9a0 con 0x7f522c005a80 2026-03-10T10:14:31.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.211+0000 7f52337fe700 1 -- 192.168.123.102:0/2542124901 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5228010040 con 0x7f522c005a80 2026-03-10T10:14:31.401 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.212+0000 7f523990c700 1 -- 192.168.123.102:0/2542124901 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c005a80 msgr2=0x7f522c005ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.212+0000 7f523990c700 1 --2- 192.168.123.102:0/2542124901 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c005a80 0x7f522c005ea0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f5228008a90 tx=0x7f5228008da0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.212+0000 7f523990c700 1 -- 192.168.123.102:0/2542124901 shutdown_connections 2026-03-10T10:14:31.402 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.212+0000 7f523990c700 1 --2- 192.168.123.102:0/2542124901 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c005a80 0x7f522c005ea0 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.212+0000 7f523990c700 1 -- 192.168.123.102:0/2542124901 >> 192.168.123.102:0/2542124901 conn(0x7f522c09fed0 msgr2=0x7f522c0a2330 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.212+0000 7f523990c700 1 -- 192.168.123.102:0/2542124901 shutdown_connections 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.212+0000 7f523990c700 1 -- 192.168.123.102:0/2542124901 wait complete. 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.212+0000 7f523990c700 1 Processor -- start 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.212+0000 7f523990c700 1 -- start start 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.212+0000 7f523990c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c014970 0x7f522c014d90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.212+0000 7f523990c700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5228003a50 con 0x7f522c014970 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.213+0000 7f523890a700 1 --2- >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c014970 0x7f522c014d90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.213+0000 7f523890a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c014970 0x7f522c014d90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55244/0 (socket says 192.168.123.102:55244) 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.213+0000 7f523890a700 1 -- 192.168.123.102:0/3320100158 learned_addr learned my addr 192.168.123.102:0/3320100158 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.213+0000 7f523890a700 1 -- 192.168.123.102:0/3320100158 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5228008740 con 0x7f522c014970 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.213+0000 7f523890a700 1 --2- 192.168.123.102:0/3320100158 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c014970 0x7f522c014d90 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f5228004040 tx=0x7f5228004120 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.213+0000 7f5231ffb700 1 -- 192.168.123.102:0/3320100158 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5228004580 con 0x7f522c014970 2026-03-10T10:14:31.402 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.213+0000 7f523990c700 1 -- 192.168.123.102:0/3320100158 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f522c0162e0 con 0x7f522c014970 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.213+0000 7f523990c700 1 -- 192.168.123.102:0/3320100158 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f522c015580 con 0x7f522c014970 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.214+0000 7f5231ffb700 1 -- 192.168.123.102:0/3320100158 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5228010050 con 0x7f522c014970 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.214+0000 7f5231ffb700 1 -- 192.168.123.102:0/3320100158 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f522801e710 con 0x7f522c014970 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.214+0000 7f5231ffb700 1 -- 192.168.123.102:0/3320100158 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 8) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f522801e930 con 0x7f522c014970 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.214+0000 7f5231ffb700 1 --2- 192.168.123.102:0/3320100158 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f522403c820 0x7f522403ece0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.214+0000 7f5231ffb700 1 -- 192.168.123.102:0/3320100158 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 
==== 940+0+0 (secure 0 0 0) 0x7f522804caa0 con 0x7f522c014970 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.214+0000 7f5233fff700 1 --2- 192.168.123.102:0/3320100158 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f522403c820 0x7f522403ece0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.215+0000 7f5233fff700 1 --2- 192.168.123.102:0/3320100158 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f522403c820 0x7f522403ece0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f523404f8e0 tx=0x7f523406a340 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.215+0000 7f523990c700 1 -- 192.168.123.102:0/3320100158 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5218005320 con 0x7f522c014970 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.218+0000 7f5231ffb700 1 -- 192.168.123.102:0/3320100158 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f522801b020 con 0x7f522c014970 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.338+0000 7f523990c700 1 -- 192.168.123.102:0/3320100158 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1 -- 0x7f5218005190 con 0x7f522c014970 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:31.342+0000 7f5231ffb700 1 -- 192.168.123.102:0/3320100158 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/container_init}]=0 v7) v1 ==== 142+0+0 (secure 0 0 0) 0x7f5228025d50 con 0x7f522c014970 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.349+0000 7f523990c700 1 -- 192.168.123.102:0/3320100158 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f522403c820 msgr2=0x7f522403ece0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.349+0000 7f523990c700 1 --2- 192.168.123.102:0/3320100158 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f522403c820 0x7f522403ece0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f523404f8e0 tx=0x7f523406a340 comp rx=0 tx=0).stop 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.349+0000 7f523990c700 1 -- 192.168.123.102:0/3320100158 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c014970 msgr2=0x7f522c014d90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.349+0000 7f523990c700 1 --2- 192.168.123.102:0/3320100158 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c014970 0x7f522c014d90 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f5228004040 tx=0x7f5228004120 comp rx=0 tx=0).stop 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.349+0000 7f523990c700 1 -- 192.168.123.102:0/3320100158 shutdown_connections 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.349+0000 7f523990c700 1 --2- 192.168.123.102:0/3320100158 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f522403c820 0x7f522403ece0 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.349+0000 7f523990c700 1 --2- 192.168.123.102:0/3320100158 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c014970 0x7f522c014d90 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.349+0000 7f523990c700 1 -- 192.168.123.102:0/3320100158 >> 192.168.123.102:0/3320100158 conn(0x7f522c09fed0 msgr2=0x7f522c0a0850 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.350+0000 7f523990c700 1 -- 192.168.123.102:0/3320100158 shutdown_connections 2026-03-10T10:14:31.402 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.350+0000 7f523990c700 1 -- 192.168.123.102:0/3320100158 wait complete. 
2026-03-10T10:14:31.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.524+0000 7f5e690a9700 1 Processor -- start 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.524+0000 7f5e690a9700 1 -- start start 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.524+0000 7f5e690a9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 0x7f5e64105a70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.524+0000 7f5e690a9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5e64106040 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.524+0000 7f5e62d9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 0x7f5e64105a70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.524+0000 7f5e62d9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 0x7f5e64105a70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55256/0 (socket says 192.168.123.102:55256) 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.524+0000 7f5e62d9d700 1 -- 192.168.123.102:0/1287036130 learned_addr learned my addr 192.168.123.102:0/1287036130 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:31.694 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.525+0000 7f5e62d9d700 1 -- 192.168.123.102:0/1287036130 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5e64106850 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.525+0000 7f5e62d9d700 1 --2- 192.168.123.102:0/1287036130 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 0x7f5e64105a70 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f5e4c00bf90 tx=0x7f5e4c00d5d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=d062b973c1735dc6 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.525+0000 7f5e61d9b700 1 -- 192.168.123.102:0/1287036130 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5e4c00dcc0 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.525+0000 7f5e61d9b700 1 -- 192.168.123.102:0/1287036130 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5e4c00de20 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.526+0000 7f5e690a9700 1 -- 192.168.123.102:0/1287036130 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 msgr2=0x7f5e64105a70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.526+0000 7f5e690a9700 1 --2- 192.168.123.102:0/1287036130 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 0x7f5e64105a70 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f5e4c00bf90 tx=0x7f5e4c00d5d0 comp rx=0 tx=0).stop 
2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.526+0000 7f5e690a9700 1 -- 192.168.123.102:0/1287036130 shutdown_connections 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.526+0000 7f5e690a9700 1 --2- 192.168.123.102:0/1287036130 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 0x7f5e64105a70 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.526+0000 7f5e690a9700 1 -- 192.168.123.102:0/1287036130 >> 192.168.123.102:0/1287036130 conn(0x7f5e64100bd0 msgr2=0x7f5e64103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.526+0000 7f5e690a9700 1 -- 192.168.123.102:0/1287036130 shutdown_connections 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.526+0000 7f5e690a9700 1 -- 192.168.123.102:0/1287036130 wait complete. 
2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.526+0000 7f5e690a9700 1 Processor -- start 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.527+0000 7f5e690a9700 1 -- start start 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.527+0000 7f5e690a9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 0x7f5e6419b750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.527+0000 7f5e690a9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5e4c014070 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.527+0000 7f5e62d9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 0x7f5e6419b750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.527+0000 7f5e62d9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 0x7f5e6419b750 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55268/0 (socket says 192.168.123.102:55268) 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.527+0000 7f5e62d9d700 1 -- 192.168.123.102:0/686295650 learned_addr learned my addr 192.168.123.102:0/686295650 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:31.694 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.527+0000 7f5e62d9d700 1 -- 192.168.123.102:0/686295650 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5e4c00b9e0 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.527+0000 7f5e62d9d700 1 --2- 192.168.123.102:0/686295650 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 0x7f5e6419b750 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f5e4c010910 tx=0x7f5e4c0109f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.527+0000 7f5e5bfff700 1 -- 192.168.123.102:0/686295650 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5e4c00dcc0 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.528+0000 7f5e690a9700 1 -- 192.168.123.102:0/686295650 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5e6419bc90 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.528+0000 7f5e690a9700 1 -- 192.168.123.102:0/686295650 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5e641981b0 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.528+0000 7f5e5bfff700 1 -- 192.168.123.102:0/686295650 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5e4c003b00 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.528+0000 7f5e5bfff700 1 -- 
192.168.123.102:0/686295650 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5e4c0223e0 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.528+0000 7f5e5bfff700 1 -- 192.168.123.102:0/686295650 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 8) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f5e4c022540 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.528+0000 7f5e5bfff700 1 --2- 192.168.123.102:0/686295650 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5e500383c0 0x7f5e5003a880 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.528+0000 7f5e5bfff700 1 -- 192.168.123.102:0/686295650 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f5e4c04d400 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.529+0000 7f5e690a9700 1 -- 192.168.123.102:0/686295650 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5e44005320 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.532+0000 7f5e6259c700 1 --2- 192.168.123.102:0/686295650 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5e500383c0 0x7f5e5003a880 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.532+0000 7f5e5bfff700 1 -- 192.168.123.102:0/686295650 <== mon.0 v2:192.168.123.102:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f5e4c02c430 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.532+0000 7f5e6259c700 1 --2- 192.168.123.102:0/686295650 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5e500383c0 0x7f5e5003a880 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f5e54006fd0 tx=0x7f5e54006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.632+0000 7f5e690a9700 1 -- 192.168.123.102:0/686295650 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0) v1 -- 0x7f5e44005f70 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.638+0000 7f5e5bfff700 1 -- 192.168.123.102:0/686295650 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/dashboard/ssl_server_port}]=0 v8) v1 ==== 130+0+0 (secure 0 0 0) 0x7f5e4c00fbc0 con 0x7f5e64105650 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.642+0000 7f5e690a9700 1 -- 192.168.123.102:0/686295650 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5e500383c0 msgr2=0x7f5e5003a880 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.642+0000 7f5e690a9700 1 --2- 192.168.123.102:0/686295650 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5e500383c0 0x7f5e5003a880 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f5e54006fd0 tx=0x7f5e54006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:31.694 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.642+0000 7f5e690a9700 1 -- 192.168.123.102:0/686295650 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 msgr2=0x7f5e6419b750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.642+0000 7f5e690a9700 1 --2- 192.168.123.102:0/686295650 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 0x7f5e6419b750 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f5e4c010910 tx=0x7f5e4c0109f0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.642+0000 7f5e690a9700 1 -- 192.168.123.102:0/686295650 shutdown_connections 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.642+0000 7f5e690a9700 1 --2- 192.168.123.102:0/686295650 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5e500383c0 0x7f5e5003a880 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.642+0000 7f5e690a9700 1 --2- 192.168.123.102:0/686295650 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5e64105650 0x7f5e6419b750 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:31.695 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.642+0000 7f5e690a9700 1 -- 192.168.123.102:0/686295650 >> 192.168.123.102:0/686295650 conn(0x7f5e64100bd0 msgr2=0x7f5e6418f020 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:31.695 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.642+0000 7f5e690a9700 1 -- 192.168.123.102:0/686295650 shutdown_connections 
2026-03-10T10:14:31.695 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.643+0000 7f5e690a9700 1 -- 192.168.123.102:0/686295650 wait complete. 2026-03-10T10:14:31.695 INFO:teuthology.orchestra.run.vm02.stdout:Enabling the dashboard module... 2026-03-10T10:14:32.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:32 vm02 ceph-mon[50200]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:32.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:32 vm02 ceph-mon[50200]: Saving service prometheus spec with placement count:1 2026-03-10T10:14:32.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:32 vm02 ceph-mon[50200]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:32.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:32 vm02 ceph-mon[50200]: Saving service grafana spec with placement count:1 2026-03-10T10:14:32.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:32 vm02 ceph-mon[50200]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:32.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:32 vm02 ceph-mon[50200]: Saving service node-exporter spec with placement * 2026-03-10T10:14:32.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:32 vm02 ceph-mon[50200]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:32.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:32 vm02 ceph-mon[50200]: Saving service alertmanager spec with placement count:1 2026-03-10T10:14:32.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:32 vm02 ceph-mon[50200]: 
from='client.? 192.168.123.102:0/3320100158' entity='client.admin' 2026-03-10T10:14:32.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:32 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/686295650' entity='client.admin' 2026-03-10T10:14:32.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:32 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3217747722' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.815+0000 7f4707df9700 1 Processor -- start 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.816+0000 7f4707df9700 1 -- start start 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.816+0000 7f4707df9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 0x7f4700108d90 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.816+0000 7f4707df9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4700109360 con 0x7f4700108970 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.816+0000 7f4705b95700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 0x7f4700108d90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.816+0000 7f4705b95700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 0x7f4700108d90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55270/0 (socket says 192.168.123.102:55270) 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.816+0000 7f4705b95700 1 -- 192.168.123.102:0/2408451609 learned_addr learned my addr 192.168.123.102:0/2408451609 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.817+0000 7f4705b95700 1 -- 192.168.123.102:0/2408451609 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4700109b70 con 0x7f4700108970 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.817+0000 7f4705b95700 1 --2- 192.168.123.102:0/2408451609 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 0x7f4700108d90 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f46f0009cf0 tx=0x7f46f000b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=51498ddaa2db2aa3 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.817+0000 7f4704b93700 1 -- 192.168.123.102:0/2408451609 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f46f0004030 con 0x7f4700108970 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.817+0000 7f4704b93700 1 -- 192.168.123.102:0/2408451609 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f46f000b810 con 0x7f4700108970 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.817+0000 7f4707df9700 1 -- 192.168.123.102:0/2408451609 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 msgr2=0x7f4700108d90 secure 
:-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.817+0000 7f4707df9700 1 --2- 192.168.123.102:0/2408451609 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 0x7f4700108d90 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f46f0009cf0 tx=0x7f46f000b0e0 comp rx=0 tx=0).stop 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.818+0000 7f4707df9700 1 -- 192.168.123.102:0/2408451609 shutdown_connections 2026-03-10T10:14:32.693 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.818+0000 7f4707df9700 1 --2- 192.168.123.102:0/2408451609 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 0x7f4700108d90 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.818+0000 7f4707df9700 1 -- 192.168.123.102:0/2408451609 >> 192.168.123.102:0/2408451609 conn(0x7f470007be30 msgr2=0x7f47001064e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.818+0000 7f4707df9700 1 -- 192.168.123.102:0/2408451609 shutdown_connections 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.818+0000 7f4707df9700 1 -- 192.168.123.102:0/2408451609 wait complete. 
2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.818+0000 7f4707df9700 1 Processor -- start 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.818+0000 7f4707df9700 1 -- start start 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.819+0000 7f4707df9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 0x7f470019ca90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.819+0000 7f4707df9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4700109360 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.819+0000 7f4705b95700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 0x7f470019ca90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.819+0000 7f4705b95700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 0x7f470019ca90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55286/0 (socket says 192.168.123.102:55286) 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.819+0000 7f4705b95700 1 -- 192.168.123.102:0/3217747722 learned_addr learned my addr 192.168.123.102:0/3217747722 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:32.694 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.819+0000 7f4705b95700 1 -- 192.168.123.102:0/3217747722 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f46f0009740 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.819+0000 7f4705b95700 1 --2- 192.168.123.102:0/3217747722 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 0x7f470019ca90 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f46f0006e90 tx=0x7f46f0003d30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.819+0000 7f46f6ffd700 1 -- 192.168.123.102:0/3217747722 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f46f0003f40 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.819+0000 7f4707df9700 1 -- 192.168.123.102:0/3217747722 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f470019cfd0 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.820+0000 7f4707df9700 1 -- 192.168.123.102:0/3217747722 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f470019d3f0 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.820+0000 7f46f6ffd700 1 -- 192.168.123.102:0/3217747722 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f46f0004580 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.820+0000 7f46f6ffd700 1 
-- 192.168.123.102:0/3217747722 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f46f001ae60 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.820+0000 7f46f6ffd700 1 -- 192.168.123.102:0/3217747722 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 8) v1 ==== 45278+0+0 (secure 0 0 0) 0x7f46f0011420 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.820+0000 7f46f6ffd700 1 --2- 192.168.123.102:0/3217747722 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f46ec038430 0x7f46ec03a8f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.820+0000 7f46f6ffd700 1 -- 192.168.123.102:0/3217747722 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f46f004cbd0 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.821+0000 7f4705394700 1 --2- 192.168.123.102:0/3217747722 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f46ec038430 0x7f46ec03a8f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.821+0000 7f4707df9700 1 -- 192.168.123.102:0/3217747722 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f46e4005320 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.824+0000 7f4705394700 1 --2- 192.168.123.102:0/3217747722 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f46ec038430 0x7f46ec03a8f0 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f46fc006fd0 tx=0x7f46fc006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.824+0000 7f46f6ffd700 1 -- 192.168.123.102:0/3217747722 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f46f00116d0 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:31.955+0000 7f4707df9700 1 -- 192.168.123.102:0/3217747722 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0) v1 -- 0x7f46e40059f0 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.641+0000 7f46f6ffd700 1 -- 192.168.123.102:0/3217747722 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mgrmap(e 9) v1 ==== 45291+0+0 (secure 0 0 0) 0x7f46f002d3e0 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.642+0000 7f46f6ffd700 1 -- 192.168.123.102:0/3217747722 <== mon.0 v2:192.168.123.102:3300/0 8 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "dashboard"}]=0 v9) v1 ==== 88+0+0 (secure 0 0 0) 0x7f46f004c930 con 0x7f4700108970 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.645+0000 7f4707df9700 1 -- 192.168.123.102:0/3217747722 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f46ec038430 msgr2=0x7f46ec03a8f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:32.694 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:32.645+0000 7f4707df9700 1 --2- 192.168.123.102:0/3217747722 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f46ec038430 0x7f46ec03a8f0 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f46fc006fd0 tx=0x7f46fc006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:32.704 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.645+0000 7f4707df9700 1 -- 192.168.123.102:0/3217747722 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 msgr2=0x7f470019ca90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:32.704 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.645+0000 7f4707df9700 1 --2- 192.168.123.102:0/3217747722 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 0x7f470019ca90 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f46f0006e90 tx=0x7f46f0003d30 comp rx=0 tx=0).stop 2026-03-10T10:14:32.704 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.645+0000 7f4707df9700 1 -- 192.168.123.102:0/3217747722 shutdown_connections 2026-03-10T10:14:32.709 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.645+0000 7f4707df9700 1 --2- 192.168.123.102:0/3217747722 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f46ec038430 0x7f46ec03a8f0 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:32.709 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.645+0000 7f4707df9700 1 --2- 192.168.123.102:0/3217747722 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4700108970 0x7f470019ca90 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:32.709 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.645+0000 7f4707df9700 1 -- 192.168.123.102:0/3217747722 >> 
192.168.123.102:0/3217747722 conn(0x7f470007be30 msgr2=0x7f4700107490 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:32.709 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.645+0000 7f4707df9700 1 -- 192.168.123.102:0/3217747722 shutdown_connections 2026-03-10T10:14:32.709 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.645+0000 7f4707df9700 1 -- 192.168.123.102:0/3217747722 wait complete. 2026-03-10T10:14:33.056 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:14:33.056 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-10T10:14:33.056 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T10:14:33.056 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "active_name": "vm02.zmavgl", 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.846+0000 7fe8d4991700 1 Processor -- start 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.846+0000 7fe8d4991700 1 -- start start 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.846+0000 7fe8d4991700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d0071ce0 0x7fe8d0072100 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.846+0000 7fe8d4991700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe8d00726d0 con 0x7fe8d0071ce0 2026-03-10T10:14:33.057 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.846+0000 7fe8cf7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d0071ce0 0x7fe8d0072100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.846+0000 7fe8cf7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d0071ce0 0x7fe8d0072100 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55312/0 (socket says 192.168.123.102:55312) 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.846+0000 7fe8cf7fe700 1 -- 192.168.123.102:0/3356147791 learned_addr learned my addr 192.168.123.102:0/3356147791 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.847+0000 7fe8cf7fe700 1 -- 192.168.123.102:0/3356147791 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe8d0072810 con 0x7fe8d0071ce0 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.847+0000 7fe8cf7fe700 1 --2- 192.168.123.102:0/3356147791 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d0071ce0 0x7fe8d0072100 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fe8c0009a80 tx=0x7fe8c0009d90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=fac3d3a1b1eae630 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.847+0000 7fe8ce7fc700 1 -- 192.168.123.102:0/3356147791 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map 
magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe8c0004030 con 0x7fe8d0071ce0 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.847+0000 7fe8ce7fc700 1 -- 192.168.123.102:0/3356147791 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fe8c000c710 con 0x7fe8d0071ce0 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.848+0000 7fe8d4991700 1 -- 192.168.123.102:0/3356147791 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d0071ce0 msgr2=0x7fe8d0072100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.848+0000 7fe8d4991700 1 --2- 192.168.123.102:0/3356147791 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d0071ce0 0x7fe8d0072100 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fe8c0009a80 tx=0x7fe8c0009d90 comp rx=0 tx=0).stop 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.848+0000 7fe8d4991700 1 -- 192.168.123.102:0/3356147791 shutdown_connections 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.848+0000 7fe8d4991700 1 --2- 192.168.123.102:0/3356147791 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d0071ce0 0x7fe8d0072100 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.848+0000 7fe8d4991700 1 -- 192.168.123.102:0/3356147791 >> 192.168.123.102:0/3356147791 conn(0x7fe8d006d320 msgr2=0x7fe8d006f760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.848+0000 7fe8d4991700 1 -- 
192.168.123.102:0/3356147791 shutdown_connections 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.848+0000 7fe8d4991700 1 -- 192.168.123.102:0/3356147791 wait complete. 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.848+0000 7fe8d4991700 1 Processor -- start 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.848+0000 7fe8d4991700 1 -- start start 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.848+0000 7fe8d4991700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d007e700 0x7fe8d007eb20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.848+0000 7fe8d4991700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe8d00726d0 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.849+0000 7fe8cf7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d007e700 0x7fe8d007eb20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.849+0000 7fe8cf7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d007e700 0x7fe8d007eb20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55322/0 (socket says 192.168.123.102:55322) 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:32.849+0000 7fe8cf7fe700 1 -- 192.168.123.102:0/3572267725 learned_addr learned my addr 192.168.123.102:0/3572267725 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.849+0000 7fe8cf7fe700 1 -- 192.168.123.102:0/3572267725 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe8c000d040 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.849+0000 7fe8cf7fe700 1 --2- 192.168.123.102:0/3572267725 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d007e700 0x7fe8d007eb20 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fe8c0009f10 tx=0x7fe8c00126d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.849+0000 7fe8ccff9700 1 -- 192.168.123.102:0/3572267725 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe8c00092e0 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.849+0000 7fe8d4991700 1 -- 192.168.123.102:0/3572267725 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe8d007f060 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.849+0000 7fe8d4991700 1 -- 192.168.123.102:0/3572267725 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe8d0081ce0 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.850+0000 7fe8ccff9700 1 -- 192.168.123.102:0/3572267725 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 
keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fe8c001a430 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.850+0000 7fe8ccff9700 1 -- 192.168.123.102:0/3572267725 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe8c0003820 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.851+0000 7fe8d4991700 1 -- 192.168.123.102:0/3572267725 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe8bc005320 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.851+0000 7fe8ccff9700 1 -- 192.168.123.102:0/3572267725 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 9) v1 ==== 45291+0+0 (secure 0 0 0) 0x7fe8c001aa80 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.851+0000 7fe8ccff9700 1 --2- 192.168.123.102:0/3572267725 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe8b8038330 0x7fe8b803a7f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.851+0000 7fe8ceffd700 1 -- 192.168.123.102:0/3572267725 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe8b8038330 msgr2=0x7fe8b803a7f0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.851+0000 7fe8ceffd700 1 --2- 192.168.123.102:0/3572267725 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe8b8038330 0x7fe8b803a7f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._fault waiting 0.200000 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.851+0000 7fe8ccff9700 1 -- 192.168.123.102:0/3572267725 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fe8c00295a0 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.853+0000 7fe8ccff9700 1 -- 192.168.123.102:0/3572267725 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fe8c0018940 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.989+0000 7fe8d4991700 1 -- 192.168.123.102:0/3572267725 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) v1 -- 0x7fe8bc006200 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.992+0000 7fe8ccff9700 1 -- 192.168.123.102:0/3572267725 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v9) v1 ==== 56+0+98 (secure 0 0 0) 0x7fe8c0018350 con 0x7fe8d007e700 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.994+0000 7fe8b67fc700 1 -- 192.168.123.102:0/3572267725 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe8b8038330 msgr2=0x7fe8b803a7f0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.994+0000 7fe8b67fc700 1 --2- 192.168.123.102:0/3572267725 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe8b8038330 0x7fe8b803a7f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:33.057 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.995+0000 7fe8b67fc700 1 -- 192.168.123.102:0/3572267725 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d007e700 msgr2=0x7fe8d007eb20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.995+0000 7fe8b67fc700 1 --2- 192.168.123.102:0/3572267725 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d007e700 0x7fe8d007eb20 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fe8c0009f10 tx=0x7fe8c00126d0 comp rx=0 tx=0).stop 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.995+0000 7fe8b67fc700 1 -- 192.168.123.102:0/3572267725 shutdown_connections 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.995+0000 7fe8b67fc700 1 --2- 192.168.123.102:0/3572267725 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe8b8038330 0x7fe8b803a7f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.995+0000 7fe8b67fc700 1 --2- 192.168.123.102:0/3572267725 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe8d007e700 0x7fe8d007eb20 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.995+0000 7fe8b67fc700 1 -- 192.168.123.102:0/3572267725 >> 192.168.123.102:0/3572267725 conn(0x7fe8d006d320 msgr2=0x7fe8d006df00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.995+0000 7fe8b67fc700 1 -- 192.168.123.102:0/3572267725 shutdown_connections 
2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:32.995+0000 7fe8b67fc700 1 -- 192.168.123.102:0/3572267725 wait complete. 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for the mgr to restart... 2026-03-10T10:14:33.057 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr epoch 9... 2026-03-10T10:14:33.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:33 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3217747722' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T10:14:33.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:33 vm02 ceph-mon[50200]: mgrmap e9: vm02.zmavgl(active, since 8s) 2026-03-10T10:14:33.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:33 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3572267725' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T10:14:37.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:37 vm02 ceph-mon[50200]: Active manager daemon vm02.zmavgl restarted 2026-03-10T10:14:37.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:37 vm02 ceph-mon[50200]: Activating manager daemon vm02.zmavgl 2026-03-10T10:14:38.364 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:14:38.364 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-10T10:14:38.364 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.200+0000 7fb398926700 1 Processor -- start 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.200+0000 7fb398926700 1 -- start start 2026-03-10T10:14:38.365 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.200+0000 7fb398926700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 0x7fb394072220 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.200+0000 7fb398926700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb3940727f0 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.201+0000 7fb392d9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 0x7fb394072220 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.201+0000 7fb392d9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 0x7fb394072220 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55324/0 (socket says 192.168.123.102:55324) 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.201+0000 7fb392d9d700 1 -- 192.168.123.102:0/12044552 learned_addr learned my addr 192.168.123.102:0/12044552 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.202+0000 7fb392d9d700 1 -- 192.168.123.102:0/12044552 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb39410ddb0 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:33.202+0000 7fb392d9d700 1 --2- 192.168.123.102:0/12044552 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 0x7fb394072220 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fb384009a90 tx=0x7fb384009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=d5a632eace1c6492 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.202+0000 7fb391d9b700 1 -- 192.168.123.102:0/12044552 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb384004030 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.202+0000 7fb391d9b700 1 -- 192.168.123.102:0/12044552 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb38400b7e0 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.203+0000 7fb398926700 1 -- 192.168.123.102:0/12044552 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 msgr2=0x7fb394072220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.203+0000 7fb398926700 1 --2- 192.168.123.102:0/12044552 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 0x7fb394072220 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fb384009a90 tx=0x7fb384009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.203+0000 7fb398926700 1 -- 192.168.123.102:0/12044552 shutdown_connections 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.203+0000 7fb398926700 1 --2- 192.168.123.102:0/12044552 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 0x7fb394072220 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.203+0000 7fb398926700 1 -- 192.168.123.102:0/12044552 >> 192.168.123.102:0/12044552 conn(0x7fb39406d320 msgr2=0x7fb39406f760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.203+0000 7fb398926700 1 -- 192.168.123.102:0/12044552 shutdown_connections 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.204+0000 7fb398926700 1 -- 192.168.123.102:0/12044552 wait complete. 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.204+0000 7fb398926700 1 Processor -- start 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.204+0000 7fb398926700 1 -- start start 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.204+0000 7fb398926700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 0x7fb3941a9140 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.204+0000 7fb398926700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb3940727f0 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.205+0000 7fb392d9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 0x7fb3941a9140 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.205+0000 7fb392d9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 0x7fb3941a9140 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:55334/0 (socket says 192.168.123.102:55334) 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.205+0000 7fb392d9d700 1 -- 192.168.123.102:0/1705546405 learned_addr learned my addr 192.168.123.102:0/1705546405 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.205+0000 7fb392d9d700 1 -- 192.168.123.102:0/1705546405 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb384009740 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.205+0000 7fb392d9d700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 0x7fb3941a9140 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fb384003a50 tx=0x7fb384003d50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.206+0000 7fb383fff700 1 -- 192.168.123.102:0/1705546405 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb384004180 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.206+0000 7fb398926700 1 -- 192.168.123.102:0/1705546405 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({mgrmap=0+}) v3 -- 0x7fb3941a96e0 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.206+0000 7fb398926700 1 -- 192.168.123.102:0/1705546405 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb3941a9b00 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.206+0000 7fb383fff700 1 -- 192.168.123.102:0/1705546405 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb3840042e0 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.207+0000 7fb383fff700 1 -- 192.168.123.102:0/1705546405 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb384011640 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.207+0000 7fb383fff700 1 -- 192.168.123.102:0/1705546405 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 9) v1 ==== 45291+0+0 (secure 0 0 0) 0x7fb3840117a0 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.207+0000 7fb383fff700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 0x7fb37c03a8d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.208+0000 7fb39259c700 1 -- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 msgr2=0x7fb37c03a8d0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:14:33.208+0000 7fb39259c700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 0x7fb37c03a8d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.208+0000 7fb383fff700 1 -- 192.168.123.102:0/1705546405 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fb38404d160 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.208+0000 7fb398926700 1 -- 192.168.123.102:0/1705546405 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- command(tid 0: {"prefix": "get_command_descriptions"}) v1 -- 0x7fb378000d10 con 0x7fb37c038410 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.408+0000 7fb39259c700 1 -- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 msgr2=0x7fb37c03a8d0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.408+0000 7fb39259c700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 0x7fb37c03a8d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.809+0000 7fb39259c700 1 -- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 msgr2=0x7fb37c03a8d0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 
2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:33.809+0000 7fb39259c700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 0x7fb37c03a8d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:34.610+0000 7fb39259c700 1 -- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 msgr2=0x7fb37c03a8d0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:34.610+0000 7fb39259c700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 0x7fb37c03a8d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:36.211+0000 7fb39259c700 1 -- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 msgr2=0x7fb37c03a8d0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:36.211+0000 7fb39259c700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 0x7fb37c03a8d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 3.200000 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:37.263+0000 7fb383fff700 1 -- 192.168.123.102:0/1705546405 <== mon.0 
v2:192.168.123.102:3300/0 6 ==== mgrmap(e 10) v1 ==== 45058+0+0 (secure 0 0 0) 0x7fb38401ab30 con 0x7fb394071e00 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:37.263+0000 7fb383fff700 1 -- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 msgr2=0x7fb37c03a8d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:14:38.365 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:37.264+0000 7fb383fff700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 0x7fb37c03a8d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.299+0000 7fb383fff700 1 -- 192.168.123.102:0/1705546405 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mgrmap(e 11) v1 ==== 45185+0+0 (secure 0 0 0) 0x7fb384018350 con 0x7fb394071e00 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.299+0000 7fb383fff700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 0x7fb37c03a8d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.299+0000 7fb383fff700 1 -- 192.168.123.102:0/1705546405 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- command(tid 0: {"prefix": "get_command_descriptions"}) v1 -- 0x7fb378000d10 con 0x7fb37c038410 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.301+0000 7fb39259c700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 0x7fb37c03a8d0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.302+0000 7fb39259c700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 0x7fb37c03a8d0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7fb388003a10 tx=0x7fb3880092b0 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.302+0000 7fb383fff700 1 -- 192.168.123.102:0/1705546405 <== mgr.14164 v2:192.168.123.102:6800/2 1 ==== command_reply(tid 0: 0 ) v1 ==== 8+0+6910 (secure 0 0 0) 0x7fb378000d10 con 0x7fb37c038410 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.307+0000 7fb398926700 1 -- 192.168.123.102:0/1705546405 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- command(tid 1: {"prefix": "mgr_status"}) v1 -- 0x7fb3780027d0 con 0x7fb37c038410 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.308+0000 7fb383fff700 1 -- 192.168.123.102:0/1705546405 <== mgr.14164 v2:192.168.123.102:6800/2 2 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+52 (secure 0 0 0) 0x7fb3780027d0 con 0x7fb37c038410 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.308+0000 7fb398926700 1 -- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 msgr2=0x7fb37c03a8d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.308+0000 7fb398926700 1 --2- 192.168.123.102:0/1705546405 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 0x7fb37c03a8d0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7fb388003a10 tx=0x7fb3880092b0 comp rx=0 tx=0).stop 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.308+0000 7fb398926700 1 -- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 msgr2=0x7fb3941a9140 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.308+0000 7fb398926700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 0x7fb3941a9140 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fb384003a50 tx=0x7fb384003d50 comp rx=0 tx=0).stop 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.308+0000 7fb398926700 1 -- 192.168.123.102:0/1705546405 shutdown_connections 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.309+0000 7fb398926700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb37c038410 0x7fb37c03a8d0 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.309+0000 7fb398926700 1 --2- 192.168.123.102:0/1705546405 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb394071e00 0x7fb3941a9140 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.309+0000 7fb398926700 1 -- 192.168.123.102:0/1705546405 >> 192.168.123.102:0/1705546405 conn(0x7fb39406d320 msgr2=0x7fb39406df90 unknown :-1 
s=STATE_NONE l=0).mark_down 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.309+0000 7fb398926700 1 -- 192.168.123.102:0/1705546405 shutdown_connections 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.309+0000 7fb398926700 1 -- 192.168.123.102:0/1705546405 wait complete. 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:mgr epoch 9 is available 2026-03-10T10:14:38.366 INFO:teuthology.orchestra.run.vm02.stdout:Generating a dashboard self-signed certificate... 2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: mgrmap e10: vm02.zmavgl(active, starting, since 0.171661s) 2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": "vm02.zmavgl", "id": "vm02.zmavgl"}]: dispatch 2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata"}]: dispatch 
2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: Manager daemon vm02.zmavgl is now available 2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/trash_purge_schedule"}]: dispatch 2026-03-10T10:14:38.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:38 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:38.728 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-10T10:14:38.728 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.503+0000 7f9d8fcbd700 1 Processor -- start 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.503+0000 7f9d8fcbd700 1 -- start start 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.503+0000 7f9d8fcbd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8806ad60 0x7f9d8806b180 unknown :-1 s=NONE pgs=0 cs=0 
l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.503+0000 7f9d8fcbd700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9d8806b750 con 0x7f9d8806ad60 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.506+0000 7f9d8da59700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8806ad60 0x7f9d8806b180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.506+0000 7f9d8da59700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8806ad60 0x7f9d8806b180 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33520/0 (socket says 192.168.123.102:33520) 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.506+0000 7f9d8da59700 1 -- 192.168.123.102:0/2442175257 learned_addr learned my addr 192.168.123.102:0/2442175257 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.506+0000 7f9d8da59700 1 -- 192.168.123.102:0/2442175257 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9d8806bfb0 con 0x7f9d8806ad60 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.506+0000 7f9d8da59700 1 --2- 192.168.123.102:0/2442175257 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8806ad60 0x7f9d8806b180 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f9d84010c60 
tx=0x7f9d84010f70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=d689256e31d218e server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.506+0000 7f9d8ca57700 1 -- 192.168.123.102:0/2442175257 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9d84004030 con 0x7f9d8806ad60 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.506+0000 7f9d8ca57700 1 -- 192.168.123.102:0/2442175257 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9d840036a0 con 0x7f9d8806ad60 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.506+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/2442175257 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8806ad60 msgr2=0x7f9d8806b180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.506+0000 7f9d8fcbd700 1 --2- 192.168.123.102:0/2442175257 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8806ad60 0x7f9d8806b180 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f9d84010c60 tx=0x7f9d84010f70 comp rx=0 tx=0).stop 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.507+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/2442175257 shutdown_connections 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.507+0000 7f9d8fcbd700 1 --2- 192.168.123.102:0/2442175257 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8806ad60 0x7f9d8806b180 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:38.507+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/2442175257 >> 192.168.123.102:0/2442175257 conn(0x7f9d88100350 msgr2=0x7f9d881027b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.507+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/2442175257 shutdown_connections 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.507+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/2442175257 wait complete. 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.507+0000 7f9d8fcbd700 1 Processor -- start 2026-03-10T10:14:38.729 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.507+0000 7f9d8fcbd700 1 -- start start 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.507+0000 7f9d8fcbd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8819a880 0x7f9d8819aca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.507+0000 7f9d8fcbd700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9d84018650 con 0x7f9d8819a880 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.507+0000 7f9d8da59700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8819a880 0x7f9d8819aca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.507+0000 7f9d8da59700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8819a880 
0x7f9d8819aca0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33526/0 (socket says 192.168.123.102:33526) 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.507+0000 7f9d8da59700 1 -- 192.168.123.102:0/1787861581 learned_addr learned my addr 192.168.123.102:0/1787861581 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.507+0000 7f9d8da59700 1 -- 192.168.123.102:0/1787861581 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9d84010910 con 0x7f9d8819a880 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.508+0000 7f9d8da59700 1 --2- 192.168.123.102:0/1787861581 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8819a880 0x7f9d8819aca0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f9d84018490 tx=0x7f9d840179e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.508+0000 7f9d7effd700 1 -- 192.168.123.102:0/1787861581 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9d840184f0 con 0x7f9d8819a880 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.508+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/1787861581 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9d8819b1e0 con 0x7f9d8819a880 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.508+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/1787861581 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9d8819de60 con 0x7f9d8819a880 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.508+0000 7f9d7effd700 1 -- 192.168.123.102:0/1787861581 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9d84017dc0 con 0x7f9d8819a880 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.508+0000 7f9d7effd700 1 -- 192.168.123.102:0/1787861581 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9d84003840 con 0x7f9d8819a880 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.508+0000 7f9d7effd700 1 -- 192.168.123.102:0/1787861581 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 11) v1 ==== 45185+0+0 (secure 0 0 0) 0x7f9d84003a60 con 0x7f9d8819a880 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.509+0000 7f9d7effd700 1 --2- 192.168.123.102:0/1787861581 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9d74038350 0x7f9d7403a810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.509+0000 7f9d8d258700 1 --2- 192.168.123.102:0/1787861581 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9d74038350 0x7f9d7403a810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.509+0000 7f9d8d258700 1 --2- 192.168.123.102:0/1787861581 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9d74038350 0x7f9d7403a810 secure :-1 
s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f9d8000ad80 tx=0x7f9d800093f0 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.509+0000 7f9d7effd700 1 -- 192.168.123.102:0/1787861581 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f9d8404c8e0 con 0x7f9d8819a880 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.512+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/1787861581 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9d6c005320 con 0x7f9d8819a880 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.515+0000 7f9d7effd700 1 -- 192.168.123.102:0/1787861581 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f9d8402a430 con 0x7f9d8819a880 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.639+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/1787861581 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}) v1 -- 0x7f9d6c000bf0 con 0x7f9d74038350 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.671+0000 7f9d7effd700 1 -- 192.168.123.102:0/1787861581 <== mgr.14164 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7f9d6c000bf0 con 0x7f9d74038350 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.673+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/1787861581 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9d74038350 msgr2=0x7f9d7403a810 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.673+0000 7f9d8fcbd700 1 --2- 192.168.123.102:0/1787861581 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9d74038350 0x7f9d7403a810 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f9d8000ad80 tx=0x7f9d800093f0 comp rx=0 tx=0).stop 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.674+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/1787861581 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8819a880 msgr2=0x7f9d8819aca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.674+0000 7f9d8fcbd700 1 --2- 192.168.123.102:0/1787861581 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8819a880 0x7f9d8819aca0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f9d84018490 tx=0x7f9d840179e0 comp rx=0 tx=0).stop 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.674+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/1787861581 shutdown_connections 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.674+0000 7f9d8fcbd700 1 --2- 192.168.123.102:0/1787861581 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9d74038350 0x7f9d7403a810 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.674+0000 7f9d8fcbd700 1 --2- 192.168.123.102:0/1787861581 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d8819a880 0x7f9d8819aca0 unknown :-1 
s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.674+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/1787861581 >> 192.168.123.102:0/1787861581 conn(0x7f9d88100350 msgr2=0x7f9d8806da80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.674+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/1787861581 shutdown_connections 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.674+0000 7f9d8fcbd700 1 -- 192.168.123.102:0/1787861581 wait complete. 2026-03-10T10:14:38.730 INFO:teuthology.orchestra.run.vm02.stdout:Creating initial admin user... 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$kYDVYabf8yogyinao3UmKOBJtsPs..6hsub7wZqBbv/yX.9.wRqLi", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773137679, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.871+0000 7f754d905700 1 Processor -- start 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.871+0000 7f754d905700 1 -- start start 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.871+0000 7f754d905700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 0x7f7548072190 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.871+0000 7f754d905700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7548072760 con 
0x7f7548071d70 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.872+0000 7f754c903700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 0x7f7548072190 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.872+0000 7f754c903700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 0x7f7548072190 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33530/0 (socket says 192.168.123.102:33530) 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.872+0000 7f754c903700 1 -- 192.168.123.102:0/1609290332 learned_addr learned my addr 192.168.123.102:0/1609290332 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.872+0000 7f754c903700 1 -- 192.168.123.102:0/1609290332 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f75480728a0 con 0x7f7548071d70 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.872+0000 7f754c903700 1 --2- 192.168.123.102:0/1609290332 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 0x7f7548072190 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f7538009a90 tx=0x7f7538009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=d6e7dcedc51713ba server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.872+0000 7f75477fe700 1 -- 192.168.123.102:0/1609290332 <== mon.0 
v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7538004030 con 0x7f7548071d70 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.873+0000 7f75477fe700 1 -- 192.168.123.102:0/1609290332 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f753800b7e0 con 0x7f7548071d70 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.873+0000 7f75477fe700 1 -- 192.168.123.102:0/1609290332 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7538003b30 con 0x7f7548071d70 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.873+0000 7f754d905700 1 -- 192.168.123.102:0/1609290332 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 msgr2=0x7f7548072190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.873+0000 7f754d905700 1 --2- 192.168.123.102:0/1609290332 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 0x7f7548072190 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f7538009a90 tx=0x7f7538009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.873+0000 7f754d905700 1 -- 192.168.123.102:0/1609290332 shutdown_connections 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.873+0000 7f754d905700 1 --2- 192.168.123.102:0/1609290332 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 0x7f7548072190 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:38.873+0000 7f754d905700 1 -- 192.168.123.102:0/1609290332 >> 192.168.123.102:0/1609290332 conn(0x7f754806d320 msgr2=0x7f754806f760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.873+0000 7f754d905700 1 -- 192.168.123.102:0/1609290332 shutdown_connections 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.873+0000 7f754d905700 1 -- 192.168.123.102:0/1609290332 wait complete. 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.874+0000 7f754d905700 1 Processor -- start 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.874+0000 7f754d905700 1 -- start start 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.874+0000 7f754d905700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 0x7f754811d2f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:39.198 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.874+0000 7f754d905700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f754811d830 con 0x7f7548071d70 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.874+0000 7f754c903700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 0x7f754811d2f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.874+0000 7f754c903700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 
0x7f754811d2f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33542/0 (socket says 192.168.123.102:33542) 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.874+0000 7f754c903700 1 -- 192.168.123.102:0/3279153281 learned_addr learned my addr 192.168.123.102:0/3279153281 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.874+0000 7f754c903700 1 -- 192.168.123.102:0/3279153281 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7538009740 con 0x7f7548071d70 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.874+0000 7f754c903700 1 --2- 192.168.123.102:0/3279153281 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 0x7f754811d2f0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f7538000c00 tx=0x7f753800bfa0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.875+0000 7f7545ffb700 1 -- 192.168.123.102:0/3279153281 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f75380040f0 con 0x7f7548071d70 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.875+0000 7f7545ffb700 1 -- 192.168.123.102:0/3279153281 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f7538004250 con 0x7f7548071d70 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.875+0000 7f7545ffb700 1 -- 192.168.123.102:0/3279153281 <== mon.0 v2:192.168.123.102:3300/0 3 ==== 
mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7538011560 con 0x7f7548071d70 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.875+0000 7f754d905700 1 -- 192.168.123.102:0/3279153281 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f754811da30 con 0x7f7548071d70 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.875+0000 7f754d905700 1 -- 192.168.123.102:0/3279153281 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f754811bb10 con 0x7f7548071d70 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.875+0000 7f754d905700 1 -- 192.168.123.102:0/3279153281 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f754804f000 con 0x7f7548071d70 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.878+0000 7f7545ffb700 1 -- 192.168.123.102:0/3279153281 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 11) v1 ==== 45185+0+0 (secure 0 0 0) 0x7f75380043c0 con 0x7f7548071d70 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.879+0000 7f7545ffb700 1 --2- 192.168.123.102:0/3279153281 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f7530038270 0x7f753003a730 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.879+0000 7f7545ffb700 1 -- 192.168.123.102:0/3279153281 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f7538028030 con 0x7f7548071d70 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:14:38.879+0000 7f7545ffb700 1 -- 192.168.123.102:0/3279153281 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f753802c930 con 0x7f7548071d70 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.879+0000 7f7547fff700 1 --2- 192.168.123.102:0/3279153281 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f7530038270 0x7f753003a730 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.880+0000 7f7547fff700 1 --2- 192.168.123.102:0/3279153281 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f7530038270 0x7f753003a730 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f753c006fd0 tx=0x7f753c006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:38.997+0000 7f754d905700 1 -- 192.168.123.102:0/3279153281 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}) v1 -- 0x7f75480621e0 con 0x7f7530038270 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.162+0000 7f7545ffb700 1 -- 192.168.123.102:0/3279153281 <== mgr.14164 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+252 (secure 0 0 0) 0x7f75480621e0 con 0x7f7530038270 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.165+0000 7f754d905700 1 -- 
192.168.123.102:0/3279153281 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f7530038270 msgr2=0x7f753003a730 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.165+0000 7f754d905700 1 --2- 192.168.123.102:0/3279153281 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f7530038270 0x7f753003a730 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f753c006fd0 tx=0x7f753c006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.165+0000 7f754d905700 1 -- 192.168.123.102:0/3279153281 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 msgr2=0x7f754811d2f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.165+0000 7f754d905700 1 --2- 192.168.123.102:0/3279153281 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 0x7f754811d2f0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f7538000c00 tx=0x7f753800bfa0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.165+0000 7f754d905700 1 -- 192.168.123.102:0/3279153281 shutdown_connections 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.165+0000 7f754d905700 1 --2- 192.168.123.102:0/3279153281 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f7530038270 0x7f753003a730 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.165+0000 7f754d905700 1 --2- 192.168.123.102:0/3279153281 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7548071d70 
0x7f754811d2f0 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.165+0000 7f754d905700 1 -- 192.168.123.102:0/3279153281 >> 192.168.123.102:0/3279153281 conn(0x7f754806d320 msgr2=0x7f754806df80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.165+0000 7f754d905700 1 -- 192.168.123.102:0/3279153281 shutdown_connections 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.165+0000 7f754d905700 1 -- 192.168.123.102:0/3279153281 wait complete. 2026-03-10T10:14:39.199 INFO:teuthology.orchestra.run.vm02.stdout:Fetching dashboard port number... 2026-03-10T10:14:39.512 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:39 vm02 ceph-mon[50200]: [10/Mar/2026:10:14:37] ENGINE Bus STARTING 2026-03-10T10:14:39.512 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:39 vm02 ceph-mon[50200]: [10/Mar/2026:10:14:38] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T10:14:39.513 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:39 vm02 ceph-mon[50200]: [10/Mar/2026:10:14:38] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T10:14:39.513 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:39 vm02 ceph-mon[50200]: [10/Mar/2026:10:14:38] ENGINE Bus STARTED 2026-03-10T10:14:39.513 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:39 vm02 ceph-mon[50200]: mgrmap e11: vm02.zmavgl(active, since 1.20756s) 2026-03-10T10:14:39.513 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:39 vm02 ceph-mon[50200]: from='client.14168 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T10:14:39.513 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:39 vm02 ceph-mon[50200]: from='client.14168 -' entity='client.admin' cmd=[{"prefix": 
"mgr_status"}]: dispatch 2026-03-10T10:14:39.513 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:39 vm02 ceph-mon[50200]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:39.513 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:39 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:39.513 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:39 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:39.513 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:39 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:39.513 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:39 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 8443 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.360+0000 7fa3afe94700 1 Processor -- start 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.360+0000 7fa3afe94700 1 -- start start 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.360+0000 7fa3afe94700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 0x7fa3a8108db0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.360+0000 7fa3afe94700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa3a8109380 con 0x7fa3a8108990 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:39.361+0000 7fa3adc30700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 0x7fa3a8108db0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.361+0000 7fa3adc30700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 0x7fa3a8108db0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33546/0 (socket says 192.168.123.102:33546) 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.361+0000 7fa3adc30700 1 -- 192.168.123.102:0/397620097 learned_addr learned my addr 192.168.123.102:0/397620097 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.361+0000 7fa3adc30700 1 -- 192.168.123.102:0/397620097 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa3a8109b90 con 0x7fa3a8108990 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.361+0000 7fa3adc30700 1 --2- 192.168.123.102:0/397620097 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 0x7fa3a8108db0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7fa3a4009a90 tx=0x7fa3a4009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=196a7f3657e204fa server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.362+0000 7fa3acc2e700 1 -- 192.168.123.102:0/397620097 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa3a4004030 con 
0x7fa3a8108990 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.362+0000 7fa3acc2e700 1 -- 192.168.123.102:0/397620097 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa3a400b7e0 con 0x7fa3a8108990 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.362+0000 7fa3acc2e700 1 -- 192.168.123.102:0/397620097 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa3a4003ae0 con 0x7fa3a8108990 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.362+0000 7fa3afe94700 1 -- 192.168.123.102:0/397620097 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 msgr2=0x7fa3a8108db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.362+0000 7fa3afe94700 1 --2- 192.168.123.102:0/397620097 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 0x7fa3a8108db0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7fa3a4009a90 tx=0x7fa3a4009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.362+0000 7fa3afe94700 1 -- 192.168.123.102:0/397620097 shutdown_connections 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.362+0000 7fa3afe94700 1 --2- 192.168.123.102:0/397620097 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 0x7fa3a8108db0 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.362+0000 7fa3afe94700 1 -- 192.168.123.102:0/397620097 >> 192.168.123.102:0/397620097 
conn(0x7fa3a8103f50 msgr2=0x7fa3a8106370 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.362+0000 7fa3afe94700 1 -- 192.168.123.102:0/397620097 shutdown_connections 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.362+0000 7fa3afe94700 1 -- 192.168.123.102:0/397620097 wait complete. 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.363+0000 7fa3afe94700 1 Processor -- start 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.363+0000 7fa3afe94700 1 -- start start 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.363+0000 7fa3afe94700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 0x7fa3a819c6b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.363+0000 7fa3afe94700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa3a819cbf0 con 0x7fa3a8108990 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.363+0000 7fa3adc30700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 0x7fa3a819c6b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.363+0000 7fa3adc30700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 0x7fa3a819c6b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33556/0 (socket says 192.168.123.102:33556) 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.363+0000 7fa3adc30700 1 -- 192.168.123.102:0/2658776739 learned_addr learned my addr 192.168.123.102:0/2658776739 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.364+0000 7fa3adc30700 1 -- 192.168.123.102:0/2658776739 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa3a4009740 con 0x7fa3a8108990 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.364+0000 7fa3adc30700 1 --2- 192.168.123.102:0/2658776739 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 0x7fa3a819c6b0 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fa3a4009710 tx=0x7fa3a400bfa0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:39.552 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.364+0000 7fa39effd700 1 -- 192.168.123.102:0/2658776739 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa3a4004160 con 0x7fa3a8108990 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.364+0000 7fa39effd700 1 -- 192.168.123.102:0/2658776739 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa3a40042c0 con 0x7fa3a8108990 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.364+0000 7fa39effd700 1 -- 192.168.123.102:0/2658776739 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa3a40115a0 con 0x7fa3a8108990 
2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.364+0000 7fa3afe94700 1 -- 192.168.123.102:0/2658776739 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa3a819cdf0 con 0x7fa3a8108990 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.364+0000 7fa3afe94700 1 -- 192.168.123.102:0/2658776739 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa3a819d210 con 0x7fa3a8108990 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.365+0000 7fa3afe94700 1 -- 192.168.123.102:0/2658776739 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa3a80623c0 con 0x7fa3a8108990 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.368+0000 7fa39effd700 1 -- 192.168.123.102:0/2658776739 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 11) v1 ==== 45185+0+0 (secure 0 0 0) 0x7fa3a4028020 con 0x7fa3a8108990 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.368+0000 7fa39effd700 1 --2- 192.168.123.102:0/2658776739 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa394038320 0x7fa39403a7e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.368+0000 7fa39effd700 1 -- 192.168.123.102:0/2658776739 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fa3a404c3e0 con 0x7fa3a8108990 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.369+0000 7fa39effd700 1 -- 192.168.123.102:0/2658776739 <== 
mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fa3a404c860 con 0x7fa3a8108990 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.369+0000 7fa3ad42f700 1 --2- 192.168.123.102:0/2658776739 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa394038320 0x7fa39403a7e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.369+0000 7fa3ad42f700 1 --2- 192.168.123.102:0/2658776739 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa394038320 0x7fa39403a7e0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7fa398006fd0 tx=0x7fa398006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.495+0000 7fa3afe94700 1 -- 192.168.123.102:0/2658776739 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"} v 0) v1 -- 0x7fa3a819fc50 con 0x7fa3a8108990 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.496+0000 7fa39effd700 1 -- 192.168.123.102:0/2658776739 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]=0 v8) v1 ==== 112+0+5 (secure 0 0 0) 0x7fa3a4004430 con 0x7fa3a8108990 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.498+0000 7fa3afe94700 1 -- 192.168.123.102:0/2658776739 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa394038320 msgr2=0x7fa39403a7e0 
secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.498+0000 7fa3afe94700 1 --2- 192.168.123.102:0/2658776739 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa394038320 0x7fa39403a7e0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7fa398006fd0 tx=0x7fa398006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.498+0000 7fa3afe94700 1 -- 192.168.123.102:0/2658776739 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 msgr2=0x7fa3a819c6b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.498+0000 7fa3afe94700 1 --2- 192.168.123.102:0/2658776739 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 0x7fa3a819c6b0 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fa3a4009710 tx=0x7fa3a400bfa0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.499+0000 7fa3afe94700 1 -- 192.168.123.102:0/2658776739 shutdown_connections 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.499+0000 7fa3afe94700 1 --2- 192.168.123.102:0/2658776739 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa394038320 0x7fa39403a7e0 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.499+0000 7fa3afe94700 1 --2- 192.168.123.102:0/2658776739 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa3a8108990 0x7fa3a819c6b0 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.553 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.499+0000 7fa3afe94700 1 -- 192.168.123.102:0/2658776739 >> 192.168.123.102:0/2658776739 conn(0x7fa3a8103f50 msgr2=0x7fa3a8104bb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.499+0000 7fa3afe94700 1 -- 192.168.123.102:0/2658776739 shutdown_connections 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.499+0000 7fa3afe94700 1 -- 192.168.123.102:0/2658776739 wait complete. 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:Ceph Dashboard is now available at: 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout: URL: https://vm02.local:8443/ 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout: User: admin 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout: Password: 2odjzvm57s 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:14:39.553 INFO:teuthology.orchestra.run.vm02.stdout:Saving cluster configuration to /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config directory 2026-03-10T10:14:39.554 INFO:teuthology.orchestra.run.vm02.stdout:Enabling autotune for osd_memory_target 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.672+0000 7fc50fa85700 1 Processor -- start 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.673+0000 7fc50fa85700 1 -- start start 2026-03-10T10:14:39.816 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.673+0000 7fc50fa85700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 0x7fc508105450 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.673+0000 7fc50fa85700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc5080746a0 con 0x7fc508105030 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.673+0000 7fc50d821700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 0x7fc508105450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.673+0000 7fc50d821700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 0x7fc508105450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33558/0 (socket says 192.168.123.102:33558) 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.673+0000 7fc50d821700 1 -- 192.168.123.102:0/2463614301 learned_addr learned my addr 192.168.123.102:0/2463614301 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.673+0000 7fc50d821700 1 -- 192.168.123.102:0/2463614301 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc508105990 con 0x7fc508105030 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:39.674+0000 7fc50d821700 1 --2- 192.168.123.102:0/2463614301 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 0x7fc508105450 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7fc4f8009a90 tx=0x7fc4f8009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=1d6a8abb05ca2ae1 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.674+0000 7fc50c81f700 1 -- 192.168.123.102:0/2463614301 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fc4f8004030 con 0x7fc508105030 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.674+0000 7fc50c81f700 1 -- 192.168.123.102:0/2463614301 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc4f800b7e0 con 0x7fc508105030 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.674+0000 7fc50c81f700 1 -- 192.168.123.102:0/2463614301 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fc4f8003ae0 con 0x7fc508105030 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.674+0000 7fc50fa85700 1 -- 192.168.123.102:0/2463614301 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 msgr2=0x7fc508105450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.674+0000 7fc50fa85700 1 --2- 192.168.123.102:0/2463614301 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 0x7fc508105450 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7fc4f8009a90 tx=0x7fc4f8009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:39.674+0000 7fc50fa85700 1 -- 192.168.123.102:0/2463614301 shutdown_connections 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.674+0000 7fc50fa85700 1 --2- 192.168.123.102:0/2463614301 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 0x7fc508105450 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.674+0000 7fc50fa85700 1 -- 192.168.123.102:0/2463614301 >> 192.168.123.102:0/2463614301 conn(0x7fc508100bd0 msgr2=0x7fc508103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.675+0000 7fc50fa85700 1 -- 192.168.123.102:0/2463614301 shutdown_connections 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.675+0000 7fc50fa85700 1 -- 192.168.123.102:0/2463614301 wait complete. 
2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.675+0000 7fc50fa85700 1 Processor -- start 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.675+0000 7fc50fa85700 1 -- start start 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.676+0000 7fc50fa85700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 0x7fc50819c630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.676+0000 7fc50fa85700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc50819cb70 con 0x7fc508105030 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.676+0000 7fc50d821700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 0x7fc50819c630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.676+0000 7fc50d821700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 0x7fc50819c630 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33570/0 (socket says 192.168.123.102:33570) 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.676+0000 7fc50d821700 1 -- 192.168.123.102:0/772531529 learned_addr learned my addr 192.168.123.102:0/772531529 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:39.816 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.676+0000 7fc50d821700 1 -- 192.168.123.102:0/772531529 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc4f8009740 con 0x7fc508105030 2026-03-10T10:14:39.816 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.676+0000 7fc50d821700 1 --2- 192.168.123.102:0/772531529 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 0x7fc50819c630 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7fc4f8000c00 tx=0x7fc4f800bfa0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.676+0000 7fc4feffd700 1 -- 192.168.123.102:0/772531529 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fc4f80041a0 con 0x7fc508105030 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.676+0000 7fc4feffd700 1 -- 192.168.123.102:0/772531529 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc4f8004300 con 0x7fc508105030 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.676+0000 7fc4feffd700 1 -- 192.168.123.102:0/772531529 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fc4f8011550 con 0x7fc508105030 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.676+0000 7fc50fa85700 1 -- 192.168.123.102:0/772531529 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc50819cd70 con 0x7fc508105030 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.676+0000 7fc50fa85700 1 -- 
192.168.123.102:0/772531529 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc50819d190 con 0x7fc508105030 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.677+0000 7fc4feffd700 1 -- 192.168.123.102:0/772531529 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 11) v1 ==== 45185+0+0 (secure 0 0 0) 0x7fc4f80116b0 con 0x7fc508105030 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.677+0000 7fc4feffd700 1 --2- 192.168.123.102:0/772531529 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc4f4040b30 0x7fc4f4042ff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.677+0000 7fc4feffd700 1 -- 192.168.123.102:0/772531529 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fc4f804d0b0 con 0x7fc508105030 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.677+0000 7fc50d020700 1 --2- 192.168.123.102:0/772531529 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc4f4040b30 0x7fc4f4042ff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.678+0000 7fc50d020700 1 --2- 192.168.123.102:0/772531529 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc4f4040b30 0x7fc4f4042ff0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7fc504006fd0 tx=0x7fc504006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:39.678+0000 7fc50fa85700 1 -- 192.168.123.102:0/772531529 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc508195eb0 con 0x7fc508105030 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.681+0000 7fc4feffd700 1 -- 192.168.123.102:0/772531529 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fc4f80119b0 con 0x7fc508105030 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.785+0000 7fc50fa85700 1 -- 192.168.123.102:0/772531529 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1 -- 0x7fc5080623c0 con 0x7fc508105030 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.785+0000 7fc4feffd700 1 -- 192.168.123.102:0/772531529 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{prefix=config set, name=osd_memory_target_autotune}]=0 v8) v1 ==== 127+0+0 (secure 0 0 0) 0x7fc4f8018b40 con 0x7fc508105030 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.788+0000 7fc50fa85700 1 -- 192.168.123.102:0/772531529 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc4f4040b30 msgr2=0x7fc4f4042ff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.788+0000 7fc50fa85700 1 --2- 192.168.123.102:0/772531529 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc4f4040b30 0x7fc4f4042ff0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7fc504006fd0 tx=0x7fc504006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:39.817
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.788+0000 7fc50fa85700 1 -- 192.168.123.102:0/772531529 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 msgr2=0x7fc50819c630 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.788+0000 7fc50fa85700 1 --2- 192.168.123.102:0/772531529 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 0x7fc50819c630 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7fc4f8000c00 tx=0x7fc4f800bfa0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.788+0000 7fc50fa85700 1 -- 192.168.123.102:0/772531529 shutdown_connections 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.788+0000 7fc50fa85700 1 --2- 192.168.123.102:0/772531529 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc4f4040b30 0x7fc4f4042ff0 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.788+0000 7fc50fa85700 1 --2- 192.168.123.102:0/772531529 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc508105030 0x7fc50819c630 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.788+0000 7fc50fa85700 1 -- 192.168.123.102:0/772531529 >> 192.168.123.102:0/772531529 conn(0x7fc508100bd0 msgr2=0x7fc508190bf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.788+0000 7fc50fa85700 1 -- 192.168.123.102:0/772531529 shutdown_connections 
2026-03-10T10:14:39.817 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.788+0000 7fc50fa85700 1 -- 192.168.123.102:0/772531529 wait complete. 2026-03-10T10:14:40.152 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.942+0000 7f2f54e7d700 1 Processor -- start 2026-03-10T10:14:40.153 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.943+0000 7f2f54e7d700 1 -- start start 2026-03-10T10:14:40.153 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.943+0000 7f2f54e7d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f50104fb0 0x7f2f501073e0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.943+0000 7f2f54e7d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2f50074720 con 0x7f2f50104fb0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.943+0000 7f2f4e59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f50104fb0 0x7f2f501073e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.943+0000 7f2f4e59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f50104fb0 0x7f2f501073e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33582/0 (socket says 192.168.123.102:33582) 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.943+0000 7f2f4e59c700 1 -- 
192.168.123.102:0/3251703681 learned_addr learned my addr 192.168.123.102:0/3251703681 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.944+0000 7f2f4e59c700 1 -- 192.168.123.102:0/3251703681 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2f50107920 con 0x7f2f50104fb0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.944+0000 7f2f4e59c700 1 --2- 192.168.123.102:0/3251703681 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f50104fb0 0x7f2f501073e0 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f2f40009a90 tx=0x7f2f40009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=7e7ae300f1b0eb45 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.944+0000 7f2f4dd9b700 1 -- 192.168.123.102:0/3251703681 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2f40004030 con 0x7f2f50104fb0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.944+0000 7f2f4dd9b700 1 -- 192.168.123.102:0/3251703681 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f2f4000b7e0 con 0x7f2f50104fb0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.945+0000 7f2f54e7d700 1 -- 192.168.123.102:0/3251703681 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f50104fb0 msgr2=0x7f2f501073e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.945+0000 7f2f54e7d700 1 --2- 192.168.123.102:0/3251703681 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7f2f50104fb0 0x7f2f501073e0 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f2f40009a90 tx=0x7f2f40009da0 comp rx=0 tx=0).stop 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.945+0000 7f2f54e7d700 1 -- 192.168.123.102:0/3251703681 shutdown_connections 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.945+0000 7f2f54e7d700 1 --2- 192.168.123.102:0/3251703681 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f50104fb0 0x7f2f501073e0 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.945+0000 7f2f54e7d700 1 -- 192.168.123.102:0/3251703681 >> 192.168.123.102:0/3251703681 conn(0x7f2f50100bd0 msgr2=0x7f2f50103030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.945+0000 7f2f54e7d700 1 -- 192.168.123.102:0/3251703681 shutdown_connections 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.945+0000 7f2f54e7d700 1 -- 192.168.123.102:0/3251703681 wait complete. 
2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.945+0000 7f2f54e7d700 1 Processor -- start 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.946+0000 7f2f54e7d700 1 -- start start 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.946+0000 7f2f54e7d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f501a0bd0 0x7f2f501a1010 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.946+0000 7f2f54e7d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2f50074720 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.946+0000 7f2f4e59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f501a0bd0 0x7f2f501a1010 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.946+0000 7f2f4e59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f501a0bd0 0x7f2f501a1010 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33596/0 (socket says 192.168.123.102:33596) 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.946+0000 7f2f4e59c700 1 -- 192.168.123.102:0/1020259591 learned_addr learned my addr 192.168.123.102:0/1020259591 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:40.154 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.946+0000 7f2f4e59c700 1 -- 192.168.123.102:0/1020259591 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2f40009740 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.946+0000 7f2f4e59c700 1 --2- 192.168.123.102:0/1020259591 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f501a0bd0 0x7f2f501a1010 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f2f40009130 tx=0x7f2f4000be30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.947+0000 7f2f477fe700 1 -- 192.168.123.102:0/1020259591 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2f40003f60 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.947+0000 7f2f54e7d700 1 -- 192.168.123.102:0/1020259591 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2f501a1550 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.947+0000 7f2f54e7d700 1 -- 192.168.123.102:0/1020259591 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2f501a4190 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.948+0000 7f2f477fe700 1 -- 192.168.123.102:0/1020259591 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f2f400045a0 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.948+0000 7f2f477fe700 1 
-- 192.168.123.102:0/1020259591 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2f4001ad80 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.948+0000 7f2f477fe700 1 -- 192.168.123.102:0/1020259591 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 11) v1 ==== 45185+0+0 (secure 0 0 0) 0x7f2f40011420 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.948+0000 7f2f477fe700 1 --2- 192.168.123.102:0/1020259591 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2f30038350 0x7f2f3003a810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.948+0000 7f2f477fe700 1 -- 192.168.123.102:0/1020259591 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f2f4004c830 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.948+0000 7f2f54e7d700 1 -- 192.168.123.102:0/1020259591 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2f34005320 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.949+0000 7f2f47fff700 1 --2- 192.168.123.102:0/1020259591 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2f30038350 0x7f2f3003a810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.950+0000 7f2f47fff700 1 --2- 192.168.123.102:0/1020259591 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2f30038350 0x7f2f3003a810 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f2f38006fd0 tx=0x7f2f38006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:39.951+0000 7f2f477fe700 1 -- 192.168.123.102:0/1020259591 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f2f40011720 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:40.008+0000 7f2f477fe700 1 -- 192.168.123.102:0/1020259591 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mgrmap(e 12) v1 ==== 45291+0+0 (secure 0 0 0) 0x7f2f40011900 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:40.111+0000 7f2f54e7d700 1 -- 192.168.123.102:0/1020259591 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1 -- 0x7f2f34005f70 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:40.118+0000 7f2f477fe700 1 -- 192.168.123.102:0/1020259591 <== mon.0 v2:192.168.123.102:3300/0 8 ==== mon_command_ack([{prefix=config-key set, key=mgr/dashboard/cluster/status}]=0 set mgr/dashboard/cluster/status v34) v1 ==== 153+0+0 (secure 0 0 0) 0x7f2f40010970 con 0x7f2f501a0bd0 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:40.120+0000 7f2f54e7d700 1 -- 192.168.123.102:0/1020259591 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2f30038350 msgr2=0x7f2f3003a810 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:40.120+0000 7f2f54e7d700 1 --2- 192.168.123.102:0/1020259591 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2f30038350 0x7f2f3003a810 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f2f38006fd0 tx=0x7f2f38006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:40.120+0000 7f2f54e7d700 1 -- 192.168.123.102:0/1020259591 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f501a0bd0 msgr2=0x7f2f501a1010 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:40.120+0000 7f2f54e7d700 1 --2- 192.168.123.102:0/1020259591 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f501a0bd0 0x7f2f501a1010 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f2f40009130 tx=0x7f2f4000be30 comp rx=0 tx=0).stop 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:40.120+0000 7f2f54e7d700 1 -- 192.168.123.102:0/1020259591 shutdown_connections 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:40.120+0000 7f2f54e7d700 1 --2- 192.168.123.102:0/1020259591 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2f30038350 0x7f2f3003a810 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:40.120+0000 7f2f54e7d700 1 --2- 192.168.123.102:0/1020259591 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2f501a0bd0 0x7f2f501a1010 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:14:40.120+0000 7f2f54e7d700 1 -- 192.168.123.102:0/1020259591 >> 192.168.123.102:0/1020259591 conn(0x7f2f50100bd0 msgr2=0x7f2f501071d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:40.121+0000 7f2f54e7d700 1 -- 192.168.123.102:0/1020259591 shutdown_connections 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr 2026-03-10T10:14:40.121+0000 7f2f54e7d700 1 -- 192.168.123.102:0/1020259591 wait complete. 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:Or, if you are only running a single cluster on this host: 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:14:40.154 INFO:teuthology.orchestra.run.vm02.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-10T10:14:40.155 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:14:40.155 INFO:teuthology.orchestra.run.vm02.stdout: ceph telemetry on 2026-03-10T10:14:40.155 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:14:40.155 INFO:teuthology.orchestra.run.vm02.stdout:For more information see: 
2026-03-10T10:14:40.155 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:14:40.155 INFO:teuthology.orchestra.run.vm02.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-10T10:14:40.155 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:14:40.155 INFO:teuthology.orchestra.run.vm02.stdout:Bootstrap complete. 2026-03-10T10:14:40.181 INFO:tasks.cephadm:Fetching config... 2026-03-10T10:14:40.181 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:14:40.181 DEBUG:teuthology.orchestra.run.vm02:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T10:14:40.210 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T10:14:40.210 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:14:40.210 DEBUG:teuthology.orchestra.run.vm02:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T10:14:40.278 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-10T10:14:40.278 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:14:40.278 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/keyring of=/dev/stdout 2026-03-10T10:14:40.347 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-10T10:14:40.347 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:14:40.347 DEBUG:teuthology.orchestra.run.vm02:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-10T10:14:40.404 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-10T10:14:40.404 DEBUG:teuthology.orchestra.run.vm02:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRVxTDdDyFKO6pXsaJyfSFxLj/hX5KHgxPxcbv7r5nyOeZV/Kesbgr1xq1DsxwRPjKqRhK3EAdfixQIRb43FDaxDCmoO8tVeEejwwdslkgW35yd9ymzGbI2u8vUr+wgW0Mx7wO3kZzLuPRxCezlDRFa8AvO4FBvnQyQWB3v419ZwbdRQkgWGhxON6Uqo2pS+QVmdeFwxf+9RJ9dWBTtUGFQp0avKehk/57Ca/RozqVoqIb39BJXsJBkWH9NVrme1g95fRLcM7KUGx1+zGGLR8mQsFA5Xb7v8wd+bXX8su4TmvObAx8BSeBRHa4M4OV5pYcUb2GRnxdabPYMP2Vb4+QQLHDqWcvieg/LDmNA15wXBEk5cM1yoNp8gefoUYt9oQy2EBs/gIOPn4OSD01RHxNSywUJl9U6aXZiCMj4omdS6fsAF9DqzsvBqsw77AUkPUV44EuxYluImt44gG+AMYMkz0GtcpML7ESn97mm78fkaG5EUgRfl+oj9fmCzfUvyU= ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T10:14:40.470 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:40 vm02 ceph-mon[50200]: from='client.14178 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:40.470 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:40 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/2658776739' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T10:14:40.470 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:40 vm02 ceph-mon[50200]: mgrmap e12: vm02.zmavgl(active, since 2s) 2026-03-10T10:14:40.470 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:40 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/1020259591' entity='client.admin' 2026-03-10T10:14:40.482 INFO:teuthology.orchestra.run.vm02.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRVxTDdDyFKO6pXsaJyfSFxLj/hX5KHgxPxcbv7r5nyOeZV/Kesbgr1xq1DsxwRPjKqRhK3EAdfixQIRb43FDaxDCmoO8tVeEejwwdslkgW35yd9ymzGbI2u8vUr+wgW0Mx7wO3kZzLuPRxCezlDRFa8AvO4FBvnQyQWB3v419ZwbdRQkgWGhxON6Uqo2pS+QVmdeFwxf+9RJ9dWBTtUGFQp0avKehk/57Ca/RozqVoqIb39BJXsJBkWH9NVrme1g95fRLcM7KUGx1+zGGLR8mQsFA5Xb7v8wd+bXX8su4TmvObAx8BSeBRHa4M4OV5pYcUb2GRnxdabPYMP2Vb4+QQLHDqWcvieg/LDmNA15wXBEk5cM1yoNp8gefoUYt9oQy2EBs/gIOPn4OSD01RHxNSywUJl9U6aXZiCMj4omdS6fsAF9DqzsvBqsw77AUkPUV44EuxYluImt44gG+AMYMkz0GtcpML7ESn97mm78fkaG5EUgRfl+oj9fmCzfUvyU= ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:14:40.493 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRVxTDdDyFKO6pXsaJyfSFxLj/hX5KHgxPxcbv7r5nyOeZV/Kesbgr1xq1DsxwRPjKqRhK3EAdfixQIRb43FDaxDCmoO8tVeEejwwdslkgW35yd9ymzGbI2u8vUr+wgW0Mx7wO3kZzLuPRxCezlDRFa8AvO4FBvnQyQWB3v419ZwbdRQkgWGhxON6Uqo2pS+QVmdeFwxf+9RJ9dWBTtUGFQp0avKehk/57Ca/RozqVoqIb39BJXsJBkWH9NVrme1g95fRLcM7KUGx1+zGGLR8mQsFA5Xb7v8wd+bXX8su4TmvObAx8BSeBRHa4M4OV5pYcUb2GRnxdabPYMP2Vb4+QQLHDqWcvieg/LDmNA15wXBEk5cM1yoNp8gefoUYt9oQy2EBs/gIOPn4OSD01RHxNSywUJl9U6aXZiCMj4omdS6fsAF9DqzsvBqsw77AUkPUV44EuxYluImt44gG+AMYMkz0GtcpML7ESn97mm78fkaG5EUgRfl+oj9fmCzfUvyU= ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T10:14:40.526 INFO:teuthology.orchestra.run.vm05.stdout:ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDRVxTDdDyFKO6pXsaJyfSFxLj/hX5KHgxPxcbv7r5nyOeZV/Kesbgr1xq1DsxwRPjKqRhK3EAdfixQIRb43FDaxDCmoO8tVeEejwwdslkgW35yd9ymzGbI2u8vUr+wgW0Mx7wO3kZzLuPRxCezlDRFa8AvO4FBvnQyQWB3v419ZwbdRQkgWGhxON6Uqo2pS+QVmdeFwxf+9RJ9dWBTtUGFQp0avKehk/57Ca/RozqVoqIb39BJXsJBkWH9NVrme1g95fRLcM7KUGx1+zGGLR8mQsFA5Xb7v8wd+bXX8su4TmvObAx8BSeBRHa4M4OV5pYcUb2GRnxdabPYMP2Vb4+QQLHDqWcvieg/LDmNA15wXBEk5cM1yoNp8gefoUYt9oQy2EBs/gIOPn4OSD01RHxNSywUJl9U6aXZiCMj4omdS6fsAF9DqzsvBqsw77AUkPUV44EuxYluImt44gG+AMYMkz0GtcpML7ESn97mm78fkaG5EUgRfl+oj9fmCzfUvyU= ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:14:40.537 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T10:14:40.689 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.295+0000 7fcd351e3700 1 -- 192.168.123.102:0/4284043051 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd30102ca0 msgr2=0x7fcd301030c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.295+0000 7fcd351e3700 1 --2- 192.168.123.102:0/4284043051 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd30102ca0 0x7fcd301030c0 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7fcd18009b00 tx=0x7fcd18009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.296+0000 7fcd351e3700 1 -- 192.168.123.102:0/4284043051 shutdown_connections 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.296+0000 7fcd351e3700 1 --2- 192.168.123.102:0/4284043051 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd30102ca0 0x7fcd301030c0 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.296+0000 7fcd351e3700 1 -- 192.168.123.102:0/4284043051 >> 192.168.123.102:0/4284043051 conn(0x7fcd300fe220 msgr2=0x7fcd30100680 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.296+0000 7fcd351e3700 1 -- 192.168.123.102:0/4284043051 shutdown_connections 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.296+0000 7fcd351e3700 1 -- 192.168.123.102:0/4284043051 wait complete. 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.297+0000 7fcd351e3700 1 Processor -- start 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.297+0000 7fcd351e3700 1 -- start start 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.297+0000 7fcd351e3700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd30102ca0 0x7fcd30197d60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.297+0000 7fcd2ed9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd30102ca0 0x7fcd30197d60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.297+0000 7fcd2ed9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd30102ca0 0x7fcd30197d60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:33608/0 (socket says 192.168.123.102:33608) 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.297+0000 7fcd2ed9d700 1 -- 192.168.123.102:0/1773449715 learned_addr learned my addr 192.168.123.102:0/1773449715 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.297+0000 7fcd351e3700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcd301982a0 con 0x7fcd30102ca0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.297+0000 7fcd2ed9d700 1 -- 192.168.123.102:0/1773449715 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcd180097e0 con 0x7fcd30102ca0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.297+0000 7fcd2ed9d700 1 --2- 192.168.123.102:0/1773449715 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd30102ca0 0x7fcd30197d60 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7fcd18004750 tx=0x7fcd18005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.298+0000 7fcd27fff700 1 -- 192.168.123.102:0/1773449715 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcd1801c070 con 0x7fcd30102ca0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.298+0000 7fcd27fff700 1 -- 192.168.123.102:0/1773449715 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fcd18021470 con 0x7fcd30102ca0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.298+0000 7fcd351e3700 1 -- 192.168.123.102:0/1773449715 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 
0x7fcd301984a0 con 0x7fcd30102ca0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.298+0000 7fcd351e3700 1 -- 192.168.123.102:0/1773449715 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcd30198940 con 0x7fcd30102ca0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.298+0000 7fcd27fff700 1 -- 192.168.123.102:0/1773449715 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcd1800f460 con 0x7fcd30102ca0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.299+0000 7fcd27fff700 1 -- 192.168.123.102:0/1773449715 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 12) v1 ==== 45291+0+0 (secure 0 0 0) 0x7fcd1800f600 con 0x7fcd30102ca0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.299+0000 7fcd27fff700 1 --2- 192.168.123.102:0/1773449715 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fcd1c0383f0 0x7fcd1c03a8b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.299+0000 7fcd27fff700 1 -- 192.168.123.102:0/1773449715 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fcd1804d490 con 0x7fcd30102ca0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.299+0000 7fcd2e59c700 1 --2- 192.168.123.102:0/1773449715 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fcd1c0383f0 0x7fcd1c03a8b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.299+0000 7fcd351e3700 1 -- 192.168.123.102:0/1773449715 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcd10005320 con 0x7fcd30102ca0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.300+0000 7fcd2e59c700 1 --2- 192.168.123.102:0/1773449715 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fcd1c0383f0 0x7fcd1c03a8b0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fcd20006fd0 tx=0x7fcd20006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.302+0000 7fcd27fff700 1 -- 192.168.123.102:0/1773449715 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fcd18026070 con 0x7fcd30102ca0 2026-03-10T10:14:41.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.417+0000 7fcd351e3700 1 -- 192.168.123.102:0/1773449715 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/allow_ptrace}] v 0) v1 -- 0x7fcd10005f70 con 0x7fcd30102ca0 2026-03-10T10:14:41.437 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.436+0000 7fcd27fff700 1 -- 192.168.123.102:0/1773449715 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/allow_ptrace}]=0 v9) v1 ==== 125+0+0 (secure 0 0 0) 0x7fcd18017490 con 0x7fcd30102ca0 2026-03-10T10:14:41.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.441+0000 7fcd351e3700 1 -- 192.168.123.102:0/1773449715 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fcd1c0383f0 msgr2=0x7fcd1c03a8b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:41.443 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.441+0000 7fcd351e3700 1 --2- 192.168.123.102:0/1773449715 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fcd1c0383f0 0x7fcd1c03a8b0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fcd20006fd0 tx=0x7fcd20006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:41.443 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.441+0000 7fcd351e3700 1 -- 192.168.123.102:0/1773449715 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd30102ca0 msgr2=0x7fcd30197d60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:41.443 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.441+0000 7fcd351e3700 1 --2- 192.168.123.102:0/1773449715 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd30102ca0 0x7fcd30197d60 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7fcd18004750 tx=0x7fcd18005dc0 comp rx=0 tx=0).stop 2026-03-10T10:14:41.448 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.448+0000 7fcd351e3700 1 -- 192.168.123.102:0/1773449715 shutdown_connections 2026-03-10T10:14:41.448 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.448+0000 7fcd351e3700 1 --2- 192.168.123.102:0/1773449715 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fcd1c0383f0 0x7fcd1c03a8b0 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:41.448 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.448+0000 7fcd351e3700 1 --2- 192.168.123.102:0/1773449715 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcd30102ca0 0x7fcd30197d60 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:41.448 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.448+0000 7fcd351e3700 1 -- 192.168.123.102:0/1773449715 >> 192.168.123.102:0/1773449715 conn(0x7fcd300fe220 msgr2=0x7fcd300fef00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:41.449 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.448+0000 7fcd351e3700 1 
-- 192.168.123.102:0/1773449715 shutdown_connections 2026-03-10T10:14:41.449 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:41.448+0000 7fcd351e3700 1 -- 192.168.123.102:0/1773449715 wait complete. 2026-03-10T10:14:41.633 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T10:14:41.633 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T10:14:41.782 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:14:42.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.225+0000 7fd5cbd5c700 1 -- 192.168.123.102:0/4073979835 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd5c41014f0 msgr2=0x7fd5c41038e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:42.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.225+0000 7fd5cbd5c700 1 --2- 192.168.123.102:0/4073979835 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd5c41014f0 0x7fd5c41038e0 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7fd5c0009b00 tx=0x7fd5c0009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:42.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.226+0000 7fd5cbd5c700 1 -- 192.168.123.102:0/4073979835 shutdown_connections 2026-03-10T10:14:42.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.226+0000 7fd5cbd5c700 1 --2- 192.168.123.102:0/4073979835 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd5c41014f0 0x7fd5c41038e0 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:42.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.227+0000 7fd5cbd5c700 1 
-- 192.168.123.102:0/4073979835 >> 192.168.123.102:0/4073979835 conn(0x7fd5c40faf00 msgr2=0x7fd5c40fd340 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:42.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.227+0000 7fd5cbd5c700 1 -- 192.168.123.102:0/4073979835 shutdown_connections 2026-03-10T10:14:42.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.227+0000 7fd5cbd5c700 1 -- 192.168.123.102:0/4073979835 wait complete. 2026-03-10T10:14:42.228 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.227+0000 7fd5cbd5c700 1 Processor -- start 2026-03-10T10:14:42.228 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.228+0000 7fd5cbd5c700 1 -- start start 2026-03-10T10:14:42.230 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.229+0000 7fd5cbd5c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd5c41014f0 0x7fd5c4197d90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:42.230 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.229+0000 7fd5cbd5c700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd5c41982d0 con 0x7fd5c41014f0 2026-03-10T10:14:42.230 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.229+0000 7fd5c9af8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd5c41014f0 0x7fd5c4197d90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:42.230 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.229+0000 7fd5c9af8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd5c41014f0 0x7fd5c4197d90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33640/0 (socket says 192.168.123.102:33640) 
2026-03-10T10:14:42.230 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.229+0000 7fd5c9af8700 1 -- 192.168.123.102:0/3857741196 learned_addr learned my addr 192.168.123.102:0/3857741196 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:42.230 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.230+0000 7fd5c9af8700 1 -- 192.168.123.102:0/3857741196 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd5c00097e0 con 0x7fd5c41014f0 2026-03-10T10:14:42.230 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.230+0000 7fd5c9af8700 1 --2- 192.168.123.102:0/3857741196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd5c41014f0 0x7fd5c4197d90 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7fd5c0004f40 tx=0x7fd5c0004740 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:42.232 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.230+0000 7fd5baffd700 1 -- 192.168.123.102:0/3857741196 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd5c001c070 con 0x7fd5c41014f0 2026-03-10T10:14:42.232 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.230+0000 7fd5baffd700 1 -- 192.168.123.102:0/3857741196 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd5c00053f0 con 0x7fd5c41014f0 2026-03-10T10:14:42.232 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.230+0000 7fd5baffd700 1 -- 192.168.123.102:0/3857741196 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd5c000f550 con 0x7fd5c41014f0 2026-03-10T10:14:42.232 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.230+0000 7fd5cbd5c700 1 -- 192.168.123.102:0/3857741196 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd5c41984d0 con 0x7fd5c41014f0 
2026-03-10T10:14:42.232 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.230+0000 7fd5cbd5c700 1 -- 192.168.123.102:0/3857741196 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd5c4198970 con 0x7fd5c41014f0 2026-03-10T10:14:42.233 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.232+0000 7fd5baffd700 1 -- 192.168.123.102:0/3857741196 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 12) v1 ==== 45291+0+0 (secure 0 0 0) 0x7fd5c0005560 con 0x7fd5c41014f0 2026-03-10T10:14:42.233 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.232+0000 7fd5cbd5c700 1 -- 192.168.123.102:0/3857741196 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd5c4191bf0 con 0x7fd5c41014f0 2026-03-10T10:14:42.233 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.232+0000 7fd5baffd700 1 --2- 192.168.123.102:0/3857741196 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd5b0038430 0x7fd5b003a8f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:42.233 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.232+0000 7fd5baffd700 1 -- 192.168.123.102:0/3857741196 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fd5c004c3c0 con 0x7fd5c41014f0 2026-03-10T10:14:42.233 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.233+0000 7fd5c92f7700 1 --2- 192.168.123.102:0/3857741196 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd5b0038430 0x7fd5b003a8f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:42.233 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.233+0000 7fd5c92f7700 1 --2- 192.168.123.102:0/3857741196 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd5b0038430 0x7fd5b003a8f0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7fd5b4006fd0 tx=0x7fd5b4006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:42.238 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.237+0000 7fd5baffd700 1 -- 192.168.123.102:0/3857741196 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fd5c0026030 con 0x7fd5c41014f0 2026-03-10T10:14:42.380 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.377+0000 7fd5cbd5c700 1 -- 192.168.123.102:0/3857741196 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}) v1 -- 0x7fd5c40611d0 con 0x7fd5b0038430 2026-03-10T10:14:42.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.402+0000 7fd5baffd700 1 -- 192.168.123.102:0/3857741196 <== mgr.14164 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+0 (secure 0 0 0) 0x7fd5c40611d0 con 0x7fd5b0038430 2026-03-10T10:14:42.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.405+0000 7fd5b8ff9700 1 -- 192.168.123.102:0/3857741196 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd5b0038430 msgr2=0x7fd5b003a8f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:42.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.405+0000 7fd5b8ff9700 1 --2- 192.168.123.102:0/3857741196 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd5b0038430 0x7fd5b003a8f0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7fd5b4006fd0 tx=0x7fd5b4006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:42.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.405+0000 7fd5b8ff9700 1 
-- 192.168.123.102:0/3857741196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd5c41014f0 msgr2=0x7fd5c4197d90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:42.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.405+0000 7fd5b8ff9700 1 --2- 192.168.123.102:0/3857741196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd5c41014f0 0x7fd5c4197d90 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7fd5c0004f40 tx=0x7fd5c0004740 comp rx=0 tx=0).stop 2026-03-10T10:14:42.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.405+0000 7fd5b8ff9700 1 -- 192.168.123.102:0/3857741196 shutdown_connections 2026-03-10T10:14:42.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.405+0000 7fd5b8ff9700 1 --2- 192.168.123.102:0/3857741196 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd5b0038430 0x7fd5b003a8f0 secure :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7fd5b4006fd0 tx=0x7fd5b4006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:42.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.405+0000 7fd5b8ff9700 1 --2- 192.168.123.102:0/3857741196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd5c41014f0 0x7fd5c4197d90 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:42.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.405+0000 7fd5b8ff9700 1 -- 192.168.123.102:0/3857741196 >> 192.168.123.102:0/3857741196 conn(0x7fd5c40faf00 msgr2=0x7fd5c40fbbc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:42.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.407+0000 7fd5b8ff9700 1 -- 192.168.123.102:0/3857741196 shutdown_connections 2026-03-10T10:14:42.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:42.407+0000 7fd5b8ff9700 1 -- 192.168.123.102:0/3857741196 wait complete. 
2026-03-10T10:14:42.487 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm05 2026-03-10T10:14:42.488 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T10:14:42.488 DEBUG:teuthology.orchestra.run.vm05:> dd of=/etc/ceph/ceph.conf 2026-03-10T10:14:42.504 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T10:14:42.504 DEBUG:teuthology.orchestra.run.vm05:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:14:42.560 INFO:tasks.cephadm:Adding host vm05 to orchestrator... 2026-03-10T10:14:42.560 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph orch host add vm05 2026-03-10T10:14:42.658 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:42 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/1773449715' entity='client.admin' 2026-03-10T10:14:42.658 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:42 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:42.658 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:42 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:42.658 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:42 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:14:42.658 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:42 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:42.658 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:42 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", 
"mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:14:42.658 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:42 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T10:14:42.658 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:42 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:42.658 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:42 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:42.813 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:14:43.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.258+0000 7fd805d31700 1 -- 192.168.123.102:0/3895701468 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd800102ad0 msgr2=0x7fd800102ef0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:43.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.258+0000 7fd805d31700 1 --2- 192.168.123.102:0/3895701468 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd800102ad0 0x7fd800102ef0 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7fd7f4009b00 tx=0x7fd7f4009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:43.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.259+0000 7fd805d31700 1 -- 192.168.123.102:0/3895701468 shutdown_connections 2026-03-10T10:14:43.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.259+0000 7fd805d31700 1 --2- 192.168.123.102:0/3895701468 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd800102ad0 0x7fd800102ef0 unknown :-1 s=CLOSED 
pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:43.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.259+0000 7fd805d31700 1 -- 192.168.123.102:0/3895701468 >> 192.168.123.102:0/3895701468 conn(0x7fd8000fe050 msgr2=0x7fd8001004b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:43.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.259+0000 7fd805d31700 1 -- 192.168.123.102:0/3895701468 shutdown_connections 2026-03-10T10:14:43.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.259+0000 7fd805d31700 1 -- 192.168.123.102:0/3895701468 wait complete. 2026-03-10T10:14:43.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.260+0000 7fd805d31700 1 Processor -- start 2026-03-10T10:14:43.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.260+0000 7fd805d31700 1 -- start start 2026-03-10T10:14:43.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.260+0000 7fd805d31700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd800102ad0 0x7fd800197c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:43.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.260+0000 7fd805d31700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd8001981a0 con 0x7fd800102ad0 2026-03-10T10:14:43.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.260+0000 7fd804d2f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd800102ad0 0x7fd800197c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:43.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.260+0000 7fd804d2f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd800102ad0 0x7fd800197c60 unknown :-1 s=HELLO_CONNECTING 
pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33672/0 (socket says 192.168.123.102:33672) 2026-03-10T10:14:43.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.260+0000 7fd804d2f700 1 -- 192.168.123.102:0/3419106185 learned_addr learned my addr 192.168.123.102:0/3419106185 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:43.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.260+0000 7fd804d2f700 1 -- 192.168.123.102:0/3419106185 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd7f40097e0 con 0x7fd800102ad0 2026-03-10T10:14:43.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.260+0000 7fd804d2f700 1 --2- 192.168.123.102:0/3419106185 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd800102ad0 0x7fd800197c60 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7fd7f4004d40 tx=0x7fd7f4004e20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:43.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.261+0000 7fd7fdffb700 1 -- 192.168.123.102:0/3419106185 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd7f401c070 con 0x7fd800102ad0 2026-03-10T10:14:43.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.261+0000 7fd7fdffb700 1 -- 192.168.123.102:0/3419106185 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd7f40056f0 con 0x7fd800102ad0 2026-03-10T10:14:43.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.261+0000 7fd7fdffb700 1 -- 192.168.123.102:0/3419106185 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd7f4017440 con 0x7fd800102ad0 2026-03-10T10:14:43.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.261+0000 
7fd805d31700 1 -- 192.168.123.102:0/3419106185 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd8001983a0 con 0x7fd800102ad0 2026-03-10T10:14:43.263 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.261+0000 7fd805d31700 1 -- 192.168.123.102:0/3419106185 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd800198840 con 0x7fd800102ad0 2026-03-10T10:14:43.265 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.262+0000 7fd7fdffb700 1 -- 192.168.123.102:0/3419106185 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 12) v1 ==== 45291+0+0 (secure 0 0 0) 0x7fd7f400f460 con 0x7fd800102ad0 2026-03-10T10:14:43.265 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.262+0000 7fd805d31700 1 -- 192.168.123.102:0/3419106185 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd800191860 con 0x7fd800102ad0 2026-03-10T10:14:43.265 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.262+0000 7fd7fdffb700 1 --2- 192.168.123.102:0/3419106185 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd7f00383a0 0x7fd7f003a860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:43.265 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.262+0000 7fd7fffff700 1 --2- 192.168.123.102:0/3419106185 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd7f00383a0 0x7fd7f003a860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:43.265 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.262+0000 7fd7fdffb700 1 -- 192.168.123.102:0/3419106185 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fd7f40159a0 con 0x7fd800102ad0 
2026-03-10T10:14:43.265 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.262+0000 7fd7fffff700 1 --2- 192.168.123.102:0/3419106185 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd7f00383a0 0x7fd7f003a860 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7fd7ec006fd0 tx=0x7fd7ec006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:43.266 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.265+0000 7fd7fdffb700 1 -- 192.168.123.102:0/3419106185 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fd7f4025030 con 0x7fd800102ad0 2026-03-10T10:14:43.389 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.386+0000 7fd805d31700 1 -- 192.168.123.102:0/3419106185 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm05", "target": ["mon-mgr", ""]}) v1 -- 0x7fd8000611d0 con 0x7fd7f00383a0 2026-03-10T10:14:43.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:43.444+0000 7fd7fdffb700 1 -- 192.168.123.102:0/3419106185 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7fd7f40175a0 con 0x7fd800102ad0 2026-03-10T10:14:43.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:43 vm02 ceph-mon[50200]: Deploying daemon ceph-exporter.vm02 on vm02 2026-03-10T10:14:43.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:43 vm02 ceph-mon[50200]: from='client.14188 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:43.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:43 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:43.531 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:43 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:43.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:43 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:43.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:43 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:43.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:43 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm02", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:14:43.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:43 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm02", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-10T10:14:43.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:43 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:44.453 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:44 vm02 ceph-mon[50200]: Deploying daemon crash.vm02 on vm02 2026-03-10T10:14:44.453 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:44 vm02 ceph-mon[50200]: from='client.14191 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm05", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:44.453 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:44 vm02 ceph-mon[50200]: mgrmap e13: vm02.zmavgl(active, since 6s) 2026-03-10T10:14:44.453 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:44 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 
2026-03-10T10:14:44.453 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:44 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:44.453 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:44 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:44.453 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:44 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:45.134 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.132+0000 7fd7fdffb700 1 -- 192.168.123.102:0/3419106185 <== mgr.14164 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+46 (secure 0 0 0) 0x7fd8000611d0 con 0x7fd7f00383a0 2026-03-10T10:14:45.136 INFO:teuthology.orchestra.run.vm02.stdout:Added host 'vm05' with addr '192.168.123.105' 2026-03-10T10:14:45.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.136+0000 7fd805d31700 1 -- 192.168.123.102:0/3419106185 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd7f00383a0 msgr2=0x7fd7f003a860 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:45.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.136+0000 7fd805d31700 1 --2- 192.168.123.102:0/3419106185 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd7f00383a0 0x7fd7f003a860 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7fd7ec006fd0 tx=0x7fd7ec006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:45.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.136+0000 7fd805d31700 1 -- 192.168.123.102:0/3419106185 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd800102ad0 msgr2=0x7fd800197c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:45.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.136+0000 7fd805d31700 1 --2- 192.168.123.102:0/3419106185 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd800102ad0 0x7fd800197c60 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7fd7f4004d40 tx=0x7fd7f4004e20 comp rx=0 tx=0).stop 2026-03-10T10:14:45.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.136+0000 7fd805d31700 1 -- 192.168.123.102:0/3419106185 shutdown_connections 2026-03-10T10:14:45.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.136+0000 7fd805d31700 1 --2- 192.168.123.102:0/3419106185 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fd7f00383a0 0x7fd7f003a860 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:45.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.136+0000 7fd805d31700 1 --2- 192.168.123.102:0/3419106185 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd800102ad0 0x7fd800197c60 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:45.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.136+0000 7fd805d31700 1 -- 192.168.123.102:0/3419106185 >> 192.168.123.102:0/3419106185 conn(0x7fd8000fe050 msgr2=0x7fd8000fed30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:45.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.136+0000 7fd805d31700 1 -- 192.168.123.102:0/3419106185 shutdown_connections 2026-03-10T10:14:45.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.136+0000 7fd805d31700 1 -- 192.168.123.102:0/3419106185 wait complete. 
2026-03-10T10:14:45.180 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph orch host ls --format=json 2026-03-10T10:14:45.313 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:14:45.545 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.544+0000 7f96567f1700 1 -- 192.168.123.102:0/3601890000 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9650102cb0 msgr2=0x7f96501030d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:45.545 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.544+0000 7f96567f1700 1 --2- 192.168.123.102:0/3601890000 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9650102cb0 0x7f96501030d0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f9638009b00 tx=0x7f9638009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:45.545 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.544+0000 7f96567f1700 1 -- 192.168.123.102:0/3601890000 shutdown_connections 2026-03-10T10:14:45.545 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.544+0000 7f96567f1700 1 --2- 192.168.123.102:0/3601890000 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9650102cb0 0x7f96501030d0 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:45.545 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.544+0000 7f96567f1700 1 -- 192.168.123.102:0/3601890000 >> 192.168.123.102:0/3601890000 conn(0x7f96500fe250 msgr2=0x7f9650100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:45.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.545+0000 7f96567f1700 1 -- 192.168.123.102:0/3601890000 shutdown_connections 2026-03-10T10:14:45.546 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.545+0000 7f96567f1700 1 -- 192.168.123.102:0/3601890000 wait complete. 2026-03-10T10:14:45.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.545+0000 7f96567f1700 1 Processor -- start 2026-03-10T10:14:45.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.545+0000 7f96567f1700 1 -- start start 2026-03-10T10:14:45.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.545+0000 7f96567f1700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9650102cb0 0x7f9650197e20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:45.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.545+0000 7f96567f1700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9650198360 con 0x7f9650102cb0 2026-03-10T10:14:45.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.546+0000 7f964ffff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9650102cb0 0x7f9650197e20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:45.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.546+0000 7f964ffff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9650102cb0 0x7f9650197e20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33696/0 (socket says 192.168.123.102:33696) 2026-03-10T10:14:45.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.546+0000 7f964ffff700 1 -- 192.168.123.102:0/869763113 learned_addr learned my addr 192.168.123.102:0/869763113 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:45.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.546+0000 
7f964ffff700 1 -- 192.168.123.102:0/869763113 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f96380097e0 con 0x7f9650102cb0 2026-03-10T10:14:45.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.546+0000 7f964ffff700 1 --2- 192.168.123.102:0/869763113 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9650102cb0 0x7f9650197e20 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f9638005f50 tx=0x7f96380050b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:45.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.546+0000 7f964d7fa700 1 -- 192.168.123.102:0/869763113 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f963801c070 con 0x7f9650102cb0 2026-03-10T10:14:45.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.546+0000 7f964d7fa700 1 -- 192.168.123.102:0/869763113 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9638021470 con 0x7f9650102cb0 2026-03-10T10:14:45.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.546+0000 7f964d7fa700 1 -- 192.168.123.102:0/869763113 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f963800f460 con 0x7f9650102cb0 2026-03-10T10:14:45.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.546+0000 7f96567f1700 1 -- 192.168.123.102:0/869763113 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9650198560 con 0x7f9650102cb0 2026-03-10T10:14:45.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.546+0000 7f96567f1700 1 -- 192.168.123.102:0/869763113 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9650198a00 con 0x7f9650102cb0 2026-03-10T10:14:45.548 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.547+0000 7f964d7fa700 1 -- 192.168.123.102:0/869763113 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7f963800f5c0 con 0x7f9650102cb0 2026-03-10T10:14:45.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.547+0000 7f964d7fa700 1 --2- 192.168.123.102:0/869763113 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f963c038440 0x7f963c03a900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:45.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.547+0000 7f964d7fa700 1 -- 192.168.123.102:0/869763113 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f963804d3e0 con 0x7f9650102cb0 2026-03-10T10:14:45.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.548+0000 7f964f7fe700 1 --2- 192.168.123.102:0/869763113 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f963c038440 0x7f963c03a900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:45.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.548+0000 7f96567f1700 1 -- 192.168.123.102:0/869763113 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9650191a40 con 0x7f9650102cb0 2026-03-10T10:14:45.551 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.549+0000 7f964f7fe700 1 --2- 192.168.123.102:0/869763113 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f963c038440 0x7f963c03a900 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f9640006fd0 tx=0x7f9640006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:45.551 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.550+0000 7f964d7fa700 1 -- 192.168.123.102:0/869763113 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f9638026070 con 0x7f9650102cb0 2026-03-10T10:14:45.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:45 vm02 ceph-mon[50200]: Deploying cephadm binary to vm05 2026-03-10T10:14:45.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:45 vm02 ceph-mon[50200]: Deploying daemon node-exporter.vm02 on vm02 2026-03-10T10:14:45.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:45 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:45.655 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.653+0000 7f96567f1700 1 -- 192.168.123.102:0/869763113 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}) v1 -- 0x7f96500611d0 con 0x7f963c038440 2026-03-10T10:14:45.655 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.654+0000 7f964d7fa700 1 -- 192.168.123.102:0/869763113 <== mgr.14164 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+155 (secure 0 0 0) 0x7f96500611d0 con 0x7f963c038440 2026-03-10T10:14:45.655 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:14:45.655 INFO:teuthology.orchestra.run.vm02.stdout:[{"addr": "192.168.123.102", "hostname": "vm02", "labels": [], "status": ""}, {"addr": "192.168.123.105", "hostname": "vm05", "labels": [], "status": ""}] 2026-03-10T10:14:45.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.656+0000 7f96567f1700 1 -- 192.168.123.102:0/869763113 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f963c038440 msgr2=0x7f963c03a900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:45.657 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.657+0000 7f96567f1700 1 --2- 192.168.123.102:0/869763113 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f963c038440 0x7f963c03a900 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f9640006fd0 tx=0x7f9640006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:45.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.657+0000 7f96567f1700 1 -- 192.168.123.102:0/869763113 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9650102cb0 msgr2=0x7f9650197e20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:45.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.657+0000 7f96567f1700 1 --2- 192.168.123.102:0/869763113 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9650102cb0 0x7f9650197e20 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f9638005f50 tx=0x7f96380050b0 comp rx=0 tx=0).stop 2026-03-10T10:14:45.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.657+0000 7f96567f1700 1 -- 192.168.123.102:0/869763113 shutdown_connections 2026-03-10T10:14:45.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.657+0000 7f96567f1700 1 --2- 192.168.123.102:0/869763113 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f963c038440 0x7f963c03a900 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:45.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.657+0000 7f96567f1700 1 --2- 192.168.123.102:0/869763113 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9650102cb0 0x7f9650197e20 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:45.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.658+0000 7f96567f1700 1 -- 192.168.123.102:0/869763113 >> 192.168.123.102:0/869763113 conn(0x7f96500fe250 msgr2=0x7f96500fef10 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:14:45.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.658+0000 7f96567f1700 1 -- 192.168.123.102:0/869763113 shutdown_connections 2026-03-10T10:14:45.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:45.658+0000 7f96567f1700 1 -- 192.168.123.102:0/869763113 wait complete. 2026-03-10T10:14:45.719 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T10:14:45.719 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd crush tunables default 2026-03-10T10:14:45.848 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:14:46.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.075+0000 7f1152578700 1 -- 192.168.123.102:0/3995485799 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f114c102cb0 msgr2=0x7f114c1030d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:46.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.075+0000 7f1152578700 1 --2- 192.168.123.102:0/3995485799 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f114c102cb0 0x7f114c1030d0 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f1140009b00 tx=0x7f1140009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:46.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.075+0000 7f1152578700 1 -- 192.168.123.102:0/3995485799 shutdown_connections 2026-03-10T10:14:46.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.075+0000 7f1152578700 1 --2- 192.168.123.102:0/3995485799 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f114c102cb0 0x7f114c1030d0 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:46.077 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.075+0000 7f1152578700 1 -- 192.168.123.102:0/3995485799 >> 192.168.123.102:0/3995485799 conn(0x7f114c0fe250 msgr2=0x7f114c100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:46.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.076+0000 7f1152578700 1 -- 192.168.123.102:0/3995485799 shutdown_connections 2026-03-10T10:14:46.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.076+0000 7f1152578700 1 -- 192.168.123.102:0/3995485799 wait complete. 2026-03-10T10:14:46.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.077+0000 7f1152578700 1 Processor -- start 2026-03-10T10:14:46.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.077+0000 7f1152578700 1 -- start start 2026-03-10T10:14:46.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.077+0000 7f1152578700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f114c102cb0 0x7f114c197e20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:46.078 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.077+0000 7f1152578700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f114c198360 con 0x7f114c102cb0 2026-03-10T10:14:46.078 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.077+0000 7f114bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f114c102cb0 0x7f114c197e20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:46.078 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.077+0000 7f114bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f114c102cb0 0x7f114c197e20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer 
v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33718/0 (socket says 192.168.123.102:33718) 2026-03-10T10:14:46.078 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.077+0000 7f114bfff700 1 -- 192.168.123.102:0/455822306 learned_addr learned my addr 192.168.123.102:0/455822306 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:14:46.078 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.077+0000 7f114bfff700 1 -- 192.168.123.102:0/455822306 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f11400097e0 con 0x7f114c102cb0 2026-03-10T10:14:46.078 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.078+0000 7f114bfff700 1 --2- 192.168.123.102:0/455822306 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f114c102cb0 0x7f114c197e20 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f1140005f50 tx=0x7f11400050b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:46.078 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.078+0000 7f11497fa700 1 -- 192.168.123.102:0/455822306 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f114001c070 con 0x7f114c102cb0 2026-03-10T10:14:46.078 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.078+0000 7f11497fa700 1 -- 192.168.123.102:0/455822306 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1140021470 con 0x7f114c102cb0 2026-03-10T10:14:46.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.078+0000 7f1152578700 1 -- 192.168.123.102:0/455822306 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f114c198560 con 0x7f114c102cb0 2026-03-10T10:14:46.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.078+0000 7f1152578700 1 -- 192.168.123.102:0/455822306 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f114c198a00 con 0x7f114c102cb0 2026-03-10T10:14:46.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.079+0000 7f11497fa700 1 -- 192.168.123.102:0/455822306 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f114000f460 con 0x7f114c102cb0 2026-03-10T10:14:46.080 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.079+0000 7f1152578700 1 -- 192.168.123.102:0/455822306 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1138005320 con 0x7f114c102cb0 2026-03-10T10:14:46.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.080+0000 7f11497fa700 1 -- 192.168.123.102:0/455822306 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7f114000f680 con 0x7f114c102cb0 2026-03-10T10:14:46.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.080+0000 7f11497fa700 1 --2- 192.168.123.102:0/455822306 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1134038110 0x7f113403a5d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:46.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.080+0000 7f11497fa700 1 -- 192.168.123.102:0/455822306 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f114004d4b0 con 0x7f114c102cb0 2026-03-10T10:14:46.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.082+0000 7f114b7fe700 1 --2- 192.168.123.102:0/455822306 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1134038110 0x7f113403a5d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:46.083 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.082+0000 7f114b7fe700 1 --2- 192.168.123.102:0/455822306 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1134038110 0x7f113403a5d0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f113c006fd0 tx=0x7f113c006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:46.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.082+0000 7f11497fa700 1 -- 192.168.123.102:0/455822306 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f1140029950 con 0x7f114c102cb0 2026-03-10T10:14:46.195 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:46.193+0000 7f1152578700 1 -- 192.168.123.102:0/455822306 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd crush tunables", "profile": "default"} v 0) v1 -- 0x7f1138005c90 con 0x7f114c102cb0 2026-03-10T10:14:46.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:46 vm02 ceph-mon[50200]: Added host vm05 2026-03-10T10:14:46.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:46 vm02 ceph-mon[50200]: from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:14:46.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:46 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/455822306' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T10:14:47.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:47.037+0000 7f11497fa700 1 -- 192.168.123.102:0/455822306 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd crush tunables", "profile": "default"}]=0 adjusted tunables profile to default v4) v1 ==== 124+0+0 (secure 0 0 0) 0x7f1140026070 con 0x7f114c102cb0 2026-03-10T10:14:47.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:47.040+0000 7f1152578700 1 -- 192.168.123.102:0/455822306 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1134038110 msgr2=0x7f113403a5d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:47.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:47.040+0000 7f1152578700 1 --2- 192.168.123.102:0/455822306 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1134038110 0x7f113403a5d0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f113c006fd0 tx=0x7f113c006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:47.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:47.040+0000 7f1152578700 1 -- 192.168.123.102:0/455822306 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f114c102cb0 msgr2=0x7f114c197e20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:47.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:47.040+0000 7f1152578700 1 --2- 192.168.123.102:0/455822306 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f114c102cb0 0x7f114c197e20 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f1140005f50 tx=0x7f11400050b0 comp rx=0 tx=0).stop 2026-03-10T10:14:47.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:47.040+0000 7f1152578700 1 -- 192.168.123.102:0/455822306 shutdown_connections 2026-03-10T10:14:47.041 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:47.040+0000 7f1152578700 1 --2- 192.168.123.102:0/455822306 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1134038110 0x7f113403a5d0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:47.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:47.040+0000 7f1152578700 1 --2- 192.168.123.102:0/455822306 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f114c102cb0 0x7f114c197e20 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:47.042 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:47.040+0000 7f1152578700 1 -- 192.168.123.102:0/455822306 >> 192.168.123.102:0/455822306 conn(0x7f114c0fe250 msgr2=0x7f114c0fef10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:47.042 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:47.040+0000 7f1152578700 1 -- 192.168.123.102:0/455822306 shutdown_connections 2026-03-10T10:14:47.042 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:14:47.040+0000 7f1152578700 1 -- 192.168.123.102:0/455822306 wait complete. 
2026-03-10T10:14:47.047 INFO:teuthology.orchestra.run.vm02.stderr:adjusted tunables profile to default 2026-03-10T10:14:47.117 INFO:tasks.cephadm:Adding mon.vm02 on vm02 2026-03-10T10:14:47.117 INFO:tasks.cephadm:Adding mon.vm05 on vm05 2026-03-10T10:14:47.117 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph orch apply mon '2;vm02:192.168.123.102=vm02;vm05:192.168.123.105=vm05' 2026-03-10T10:14:47.258 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:47.295 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:48.255 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:47 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:48.255 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:47 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:48.255 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:47 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:48.255 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:47 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:48.255 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:47 vm02 ceph-mon[50200]: Deploying daemon alertmanager.vm02 on vm02 2026-03-10T10:14:48.256 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:47 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/455822306' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T10:14:48.256 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:47 vm02 ceph-mon[50200]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T10:14:48.256 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:47 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:48.399 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.396+0000 7fdaff957700 1 -- 192.168.123.105:0/1248420073 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdaf8102ca0 msgr2=0x7fdaf81030c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:48.399 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.396+0000 7fdaff957700 1 --2- 192.168.123.105:0/1248420073 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdaf8102ca0 0x7fdaf81030c0 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fdae8009b00 tx=0x7fdae8009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:48.399 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.398+0000 7fdaff957700 1 -- 192.168.123.105:0/1248420073 shutdown_connections 2026-03-10T10:14:48.399 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.398+0000 7fdaff957700 1 --2- 192.168.123.105:0/1248420073 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdaf8102ca0 0x7fdaf81030c0 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:48.399 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.398+0000 7fdaff957700 1 -- 192.168.123.105:0/1248420073 >> 192.168.123.105:0/1248420073 conn(0x7fdaf80fe220 msgr2=0x7fdaf8100680 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:48.399 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.398+0000 7fdaff957700 1 -- 192.168.123.105:0/1248420073 shutdown_connections 2026-03-10T10:14:48.399 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.398+0000 7fdaff957700 1 -- 192.168.123.105:0/1248420073 wait complete. 2026-03-10T10:14:48.399 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.398+0000 7fdaff957700 1 Processor -- start 2026-03-10T10:14:48.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.399+0000 7fdaff957700 1 -- start start 2026-03-10T10:14:48.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.399+0000 7fdaff957700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdaf8102ca0 0x7fdaf8197d60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:48.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.399+0000 7fdaff957700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdaf81982a0 con 0x7fdaf8102ca0 2026-03-10T10:14:48.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.399+0000 7fdafd6f3700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdaf8102ca0 0x7fdaf8197d60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:48.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.399+0000 7fdafd6f3700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdaf8102ca0 0x7fdaf8197d60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:58076/0 (socket says 192.168.123.105:58076) 2026-03-10T10:14:48.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.399+0000 7fdafd6f3700 1 -- 192.168.123.105:0/1065440825 learned_addr learned my addr 192.168.123.105:0/1065440825 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:14:48.400 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.399+0000 7fdafd6f3700 1 -- 192.168.123.105:0/1065440825 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fdae80097e0 con 0x7fdaf8102ca0 2026-03-10T10:14:48.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.399+0000 7fdafd6f3700 1 --2- 192.168.123.105:0/1065440825 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdaf8102ca0 0x7fdaf8197d60 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7fdae8004750 tx=0x7fdae8005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:48.401 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.400+0000 7fdaee7fc700 1 -- 192.168.123.105:0/1065440825 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fdae801c070 con 0x7fdaf8102ca0 2026-03-10T10:14:48.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.400+0000 7fdaee7fc700 1 -- 192.168.123.105:0/1065440825 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fdae8021470 con 0x7fdaf8102ca0 2026-03-10T10:14:48.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.400+0000 7fdaff957700 1 -- 192.168.123.105:0/1065440825 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fdaf81984a0 con 0x7fdaf8102ca0 2026-03-10T10:14:48.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.400+0000 7fdaff957700 1 -- 192.168.123.105:0/1065440825 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fdaf8198940 con 0x7fdaf8102ca0 2026-03-10T10:14:48.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.400+0000 7fdaee7fc700 1 -- 192.168.123.105:0/1065440825 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fdae800f460 con 
0x7fdaf8102ca0 2026-03-10T10:14:48.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.401+0000 7fdaee7fc700 1 -- 192.168.123.105:0/1065440825 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7fdae800f620 con 0x7fdaf8102ca0 2026-03-10T10:14:48.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.401+0000 7fdaee7fc700 1 --2- 192.168.123.105:0/1065440825 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fdae4038440 0x7fdae403a900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:48.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.401+0000 7fdaee7fc700 1 -- 192.168.123.105:0/1065440825 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fdae804c410 con 0x7fdaf8102ca0 2026-03-10T10:14:48.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.401+0000 7fdaff957700 1 -- 192.168.123.105:0/1065440825 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fdadc005320 con 0x7fdaf8102ca0 2026-03-10T10:14:48.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.401+0000 7fdafcef2700 1 --2- 192.168.123.105:0/1065440825 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fdae4038440 0x7fdae403a900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:48.403 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.402+0000 7fdafcef2700 1 --2- 192.168.123.105:0/1065440825 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fdae4038440 0x7fdae403a900 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7fdaf4006fd0 tx=0x7fdaf4006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:48.405 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.404+0000 7fdaee7fc700 1 -- 192.168.123.105:0/1065440825 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fdae8026070 con 0x7fdaf8102ca0 2026-03-10T10:14:48.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.514+0000 7fdaff957700 1 -- 192.168.123.105:0/1065440825 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "placement": "2;vm02:192.168.123.102=vm02;vm05:192.168.123.105=vm05", "target": ["mon-mgr", ""]}) v1 -- 0x7fdadc000c90 con 0x7fdae4038440 2026-03-10T10:14:48.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.518+0000 7fdaee7fc700 1 -- 192.168.123.105:0/1065440825 <== mgr.14164 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+24 (secure 0 0 0) 0x7fdadc000c90 con 0x7fdae4038440 2026-03-10T10:14:48.520 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled mon update... 
2026-03-10T10:14:48.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.521+0000 7fdaff957700 1 -- 192.168.123.105:0/1065440825 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fdae4038440 msgr2=0x7fdae403a900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:48.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.521+0000 7fdaff957700 1 --2- 192.168.123.105:0/1065440825 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fdae4038440 0x7fdae403a900 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7fdaf4006fd0 tx=0x7fdaf4006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:48.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.521+0000 7fdaff957700 1 -- 192.168.123.105:0/1065440825 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdaf8102ca0 msgr2=0x7fdaf8197d60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:48.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.521+0000 7fdaff957700 1 --2- 192.168.123.105:0/1065440825 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdaf8102ca0 0x7fdaf8197d60 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7fdae8004750 tx=0x7fdae8005dc0 comp rx=0 tx=0).stop 2026-03-10T10:14:48.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.521+0000 7fdaff957700 1 -- 192.168.123.105:0/1065440825 shutdown_connections 2026-03-10T10:14:48.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.521+0000 7fdaff957700 1 --2- 192.168.123.105:0/1065440825 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fdae4038440 0x7fdae403a900 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:48.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.521+0000 7fdaff957700 1 --2- 192.168.123.105:0/1065440825 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdaf8102ca0 0x7fdaf8197d60 
unknown :-1 s=CLOSED pgs=104 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:48.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.521+0000 7fdaff957700 1 -- 192.168.123.105:0/1065440825 >> 192.168.123.105:0/1065440825 conn(0x7fdaf80fe220 msgr2=0x7fdaf80fef00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:48.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.521+0000 7fdaff957700 1 -- 192.168.123.105:0/1065440825 shutdown_connections 2026-03-10T10:14:48.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:48.521+0000 7fdaff957700 1 -- 192.168.123.105:0/1065440825 wait complete. 2026-03-10T10:14:48.565 DEBUG:teuthology.orchestra.run.vm05:mon.vm05> sudo journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm05.service 2026-03-10T10:14:48.566 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T10:14:48.566 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:14:48.730 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:48.769 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:49.041 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.039+0000 7f61dfd0d700 1 -- 192.168.123.105:0/492097939 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f61d8100a60 msgr2=0x7f61d8100e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:49.041 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.039+0000 7f61dfd0d700 1 --2- 192.168.123.105:0/492097939 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f61d8100a60 0x7f61d8100e80 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7f61cc009b00 tx=0x7f61cc009e10 comp rx=0 
tx=0).stop 2026-03-10T10:14:49.041 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.040+0000 7f61dfd0d700 1 -- 192.168.123.105:0/492097939 shutdown_connections 2026-03-10T10:14:49.041 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.040+0000 7f61dfd0d700 1 --2- 192.168.123.105:0/492097939 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f61d8100a60 0x7f61d8100e80 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:49.041 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.040+0000 7f61dfd0d700 1 -- 192.168.123.105:0/492097939 >> 192.168.123.105:0/492097939 conn(0x7f61d80fc000 msgr2=0x7f61d80fe440 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:49.042 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.040+0000 7f61dfd0d700 1 -- 192.168.123.105:0/492097939 shutdown_connections 2026-03-10T10:14:49.042 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.040+0000 7f61dfd0d700 1 -- 192.168.123.105:0/492097939 wait complete. 
2026-03-10T10:14:49.042 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.041+0000 7f61dfd0d700 1 Processor -- start 2026-03-10T10:14:49.042 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.041+0000 7f61dfd0d700 1 -- start start 2026-03-10T10:14:49.042 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.041+0000 7f61dfd0d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f61d8100a60 0x7f61d8195b50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:49.042 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.041+0000 7f61dfd0d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f61d8196090 con 0x7f61d8100a60 2026-03-10T10:14:49.042 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.041+0000 7f61ddaa9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f61d8100a60 0x7f61d8195b50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.041+0000 7f61ddaa9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f61d8100a60 0x7f61d8195b50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:36864/0 (socket says 192.168.123.105:36864) 2026-03-10T10:14:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.041+0000 7f61ddaa9700 1 -- 192.168.123.105:0/927106694 learned_addr learned my addr 192.168.123.105:0/927106694 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:14:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.041+0000 7f61ddaa9700 1 -- 192.168.123.105:0/927106694 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f61cc0097e0 con 0x7f61d8100a60 2026-03-10T10:14:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.042+0000 7f61ddaa9700 1 --2- 192.168.123.105:0/927106694 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f61d8100a60 0x7f61d8195b50 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f61cc004750 tx=0x7f61cc005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.042+0000 7f61caffd700 1 -- 192.168.123.105:0/927106694 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f61cc01c070 con 0x7f61d8100a60 2026-03-10T10:14:49.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.042+0000 7f61dfd0d700 1 -- 192.168.123.105:0/927106694 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f61d8196290 con 0x7f61d8100a60 2026-03-10T10:14:49.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.042+0000 7f61dfd0d700 1 -- 192.168.123.105:0/927106694 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f61d8196730 con 0x7f61d8100a60 2026-03-10T10:14:49.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.042+0000 7f61caffd700 1 -- 192.168.123.105:0/927106694 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f61cc021470 con 0x7f61d8100a60 2026-03-10T10:14:49.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.042+0000 7f61caffd700 1 -- 192.168.123.105:0/927106694 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f61cc00f460 con 0x7f61d8100a60 2026-03-10T10:14:49.047 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.043+0000 7f61caffd700 1 -- 192.168.123.105:0/927106694 <== mon.0 v2:192.168.123.102:3300/0 4 
==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7f61cc00f620 con 0x7f61d8100a60 2026-03-10T10:14:49.047 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.043+0000 7f61caffd700 1 --2- 192.168.123.105:0/927106694 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f61c4040c50 0x7f61c4043110 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:49.047 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.043+0000 7f61caffd700 1 -- 192.168.123.105:0/927106694 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f61cc04d470 con 0x7f61d8100a60 2026-03-10T10:14:49.047 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.043+0000 7f61dd2a8700 1 --2- 192.168.123.105:0/927106694 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f61c4040c50 0x7f61c4043110 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:49.047 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.045+0000 7f61dd2a8700 1 --2- 192.168.123.105:0/927106694 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f61c4040c50 0x7f61c4043110 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f61d4006fd0 tx=0x7f61d4006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:49.047 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.046+0000 7f61dfd0d700 1 -- 192.168.123.105:0/927106694 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f61bc005320 con 0x7f61d8100a60 2026-03-10T10:14:49.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.049+0000 7f61caffd700 1 -- 192.168.123.105:0/927106694 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f61cc026070 con 0x7f61d8100a60 2026-03-10T10:14:49.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.198+0000 7f61dfd0d700 1 -- 192.168.123.105:0/927106694 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f61bc005190 con 0x7f61d8100a60 2026-03-10T10:14:49.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.199+0000 7f61caffd700 1 -- 192.168.123.105:0/927106694 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f61cc029950 con 0x7f61d8100a60 2026-03-10T10:14:49.200 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:14:49.200 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:14:49.203 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.201+0000 7f61dfd0d700 1 -- 192.168.123.105:0/927106694 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f61c4040c50 msgr2=0x7f61c4043110 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:49.203 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.201+0000 7f61dfd0d700 1 --2- 192.168.123.105:0/927106694 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f61c4040c50 0x7f61c4043110 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f61d4006fd0 tx=0x7f61d4006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:49.203 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.202+0000 7f61dfd0d700 1 -- 192.168.123.105:0/927106694 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f61d8100a60 msgr2=0x7f61d8195b50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:49.203 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.202+0000 7f61dfd0d700 1 --2- 192.168.123.105:0/927106694 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f61d8100a60 0x7f61d8195b50 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f61cc004750 tx=0x7f61cc005dc0 comp rx=0 tx=0).stop 2026-03-10T10:14:49.203 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.202+0000 7f61dfd0d700 1 -- 192.168.123.105:0/927106694 shutdown_connections 2026-03-10T10:14:49.203 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.202+0000 7f61dfd0d700 1 --2- 192.168.123.105:0/927106694 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f61c4040c50 0x7f61c4043110 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:49.203 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.202+0000 7f61dfd0d700 1 --2- 192.168.123.105:0/927106694 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f61d8100a60 0x7f61d8195b50 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:49.203 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.202+0000 7f61dfd0d700 1 -- 192.168.123.105:0/927106694 >> 192.168.123.105:0/927106694 conn(0x7f61d80fc000 msgr2=0x7f61d80fccc0 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:14:49.204 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.203+0000 7f61dfd0d700 1 -- 192.168.123.105:0/927106694 shutdown_connections 2026-03-10T10:14:49.204 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:49.203+0000 7f61dfd0d700 1 -- 192.168.123.105:0/927106694 wait complete. 2026-03-10T10:14:49.205 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:14:49.774 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:49 vm02 ceph-mon[50200]: from='client.14197 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm02:192.168.123.102=vm02;vm05:192.168.123.105=vm05", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:49.774 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:49 vm02 ceph-mon[50200]: Saving service mon spec with placement vm02:192.168.123.102=vm02;vm05:192.168.123.105=vm05;count:2 2026-03-10T10:14:49.774 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:49 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:49.774 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:49 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/927106694' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:14:50.274 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
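The `Waiting for 2 mons in monmap...` lines above come from a poll loop: the harness repeatedly runs `ceph mon dump -f json` and checks how many mons the monmap reports. A minimal sketch of that logic follows — this is not teuthology's actual implementation; the function names are hypothetical, and `run_mon_dump` is an injected callable so the loop can be exercised without a live cluster.

```python
# Hypothetical sketch of the "Waiting for N mons in monmap" poll seen in the
# log above. Not teuthology's real code: run_mon_dump is any callable that
# returns the JSON text of `ceph mon dump -f json`.
import json
import time


def count_mons(monmap_json: str) -> int:
    """Parse `ceph mon dump -f json` output and count monmap members."""
    return len(json.loads(monmap_json).get("mons", []))


def wait_for_mons(run_mon_dump, want: int,
                  interval: float = 1.0, timeout: float = 300.0) -> int:
    """Poll until the monmap reports at least `want` mons, or time out."""
    deadline = time.monotonic() + timeout
    while True:
        n = count_mons(run_mon_dump())
        if n >= want:
            return n
        if time.monotonic() > deadline:
            raise TimeoutError(f"only {n}/{want} mons in monmap")
        time.sleep(interval)
```

In the run above, each iteration still sees one mon (`"mons":[{"rank":0,"name":"vm02",...}]`), so the loop keeps polling while cephadm deploys the second mon on vm05.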
2026-03-10T10:14:50.274 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:14:50.408 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:50.443 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:50.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.681+0000 7fa8afac2700 1 -- 192.168.123.105:0/894699759 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa8a81015b0 msgr2=0x7fa8a81039a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:50.683 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.681+0000 7fa8afac2700 1 --2- 192.168.123.105:0/894699759 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa8a81015b0 0x7fa8a81039a0 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7fa898009b00 tx=0x7fa898009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:50.683 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.681+0000 7fa8afac2700 1 -- 192.168.123.105:0/894699759 shutdown_connections 2026-03-10T10:14:50.683 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.681+0000 7fa8afac2700 1 --2- 192.168.123.105:0/894699759 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa8a81015b0 0x7fa8a81039a0 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:50.683 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.681+0000 7fa8afac2700 1 -- 192.168.123.105:0/894699759 >> 192.168.123.105:0/894699759 conn(0x7fa8a80faf00 msgr2=0x7fa8a80fd340 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:50.683 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.682+0000 7fa8afac2700 1 -- 192.168.123.105:0/894699759 
shutdown_connections 2026-03-10T10:14:50.683 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.682+0000 7fa8afac2700 1 -- 192.168.123.105:0/894699759 wait complete. 2026-03-10T10:14:50.683 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.682+0000 7fa8afac2700 1 Processor -- start 2026-03-10T10:14:50.683 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.682+0000 7fa8afac2700 1 -- start start 2026-03-10T10:14:50.683 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.682+0000 7fa8afac2700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa8a81015b0 0x7fa8a8197d60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:50.683 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.682+0000 7fa8afac2700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa8a81982a0 con 0x7fa8a81015b0 2026-03-10T10:14:50.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.683+0000 7fa8ad85e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa8a81015b0 0x7fa8a8197d60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:50.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.683+0000 7fa8ad85e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa8a81015b0 0x7fa8a8197d60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:36878/0 (socket says 192.168.123.105:36878) 2026-03-10T10:14:50.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.683+0000 7fa8ad85e700 1 -- 192.168.123.105:0/1812786278 learned_addr learned my addr 192.168.123.105:0/1812786278 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:14:50.684 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.683+0000 7fa8ad85e700 1 -- 192.168.123.105:0/1812786278 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa8980097e0 con 0x7fa8a81015b0 2026-03-10T10:14:50.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.683+0000 7fa8ad85e700 1 --2- 192.168.123.105:0/1812786278 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa8a81015b0 0x7fa8a8197d60 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7fa898004f40 tx=0x7fa898005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:50.685 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.683+0000 7fa89effd700 1 -- 192.168.123.105:0/1812786278 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa89801c070 con 0x7fa8a81015b0 2026-03-10T10:14:50.685 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.683+0000 7fa89effd700 1 -- 192.168.123.105:0/1812786278 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa8980053b0 con 0x7fa8a81015b0 2026-03-10T10:14:50.685 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.683+0000 7fa89effd700 1 -- 192.168.123.105:0/1812786278 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa89800f550 con 0x7fa8a81015b0 2026-03-10T10:14:50.685 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.683+0000 7fa8afac2700 1 -- 192.168.123.105:0/1812786278 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa8a81984a0 con 0x7fa8a81015b0 2026-03-10T10:14:50.685 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.683+0000 7fa8afac2700 1 -- 192.168.123.105:0/1812786278 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa8a8198880 con 
0x7fa8a81015b0 2026-03-10T10:14:50.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.684+0000 7fa89effd700 1 -- 192.168.123.105:0/1812786278 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7fa898005520 con 0x7fa8a81015b0 2026-03-10T10:14:50.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.684+0000 7fa8afac2700 1 -- 192.168.123.105:0/1812786278 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa8a804fa90 con 0x7fa8a81015b0 2026-03-10T10:14:50.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.685+0000 7fa89effd700 1 --2- 192.168.123.105:0/1812786278 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa8940383f0 0x7fa89403a8b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:50.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.685+0000 7fa89effd700 1 -- 192.168.123.105:0/1812786278 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fa89804c350 con 0x7fa8a81015b0 2026-03-10T10:14:50.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.685+0000 7fa8ad05d700 1 --2- 192.168.123.105:0/1812786278 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa8940383f0 0x7fa89403a8b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:50.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.686+0000 7fa8ad05d700 1 --2- 192.168.123.105:0/1812786278 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa8940383f0 0x7fa89403a8b0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fa8a4006fd0 tx=0x7fa8a4006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:50.688 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.687+0000 7fa89effd700 1 -- 192.168.123.105:0/1812786278 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fa89800f6b0 con 0x7fa8a81015b0 2026-03-10T10:14:50.828 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.826+0000 7fa8afac2700 1 -- 192.168.123.105:0/1812786278 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fa8a80623f0 con 0x7fa8a81015b0 2026-03-10T10:14:50.828 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.827+0000 7fa89effd700 1 -- 192.168.123.105:0/1812786278 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fa898026030 con 0x7fa8a81015b0 2026-03-10T10:14:50.828 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:14:50.828 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:14:50.831 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.829+0000 7fa8afac2700 1 -- 192.168.123.105:0/1812786278 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa8940383f0 msgr2=0x7fa89403a8b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:50.831 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.829+0000 7fa8afac2700 1 --2- 192.168.123.105:0/1812786278 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa8940383f0 0x7fa89403a8b0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fa8a4006fd0 tx=0x7fa8a4006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:50.831 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.830+0000 7fa8afac2700 1 -- 192.168.123.105:0/1812786278 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa8a81015b0 msgr2=0x7fa8a8197d60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:50.831 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.830+0000 7fa8afac2700 1 --2- 192.168.123.105:0/1812786278 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa8a81015b0 0x7fa8a8197d60 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7fa898004f40 tx=0x7fa898005e70 comp rx=0 tx=0).stop 2026-03-10T10:14:50.831 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.830+0000 7fa8afac2700 1 -- 192.168.123.105:0/1812786278 shutdown_connections 2026-03-10T10:14:50.831 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.830+0000 7fa8afac2700 1 --2- 192.168.123.105:0/1812786278 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa8940383f0 0x7fa89403a8b0 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:50.831 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.830+0000 7fa8afac2700 1 --2- 192.168.123.105:0/1812786278 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa8a81015b0 0x7fa8a8197d60 secure :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7fa898004f40 tx=0x7fa898005e70 comp rx=0 tx=0).stop 2026-03-10T10:14:50.831 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.830+0000 7fa8afac2700 1 -- 192.168.123.105:0/1812786278 >> 192.168.123.105:0/1812786278 conn(0x7fa8a80faf00 msgr2=0x7fa8a80fbbc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:50.832 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.831+0000 7fa8afac2700 1 -- 192.168.123.105:0/1812786278 shutdown_connections 2026-03-10T10:14:50.832 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:50.831+0000 7fa8afac2700 1 -- 192.168.123.105:0/1812786278 wait complete. 2026-03-10T10:14:50.832 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:51.530 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: Deploying daemon grafana.vm02 on vm02 2026-03-10T10:14:51.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:51 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/1812786278' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:14:51.884 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
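Each poll in the log is issued through a `cephadm shell` wrapper (the `DEBUG:teuthology.orchestra.run.vm05:>` lines). As a reference, the argument vector for that invocation can be reconstructed like this — a hypothetical helper, with the image, fsid, and paths taken verbatim from the log rather than from teuthology's source.

```python
# Hypothetical reconstruction of the `cephadm shell` command the harness runs
# for each monmap poll; image/fsid/paths are copied from the DEBUG lines above.
def cephadm_mon_dump_cmd(image: str, fsid: str) -> list:
    return [
        "sudo", "/home/ubuntu/cephtest/cephadm",
        "--image", image,
        "shell",
        "-c", "/etc/ceph/ceph.conf",            # cluster config inferred by cephadm
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "--fsid", fsid,
        "--",                                   # everything after runs inside the container
        "ceph", "mon", "dump", "-f", "json",
    ]
```

Everything after `--` executes inside the v18.2.1 container, which is why each poll pays container-startup overhead and re-logs `Inferring config /etc/ceph/ceph.conf`.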
2026-03-10T10:14:51.885 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:14:52.010 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:52.045 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:52.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.305+0000 7f0f82cdd700 1 -- 192.168.123.105:0/1704304244 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f7c1014f0 msgr2=0x7f0f7c1038e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:52.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.306+0000 7f0f82cdd700 1 --2- 192.168.123.105:0/1704304244 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f7c1014f0 0x7f0f7c1038e0 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7f0f6c009b00 tx=0x7f0f6c009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:52.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.306+0000 7f0f82cdd700 1 -- 192.168.123.105:0/1704304244 shutdown_connections 2026-03-10T10:14:52.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.306+0000 7f0f82cdd700 1 --2- 192.168.123.105:0/1704304244 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f7c1014f0 0x7f0f7c1038e0 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:52.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.306+0000 7f0f82cdd700 1 -- 192.168.123.105:0/1704304244 >> 192.168.123.105:0/1704304244 conn(0x7f0f7c0faf00 msgr2=0x7f0f7c0fd340 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:52.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.307+0000 7f0f82cdd700 1 -- 192.168.123.105:0/1704304244 
shutdown_connections 2026-03-10T10:14:52.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.307+0000 7f0f82cdd700 1 -- 192.168.123.105:0/1704304244 wait complete. 2026-03-10T10:14:52.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.307+0000 7f0f82cdd700 1 Processor -- start 2026-03-10T10:14:52.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.307+0000 7f0f82cdd700 1 -- start start 2026-03-10T10:14:52.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.308+0000 7f0f82cdd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f7c1014f0 0x7f0f7c197d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:52.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.308+0000 7f0f82cdd700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0f7c198290 con 0x7f0f7c1014f0 2026-03-10T10:14:52.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.308+0000 7f0f80a79700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f7c1014f0 0x7f0f7c197d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:52.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.308+0000 7f0f80a79700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f7c1014f0 0x7f0f7c197d50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:36896/0 (socket says 192.168.123.105:36896) 2026-03-10T10:14:52.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.308+0000 7f0f80a79700 1 -- 192.168.123.105:0/250708559 learned_addr learned my addr 192.168.123.105:0/250708559 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:14:52.309 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.308+0000 7f0f80a79700 1 -- 192.168.123.105:0/250708559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0f6c0097e0 con 0x7f0f7c1014f0 2026-03-10T10:14:52.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.308+0000 7f0f80a79700 1 --2- 192.168.123.105:0/250708559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f7c1014f0 0x7f0f7c197d50 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7f0f6c004f40 tx=0x7f0f6c005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:52.310 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.309+0000 7f0f79ffb700 1 -- 192.168.123.105:0/250708559 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0f6c01c070 con 0x7f0f7c1014f0 2026-03-10T10:14:52.310 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.309+0000 7f0f82cdd700 1 -- 192.168.123.105:0/250708559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0f7c198490 con 0x7f0f7c1014f0 2026-03-10T10:14:52.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.309+0000 7f0f82cdd700 1 -- 192.168.123.105:0/250708559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0f7c198930 con 0x7f0f7c1014f0 2026-03-10T10:14:52.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.309+0000 7f0f79ffb700 1 -- 192.168.123.105:0/250708559 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f0f6c0053b0 con 0x7f0f7c1014f0 2026-03-10T10:14:52.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.309+0000 7f0f79ffb700 1 -- 192.168.123.105:0/250708559 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0f6c00f460 con 0x7f0f7c1014f0 
2026-03-10T10:14:52.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.310+0000 7f0f79ffb700 1 -- 192.168.123.105:0/250708559 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7f0f6c005520 con 0x7f0f7c1014f0 2026-03-10T10:14:52.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.310+0000 7f0f82cdd700 1 -- 192.168.123.105:0/250708559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0f68005320 con 0x7f0f7c1014f0 2026-03-10T10:14:52.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.310+0000 7f0f79ffb700 1 --2- 192.168.123.105:0/250708559 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0f64040c50 0x7f0f64043110 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:52.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.310+0000 7f0f79ffb700 1 -- 192.168.123.105:0/250708559 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f0f6c04c420 con 0x7f0f7c1014f0 2026-03-10T10:14:52.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.310+0000 7f0f7bfff700 1 --2- 192.168.123.105:0/250708559 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0f64040c50 0x7f0f64043110 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:52.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.311+0000 7f0f7bfff700 1 --2- 192.168.123.105:0/250708559 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0f64040c50 0x7f0f64043110 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f0f70006fd0 tx=0x7f0f70006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:52.314 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.313+0000 7f0f79ffb700 1 -- 192.168.123.105:0/250708559 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f0f6c02aba0 con 0x7f0f7c1014f0 2026-03-10T10:14:52.459 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.457+0000 7f0f82cdd700 1 -- 192.168.123.105:0/250708559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f0f68005190 con 0x7f0f7c1014f0 2026-03-10T10:14:52.459 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.458+0000 7f0f79ffb700 1 -- 192.168.123.105:0/250708559 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f0f6c026020 con 0x7f0f7c1014f0 2026-03-10T10:14:52.460 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:14:52.460 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:14:52.462 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.461+0000 7f0f82cdd700 1 -- 192.168.123.105:0/250708559 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0f64040c50 msgr2=0x7f0f64043110 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:52.462 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.461+0000 7f0f82cdd700 1 --2- 192.168.123.105:0/250708559 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0f64040c50 0x7f0f64043110 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f0f70006fd0 tx=0x7f0f70006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:52.462 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.461+0000 7f0f82cdd700 1 -- 192.168.123.105:0/250708559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f7c1014f0 msgr2=0x7f0f7c197d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:52.462 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.461+0000 7f0f82cdd700 1 --2- 192.168.123.105:0/250708559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f7c1014f0 0x7f0f7c197d50 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7f0f6c004f40 tx=0x7f0f6c005e70 comp rx=0 tx=0).stop 2026-03-10T10:14:52.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.462+0000 7f0f82cdd700 1 -- 192.168.123.105:0/250708559 shutdown_connections 2026-03-10T10:14:52.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.462+0000 7f0f82cdd700 1 --2- 192.168.123.105:0/250708559 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0f64040c50 0x7f0f64043110 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:52.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.462+0000 7f0f82cdd700 1 --2- 192.168.123.105:0/250708559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f7c1014f0 0x7f0f7c197d50 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:52.463 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.462+0000 7f0f82cdd700 1 -- 192.168.123.105:0/250708559 >> 192.168.123.105:0/250708559 conn(0x7f0f7c0faf00 msgr2=0x7f0f7c0fd340 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:52.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.462+0000 7f0f82cdd700 1 -- 192.168.123.105:0/250708559 shutdown_connections 2026-03-10T10:14:52.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:52.462+0000 7f0f82cdd700 1 -- 192.168.123.105:0/250708559 wait complete. 2026-03-10T10:14:52.464 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:14:53.560 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T10:14:53.560 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:14:53.698 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:53.733 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:53 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:14:53.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:53 vm02 ceph-mon[50200]: from='client.? 
192.168.123.105:0/250708559' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:14:53.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.961+0000 7f52e0c7b700 1 -- 192.168.123.105:0/892657335 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52dc102cb0 msgr2=0x7f52dc1030d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:53.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.961+0000 7f52e0c7b700 1 --2- 192.168.123.105:0/892657335 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52dc102cb0 0x7f52dc1030d0 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f52c4009b00 tx=0x7f52c4009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:53.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.962+0000 7f52e0c7b700 1 -- 192.168.123.105:0/892657335 shutdown_connections 2026-03-10T10:14:53.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.962+0000 7f52e0c7b700 1 --2- 192.168.123.105:0/892657335 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52dc102cb0 0x7f52dc1030d0 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:53.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.962+0000 7f52e0c7b700 1 -- 192.168.123.105:0/892657335 >> 192.168.123.105:0/892657335 conn(0x7f52dc0fe250 msgr2=0x7f52dc100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:53.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.962+0000 7f52e0c7b700 1 -- 192.168.123.105:0/892657335 shutdown_connections 2026-03-10T10:14:53.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.962+0000 7f52e0c7b700 1 -- 192.168.123.105:0/892657335 wait complete. 
2026-03-10T10:14:53.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.963+0000 7f52e0c7b700 1 Processor -- start 2026-03-10T10:14:53.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.963+0000 7f52e0c7b700 1 -- start start 2026-03-10T10:14:53.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.963+0000 7f52e0c7b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52dc102cb0 0x7f52dc197e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:53.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.963+0000 7f52e0c7b700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f52dc198390 con 0x7f52dc102cb0 2026-03-10T10:14:53.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.963+0000 7f52da59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52dc102cb0 0x7f52dc197e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:53.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.963+0000 7f52da59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52dc102cb0 0x7f52dc197e50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:36922/0 (socket says 192.168.123.105:36922) 2026-03-10T10:14:53.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.963+0000 7f52da59c700 1 -- 192.168.123.105:0/3053693438 learned_addr learned my addr 192.168.123.105:0/3053693438 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:14:53.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.964+0000 7f52da59c700 1 -- 192.168.123.105:0/3053693438 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f52c40097e0 con 0x7f52dc102cb0 2026-03-10T10:14:53.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.964+0000 7f52da59c700 1 --2- 192.168.123.105:0/3053693438 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52dc102cb0 0x7f52dc197e50 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f52c4004d10 tx=0x7f52c4004df0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:53.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.964+0000 7f52d37fe700 1 -- 192.168.123.105:0/3053693438 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f52c401d070 con 0x7f52dc102cb0 2026-03-10T10:14:53.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.964+0000 7f52e0c7b700 1 -- 192.168.123.105:0/3053693438 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f52dc198590 con 0x7f52dc102cb0 2026-03-10T10:14:53.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.964+0000 7f52e0c7b700 1 -- 192.168.123.105:0/3053693438 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f52dc198a30 con 0x7f52dc102cb0 2026-03-10T10:14:53.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.964+0000 7f52d37fe700 1 -- 192.168.123.105:0/3053693438 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f52c40056f0 con 0x7f52dc102cb0 2026-03-10T10:14:53.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.964+0000 7f52d37fe700 1 -- 192.168.123.105:0/3053693438 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f52c400f460 con 0x7f52dc102cb0 2026-03-10T10:14:53.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.965+0000 7f52d37fe700 1 -- 192.168.123.105:0/3053693438 <== mon.0 
v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7f52c400f6a0 con 0x7f52dc102cb0 2026-03-10T10:14:53.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.965+0000 7f52e0c7b700 1 -- 192.168.123.105:0/3053693438 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f52dc191a40 con 0x7f52dc102cb0 2026-03-10T10:14:53.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.966+0000 7f52d37fe700 1 --2- 192.168.123.105:0/3053693438 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f52c8038480 0x7f52c803a940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:53.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.966+0000 7f52d37fe700 1 -- 192.168.123.105:0/3053693438 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f52c404c3b0 con 0x7f52dc102cb0 2026-03-10T10:14:53.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.968+0000 7f52d9d9b700 1 --2- 192.168.123.105:0/3053693438 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f52c8038480 0x7f52c803a940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:53.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.969+0000 7f52d9d9b700 1 --2- 192.168.123.105:0/3053693438 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f52c8038480 0x7f52c803a940 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f52cc006fd0 tx=0x7f52cc006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:53.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:53.970+0000 7f52d37fe700 1 -- 192.168.123.105:0/3053693438 <== mon.0 v2:192.168.123.102:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f52c402a950 con 0x7f52dc102cb0 2026-03-10T10:14:54.117 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:54.115+0000 7f52e0c7b700 1 -- 192.168.123.105:0/3053693438 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f52dc0623c0 con 0x7f52dc102cb0 2026-03-10T10:14:54.117 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:54.116+0000 7f52d37fe700 1 -- 192.168.123.105:0/3053693438 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f52c4031090 con 0x7f52dc102cb0 2026-03-10T10:14:54.118 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:14:54.118 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:14:54.120 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:54.119+0000 7f52e0c7b700 1 -- 192.168.123.105:0/3053693438 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f52c8038480 msgr2=0x7f52c803a940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:54.120 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:54.119+0000 7f52e0c7b700 1 --2- 192.168.123.105:0/3053693438 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f52c8038480 0x7f52c803a940 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f52cc006fd0 tx=0x7f52cc006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:54.120 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:54.119+0000 7f52e0c7b700 1 -- 192.168.123.105:0/3053693438 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52dc102cb0 msgr2=0x7f52dc197e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:54.120 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:54.119+0000 7f52e0c7b700 1 --2- 192.168.123.105:0/3053693438 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52dc102cb0 0x7f52dc197e50 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f52c4004d10 tx=0x7f52c4004df0 comp rx=0 tx=0).stop 2026-03-10T10:14:54.121 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:54.120+0000 7f52e0c7b700 1 -- 192.168.123.105:0/3053693438 shutdown_connections 2026-03-10T10:14:54.121 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:54.120+0000 7f52e0c7b700 1 --2- 192.168.123.105:0/3053693438 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f52c8038480 0x7f52c803a940 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:54.121 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:54.120+0000 7f52e0c7b700 1 --2- 192.168.123.105:0/3053693438 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52dc102cb0 0x7f52dc197e50 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:54.121 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:54.120+0000 7f52e0c7b700 1 -- 192.168.123.105:0/3053693438 >> 192.168.123.105:0/3053693438 conn(0x7f52dc0fe250 msgr2=0x7f52dc0fef10 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:14:54.121 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:54.120+0000 7f52e0c7b700 1 -- 192.168.123.105:0/3053693438 shutdown_connections 2026-03-10T10:14:54.121 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:54.120+0000 7f52e0c7b700 1 -- 192.168.123.105:0/3053693438 wait complete. 2026-03-10T10:14:54.122 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:14:54.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:54 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/3053693438' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:14:55.179 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T10:14:55.179 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:14:55.315 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:55.350 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.601+0000 7fcf645d5700 1 -- 192.168.123.105:0/1752239537 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcf5c074dc0 msgr2=0x7fcf5c073220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.601+0000 7fcf645d5700 1 --2- 192.168.123.105:0/1752239537 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcf5c074dc0 0x7fcf5c073220 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7fcf50009b00 tx=0x7fcf50009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.602+0000 7fcf645d5700 1 -- 192.168.123.105:0/1752239537 
shutdown_connections 2026-03-10T10:14:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.602+0000 7fcf645d5700 1 --2- 192.168.123.105:0/1752239537 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcf5c074dc0 0x7fcf5c073220 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.602+0000 7fcf645d5700 1 -- 192.168.123.105:0/1752239537 >> 192.168.123.105:0/1752239537 conn(0x7fcf5c0fc000 msgr2=0x7fcf5c0fe460 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.602+0000 7fcf645d5700 1 -- 192.168.123.105:0/1752239537 shutdown_connections 2026-03-10T10:14:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.602+0000 7fcf645d5700 1 -- 192.168.123.105:0/1752239537 wait complete. 2026-03-10T10:14:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.602+0000 7fcf645d5700 1 Processor -- start 2026-03-10T10:14:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.603+0000 7fcf645d5700 1 -- start start 2026-03-10T10:14:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.603+0000 7fcf645d5700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcf5c074dc0 0x7fcf5c19c180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.603+0000 7fcf645d5700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcf5c19c6c0 con 0x7fcf5c074dc0 2026-03-10T10:14:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.603+0000 7fcf62371700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcf5c074dc0 0x7fcf5c19c180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.603+0000 7fcf62371700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcf5c074dc0 0x7fcf5c19c180 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:36942/0 (socket says 192.168.123.105:36942) 2026-03-10T10:14:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.603+0000 7fcf62371700 1 -- 192.168.123.105:0/3020543559 learned_addr learned my addr 192.168.123.105:0/3020543559 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:14:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.603+0000 7fcf62371700 1 -- 192.168.123.105:0/3020543559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcf500097e0 con 0x7fcf5c074dc0 2026-03-10T10:14:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.603+0000 7fcf62371700 1 --2- 192.168.123.105:0/3020543559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcf5c074dc0 0x7fcf5c19c180 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7fcf50004750 tx=0x7fcf50005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:55.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.604+0000 7fcf4f7fe700 1 -- 192.168.123.105:0/3020543559 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcf5001c070 con 0x7fcf5c074dc0 2026-03-10T10:14:55.606 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.604+0000 7fcf4f7fe700 1 -- 192.168.123.105:0/3020543559 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fcf50021470 con 0x7fcf5c074dc0 2026-03-10T10:14:55.606 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.604+0000 7fcf645d5700 1 -- 192.168.123.105:0/3020543559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcf5c19c8c0 con 0x7fcf5c074dc0 2026-03-10T10:14:55.606 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.604+0000 7fcf645d5700 1 -- 192.168.123.105:0/3020543559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcf5c19cd60 con 0x7fcf5c074dc0 2026-03-10T10:14:55.606 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.604+0000 7fcf4f7fe700 1 -- 192.168.123.105:0/3020543559 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcf5000f460 con 0x7fcf5c074dc0 2026-03-10T10:14:55.606 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.604+0000 7fcf4f7fe700 1 -- 192.168.123.105:0/3020543559 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7fcf5000f680 con 0x7fcf5c074dc0 2026-03-10T10:14:55.606 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.604+0000 7fcf645d5700 1 -- 192.168.123.105:0/3020543559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcf5c195e20 con 0x7fcf5c074dc0 2026-03-10T10:14:55.606 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.605+0000 7fcf4f7fe700 1 --2- 192.168.123.105:0/3020543559 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fcf48038490 0x7fcf4803a950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:55.607 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.605+0000 7fcf4f7fe700 1 -- 192.168.123.105:0/3020543559 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fcf5004d560 con 0x7fcf5c074dc0 2026-03-10T10:14:55.607 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.605+0000 7fcf61b70700 1 --2- 192.168.123.105:0/3020543559 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fcf48038490 0x7fcf4803a950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:55.607 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.606+0000 7fcf61b70700 1 --2- 192.168.123.105:0/3020543559 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fcf48038490 0x7fcf4803a950 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7fcf58006fd0 tx=0x7fcf58006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:55.608 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.607+0000 7fcf4f7fe700 1 -- 192.168.123.105:0/3020543559 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fcf50029b60 con 0x7fcf5c074dc0 2026-03-10T10:14:55.749 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.748+0000 7fcf645d5700 1 -- 192.168.123.105:0/3020543559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fcf5c0623c0 con 0x7fcf5c074dc0 2026-03-10T10:14:55.749 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.748+0000 7fcf4f7fe700 1 -- 192.168.123.105:0/3020543559 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fcf50026030 con 0x7fcf5c074dc0 2026-03-10T10:14:55.750 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:14:55.750 
INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:14:55.752 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.751+0000 7fcf645d5700 1 -- 192.168.123.105:0/3020543559 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fcf48038490 msgr2=0x7fcf4803a950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:55.752 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.751+0000 7fcf645d5700 1 --2- 192.168.123.105:0/3020543559 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fcf48038490 0x7fcf4803a950 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7fcf58006fd0 tx=0x7fcf58006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:55.752 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.751+0000 7fcf645d5700 1 -- 192.168.123.105:0/3020543559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcf5c074dc0 msgr2=0x7fcf5c19c180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:55.752 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.752+0000 7fcf645d5700 1 --2- 192.168.123.105:0/3020543559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcf5c074dc0 0x7fcf5c19c180 secure :-1 s=READY pgs=114 cs=0 
l=1 rev1=1 crypto rx=0x7fcf50004750 tx=0x7fcf50005dc0 comp rx=0 tx=0).stop 2026-03-10T10:14:55.753 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.752+0000 7fcf645d5700 1 -- 192.168.123.105:0/3020543559 shutdown_connections 2026-03-10T10:14:55.753 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.752+0000 7fcf645d5700 1 --2- 192.168.123.105:0/3020543559 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fcf48038490 0x7fcf4803a950 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:55.753 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.752+0000 7fcf645d5700 1 --2- 192.168.123.105:0/3020543559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fcf5c074dc0 0x7fcf5c19c180 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:55.753 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.752+0000 7fcf645d5700 1 -- 192.168.123.105:0/3020543559 >> 192.168.123.105:0/3020543559 conn(0x7fcf5c0fc000 msgr2=0x7fcf5c0fcce0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:55.753 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.752+0000 7fcf645d5700 1 -- 192.168.123.105:0/3020543559 shutdown_connections 2026-03-10T10:14:55.753 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:55.752+0000 7fcf645d5700 1 -- 192.168.123.105:0/3020543559 wait complete. 2026-03-10T10:14:55.754 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:14:56.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:55 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/3020543559' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:14:56.813 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T10:14:56.813 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:14:56.950 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:56.985 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:57.207 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.205+0000 7fce8bb0f700 1 -- 192.168.123.105:0/787125479 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fce841015b0 msgr2=0x7fce841039a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:57.207 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.205+0000 7fce8bb0f700 1 --2- 192.168.123.105:0/787125479 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fce841015b0 0x7fce841039a0 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fce74009b00 tx=0x7fce74009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:57.207 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.206+0000 7fce8bb0f700 1 -- 192.168.123.105:0/787125479 shutdown_connections 2026-03-10T10:14:57.207 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.206+0000 7fce8bb0f700 1 --2- 192.168.123.105:0/787125479 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fce841015b0 0x7fce841039a0 secure :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fce74009b00 tx=0x7fce74009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:57.207 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.206+0000 7fce8bb0f700 1 -- 192.168.123.105:0/787125479 >> 192.168.123.105:0/787125479 conn(0x7fce840faf00 msgr2=0x7fce840fd340 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:57.207 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.206+0000 7fce8bb0f700 1 -- 
192.168.123.105:0/787125479 shutdown_connections 2026-03-10T10:14:57.207 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.206+0000 7fce8bb0f700 1 -- 192.168.123.105:0/787125479 wait complete. 2026-03-10T10:14:57.207 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.206+0000 7fce8bb0f700 1 Processor -- start 2026-03-10T10:14:57.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.207+0000 7fce8bb0f700 1 -- start start 2026-03-10T10:14:57.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.207+0000 7fce8bb0f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fce84197fe0 0x7fce84198400 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:57.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.207+0000 7fce8bb0f700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fce84198940 con 0x7fce84197fe0 2026-03-10T10:14:57.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.207+0000 7fce898ab700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fce84197fe0 0x7fce84198400 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:57.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.207+0000 7fce898ab700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fce84197fe0 0x7fce84198400 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:36972/0 (socket says 192.168.123.105:36972) 2026-03-10T10:14:57.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.207+0000 7fce898ab700 1 -- 192.168.123.105:0/1265384028 learned_addr learned my addr 192.168.123.105:0/1265384028 (peer_addr_for_me v2:192.168.123.105:0/0) 
2026-03-10T10:14:57.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.207+0000 7fce898ab700 1 -- 192.168.123.105:0/1265384028 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fce740097e0 con 0x7fce84197fe0 2026-03-10T10:14:57.209 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.208+0000 7fce898ab700 1 --2- 192.168.123.105:0/1265384028 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fce84197fe0 0x7fce84198400 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fce74009fd0 tx=0x7fce74005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:57.209 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.208+0000 7fce7affd700 1 -- 192.168.123.105:0/1265384028 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fce7401d070 con 0x7fce84197fe0 2026-03-10T10:14:57.209 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.208+0000 7fce8bb0f700 1 -- 192.168.123.105:0/1265384028 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fce84198b40 con 0x7fce84197fe0 2026-03-10T10:14:57.209 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.208+0000 7fce8bb0f700 1 -- 192.168.123.105:0/1265384028 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fce8419b790 con 0x7fce84197fe0 2026-03-10T10:14:57.210 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.208+0000 7fce7affd700 1 -- 192.168.123.105:0/1265384028 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fce7400b810 con 0x7fce84197fe0 2026-03-10T10:14:57.210 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.208+0000 7fce7affd700 1 -- 192.168.123.105:0/1265384028 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 
0x7fce74022e90 con 0x7fce84197fe0 2026-03-10T10:14:57.210 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.209+0000 7fce7affd700 1 -- 192.168.123.105:0/1265384028 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7fce74022890 con 0x7fce84197fe0 2026-03-10T10:14:57.210 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.209+0000 7fce7affd700 1 --2- 192.168.123.105:0/1265384028 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fce70038480 0x7fce7003a940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:57.210 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.209+0000 7fce7affd700 1 -- 192.168.123.105:0/1265384028 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fce7404c150 con 0x7fce84197fe0 2026-03-10T10:14:57.210 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.209+0000 7fce8bb0f700 1 -- 192.168.123.105:0/1265384028 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fce84191c00 con 0x7fce84197fe0 2026-03-10T10:14:57.210 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.209+0000 7fce890aa700 1 --2- 192.168.123.105:0/1265384028 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fce70038480 0x7fce7003a940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:57.211 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.210+0000 7fce890aa700 1 --2- 192.168.123.105:0/1265384028 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fce70038480 0x7fce7003a940 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7fce80006fd0 tx=0x7fce80006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:14:57.213 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.212+0000 7fce7affd700 1 -- 192.168.123.105:0/1265384028 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fce74027070 con 0x7fce84197fe0 2026-03-10T10:14:57.360 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.358+0000 7fce8bb0f700 1 -- 192.168.123.105:0/1265384028 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fce840623c0 con 0x7fce84197fe0 2026-03-10T10:14:57.361 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.360+0000 7fce7affd700 1 -- 192.168.123.105:0/1265384028 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fce7402a360 con 0x7fce84197fe0 2026-03-10T10:14:57.361 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:14:57.361 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:14:57.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.362+0000 7fce8bb0f700 1 -- 192.168.123.105:0/1265384028 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fce70038480 msgr2=0x7fce7003a940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:57.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.362+0000 7fce8bb0f700 1 --2- 192.168.123.105:0/1265384028 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fce70038480 0x7fce7003a940 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7fce80006fd0 tx=0x7fce80006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:57.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.362+0000 7fce8bb0f700 1 -- 192.168.123.105:0/1265384028 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fce84197fe0 msgr2=0x7fce84198400 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:57.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.362+0000 7fce8bb0f700 1 --2- 192.168.123.105:0/1265384028 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fce84197fe0 0x7fce84198400 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fce74009fd0 tx=0x7fce74005e70 comp rx=0 tx=0).stop 2026-03-10T10:14:57.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.362+0000 7fce8bb0f700 1 -- 192.168.123.105:0/1265384028 shutdown_connections 2026-03-10T10:14:57.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.362+0000 7fce8bb0f700 1 --2- 192.168.123.105:0/1265384028 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fce70038480 0x7fce7003a940 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:57.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.362+0000 7fce8bb0f700 1 --2- 192.168.123.105:0/1265384028 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fce84197fe0 0x7fce84198400 unknown :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:57.364 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.363+0000 7fce8bb0f700 1 -- 192.168.123.105:0/1265384028 >> 192.168.123.105:0/1265384028 conn(0x7fce840faf00 msgr2=0x7fce840fbbc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:57.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.363+0000 7fce8bb0f700 1 -- 192.168.123.105:0/1265384028 shutdown_connections 2026-03-10T10:14:57.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:57.363+0000 7fce8bb0f700 1 -- 192.168.123.105:0/1265384028 wait complete. 2026-03-10T10:14:57.364 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:14:58.423 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T10:14:58.423 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:14:58.562 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:58.601 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:14:58.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:58 vm02 ceph-mon[50200]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:14:58.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:58 vm02 ceph-mon[50200]: from='client.? 
192.168.123.105:0/1265384028' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:14:58.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.864+0000 7f0cc6433700 1 -- 192.168.123.105:0/2967326989 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0cc0102cb0 msgr2=0x7f0cc01030d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:58.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.864+0000 7f0cc6433700 1 --2- 192.168.123.105:0/2967326989 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0cc0102cb0 0x7f0cc01030d0 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f0ca8009b00 tx=0x7f0ca8009e10 comp rx=0 tx=0).stop 2026-03-10T10:14:58.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.865+0000 7f0cc6433700 1 -- 192.168.123.105:0/2967326989 shutdown_connections 2026-03-10T10:14:58.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.865+0000 7f0cc6433700 1 --2- 192.168.123.105:0/2967326989 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0cc0102cb0 0x7f0cc01030d0 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:58.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.865+0000 7f0cc6433700 1 -- 192.168.123.105:0/2967326989 >> 192.168.123.105:0/2967326989 conn(0x7f0cc00fe250 msgr2=0x7f0cc0100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:58.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.865+0000 7f0cc6433700 1 -- 192.168.123.105:0/2967326989 shutdown_connections 2026-03-10T10:14:58.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.865+0000 7f0cc6433700 1 -- 192.168.123.105:0/2967326989 wait complete. 
2026-03-10T10:14:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.866+0000 7f0cc6433700 1 Processor -- start 2026-03-10T10:14:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.866+0000 7f0cc6433700 1 -- start start 2026-03-10T10:14:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.866+0000 7f0cc6433700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0cc0102cb0 0x7f0cc0197e20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.866+0000 7f0cc6433700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0cc0198360 con 0x7f0cc0102cb0 2026-03-10T10:14:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.866+0000 7f0cbffff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0cc0102cb0 0x7f0cc0197e20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.866+0000 7f0cbffff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0cc0102cb0 0x7f0cc0197e20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:37818/0 (socket says 192.168.123.105:37818) 2026-03-10T10:14:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.866+0000 7f0cbffff700 1 -- 192.168.123.105:0/627334875 learned_addr learned my addr 192.168.123.105:0/627334875 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:14:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.867+0000 7f0cbffff700 1 -- 192.168.123.105:0/627334875 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0ca80097e0 con 0x7f0cc0102cb0 2026-03-10T10:14:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.867+0000 7f0cbffff700 1 --2- 192.168.123.105:0/627334875 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0cc0102cb0 0x7f0cc0197e20 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7f0ca8004d40 tx=0x7f0ca8004e20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.867+0000 7f0cbd7fa700 1 -- 192.168.123.105:0/627334875 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0ca801c070 con 0x7f0cc0102cb0 2026-03-10T10:14:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.867+0000 7f0cc6433700 1 -- 192.168.123.105:0/627334875 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0cc0198560 con 0x7f0cc0102cb0 2026-03-10T10:14:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.867+0000 7f0cc6433700 1 -- 192.168.123.105:0/627334875 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0cc0198a00 con 0x7f0cc0102cb0 2026-03-10T10:14:58.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.868+0000 7f0cbd7fa700 1 -- 192.168.123.105:0/627334875 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f0ca80054e0 con 0x7f0cc0102cb0 2026-03-10T10:14:58.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.868+0000 7f0cbd7fa700 1 -- 192.168.123.105:0/627334875 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0ca8003b70 con 0x7f0cc0102cb0 2026-03-10T10:14:58.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.868+0000 7f0cbd7fa700 1 -- 192.168.123.105:0/627334875 <== mon.0 v2:192.168.123.102:3300/0 4 
==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7f0ca800f460 con 0x7f0cc0102cb0 2026-03-10T10:14:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.868+0000 7f0cc6433700 1 -- 192.168.123.105:0/627334875 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0ca0005320 con 0x7f0cc0102cb0 2026-03-10T10:14:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.869+0000 7f0cbd7fa700 1 --2- 192.168.123.105:0/627334875 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0cac038440 0x7f0cac03a900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.869+0000 7f0cbd7fa700 1 -- 192.168.123.105:0/627334875 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f0ca804d340 con 0x7f0cc0102cb0 2026-03-10T10:14:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.869+0000 7f0cbf7fe700 1 --2- 192.168.123.105:0/627334875 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0cac038440 0x7f0cac03a900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.869+0000 7f0cbf7fe700 1 --2- 192.168.123.105:0/627334875 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0cac038440 0x7f0cac03a900 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f0cb0006fd0 tx=0x7f0cb0006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:58.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:58.871+0000 7f0cbd7fa700 1 -- 192.168.123.105:0/627334875 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f0ca8029980 con 0x7f0cc0102cb0 2026-03-10T10:14:59.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:59.019+0000 7f0cc6433700 1 -- 192.168.123.105:0/627334875 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f0ca0005190 con 0x7f0cc0102cb0 2026-03-10T10:14:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:59.020+0000 7f0cbd7fa700 1 -- 192.168.123.105:0/627334875 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f0ca8029390 con 0x7f0cc0102cb0 2026-03-10T10:14:59.022 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:14:59.022 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:14:59.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:59.023+0000 7f0cc6433700 1 -- 192.168.123.105:0/627334875 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0cac038440 msgr2=0x7f0cac03a900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:59.024 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:59.023+0000 7f0cc6433700 1 --2- 192.168.123.105:0/627334875 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0cac038440 0x7f0cac03a900 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f0cb0006fd0 tx=0x7f0cb0006e40 comp rx=0 tx=0).stop 2026-03-10T10:14:59.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:59.023+0000 7f0cc6433700 1 -- 192.168.123.105:0/627334875 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0cc0102cb0 msgr2=0x7f0cc0197e20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:59.025 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:59.023+0000 7f0cc6433700 1 --2- 192.168.123.105:0/627334875 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0cc0102cb0 0x7f0cc0197e20 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7f0ca8004d40 tx=0x7f0ca8004e20 comp rx=0 tx=0).stop 2026-03-10T10:14:59.025 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:59.024+0000 7f0cc6433700 1 -- 192.168.123.105:0/627334875 shutdown_connections 2026-03-10T10:14:59.025 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:59.024+0000 7f0cc6433700 1 --2- 192.168.123.105:0/627334875 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0cac038440 0x7f0cac03a900 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:59.025 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:59.024+0000 7f0cc6433700 1 --2- 192.168.123.105:0/627334875 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0cc0102cb0 0x7f0cc0197e20 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:59.025 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:59.024+0000 7f0cc6433700 1 -- 192.168.123.105:0/627334875 >> 192.168.123.105:0/627334875 conn(0x7f0cc00fe250 msgr2=0x7f0cc00fef10 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:14:59.025 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:59.024+0000 7f0cc6433700 1 -- 192.168.123.105:0/627334875 shutdown_connections 2026-03-10T10:14:59.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:14:59.025+0000 7f0cc6433700 1 -- 192.168.123.105:0/627334875 wait complete. 2026-03-10T10:14:59.027 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:14:59.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:14:59 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/627334875' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:00.089 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T10:15:00.089 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:00.250 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:00.290 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:00.698 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.697+0000 7fa05923d700 1 -- 192.168.123.105:0/4286102992 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa04800bcf0 con 0x7fa054102cb0 2026-03-10T10:15:00.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.697+0000 7fa05c4a3700 1 -- 192.168.123.105:0/4286102992 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa054102cb0 msgr2=0x7fa0541030d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:00.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.697+0000 7fa05c4a3700 1 --2- 192.168.123.105:0/4286102992 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa054102cb0 0x7fa0541030d0 
secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7fa048009b00 tx=0x7fa048009e10 comp rx=0 tx=0).stop 2026-03-10T10:15:00.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.698+0000 7fa05c4a3700 1 -- 192.168.123.105:0/4286102992 shutdown_connections 2026-03-10T10:15:00.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.698+0000 7fa05c4a3700 1 --2- 192.168.123.105:0/4286102992 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa054102cb0 0x7fa0541030d0 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:00.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.698+0000 7fa05c4a3700 1 -- 192.168.123.105:0/4286102992 >> 192.168.123.105:0/4286102992 conn(0x7fa0540fe250 msgr2=0x7fa054100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:00.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.698+0000 7fa05c4a3700 1 -- 192.168.123.105:0/4286102992 shutdown_connections 2026-03-10T10:15:00.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.698+0000 7fa05c4a3700 1 -- 192.168.123.105:0/4286102992 wait complete. 
2026-03-10T10:15:00.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.698+0000 7fa05c4a3700 1 Processor -- start
2026-03-10T10:15:00.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.699+0000 7fa05c4a3700 1 -- start start
2026-03-10T10:15:00.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.699+0000 7fa05c4a3700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa054102cb0 0x7fa054078b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:00.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.699+0000 7fa05c4a3700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa054079080 con 0x7fa054102cb0
2026-03-10T10:15:00.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.699+0000 7fa05a23f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa054102cb0 0x7fa054078b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:00.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.699+0000 7fa05a23f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa054102cb0 0x7fa054078b40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:37832/0 (socket says 192.168.123.105:37832)
2026-03-10T10:15:00.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.699+0000 7fa05a23f700 1 -- 192.168.123.105:0/352170269 learned_addr learned my addr 192.168.123.105:0/352170269 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-10T10:15:00.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.699+0000 7fa05a23f700 1 -- 192.168.123.105:0/352170269 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa0480097e0 con 0x7fa054102cb0
2026-03-10T10:15:00.701 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.700+0000 7fa05a23f700 1 --2- 192.168.123.105:0/352170269 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa054102cb0 0x7fa054078b40 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7fa0480094d0 tx=0x7fa048005670 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:00.702 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.700+0000 7fa0477fe700 1 -- 192.168.123.105:0/352170269 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa04801d070 con 0x7fa054102cb0
2026-03-10T10:15:00.702 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.700+0000 7fa0477fe700 1 -- 192.168.123.105:0/352170269 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa048004dc0 con 0x7fa054102cb0
2026-03-10T10:15:00.702 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.700+0000 7fa0477fe700 1 -- 192.168.123.105:0/352170269 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa04800bcf0 con 0x7fa054102cb0
2026-03-10T10:15:00.702 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.700+0000 7fa05c4a3700 1 -- 192.168.123.105:0/352170269 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa054079280 con 0x7fa054102cb0
2026-03-10T10:15:00.702 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.700+0000 7fa05c4a3700 1 -- 192.168.123.105:0/352170269 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa0540757f0 con 0x7fa054102cb0
2026-03-10T10:15:00.702 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.701+0000 7fa05c4a3700 1 -- 192.168.123.105:0/352170269 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa054079410 con 0x7fa054102cb0
2026-03-10T10:15:00.703 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.702+0000 7fa0477fe700 1 -- 192.168.123.105:0/352170269 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7fa0480038a0 con 0x7fa054102cb0
2026-03-10T10:15:00.703 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.702+0000 7fa0477fe700 1 --2- 192.168.123.105:0/352170269 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa040038490 0x7fa04003a950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:00.703 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.702+0000 7fa0477fe700 1 -- 192.168.123.105:0/352170269 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fa04804c010 con 0x7fa054102cb0
2026-03-10T10:15:00.703 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.702+0000 7fa059a3e700 1 --2- 192.168.123.105:0/352170269 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa040038490 0x7fa04003a950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:00.703 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.702+0000 7fa059a3e700 1 --2- 192.168.123.105:0/352170269 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa040038490 0x7fa04003a950 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fa050006fd0 tx=0x7fa050006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:00.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.704+0000 7fa0477fe700 1 -- 192.168.123.105:0/352170269 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fa04800e5c0 con 0x7fa054102cb0
2026-03-10T10:15:00.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.850+0000 7fa05c4a3700 1 -- 192.168.123.105:0/352170269 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fa05404fa20 con 0x7fa054102cb0
2026-03-10T10:15:00.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.851+0000 7fa0477fe700 1 -- 192.168.123.105:0/352170269 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fa048026030 con 0x7fa054102cb0
2026-03-10T10:15:00.852 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:15:00.852 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T10:15:00.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.853+0000 7fa05c4a3700 1 -- 192.168.123.105:0/352170269 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa040038490 msgr2=0x7fa04003a950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:00.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.853+0000 7fa05c4a3700 1 --2- 192.168.123.105:0/352170269 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa040038490 0x7fa04003a950 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fa050006fd0 tx=0x7fa050006e40 comp rx=0 tx=0).stop
2026-03-10T10:15:00.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.853+0000 7fa05c4a3700 1 -- 192.168.123.105:0/352170269 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa054102cb0 msgr2=0x7fa054078b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:00.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.853+0000 7fa05c4a3700 1 --2- 192.168.123.105:0/352170269 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa054102cb0 0x7fa054078b40 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7fa0480094d0 tx=0x7fa048005670 comp rx=0 tx=0).stop
2026-03-10T10:15:00.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.853+0000 7fa05c4a3700 1 -- 192.168.123.105:0/352170269 shutdown_connections
2026-03-10T10:15:00.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.854+0000 7fa05c4a3700 1 --2- 192.168.123.105:0/352170269 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa040038490 0x7fa04003a950 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:00.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.854+0000 7fa05c4a3700 1 --2- 192.168.123.105:0/352170269 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa054102cb0 0x7fa054078b40 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:00.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.854+0000 7fa05c4a3700 1 -- 192.168.123.105:0/352170269 >> 192.168.123.105:0/352170269 conn(0x7fa0540fe250 msgr2=0x7fa0540fef10 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:00.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.854+0000 7fa05c4a3700 1 -- 192.168.123.105:0/352170269 shutdown_connections
2026-03-10T10:15:00.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:00.854+0000 7fa05c4a3700 1 -- 192.168.123.105:0/352170269 wait complete.
2026-03-10T10:15:00.856 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1
2026-03-10T10:15:01.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:00 vm02 ceph-mon[50200]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:15:01.898 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T10:15:01.898 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json
2026-03-10T10:15:02.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:01 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/352170269' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T10:15:02.036 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T10:15:02.073 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T10:15:02.397 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.395+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/1190076015 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c68075740 msgr2=0x7f3c68075b60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:02.397 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.395+0000 7f3c6ff5d700 1 --2- 192.168.123.105:0/1190076015 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c68075740 0x7f3c68075b60 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f3c5c009b00 tx=0x7f3c5c009e10 comp rx=0 tx=0).stop
2026-03-10T10:15:02.397 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.396+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/1190076015 shutdown_connections
2026-03-10T10:15:02.397 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.396+0000 7f3c6ff5d700 1 --2- 192.168.123.105:0/1190076015 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c68075740 0x7f3c68075b60 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:02.397 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.396+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/1190076015 >> 192.168.123.105:0/1190076015 conn(0x7f3c680fe230 msgr2=0x7f3c68100670 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:02.397 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.396+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/1190076015 shutdown_connections
2026-03-10T10:15:02.397 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.396+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/1190076015 wait complete.
2026-03-10T10:15:02.398 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.396+0000 7f3c6ff5d700 1 Processor -- start
2026-03-10T10:15:02.398 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.397+0000 7f3c6ff5d700 1 -- start start
2026-03-10T10:15:02.398 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.397+0000 7f3c6ff5d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c68075740 0x7f3c6819c1d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:02.398 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.397+0000 7f3c6ff5d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3c6819c710 con 0x7f3c68075740
2026-03-10T10:15:02.398 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.397+0000 7f3c6dcf9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c68075740 0x7f3c6819c1d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:02.398 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.397+0000 7f3c6dcf9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c68075740 0x7f3c6819c1d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:37864/0 (socket says 192.168.123.105:37864)
2026-03-10T10:15:02.398 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.397+0000 7f3c6dcf9700 1 -- 192.168.123.105:0/58257212 learned_addr learned my addr 192.168.123.105:0/58257212 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-10T10:15:02.398 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.397+0000 7f3c6dcf9700 1 -- 192.168.123.105:0/58257212 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3c5c0097e0 con 0x7f3c68075740
2026-03-10T10:15:02.398 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.397+0000 7f3c6dcf9700 1 --2- 192.168.123.105:0/58257212 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c68075740 0x7f3c6819c1d0 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f3c5c004f40 tx=0x7f3c5c005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:02.399 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.398+0000 7f3c5affd700 1 -- 192.168.123.105:0/58257212 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f3c5c01d070 con 0x7f3c68075740
2026-03-10T10:15:02.399 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.398+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/58257212 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3c6819c910 con 0x7f3c68075740
2026-03-10T10:15:02.399 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.398+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/58257212 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3c6819cdb0 con 0x7f3c68075740
2026-03-10T10:15:02.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.398+0000 7f3c5affd700 1 -- 192.168.123.105:0/58257212 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3c5c022470 con 0x7f3c68075740
2026-03-10T10:15:02.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.398+0000 7f3c5affd700 1 -- 192.168.123.105:0/58257212 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f3c5c00f460 con 0x7f3c68075740
2026-03-10T10:15:02.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.398+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/58257212 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3c68195e10 con 0x7f3c68075740
2026-03-10T10:15:02.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.399+0000 7f3c5affd700 1 -- 192.168.123.105:0/58257212 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7f3c5c00f650 con 0x7f3c68075740
2026-03-10T10:15:02.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.399+0000 7f3c5affd700 1 --2- 192.168.123.105:0/58257212 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3c54038480 0x7f3c5403a940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:02.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.399+0000 7f3c5affd700 1 -- 192.168.123.105:0/58257212 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f3c5c04d440 con 0x7f3c68075740
2026-03-10T10:15:02.400 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.399+0000 7f3c6d4f8700 1 --2- 192.168.123.105:0/58257212 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3c54038480 0x7f3c5403a940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:02.401 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.400+0000 7f3c6d4f8700 1 --2- 192.168.123.105:0/58257212 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3c54038480 0x7f3c5403a940 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f3c64006fd0 tx=0x7f3c64006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:02.403 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.402+0000 7f3c5affd700 1 -- 192.168.123.105:0/58257212 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f3c5c02a950 con 0x7f3c68075740
2026-03-10T10:15:02.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.555+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/58257212 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f3c6802cc70 con 0x7f3c68075740
2026-03-10T10:15:02.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.557+0000 7f3c5affd700 1 -- 192.168.123.105:0/58257212 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f3c5c027030 con 0x7f3c68075740
2026-03-10T10:15:02.558 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:15:02.558 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T10:15:02.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.559+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/58257212 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3c54038480 msgr2=0x7f3c5403a940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:02.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.559+0000 7f3c6ff5d700 1 --2- 192.168.123.105:0/58257212 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3c54038480 0x7f3c5403a940 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f3c64006fd0 tx=0x7f3c64006e40 comp rx=0 tx=0).stop
2026-03-10T10:15:02.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.559+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/58257212 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c68075740 msgr2=0x7f3c6819c1d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:02.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.559+0000 7f3c6ff5d700 1 --2- 192.168.123.105:0/58257212 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c68075740 0x7f3c6819c1d0 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f3c5c004f40 tx=0x7f3c5c005e70 comp rx=0 tx=0).stop
2026-03-10T10:15:02.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.559+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/58257212 shutdown_connections
2026-03-10T10:15:02.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.559+0000 7f3c6ff5d700 1 --2- 192.168.123.105:0/58257212 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3c54038480 0x7f3c5403a940 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:02.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.559+0000 7f3c6ff5d700 1 --2- 192.168.123.105:0/58257212 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c68075740 0x7f3c6819c1d0 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:02.561 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.559+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/58257212 >> 192.168.123.105:0/58257212 conn(0x7f3c680fe230 msgr2=0x7f3c680feef0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:02.561 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.560+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/58257212 shutdown_connections
2026-03-10T10:15:02.561 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:02.560+0000 7f3c6ff5d700 1 -- 192.168.123.105:0/58257212 wait complete.
2026-03-10T10:15:02.561 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1
2026-03-10T10:15:03.161 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:02 vm02 ceph-mon[50200]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:15:03.161 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:02 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/58257212' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T10:15:03.665 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T10:15:03.666 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json
2026-03-10T10:15:03.800 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T10:15:03.836 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T10:15:04.091 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.089+0000 7fa50e5c5700 1 -- 192.168.123.105:0/1723071919 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa508102c90 msgr2=0x7fa5081030b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:04.091 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.089+0000 7fa50e5c5700 1 --2- 192.168.123.105:0/1723071919 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa508102c90 0x7fa5081030b0 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7fa4f0009b00 tx=0x7fa4f0009e10 comp rx=0 tx=0).stop
2026-03-10T10:15:04.091 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.090+0000 7fa50e5c5700 1 -- 192.168.123.105:0/1723071919 shutdown_connections
2026-03-10T10:15:04.091 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.090+0000 7fa50e5c5700 1 --2- 192.168.123.105:0/1723071919 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa508102c90 0x7fa5081030b0 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:04.091 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.090+0000 7fa50e5c5700 1 -- 192.168.123.105:0/1723071919 >> 192.168.123.105:0/1723071919 conn(0x7fa5080fe230 msgr2=0x7fa508100670 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:04.091 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.090+0000 7fa50e5c5700 1 -- 192.168.123.105:0/1723071919 shutdown_connections
2026-03-10T10:15:04.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.090+0000 7fa50e5c5700 1 -- 192.168.123.105:0/1723071919 wait complete.
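[editor's note] The repeated "Waiting for 2 mons in monmap..." entries show the cephadm task polling `ceph mon dump -f json` until the monmap lists the expected number of monitors. A minimal sketch of that readiness check (helper names are illustrative, not the real teuthology API), using a trimmed version of the single-mon monmap dumped in this log:

```python
import json

def mons_in_monmap(monmap_json: str) -> int:
    """Count monitors in the output of `ceph mon dump -f json`."""
    return len(json.loads(monmap_json)["mons"])

def monmap_ready(monmap_json: str, want: int) -> bool:
    """True once the monmap lists at least `want` monitors."""
    return mons_in_monmap(monmap_json) >= want

# Trimmed monmap like the one dumped above: only mon.vm02 is present,
# so with want=2 the task keeps polling.
sample = '{"epoch":1,"mons":[{"rank":0,"name":"vm02"}],"quorum":[0]}'
print(mons_in_monmap(sample), monmap_ready(sample, 2))  # -> 1 False
```

In the log, each poll shells into the v18.2.1 container (`cephadm ... shell -- ceph mon dump -f json`), parses the JSON, and sleeps briefly before retrying; only the parsing step is sketched here.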
2026-03-10T10:15:04.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.091+0000 7fa50e5c5700 1 Processor -- start
2026-03-10T10:15:04.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.091+0000 7fa50e5c5700 1 -- start start
2026-03-10T10:15:04.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.091+0000 7fa50e5c5700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa508102c90 0x7fa508197d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:04.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.091+0000 7fa50e5c5700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa508198290 con 0x7fa508102c90
2026-03-10T10:15:04.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.091+0000 7fa507fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa508102c90 0x7fa508197d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:04.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.091+0000 7fa507fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa508102c90 0x7fa508197d50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:37872/0 (socket says 192.168.123.105:37872)
2026-03-10T10:15:04.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.091+0000 7fa507fff700 1 -- 192.168.123.105:0/2120806027 learned_addr learned my addr 192.168.123.105:0/2120806027 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-10T10:15:04.093 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.092+0000 7fa507fff700 1 -- 192.168.123.105:0/2120806027 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa4f00097e0 con 0x7fa508102c90
2026-03-10T10:15:04.093 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.092+0000 7fa507fff700 1 --2- 192.168.123.105:0/2120806027 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa508102c90 0x7fa508197d50 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7fa4f0004750 tx=0x7fa4f0005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:04.093 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.092+0000 7fa5057fa700 1 -- 192.168.123.105:0/2120806027 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa4f001c070 con 0x7fa508102c90
2026-03-10T10:15:04.093 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.092+0000 7fa50e5c5700 1 -- 192.168.123.105:0/2120806027 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa508198490 con 0x7fa508102c90
2026-03-10T10:15:04.093 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.092+0000 7fa50e5c5700 1 -- 192.168.123.105:0/2120806027 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa508198930 con 0x7fa508102c90
2026-03-10T10:15:04.094 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.092+0000 7fa5057fa700 1 -- 192.168.123.105:0/2120806027 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa4f0021470 con 0x7fa508102c90
2026-03-10T10:15:04.094 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.092+0000 7fa5057fa700 1 -- 192.168.123.105:0/2120806027 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa4f000f460 con 0x7fa508102c90
2026-03-10T10:15:04.094 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.093+0000 7fa5057fa700 1 -- 192.168.123.105:0/2120806027 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7fa4f0005290 con 0x7fa508102c90
2026-03-10T10:15:04.095 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.094+0000 7fa5057fa700 1 --2- 192.168.123.105:0/2120806027 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa4f4038490 0x7fa4f403a950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:04.095 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.094+0000 7fa5057fa700 1 -- 192.168.123.105:0/2120806027 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fa4f004c1e0 con 0x7fa508102c90
2026-03-10T10:15:04.095 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.094+0000 7fa50e5c5700 1 -- 192.168.123.105:0/2120806027 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa4e8005320 con 0x7fa508102c90
2026-03-10T10:15:04.095 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.094+0000 7fa5077fe700 1 --2- 192.168.123.105:0/2120806027 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa4f4038490 0x7fa4f403a950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:04.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.095+0000 7fa5077fe700 1 --2- 192.168.123.105:0/2120806027 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa4f4038490 0x7fa4f403a950 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fa4f8006fd0 tx=0x7fa4f8006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:04.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.097+0000 7fa5057fa700 1 -- 192.168.123.105:0/2120806027 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fa4f0026070 con 0x7fa508102c90
2026-03-10T10:15:04.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.254+0000 7fa50e5c5700 1 -- 192.168.123.105:0/2120806027 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fa4e8005190 con 0x7fa508102c90
2026-03-10T10:15:04.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.255+0000 7fa5057fa700 1 -- 192.168.123.105:0/2120806027 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fa4f0021a80 con 0x7fa508102c90
2026-03-10T10:15:04.257 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:15:04.257 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T10:15:04.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.258+0000 7fa50e5c5700 1 -- 192.168.123.105:0/2120806027 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa4f4038490 msgr2=0x7fa4f403a950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:04.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.258+0000 7fa50e5c5700 1 --2- 192.168.123.105:0/2120806027 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa4f4038490 0x7fa4f403a950 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fa4f8006fd0 tx=0x7fa4f8006e40 comp rx=0 tx=0).stop
2026-03-10T10:15:04.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.258+0000 7fa50e5c5700 1 -- 192.168.123.105:0/2120806027 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa508102c90 msgr2=0x7fa508197d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:04.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.258+0000 7fa50e5c5700 1 --2- 192.168.123.105:0/2120806027 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa508102c90 0x7fa508197d50 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7fa4f0004750 tx=0x7fa4f0005dc0 comp rx=0 tx=0).stop
2026-03-10T10:15:04.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.259+0000 7fa50e5c5700 1 -- 192.168.123.105:0/2120806027 shutdown_connections
2026-03-10T10:15:04.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.259+0000 7fa50e5c5700 1 --2- 192.168.123.105:0/2120806027 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa4f4038490 0x7fa4f403a950 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:04.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.259+0000 7fa50e5c5700 1 --2- 192.168.123.105:0/2120806027 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa508102c90 0x7fa508197d50 unknown :-1 s=CLOSED pgs=124 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:04.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.259+0000 7fa50e5c5700 1 -- 192.168.123.105:0/2120806027 >> 192.168.123.105:0/2120806027 conn(0x7fa5080fe230 msgr2=0x7fa5080feef0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:04.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.259+0000 7fa50e5c5700 1 -- 192.168.123.105:0/2120806027 shutdown_connections
2026-03-10T10:15:04.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:04.259+0000 7fa50e5c5700 1 -- 192.168.123.105:0/2120806027 wait complete.
2026-03-10T10:15:04.261 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1
2026-03-10T10:15:04.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:04 vm02 ceph-mon[50200]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:15:04.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:04 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:04.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:04 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:04.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:04 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:04.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:04 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:04.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:04 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:04.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:04 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:04.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:04 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:04.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:04 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:04.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:04 vm02 ceph-mon[50200]: Deploying daemon prometheus.vm02 on vm02
2026-03-10T10:15:04.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:04 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/2120806027' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T10:15:05.333 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T10:15:05.333 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json
2026-03-10T10:15:05.468 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T10:15:05.505 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T10:15:05.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.764+0000 7f743764f700 1 -- 192.168.123.105:0/3797505048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7430101590 msgr2=0x7f7430103980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:05.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.764+0000 7f743764f700 1 --2- 192.168.123.105:0/3797505048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7430101590 0x7f7430103980 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f7420009b00 tx=0x7f7420009e10 comp rx=0 tx=0).stop
2026-03-10T10:15:05.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.765+0000 7f743764f700 1 -- 192.168.123.105:0/3797505048 shutdown_connections
2026-03-10T10:15:05.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.765+0000 7f743764f700 1 --2- 192.168.123.105:0/3797505048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7430101590 0x7f7430103980 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:05.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.765+0000 7f743764f700 1 -- 192.168.123.105:0/3797505048 >> 192.168.123.105:0/3797505048 conn(0x7f74300faf00 msgr2=0x7f74300fd340 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:05.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.765+0000 7f743764f700 1 -- 192.168.123.105:0/3797505048 shutdown_connections
2026-03-10T10:15:05.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.765+0000 7f743764f700 1 -- 192.168.123.105:0/3797505048 wait complete.
2026-03-10T10:15:05.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.766+0000 7f743764f700 1 Processor -- start
2026-03-10T10:15:05.767 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.766+0000 7f743764f700 1 -- start start
2026-03-10T10:15:05.767 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.766+0000 7f743764f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7430101590 0x7f7430197d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:05.767 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.766+0000 7f743764f700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7430198270 con 0x7f7430101590
2026-03-10T10:15:05.767 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.766+0000 7f74353eb700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7430101590 0x7f7430197d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:05.767 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.766+0000 7f74353eb700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7430101590 0x7f7430197d30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer
v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:37876/0 (socket says 192.168.123.105:37876) 2026-03-10T10:15:05.767 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.766+0000 7f74353eb700 1 -- 192.168.123.105:0/90340062 learned_addr learned my addr 192.168.123.105:0/90340062 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:05.767 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.766+0000 7f74353eb700 1 -- 192.168.123.105:0/90340062 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f74200097e0 con 0x7f7430101590 2026-03-10T10:15:05.768 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.767+0000 7f74353eb700 1 --2- 192.168.123.105:0/90340062 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7430101590 0x7f7430197d30 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f7420004f40 tx=0x7f7420005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.767+0000 7f74267fc700 1 -- 192.168.123.105:0/90340062 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f742001c070 con 0x7f7430101590 2026-03-10T10:15:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.767+0000 7f74267fc700 1 -- 192.168.123.105:0/90340062 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f74200053b0 con 0x7f7430101590 2026-03-10T10:15:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.767+0000 7f74267fc700 1 -- 192.168.123.105:0/90340062 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f742000f460 con 0x7f7430101590 2026-03-10T10:15:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.767+0000 7f743764f700 1 -- 192.168.123.105:0/90340062 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7430198470 con 0x7f7430101590 2026-03-10T10:15:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.767+0000 7f743764f700 1 -- 192.168.123.105:0/90340062 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7430198850 con 0x7f7430101590 2026-03-10T10:15:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.768+0000 7f74267fc700 1 -- 192.168.123.105:0/90340062 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7f7420021470 con 0x7f7430101590 2026-03-10T10:15:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.768+0000 7f743764f700 1 -- 192.168.123.105:0/90340062 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f743004fa90 con 0x7f7430101590 2026-03-10T10:15:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.768+0000 7f74267fc700 1 --2- 192.168.123.105:0/90340062 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f741c038440 0x7f741c03a900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.768+0000 7f74267fc700 1 -- 192.168.123.105:0/90340062 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f742004c360 con 0x7f7430101590 2026-03-10T10:15:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.768+0000 7f7434bea700 1 --2- 192.168.123.105:0/90340062 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f741c038440 0x7f741c03a900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:05.770 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.769+0000 7f7434bea700 1 --2- 192.168.123.105:0/90340062 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f741c038440 0x7f741c03a900 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f742c006fd0 tx=0x7f742c006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:05.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.770+0000 7f74267fc700 1 -- 192.168.123.105:0/90340062 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f742000f5e0 con 0x7f7430101590 2026-03-10T10:15:05.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.918+0000 7f743764f700 1 -- 192.168.123.105:0/90340062 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f743002d090 con 0x7f7430101590 2026-03-10T10:15:05.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.919+0000 7f74267fc700 1 -- 192.168.123.105:0/90340062 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f7420030300 con 0x7f7430101590 2026-03-10T10:15:05.921 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:05.921 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: 
":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:15:05.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.922+0000 7f743764f700 1 -- 192.168.123.105:0/90340062 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f741c038440 msgr2=0x7f741c03a900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:05.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.922+0000 7f743764f700 1 --2- 192.168.123.105:0/90340062 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f741c038440 0x7f741c03a900 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f742c006fd0 tx=0x7f742c006e40 comp rx=0 tx=0).stop 2026-03-10T10:15:05.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.922+0000 7f743764f700 1 -- 192.168.123.105:0/90340062 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7430101590 msgr2=0x7f7430197d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:05.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.922+0000 7f743764f700 1 --2- 192.168.123.105:0/90340062 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7430101590 0x7f7430197d30 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f7420004f40 tx=0x7f7420005e70 comp rx=0 tx=0).stop 2026-03-10T10:15:05.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.922+0000 7f743764f700 1 -- 192.168.123.105:0/90340062 shutdown_connections 2026-03-10T10:15:05.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.922+0000 7f743764f700 1 
--2- 192.168.123.105:0/90340062 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f741c038440 0x7f741c03a900 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:05.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.922+0000 7f743764f700 1 --2- 192.168.123.105:0/90340062 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7430101590 0x7f7430197d30 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:05.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.922+0000 7f743764f700 1 -- 192.168.123.105:0/90340062 >> 192.168.123.105:0/90340062 conn(0x7f74300faf00 msgr2=0x7f74300fbbc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:05.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.923+0000 7f743764f700 1 -- 192.168.123.105:0/90340062 shutdown_connections 2026-03-10T10:15:05.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:05.923+0000 7f743764f700 1 -- 192.168.123.105:0/90340062 wait complete. 2026-03-10T10:15:05.925 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:15:06.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:06 vm02 ceph-mon[50200]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:06.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:06 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/90340062' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:06.980 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T10:15:06.981 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:07.108 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:07.141 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:07.365 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.364+0000 7ff4829a7700 1 -- 192.168.123.105:0/2662197355 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff47c074dc0 msgr2=0x7ff47c073220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:07.366 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.364+0000 7ff4829a7700 1 --2- 192.168.123.105:0/2662197355 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff47c074dc0 0x7ff47c073220 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7ff470009b00 tx=0x7ff470009e10 comp rx=0 tx=0).stop 2026-03-10T10:15:07.366 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.365+0000 7ff4829a7700 1 -- 192.168.123.105:0/2662197355 shutdown_connections 2026-03-10T10:15:07.366 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.365+0000 7ff4829a7700 1 --2- 192.168.123.105:0/2662197355 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff47c074dc0 0x7ff47c073220 unknown :-1 s=CLOSED pgs=127 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:07.366 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.365+0000 7ff4829a7700 1 -- 192.168.123.105:0/2662197355 >> 192.168.123.105:0/2662197355 conn(0x7ff47c0fc000 msgr2=0x7ff47c0fe440 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:07.366 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.365+0000 7ff4829a7700 1 -- 192.168.123.105:0/2662197355 
shutdown_connections 2026-03-10T10:15:07.366 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.365+0000 7ff4829a7700 1 -- 192.168.123.105:0/2662197355 wait complete. 2026-03-10T10:15:07.366 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.366+0000 7ff4829a7700 1 Processor -- start 2026-03-10T10:15:07.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.366+0000 7ff4829a7700 1 -- start start 2026-03-10T10:15:07.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.366+0000 7ff4829a7700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff47c074dc0 0x7ff47c19c160 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:07.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.366+0000 7ff4829a7700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff47c19c6a0 con 0x7ff47c074dc0 2026-03-10T10:15:07.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.366+0000 7ff47bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff47c074dc0 0x7ff47c19c160 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:07.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.366+0000 7ff47bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff47c074dc0 0x7ff47c19c160 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:37890/0 (socket says 192.168.123.105:37890) 2026-03-10T10:15:07.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.366+0000 7ff47bfff700 1 -- 192.168.123.105:0/2565320510 learned_addr learned my addr 192.168.123.105:0/2565320510 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:07.367 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.366+0000 7ff47bfff700 1 -- 192.168.123.105:0/2565320510 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff4700097e0 con 0x7ff47c074dc0 2026-03-10T10:15:07.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.367+0000 7ff47bfff700 1 --2- 192.168.123.105:0/2565320510 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff47c074dc0 0x7ff47c19c160 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7ff470004750 tx=0x7ff470005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:07.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.367+0000 7ff4797fa700 1 -- 192.168.123.105:0/2565320510 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff47001c070 con 0x7ff47c074dc0 2026-03-10T10:15:07.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.367+0000 7ff4797fa700 1 -- 192.168.123.105:0/2565320510 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff470021470 con 0x7ff47c074dc0 2026-03-10T10:15:07.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.367+0000 7ff4829a7700 1 -- 192.168.123.105:0/2565320510 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff47c19c8a0 con 0x7ff47c074dc0 2026-03-10T10:15:07.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.367+0000 7ff4797fa700 1 -- 192.168.123.105:0/2565320510 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff47000f460 con 0x7ff47c074dc0 2026-03-10T10:15:07.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.367+0000 7ff4829a7700 1 -- 192.168.123.105:0/2565320510 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff47c19cd40 con 
0x7ff47c074dc0 2026-03-10T10:15:07.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.368+0000 7ff4797fa700 1 -- 192.168.123.105:0/2565320510 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7ff47000f5c0 con 0x7ff47c074dc0 2026-03-10T10:15:07.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.368+0000 7ff4829a7700 1 -- 192.168.123.105:0/2565320510 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff47c195e00 con 0x7ff47c074dc0 2026-03-10T10:15:07.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.368+0000 7ff4797fa700 1 --2- 192.168.123.105:0/2565320510 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff464038440 0x7ff46403a900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:07.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.368+0000 7ff4797fa700 1 -- 192.168.123.105:0/2565320510 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7ff47004c6a0 con 0x7ff47c074dc0 2026-03-10T10:15:07.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.368+0000 7ff47b7fe700 1 --2- 192.168.123.105:0/2565320510 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff464038440 0x7ff46403a900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:07.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.369+0000 7ff47b7fe700 1 --2- 192.168.123.105:0/2565320510 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff464038440 0x7ff46403a900 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7ff46c006fd0 tx=0x7ff46c006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:07.371 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.370+0000 7ff4797fa700 1 -- 192.168.123.105:0/2565320510 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7ff470029950 con 0x7ff47c074dc0 2026-03-10T10:15:07.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.516+0000 7ff4829a7700 1 -- 192.168.123.105:0/2565320510 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7ff47c0623c0 con 0x7ff47c074dc0 2026-03-10T10:15:07.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.518+0000 7ff4797fa700 1 -- 192.168.123.105:0/2565320510 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7ff470030090 con 0x7ff47c074dc0 2026-03-10T10:15:07.519 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:07.519 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:15:07.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.521+0000 7ff4829a7700 1 -- 192.168.123.105:0/2565320510 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff464038440 msgr2=0x7ff46403a900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:07.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.521+0000 7ff4829a7700 1 --2- 192.168.123.105:0/2565320510 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff464038440 0x7ff46403a900 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7ff46c006fd0 tx=0x7ff46c006e40 comp rx=0 tx=0).stop 2026-03-10T10:15:07.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.521+0000 7ff4829a7700 1 -- 192.168.123.105:0/2565320510 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff47c074dc0 msgr2=0x7ff47c19c160 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:07.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.521+0000 7ff4829a7700 1 --2- 192.168.123.105:0/2565320510 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff47c074dc0 0x7ff47c19c160 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7ff470004750 tx=0x7ff470005dc0 comp rx=0 tx=0).stop 2026-03-10T10:15:07.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.521+0000 7ff4829a7700 1 -- 192.168.123.105:0/2565320510 shutdown_connections 2026-03-10T10:15:07.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.521+0000 7ff4829a7700 1 --2- 192.168.123.105:0/2565320510 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff464038440 0x7ff46403a900 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:07.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.521+0000 7ff4829a7700 1 --2- 192.168.123.105:0/2565320510 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff47c074dc0 0x7ff47c19c160 unknown :-1 s=CLOSED pgs=128 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:07.522 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.521+0000 7ff4829a7700 1 -- 192.168.123.105:0/2565320510 >> 192.168.123.105:0/2565320510 conn(0x7ff47c0fc000 msgr2=0x7ff47c0fccc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:07.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.521+0000 7ff4829a7700 1 -- 192.168.123.105:0/2565320510 shutdown_connections 2026-03-10T10:15:07.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:07.521+0000 7ff4829a7700 1 -- 192.168.123.105:0/2565320510 wait complete. 2026-03-10T10:15:07.523 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:15:08.581 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T10:15:08.581 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:08.718 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:08.746 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:08 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:08.746 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:08 vm02 ceph-mon[50200]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:08.746 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:08 vm02 ceph-mon[50200]: from='client.? 
192.168.123.105:0/2565320510' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:08.755 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:08.988 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.986+0000 7f09d1de6700 1 -- 192.168.123.105:0/3929163714 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f09cc101590 msgr2=0x7f09cc103980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:08.988 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.986+0000 7f09d1de6700 1 --2- 192.168.123.105:0/3929163714 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f09cc101590 0x7f09cc103980 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f09b4009b00 tx=0x7f09b4009e10 comp rx=0 tx=0).stop 2026-03-10T10:15:08.988 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.987+0000 7f09d1de6700 1 -- 192.168.123.105:0/3929163714 shutdown_connections 2026-03-10T10:15:08.988 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.987+0000 7f09d1de6700 1 --2- 192.168.123.105:0/3929163714 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f09cc101590 0x7f09cc103980 unknown :-1 s=CLOSED pgs=129 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:08.988 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.987+0000 7f09d1de6700 1 -- 192.168.123.105:0/3929163714 >> 192.168.123.105:0/3929163714 conn(0x7f09cc0faf00 msgr2=0x7f09cc0fd340 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:08.988 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.987+0000 7f09d1de6700 1 -- 192.168.123.105:0/3929163714 shutdown_connections 2026-03-10T10:15:08.988 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.987+0000 7f09d1de6700 1 -- 192.168.123.105:0/3929163714 wait complete. 
2026-03-10T10:15:08.988 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.987+0000 7f09d1de6700 1 Processor -- start 2026-03-10T10:15:08.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.988+0000 7f09d1de6700 1 -- start start 2026-03-10T10:15:08.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.988+0000 7f09d1de6700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f09cc101590 0x7f09cc197d40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:08.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.988+0000 7f09d1de6700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f09cc198280 con 0x7f09cc101590 2026-03-10T10:15:08.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.988+0000 7f09cb7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f09cc101590 0x7f09cc197d40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:08.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.988+0000 7f09cb7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f09cc101590 0x7f09cc197d40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:44178/0 (socket says 192.168.123.105:44178) 2026-03-10T10:15:08.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.988+0000 7f09cb7fe700 1 -- 192.168.123.105:0/3542525220 learned_addr learned my addr 192.168.123.105:0/3542525220 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:08.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.988+0000 7f09cb7fe700 1 -- 192.168.123.105:0/3542525220 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f09b40097e0 con 0x7f09cc101590 2026-03-10T10:15:08.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.988+0000 7f09cb7fe700 1 --2- 192.168.123.105:0/3542525220 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f09cc101590 0x7f09cc197d40 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f09b4004f40 tx=0x7f09b4005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:08.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.989+0000 7f09c8ff9700 1 -- 192.168.123.105:0/3542525220 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f09b401c070 con 0x7f09cc101590 2026-03-10T10:15:08.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.989+0000 7f09d1de6700 1 -- 192.168.123.105:0/3542525220 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f09cc198480 con 0x7f09cc101590 2026-03-10T10:15:08.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.989+0000 7f09d1de6700 1 -- 192.168.123.105:0/3542525220 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f09cc198920 con 0x7f09cc101590 2026-03-10T10:15:08.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.989+0000 7f09c8ff9700 1 -- 192.168.123.105:0/3542525220 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f09b40053b0 con 0x7f09cc101590 2026-03-10T10:15:08.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.989+0000 7f09c8ff9700 1 -- 192.168.123.105:0/3542525220 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f09b400f460 con 0x7f09cc101590 2026-03-10T10:15:08.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.989+0000 7f09c8ff9700 1 -- 192.168.123.105:0/3542525220 <== mon.0 
v2:192.168.123.102:3300/0 4 ==== mgrmap(e 13) v1 ==== 45337+0+0 (secure 0 0 0) 0x7f09b400f5e0 con 0x7f09cc101590 2026-03-10T10:15:08.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.990+0000 7f09c8ff9700 1 --2- 192.168.123.105:0/3542525220 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f09b8038470 0x7f09b803a930 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:08.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.990+0000 7f09c8ff9700 1 -- 192.168.123.105:0/3542525220 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f09b404d4a0 con 0x7f09cc101590 2026-03-10T10:15:08.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.990+0000 7f09caffd700 1 --2- 192.168.123.105:0/3542525220 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f09b8038470 0x7f09b803a930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:08.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.990+0000 7f09d1de6700 1 -- 192.168.123.105:0/3542525220 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f09cc191bf0 con 0x7f09cc101590 2026-03-10T10:15:08.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.991+0000 7f09caffd700 1 --2- 192.168.123.105:0/3542525220 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f09b8038470 0x7f09b803a930 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f09bc006fd0 tx=0x7f09bc006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:08.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:08.994+0000 7f09c8ff9700 1 -- 192.168.123.105:0/3542525220 <== mon.0 v2:192.168.123.102:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f09b4026070 con 0x7f09cc101590 2026-03-10T10:15:09.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:09.133+0000 7f09d1de6700 1 -- 192.168.123.105:0/3542525220 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f09cc0623c0 con 0x7f09cc101590 2026-03-10T10:15:09.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:09.134+0000 7f09c8ff9700 1 -- 192.168.123.105:0/3542525220 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f09b4029360 con 0x7f09cc101590 2026-03-10T10:15:09.136 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:09.136 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:15:09.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:09.137+0000 7f09d1de6700 1 -- 192.168.123.105:0/3542525220 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f09b8038470 msgr2=0x7f09b803a930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:09.138 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:09.137+0000 7f09d1de6700 1 --2- 192.168.123.105:0/3542525220 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f09b8038470 0x7f09b803a930 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f09bc006fd0 tx=0x7f09bc006e40 comp rx=0 tx=0).stop 2026-03-10T10:15:09.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:09.137+0000 7f09d1de6700 1 -- 192.168.123.105:0/3542525220 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f09cc101590 msgr2=0x7f09cc197d40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:09.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:09.137+0000 7f09d1de6700 1 --2- 192.168.123.105:0/3542525220 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f09cc101590 0x7f09cc197d40 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f09b4004f40 tx=0x7f09b4005e70 comp rx=0 tx=0).stop 2026-03-10T10:15:09.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:09.138+0000 7f09d1de6700 1 -- 192.168.123.105:0/3542525220 shutdown_connections 2026-03-10T10:15:09.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:09.138+0000 7f09d1de6700 1 --2- 192.168.123.105:0/3542525220 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f09b8038470 0x7f09b803a930 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:09.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:09.138+0000 7f09d1de6700 1 --2- 192.168.123.105:0/3542525220 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f09cc101590 0x7f09cc197d40 unknown :-1 s=CLOSED pgs=130 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:09.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:09.138+0000 7f09d1de6700 1 -- 192.168.123.105:0/3542525220 >> 192.168.123.105:0/3542525220 conn(0x7f09cc0faf00 msgr2=0x7f09cc0fd340 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:15:09.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:09.138+0000 7f09d1de6700 1 -- 192.168.123.105:0/3542525220 shutdown_connections 2026-03-10T10:15:09.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:09.138+0000 7f09d1de6700 1 -- 192.168.123.105:0/3542525220 wait complete. 2026-03-10T10:15:09.140 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:15:09.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:09 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/3542525220' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:09.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:09 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:09.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:09 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:10.204 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T10:15:10.204 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:10.330 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:10.362 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:10.626 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.624+0000 7ff234d63700 1 -- 192.168.123.105:0/3382515840 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff230102ca0 msgr2=0x7ff2301030c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:10.626 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.624+0000 7ff234d63700 1 --2- 192.168.123.105:0/3382515840 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff230102ca0 0x7ff2301030c0 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7ff218009b00 tx=0x7ff218009e10 comp rx=0 tx=0).stop 2026-03-10T10:15:10.626 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.625+0000 7ff234d63700 1 -- 192.168.123.105:0/3382515840 shutdown_connections 2026-03-10T10:15:10.626 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.625+0000 7ff234d63700 1 --2- 192.168.123.105:0/3382515840 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff230102ca0 0x7ff2301030c0 unknown :-1 s=CLOSED pgs=133 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:10.626 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.625+0000 7ff234d63700 1 -- 192.168.123.105:0/3382515840 >> 192.168.123.105:0/3382515840 conn(0x7ff2300fe220 msgr2=0x7ff230100680 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:10.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.625+0000 7ff234d63700 1 -- 192.168.123.105:0/3382515840 
shutdown_connections 2026-03-10T10:15:10.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.625+0000 7ff234d63700 1 -- 192.168.123.105:0/3382515840 wait complete. 2026-03-10T10:15:10.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.626+0000 7ff234d63700 1 Processor -- start 2026-03-10T10:15:10.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.626+0000 7ff234d63700 1 -- start start 2026-03-10T10:15:10.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.626+0000 7ff234d63700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff230102ca0 0x7ff230197d60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:10.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.626+0000 7ff234d63700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff2301982a0 con 0x7ff230102ca0 2026-03-10T10:15:10.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.626+0000 7ff22e59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff230102ca0 0x7ff230197d60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:10.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.626+0000 7ff22e59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff230102ca0 0x7ff230197d60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:44198/0 (socket says 192.168.123.105:44198) 2026-03-10T10:15:10.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.626+0000 7ff22e59c700 1 -- 192.168.123.105:0/1544606366 learned_addr learned my addr 192.168.123.105:0/1544606366 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:10.628 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.627+0000 7ff22e59c700 1 -- 192.168.123.105:0/1544606366 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff2180097e0 con 0x7ff230102ca0 2026-03-10T10:15:10.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.627+0000 7ff22e59c700 1 --2- 192.168.123.105:0/1544606366 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff230102ca0 0x7ff230197d60 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7ff218004750 tx=0x7ff218005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:10.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.627+0000 7ff2277fe700 1 -- 192.168.123.105:0/1544606366 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff21801c070 con 0x7ff230102ca0 2026-03-10T10:15:10.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.627+0000 7ff2277fe700 1 -- 192.168.123.105:0/1544606366 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff218021470 con 0x7ff230102ca0 2026-03-10T10:15:10.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.627+0000 7ff2277fe700 1 -- 192.168.123.105:0/1544606366 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff21800f460 con 0x7ff230102ca0 2026-03-10T10:15:10.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.627+0000 7ff234d63700 1 -- 192.168.123.105:0/1544606366 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff2301984a0 con 0x7ff230102ca0 2026-03-10T10:15:10.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.627+0000 7ff234d63700 1 -- 192.168.123.105:0/1544606366 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff230198940 con 
0x7ff230102ca0 2026-03-10T10:15:10.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.628+0000 7ff2277fe700 1 -- 192.168.123.105:0/1544606366 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 14) v1 ==== 45351+0+0 (secure 0 0 0) 0x7ff218005290 con 0x7ff230102ca0 2026-03-10T10:15:10.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.629+0000 7ff234d63700 1 -- 192.168.123.105:0/1544606366 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff230191a30 con 0x7ff230102ca0 2026-03-10T10:15:10.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.629+0000 7ff2277fe700 1 --2- 192.168.123.105:0/1544606366 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff21c038490 0x7ff21c03a950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:10.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.629+0000 7ff2277fe700 1 -- 192.168.123.105:0/1544606366 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7ff21804c3b0 con 0x7ff230102ca0 2026-03-10T10:15:10.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.629+0000 7ff22dd9b700 1 -- 192.168.123.105:0/1544606366 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff21c038490 msgr2=0x7ff21c03a950 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:15:10.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.629+0000 7ff22dd9b700 1 --2- 192.168.123.105:0/1544606366 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff21c038490 0x7ff21c03a950 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T10:15:10.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.631+0000 7ff2277fe700 1 -- 192.168.123.105:0/1544606366 <== 
mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7ff2180215e0 con 0x7ff230102ca0 2026-03-10T10:15:10.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.775+0000 7ff234d63700 1 -- 192.168.123.105:0/1544606366 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7ff23002cc70 con 0x7ff230102ca0 2026-03-10T10:15:10.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.777+0000 7ff2277fe700 1 -- 192.168.123.105:0/1544606366 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7ff218026030 con 0x7ff230102ca0 2026-03-10T10:15:10.779 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:10.779 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:15:10.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:10 vm02 ceph-mon[50200]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:10.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:10 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' 
entity='mgr.vm02.zmavgl' 2026-03-10T10:15:10.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:10 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T10:15:10.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.780+0000 7ff234d63700 1 -- 192.168.123.105:0/1544606366 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff21c038490 msgr2=0x7ff21c03a950 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:15:10.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.780+0000 7ff234d63700 1 --2- 192.168.123.105:0/1544606366 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff21c038490 0x7ff21c03a950 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:10.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.780+0000 7ff234d63700 1 -- 192.168.123.105:0/1544606366 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff230102ca0 msgr2=0x7ff230197d60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:10.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.780+0000 7ff234d63700 1 --2- 192.168.123.105:0/1544606366 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff230102ca0 0x7ff230197d60 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7ff218004750 tx=0x7ff218005dc0 comp rx=0 tx=0).stop 2026-03-10T10:15:10.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.780+0000 7ff234d63700 1 -- 192.168.123.105:0/1544606366 shutdown_connections 2026-03-10T10:15:10.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.780+0000 7ff234d63700 1 --2- 192.168.123.105:0/1544606366 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff21c038490 0x7ff21c03a950 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:15:10.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.780+0000 7ff234d63700 1 --2- 192.168.123.105:0/1544606366 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff230102ca0 0x7ff230197d60 unknown :-1 s=CLOSED pgs=134 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:10.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.780+0000 7ff234d63700 1 -- 192.168.123.105:0/1544606366 >> 192.168.123.105:0/1544606366 conn(0x7ff2300fe220 msgr2=0x7ff2300fef00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:10.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.780+0000 7ff234d63700 1 -- 192.168.123.105:0/1544606366 shutdown_connections 2026-03-10T10:15:10.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:10.780+0000 7ff234d63700 1 -- 192.168.123.105:0/1544606366 wait complete. 2026-03-10T10:15:10.782 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:15:11.709 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:11 vm02 ceph-mon[50200]: from='mgr.14164 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T10:15:11.709 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:11 vm02 ceph-mon[50200]: mgrmap e14: vm02.zmavgl(active, since 33s) 2026-03-10T10:15:11.710 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:11 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/1544606366' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:11.847 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T10:15:11.847 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:11.985 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:12.089 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:12.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.333+0000 7f80a75f8700 1 -- 192.168.123.105:0/2870836024 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f80a0102cb0 msgr2=0x7f80a01030d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:12.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.333+0000 7f80a75f8700 1 --2- 192.168.123.105:0/2870836024 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f80a0102cb0 0x7f80a01030d0 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f8090009b00 tx=0x7f8090009e10 comp rx=0 tx=0).stop 2026-03-10T10:15:12.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.333+0000 7f80a75f8700 1 -- 192.168.123.105:0/2870836024 shutdown_connections 2026-03-10T10:15:12.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.333+0000 7f80a75f8700 1 --2- 192.168.123.105:0/2870836024 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f80a0102cb0 0x7f80a01030d0 unknown :-1 s=CLOSED pgs=135 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:12.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.333+0000 7f80a75f8700 1 -- 192.168.123.105:0/2870836024 >> 192.168.123.105:0/2870836024 conn(0x7f80a00fe250 msgr2=0x7f80a0100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:12.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.334+0000 7f80a75f8700 1 -- 192.168.123.105:0/2870836024 
shutdown_connections 2026-03-10T10:15:12.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.334+0000 7f80a75f8700 1 -- 192.168.123.105:0/2870836024 wait complete. 2026-03-10T10:15:12.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.334+0000 7f80a75f8700 1 Processor -- start 2026-03-10T10:15:12.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.334+0000 7f80a75f8700 1 -- start start 2026-03-10T10:15:12.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.334+0000 7f80a75f8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f80a0102cb0 0x7f80a0197e20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:12.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.334+0000 7f80a75f8700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f80a0198360 con 0x7f80a0102cb0 2026-03-10T10:15:12.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.335+0000 7f80a5394700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f80a0102cb0 0x7f80a0197e20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:12.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.335+0000 7f80a5394700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f80a0102cb0 0x7f80a0197e20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:44214/0 (socket says 192.168.123.105:44214) 2026-03-10T10:15:12.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.335+0000 7f80a5394700 1 -- 192.168.123.105:0/2802796088 learned_addr learned my addr 192.168.123.105:0/2802796088 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:12.336 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.335+0000 7f80a5394700 1 -- 192.168.123.105:0/2802796088 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f80900097e0 con 0x7f80a0102cb0 2026-03-10T10:15:12.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.335+0000 7f80a5394700 1 --2- 192.168.123.105:0/2802796088 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f80a0102cb0 0x7f80a0197e20 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7f8090004d40 tx=0x7f8090004e20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:12.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.335+0000 7f80967fc700 1 -- 192.168.123.105:0/2802796088 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f809001c070 con 0x7f80a0102cb0 2026-03-10T10:15:12.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.335+0000 7f80967fc700 1 -- 192.168.123.105:0/2802796088 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f80900056f0 con 0x7f80a0102cb0 2026-03-10T10:15:12.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.335+0000 7f80967fc700 1 -- 192.168.123.105:0/2802796088 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f8090017440 con 0x7f80a0102cb0 2026-03-10T10:15:12.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.335+0000 7f80a75f8700 1 -- 192.168.123.105:0/2802796088 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f80a0198560 con 0x7f80a0102cb0 2026-03-10T10:15:12.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.335+0000 7f80a75f8700 1 -- 192.168.123.105:0/2802796088 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f80a0198a00 con 
0x7f80a0102cb0 2026-03-10T10:15:12.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.336+0000 7f80967fc700 1 -- 192.168.123.105:0/2802796088 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 14) v1 ==== 45351+0+0 (secure 0 0 0) 0x7f809000f460 con 0x7f80a0102cb0 2026-03-10T10:15:12.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.336+0000 7f80a75f8700 1 -- 192.168.123.105:0/2802796088 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f80a0191a40 con 0x7f80a0102cb0 2026-03-10T10:15:12.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.336+0000 7f80967fc700 1 --2- 192.168.123.105:0/2802796088 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f808c038490 0x7f808c03a950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:12.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.337+0000 7f80967fc700 1 -- 192.168.123.105:0/2802796088 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f809004bfd0 con 0x7f80a0102cb0 2026-03-10T10:15:12.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.337+0000 7f80a4b93700 1 -- 192.168.123.105:0/2802796088 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f808c038490 msgr2=0x7f808c03a950 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:15:12.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.337+0000 7f80a4b93700 1 --2- 192.168.123.105:0/2802796088 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f808c038490 0x7f808c03a950 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T10:15:12.341 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.340+0000 7f80967fc700 1 -- 192.168.123.105:0/2802796088 <== 
mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f8090005210 con 0x7f80a0102cb0 2026-03-10T10:15:12.488 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.486+0000 7f80a75f8700 1 -- 192.168.123.105:0/2802796088 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f80a00623c0 con 0x7f80a0102cb0 2026-03-10T10:15:12.489 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.488+0000 7f80967fc700 1 -- 192.168.123.105:0/2802796088 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f8090025020 con 0x7f80a0102cb0 2026-03-10T10:15:12.489 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:12.489 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:15:12.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.491+0000 7f80a75f8700 1 -- 192.168.123.105:0/2802796088 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f808c038490 msgr2=0x7f808c03a950 unknown :-1 s=STATE_CONNECTING l=1).mark_down 
2026-03-10T10:15:12.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.491+0000 7f80a75f8700 1 --2- 192.168.123.105:0/2802796088 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f808c038490 0x7f808c03a950 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:12.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.491+0000 7f80a75f8700 1 -- 192.168.123.105:0/2802796088 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f80a0102cb0 msgr2=0x7f80a0197e20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:12.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.491+0000 7f80a75f8700 1 --2- 192.168.123.105:0/2802796088 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f80a0102cb0 0x7f80a0197e20 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7f8090004d40 tx=0x7f8090004e20 comp rx=0 tx=0).stop 2026-03-10T10:15:12.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.491+0000 7f80a75f8700 1 -- 192.168.123.105:0/2802796088 shutdown_connections 2026-03-10T10:15:12.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.491+0000 7f80a75f8700 1 --2- 192.168.123.105:0/2802796088 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f808c038490 0x7f808c03a950 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:12.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.491+0000 7f80a75f8700 1 --2- 192.168.123.105:0/2802796088 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f80a0102cb0 0x7f80a0197e20 unknown :-1 s=CLOSED pgs=136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:12.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.491+0000 7f80a75f8700 1 -- 192.168.123.105:0/2802796088 >> 192.168.123.105:0/2802796088 conn(0x7f80a00fe250 msgr2=0x7f80a00fef10 unknown :-1 
s=STATE_NONE l=0).mark_down 2026-03-10T10:15:12.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.491+0000 7f80a75f8700 1 -- 192.168.123.105:0/2802796088 shutdown_connections 2026-03-10T10:15:12.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:12.491+0000 7f80a75f8700 1 -- 192.168.123.105:0/2802796088 wait complete. 2026-03-10T10:15:12.493 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:15:12.706 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:12 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/2802796088' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:13.553 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T10:15:13.553 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:13.689 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:13.726 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.967+0000 7ffbecae9700 1 -- 192.168.123.105:0/930696985 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffbe8100a90 msgr2=0x7ffbe8100eb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.967+0000 7ffbecae9700 1 --2- 192.168.123.105:0/930696985 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffbe8100a90 0x7ffbe8100eb0 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7ffbd0009b00 tx=0x7ffbd0009e10 comp rx=0 tx=0).stop 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.967+0000 7ffbecae9700 1 -- 192.168.123.105:0/930696985 
shutdown_connections 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.967+0000 7ffbecae9700 1 --2- 192.168.123.105:0/930696985 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffbe8100a90 0x7ffbe8100eb0 unknown :-1 s=CLOSED pgs=137 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.967+0000 7ffbecae9700 1 -- 192.168.123.105:0/930696985 >> 192.168.123.105:0/930696985 conn(0x7ffbe80fc030 msgr2=0x7ffbe80fe470 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.968+0000 7ffbecae9700 1 -- 192.168.123.105:0/930696985 shutdown_connections 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.968+0000 7ffbecae9700 1 -- 192.168.123.105:0/930696985 wait complete. 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.968+0000 7ffbecae9700 1 Processor -- start 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.968+0000 7ffbecae9700 1 -- start start 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.968+0000 7ffbecae9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffbe8100a90 0x7ffbe81a4980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.968+0000 7ffbecae9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ffbe81a4ec0 con 0x7ffbe8100a90 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.968+0000 7ffbe659c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffbe8100a90 0x7ffbe81a4980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:13.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.969+0000 7ffbe659c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffbe8100a90 0x7ffbe81a4980 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:44232/0 (socket says 192.168.123.105:44232) 2026-03-10T10:15:13.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.969+0000 7ffbe659c700 1 -- 192.168.123.105:0/512416927 learned_addr learned my addr 192.168.123.105:0/512416927 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:13.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.969+0000 7ffbe659c700 1 -- 192.168.123.105:0/512416927 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ffbd00097e0 con 0x7ffbe8100a90 2026-03-10T10:15:13.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.969+0000 7ffbe659c700 1 --2- 192.168.123.105:0/512416927 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffbe8100a90 0x7ffbe81a4980 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7ffbd0000c00 tx=0x7ffbd0004740 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:13.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.969+0000 7ffbdf7fe700 1 -- 192.168.123.105:0/512416927 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ffbd001c070 con 0x7ffbe8100a90 2026-03-10T10:15:13.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.969+0000 7ffbdf7fe700 1 -- 192.168.123.105:0/512416927 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ffbd00053b0 con 0x7ffbe8100a90 2026-03-10T10:15:13.970 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.969+0000 7ffbecae9700 1 -- 192.168.123.105:0/512416927 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ffbe81a50c0 con 0x7ffbe8100a90 2026-03-10T10:15:13.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.969+0000 7ffbecae9700 1 -- 192.168.123.105:0/512416927 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ffbe81a5560 con 0x7ffbe8100a90 2026-03-10T10:15:13.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.970+0000 7ffbdf7fe700 1 -- 192.168.123.105:0/512416927 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ffbd000f460 con 0x7ffbe8100a90 2026-03-10T10:15:13.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.970+0000 7ffbecae9700 1 -- 192.168.123.105:0/512416927 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ffbe806df60 con 0x7ffbe8100a90 2026-03-10T10:15:13.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.970+0000 7ffbdf7fe700 1 -- 192.168.123.105:0/512416927 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 14) v1 ==== 45351+0+0 (secure 0 0 0) 0x7ffbd000f660 con 0x7ffbe8100a90 2026-03-10T10:15:13.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.970+0000 7ffbdf7fe700 1 --2- 192.168.123.105:0/512416927 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ffbd40384d0 0x7ffbd403a990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:13.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.971+0000 7ffbdf7fe700 1 -- 192.168.123.105:0/512416927 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7ffbd004d640 con 0x7ffbe8100a90 2026-03-10T10:15:13.972 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.971+0000 7ffbe5d9b700 1 -- 192.168.123.105:0/512416927 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ffbd40384d0 msgr2=0x7ffbd403a990 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.102:6800/2 2026-03-10T10:15:13.972 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.971+0000 7ffbe5d9b700 1 --2- 192.168.123.105:0/512416927 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ffbd40384d0 0x7ffbd403a990 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T10:15:13.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:13.973+0000 7ffbdf7fe700 1 -- 192.168.123.105:0/512416927 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7ffbd0029bb0 con 0x7ffbe8100a90 2026-03-10T10:15:14.115 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:14.114+0000 7ffbecae9700 1 -- 192.168.123.105:0/512416927 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7ffbe802cc70 con 0x7ffbe8100a90 2026-03-10T10:15:14.116 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:14.115+0000 7ffbdf7fe700 1 -- 192.168.123.105:0/512416927 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7ffbd0026030 con 0x7ffbe8100a90 2026-03-10T10:15:14.116 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:14.116 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: 
":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:15:14.118 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:14.117+0000 7ffbecae9700 1 -- 192.168.123.105:0/512416927 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ffbd40384d0 msgr2=0x7ffbd403a990 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:15:14.118 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:14.117+0000 7ffbecae9700 1 --2- 192.168.123.105:0/512416927 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ffbd40384d0 0x7ffbd403a990 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.118 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:14.117+0000 7ffbecae9700 1 -- 192.168.123.105:0/512416927 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffbe8100a90 msgr2=0x7ffbe81a4980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:14.118 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:14.117+0000 7ffbecae9700 1 --2- 192.168.123.105:0/512416927 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffbe8100a90 0x7ffbe81a4980 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7ffbd0000c00 tx=0x7ffbd0004740 comp rx=0 tx=0).stop 2026-03-10T10:15:14.118 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:14.117+0000 7ffbecae9700 1 -- 192.168.123.105:0/512416927 shutdown_connections 2026-03-10T10:15:14.118 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:14.117+0000 7ffbecae9700 1 --2- 192.168.123.105:0/512416927 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ffbd40384d0 0x7ffbd403a990 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.118 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:14.117+0000 7ffbecae9700 1 --2- 192.168.123.105:0/512416927 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffbe8100a90 0x7ffbe81a4980 unknown :-1 s=CLOSED pgs=138 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.118 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:14.117+0000 7ffbecae9700 1 -- 192.168.123.105:0/512416927 >> 192.168.123.105:0/512416927 conn(0x7ffbe80fc030 msgr2=0x7ffbe80fccf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:14.118 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:14.118+0000 7ffbecae9700 1 -- 192.168.123.105:0/512416927 shutdown_connections 2026-03-10T10:15:14.119 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:14.118+0000 7ffbecae9700 1 -- 192.168.123.105:0/512416927 wait complete. 2026-03-10T10:15:14.119 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:15:14.396 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:14 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/512416927' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:15.190 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T10:15:15.190 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: Active manager daemon vm02.zmavgl restarted 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: Activating manager daemon vm02.zmavgl 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: osdmap e5: 0 total, 0 up, 0 in 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: mgrmap e15: vm02.zmavgl(active, starting, since 0.00474651s) 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": "vm02.zmavgl", "id": "vm02.zmavgl"}]: dispatch 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T10:15:15.280 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: Manager daemon vm02.zmavgl is now available 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:15:15.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/trash_purge_schedule"}]: dispatch 2026-03-10T10:15:15.351 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:15.398 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.683+0000 7fa65be69700 1 -- 192.168.123.105:0/2001437718 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa654072120 msgr2=0x7fa654072540 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.683+0000 7fa65be69700 1 --2- 192.168.123.105:0/2001437718 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa654072120 0x7fa654072540 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7fa650009b00 tx=0x7fa650009e10 comp rx=0 tx=0).stop 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.683+0000 7fa65be69700 1 -- 192.168.123.105:0/2001437718 shutdown_connections 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.683+0000 7fa65be69700 1 --2- 192.168.123.105:0/2001437718 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa654072120 0x7fa654072540 unknown :-1 s=CLOSED pgs=146 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.683+0000 7fa65be69700 1 -- 192.168.123.105:0/2001437718 >> 192.168.123.105:0/2001437718 conn(0x7fa65406d680 msgr2=0x7fa65406fae0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.683+0000 7fa65be69700 1 -- 192.168.123.105:0/2001437718 shutdown_connections 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.683+0000 7fa65be69700 1 -- 192.168.123.105:0/2001437718 wait complete. 
2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.684+0000 7fa65be69700 1 Processor -- start 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.684+0000 7fa65be69700 1 -- start start 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.684+0000 7fa65be69700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa65419c9e0 0x7fa65419ee10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.684+0000 7fa65be69700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa650012070 con 0x7fa65419c9e0 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.684+0000 7fa659c05700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa65419c9e0 0x7fa65419ee10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.684+0000 7fa659c05700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa65419c9e0 0x7fa65419ee10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:44262/0 (socket says 192.168.123.105:44262) 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.684+0000 7fa659c05700 1 -- 192.168.123.105:0/443732037 learned_addr learned my addr 192.168.123.105:0/443732037 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.684+0000 7fa659c05700 1 -- 192.168.123.105:0/443732037 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa6500097e0 con 0x7fa65419c9e0 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.684+0000 7fa659c05700 1 --2- 192.168.123.105:0/443732037 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa65419c9e0 0x7fa65419ee10 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7fa650006010 tx=0x7fa65000bba0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.685+0000 7fa64affd700 1 -- 192.168.123.105:0/443732037 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa65001c070 con 0x7fa65419c9e0 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.685+0000 7fa64affd700 1 -- 192.168.123.105:0/443732037 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa650003d70 con 0x7fa65419c9e0 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.685+0000 7fa64affd700 1 -- 192.168.123.105:0/443732037 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa650017440 con 0x7fa65419c9e0 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.685+0000 7fa65be69700 1 -- 192.168.123.105:0/443732037 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa654117ac0 con 0x7fa65419c9e0 2026-03-10T10:15:15.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.685+0000 7fa65be69700 1 -- 192.168.123.105:0/443732037 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa65419f600 con 0x7fa65419c9e0 2026-03-10T10:15:15.688 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.686+0000 7fa65be69700 1 -- 192.168.123.105:0/443732037 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa654110c60 con 0x7fa65419c9e0 2026-03-10T10:15:15.688 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.687+0000 7fa64affd700 1 -- 192.168.123.105:0/443732037 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 15) v1 ==== 45072+0+0 (secure 0 0 0) 0x7fa650003890 con 0x7fa65419c9e0 2026-03-10T10:15:15.688 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.687+0000 7fa64affd700 1 -- 192.168.123.105:0/443732037 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7fa650029d50 con 0x7fa65419c9e0 2026-03-10T10:15:15.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.691+0000 7fa64affd700 1 -- 192.168.123.105:0/443732037 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fa650017c60 con 0x7fa65419c9e0 2026-03-10T10:15:15.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.715+0000 7fa64affd700 1 -- 192.168.123.105:0/443732037 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mgrmap(e 16) v1 ==== 45199+0+0 (secure 0 0 0) 0x7fa650003890 con 0x7fa65419c9e0 2026-03-10T10:15:15.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.715+0000 7fa64affd700 1 --2- 192.168.123.105:0/443732037 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa640039e50 0x7fa64003c240 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:15.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.726+0000 7fa659404700 1 --2- 192.168.123.105:0/443732037 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa640039e50 0x7fa64003c240 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:15.731 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.726+0000 7fa659404700 1 --2- 192.168.123.105:0/443732037 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa640039e50 0x7fa64003c240 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fa644006fd0 tx=0x7fa644006e40 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:15.849 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.847+0000 7fa65be69700 1 -- 192.168.123.105:0/443732037 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fa654062440 con 0x7fa65419c9e0 2026-03-10T10:15:15.849 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.848+0000 7fa64affd700 1 -- 192.168.123.105:0/443732037 <== mon.0 v2:192.168.123.102:3300/0 8 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fa65000e3e0 con 0x7fa65419c9e0 2026-03-10T10:15:15.849 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:15.849 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:15:15.852 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.851+0000 7fa65be69700 1 -- 192.168.123.105:0/443732037 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa640039e50 msgr2=0x7fa64003c240 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:15.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.851+0000 7fa65be69700 1 --2- 192.168.123.105:0/443732037 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa640039e50 0x7fa64003c240 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fa644006fd0 tx=0x7fa644006e40 comp rx=0 tx=0).stop 2026-03-10T10:15:15.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.851+0000 7fa65be69700 1 -- 192.168.123.105:0/443732037 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa65419c9e0 msgr2=0x7fa65419ee10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:15.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.851+0000 7fa65be69700 1 --2- 192.168.123.105:0/443732037 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa65419c9e0 0x7fa65419ee10 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7fa650006010 tx=0x7fa65000bba0 comp rx=0 tx=0).stop 2026-03-10T10:15:15.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.851+0000 7fa65be69700 1 -- 192.168.123.105:0/443732037 shutdown_connections 2026-03-10T10:15:15.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.851+0000 7fa65be69700 1 --2- 192.168.123.105:0/443732037 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa640039e50 0x7fa64003c240 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:15.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.851+0000 7fa65be69700 1 --2- 192.168.123.105:0/443732037 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa65419c9e0 0x7fa65419ee10 unknown :-1 s=CLOSED pgs=147 cs=0 
l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:15.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.851+0000 7fa65be69700 1 -- 192.168.123.105:0/443732037 >> 192.168.123.105:0/443732037 conn(0x7fa65406d680 msgr2=0x7fa65406fae0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:15.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.851+0000 7fa65be69700 1 -- 192.168.123.105:0/443732037 shutdown_connections 2026-03-10T10:15:15.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:15.852+0000 7fa65be69700 1 -- 192.168.123.105:0/443732037 wait complete. 2026-03-10T10:15:15.853 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:15:16.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:16 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:16.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:16 vm02 ceph-mon[50200]: [10/Mar/2026:10:15:15] ENGINE Bus STARTING 2026-03-10T10:15:16.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:16 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:16.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:16 vm02 ceph-mon[50200]: mgrmap e16: vm02.zmavgl(active, since 1.0087s) 2026-03-10T10:15:16.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:16 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/443732037' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:16.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:16 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:16.898 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T10:15:16.899 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:17.043 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:17.085 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T10:15:17.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.334+0000 7fecb1bea700 1 -- 192.168.123.105:0/1935418693 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feca40965d0 msgr2=0x7feca40989c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:17.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.334+0000 7fecb1bea700 1 --2- 192.168.123.105:0/1935418693 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feca40965d0 0x7feca40989c0 secure :-1 s=READY pgs=148 cs=0 l=1 rev1=1 crypto rx=0x7feca0009b10 tx=0x7feca0009e20 comp rx=0 tx=0).stop 2026-03-10T10:15:17.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.335+0000 7fecb1bea700 1 -- 192.168.123.105:0/1935418693 shutdown_connections 2026-03-10T10:15:17.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.335+0000 7fecb1bea700 1 --2- 192.168.123.105:0/1935418693 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feca40965d0 0x7feca40989c0 unknown :-1 s=CLOSED pgs=148 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:17.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.335+0000 7fecb1bea700 1 -- 192.168.123.105:0/1935418693 >> 192.168.123.105:0/1935418693 conn(0x7feca408ffe0 msgr2=0x7feca4092420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:17.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.335+0000 7fecb1bea700 1 -- 192.168.123.105:0/1935418693 
shutdown_connections 2026-03-10T10:15:17.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.335+0000 7fecb1bea700 1 -- 192.168.123.105:0/1935418693 wait complete. 2026-03-10T10:15:17.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.336+0000 7fecb1bea700 1 Processor -- start 2026-03-10T10:15:17.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.336+0000 7fecb1bea700 1 -- start start 2026-03-10T10:15:17.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.336+0000 7fecb1bea700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feca40965d0 0x7feca412aab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:17.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.336+0000 7fecb1bea700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7feca412aff0 con 0x7feca40965d0 2026-03-10T10:15:17.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.336+0000 7fecb0be8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feca40965d0 0x7feca412aab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:17.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.336+0000 7fecb0be8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feca40965d0 0x7feca412aab0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:44274/0 (socket says 192.168.123.105:44274) 2026-03-10T10:15:17.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.336+0000 7fecb0be8700 1 -- 192.168.123.105:0/3189858250 learned_addr learned my addr 192.168.123.105:0/3189858250 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:17.339 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.338+0000 7fecb0be8700 1 -- 192.168.123.105:0/3189858250 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7feca0009770 con 0x7feca40965d0 2026-03-10T10:15:17.339 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.338+0000 7fecb0be8700 1 --2- 192.168.123.105:0/3189858250 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feca40965d0 0x7feca412aab0 secure :-1 s=READY pgs=149 cs=0 l=1 rev1=1 crypto rx=0x7feca000ba70 tx=0x7feca000ff60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:17.339 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.338+0000 7feca9ffb700 1 -- 192.168.123.105:0/3189858250 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7feca001c070 con 0x7feca40965d0 2026-03-10T10:15:17.339 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.338+0000 7feca9ffb700 1 -- 192.168.123.105:0/3189858250 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7feca0021950 con 0x7feca40965d0 2026-03-10T10:15:17.339 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.338+0000 7feca9ffb700 1 -- 192.168.123.105:0/3189858250 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7feca0017930 con 0x7feca40965d0 2026-03-10T10:15:17.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.338+0000 7fecb1bea700 1 -- 192.168.123.105:0/3189858250 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7feca412b1f0 con 0x7feca40965d0 2026-03-10T10:15:17.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.339+0000 7fecb1bea700 1 -- 192.168.123.105:0/3189858250 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7feca412b690 con 
0x7feca40965d0 2026-03-10T10:15:17.341 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.340+0000 7fecb1bea700 1 -- 192.168.123.105:0/3189858250 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7feca41248d0 con 0x7feca40965d0 2026-03-10T10:15:17.341 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.340+0000 7feca9ffb700 1 -- 192.168.123.105:0/3189858250 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 17) v1 ==== 45397+0+0 (secure 0 0 0) 0x7feca0021470 con 0x7feca40965d0 2026-03-10T10:15:17.341 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.340+0000 7feca9ffb700 1 --2- 192.168.123.105:0/3189858250 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fec9c038560 0x7fec9c03aa20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:17.341 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.340+0000 7feca9ffb700 1 -- 192.168.123.105:0/3189858250 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7feca004c510 con 0x7feca40965d0 2026-03-10T10:15:17.341 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.340+0000 7fecabfff700 1 --2- 192.168.123.105:0/3189858250 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fec9c038560 0x7fec9c03aa20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:17.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.341+0000 7fecabfff700 1 --2- 192.168.123.105:0/3189858250 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fec9c038560 0x7fec9c03aa20 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fec98006fd0 tx=0x7fec98006e40 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:17.344 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.343+0000 7feca9ffb700 1 -- 192.168.123.105:0/3189858250 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7feca0021ac0 con 0x7feca40965d0 2026-03-10T10:15:17.495 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:17.495 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:15:17.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.489+0000 7fecb1bea700 1 -- 192.168.123.105:0/3189858250 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7feca40052d0 con 0x7feca40965d0 2026-03-10T10:15:17.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.492+0000 7feca9ffb700 1 -- 192.168.123.105:0/3189858250 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7feca0030300 con 0x7feca40965d0 2026-03-10T10:15:17.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.495+0000 7fecb1bea700 1 -- 192.168.123.105:0/3189858250 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fec9c038560 msgr2=0x7fec9c03aa20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:17.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.495+0000 7fecb1bea700 1 --2- 192.168.123.105:0/3189858250 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fec9c038560 0x7fec9c03aa20 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fec98006fd0 tx=0x7fec98006e40 comp rx=0 tx=0).stop 2026-03-10T10:15:17.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.495+0000 7fecb1bea700 1 -- 192.168.123.105:0/3189858250 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feca40965d0 msgr2=0x7feca412aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:17.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.495+0000 7fecb1bea700 1 --2- 192.168.123.105:0/3189858250 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feca40965d0 0x7feca412aab0 secure :-1 s=READY pgs=149 cs=0 l=1 rev1=1 crypto rx=0x7feca000ba70 tx=0x7feca000ff60 comp rx=0 tx=0).stop 2026-03-10T10:15:17.497 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.496+0000 7fecb1bea700 1 -- 192.168.123.105:0/3189858250 shutdown_connections 2026-03-10T10:15:17.497 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.496+0000 7fecb1bea700 1 --2- 192.168.123.105:0/3189858250 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fec9c038560 0x7fec9c03aa20 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:17.497 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.496+0000 7fecb1bea700 1 --2- 192.168.123.105:0/3189858250 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feca40965d0 0x7feca412aab0 unknown :-1 s=CLOSED pgs=149 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:17.497 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.496+0000 7fecb1bea700 1 -- 192.168.123.105:0/3189858250 >> 192.168.123.105:0/3189858250 conn(0x7feca408ffe0 msgr2=0x7feca4090c50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:17.497 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.496+0000 7fecb1bea700 1 -- 192.168.123.105:0/3189858250 shutdown_connections 2026-03-10T10:15:17.497 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:17.496+0000 7fecb1bea700 1 -- 192.168.123.105:0/3189858250 wait complete. 2026-03-10T10:15:17.499 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:15:17.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:17 vm02 ceph-mon[50200]: [10/Mar/2026:10:15:15] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T10:15:17.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:17 vm02 ceph-mon[50200]: [10/Mar/2026:10:15:15] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T10:15:17.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:17 vm02 ceph-mon[50200]: [10/Mar/2026:10:15:15] ENGINE Bus STARTED 2026-03-10T10:15:17.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:17 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:17.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:17 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:17.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:17 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:17.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:17 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:17.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:17 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": 
"osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:15:18.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:18 vm02 ceph-mon[50200]: mgrmap e17: vm02.zmavgl(active, since 2s) 2026-03-10T10:15:18.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:18 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:18.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:18 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:18.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:18 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:18.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:18 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:18.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:18 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:15:18.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:18 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:18.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:18 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:18.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:18 vm02 ceph-mon[50200]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T10:15:18.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:18 vm02 ceph-mon[50200]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T10:15:18.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:18 vm02 ceph-mon[50200]: from='client.? 
192.168.123.105:0/3189858250' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:18.548 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T10:15:18.548 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:18.714 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:15:19.385 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.383+0000 7f628b764700 1 -- 192.168.123.105:0/2664095520 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6284102e70 msgr2=0x7f6284103250 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:19.385 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.383+0000 7f628b764700 1 --2- 192.168.123.105:0/2664095520 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6284102e70 0x7f6284103250 secure :-1 s=READY pgs=150 cs=0 l=1 rev1=1 crypto rx=0x7f6278009b00 tx=0x7f6278009e10 comp rx=0 tx=0).stop 2026-03-10T10:15:19.385 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.383+0000 7f628b764700 1 -- 192.168.123.105:0/2664095520 shutdown_connections 2026-03-10T10:15:19.385 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.383+0000 7f628b764700 1 --2- 192.168.123.105:0/2664095520 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6284102e70 0x7f6284103250 unknown :-1 s=CLOSED pgs=150 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:19.385 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.383+0000 7f628b764700 1 -- 192.168.123.105:0/2664095520 >> 192.168.123.105:0/2664095520 conn(0x7f62840fe760 msgr2=0x7f6284100b80 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-10T10:15:19.385 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.383+0000 7f628b764700 1 -- 192.168.123.105:0/2664095520 shutdown_connections 2026-03-10T10:15:19.385 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.383+0000 7f628b764700 1 -- 192.168.123.105:0/2664095520 wait complete. 2026-03-10T10:15:19.385 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.384+0000 7f628b764700 1 Processor -- start 2026-03-10T10:15:19.386 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.384+0000 7f628b764700 1 -- start start 2026-03-10T10:15:19.386 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.385+0000 7f628b764700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6284102e70 0x7f62841980c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:19.386 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.385+0000 7f628b764700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6284198600 con 0x7f6284102e70 2026-03-10T10:15:19.386 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.385+0000 7f6289500700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6284102e70 0x7f62841980c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:19.386 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.385+0000 7f6289500700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6284102e70 0x7f62841980c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:55018/0 (socket says 192.168.123.105:55018) 2026-03-10T10:15:19.386 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.385+0000 7f6289500700 1 -- 192.168.123.105:0/3036251448 
learned_addr learned my addr 192.168.123.105:0/3036251448 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:19.387 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.385+0000 7f6289500700 1 -- 192.168.123.105:0/3036251448 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f62780097e0 con 0x7f6284102e70 2026-03-10T10:15:19.387 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.386+0000 7f6289500700 1 --2- 192.168.123.105:0/3036251448 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6284102e70 0x7f62841980c0 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7f6278006010 tx=0x7f6278004dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:19.387 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.386+0000 7f62767fc700 1 -- 192.168.123.105:0/3036251448 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f627801c070 con 0x7f6284102e70 2026-03-10T10:15:19.388 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.386+0000 7f62767fc700 1 -- 192.168.123.105:0/3036251448 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f6278021470 con 0x7f6284102e70 2026-03-10T10:15:19.388 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.386+0000 7f62767fc700 1 -- 192.168.123.105:0/3036251448 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f627800f460 con 0x7f6284102e70 2026-03-10T10:15:19.388 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.387+0000 7f628b764700 1 -- 192.168.123.105:0/3036251448 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6284198800 con 0x7f6284102e70 2026-03-10T10:15:19.388 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.387+0000 7f628b764700 1 -- 
192.168.123.105:0/3036251448 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6284198c20 con 0x7f6284102e70 2026-03-10T10:15:19.388 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.387+0000 7f628b764700 1 -- 192.168.123.105:0/3036251448 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f628404fa20 con 0x7f6284102e70 2026-03-10T10:15:19.389 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.387+0000 7f62767fc700 1 -- 192.168.123.105:0/3036251448 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 17) v1 ==== 45397+0+0 (secure 0 0 0) 0x7f62780215e0 con 0x7f6284102e70 2026-03-10T10:15:19.389 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.388+0000 7f62767fc700 1 --2- 192.168.123.105:0/3036251448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f62700381b0 0x7f627003a670 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:19.389 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.388+0000 7f62767fc700 1 -- 192.168.123.105:0/3036251448 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7f627804c590 con 0x7f6284102e70 2026-03-10T10:15:19.389 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.388+0000 7f6288cff700 1 --2- 192.168.123.105:0/3036251448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f62700381b0 0x7f627003a670 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:19.390 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.389+0000 7f6288cff700 1 --2- 192.168.123.105:0/3036251448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f62700381b0 0x7f627003a670 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f6280006fd0 
tx=0x7f6280006e40 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:19.392 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.390+0000 7f62767fc700 1 -- 192.168.123.105:0/3036251448 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f6278005320 con 0x7f6284102e70 2026-03-10T10:15:19.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.537+0000 7f628b764700 1 -- 192.168.123.105:0/3036251448 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f628410ab60 con 0x7f6284102e70 2026-03-10T10:15:19.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.537+0000 7f62767fc700 1 -- 192.168.123.105:0/3036251448 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f6278026020 con 0x7f6284102e70 2026-03-10T10:15:19.541 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:19.541 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:15:19.541 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.540+0000 7f628b764700 1 -- 192.168.123.105:0/3036251448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f62700381b0 msgr2=0x7f627003a670 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:19.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.540+0000 7f628b764700 1 --2- 192.168.123.105:0/3036251448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f62700381b0 0x7f627003a670 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f6280006fd0 tx=0x7f6280006e40 comp rx=0 tx=0).stop 2026-03-10T10:15:19.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.540+0000 7f628b764700 1 -- 192.168.123.105:0/3036251448 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6284102e70 msgr2=0x7f62841980c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:19.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.540+0000 7f628b764700 1 --2- 192.168.123.105:0/3036251448 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6284102e70 0x7f62841980c0 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7f6278006010 tx=0x7f6278004dc0 comp rx=0 tx=0).stop 2026-03-10T10:15:19.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.541+0000 7f628b764700 1 -- 192.168.123.105:0/3036251448 shutdown_connections 2026-03-10T10:15:19.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.541+0000 7f628b764700 1 --2- 192.168.123.105:0/3036251448 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f62700381b0 0x7f627003a670 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:19.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.541+0000 7f628b764700 1 --2- 192.168.123.105:0/3036251448 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6284102e70 0x7f62841980c0 unknown :-1 s=CLOSED pgs=151 
cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:19.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.541+0000 7f628b764700 1 -- 192.168.123.105:0/3036251448 >> 192.168.123.105:0/3036251448 conn(0x7f62840fe760 msgr2=0x7f62841070e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:19.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.541+0000 7f628b764700 1 -- 192.168.123.105:0/3036251448 shutdown_connections 2026-03-10T10:15:19.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:19.541+0000 7f628b764700 1 -- 192.168.123.105:0/3036251448 wait complete. 2026-03-10T10:15:19.543 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:19.780 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:19.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:19 vm02 ceph-mon[50200]: Deploying daemon ceph-exporter.vm05 on vm05 2026-03-10T10:15:20.602 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T10:15:20.602 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:20.762 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:15:20.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:20 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/3036251448' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:20.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:20 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:20.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:20 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:20.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:20 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:20.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:20 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:20.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:20 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:15:20.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:20 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-10T10:15:20.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 
10:15:20 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:20.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:20 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:21.242 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.240+0000 7ff512be4700 1 -- 192.168.123.105:0/1237472072 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff50c107ff0 msgr2=0x7ff50c1083d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:21.242 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.240+0000 7ff512be4700 1 --2- 192.168.123.105:0/1237472072 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff50c107ff0 0x7ff50c1083d0 secure :-1 s=READY pgs=153 cs=0 l=1 rev1=1 crypto rx=0x7ff4fc007780 tx=0x7ff4fc00c050 comp rx=0 tx=0).stop 2026-03-10T10:15:21.242 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.241+0000 7ff512be4700 1 -- 192.168.123.105:0/1237472072 shutdown_connections 2026-03-10T10:15:21.243 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.241+0000 7ff512be4700 1 --2- 192.168.123.105:0/1237472072 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff50c107ff0 0x7ff50c1083d0 unknown :-1 s=CLOSED pgs=153 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:21.243 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.241+0000 7ff512be4700 1 -- 192.168.123.105:0/1237472072 >> 192.168.123.105:0/1237472072 conn(0x7ff50c06cb50 msgr2=0x7ff50c06cf60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:21.243 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.241+0000 7ff512be4700 1 -- 192.168.123.105:0/1237472072 shutdown_connections 2026-03-10T10:15:21.243 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.241+0000 7ff512be4700 1 -- 192.168.123.105:0/1237472072 
wait complete. 2026-03-10T10:15:21.243 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.241+0000 7ff512be4700 1 Processor -- start 2026-03-10T10:15:21.243 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.242+0000 7ff512be4700 1 -- start start 2026-03-10T10:15:21.243 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.242+0000 7ff512be4700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff50c107ff0 0x7ff50c083740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:21.243 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.242+0000 7ff512be4700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff4fc003680 con 0x7ff50c107ff0 2026-03-10T10:15:21.243 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.242+0000 7ff510980700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff50c107ff0 0x7ff50c083740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:21.243 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.242+0000 7ff510980700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff50c107ff0 0x7ff50c083740 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:55062/0 (socket says 192.168.123.105:55062) 2026-03-10T10:15:21.244 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.242+0000 7ff510980700 1 -- 192.168.123.105:0/246664560 learned_addr learned my addr 192.168.123.105:0/246664560 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:21.244 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.242+0000 7ff510980700 1 -- 192.168.123.105:0/246664560 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff4fc007430 con 0x7ff50c107ff0 2026-03-10T10:15:21.244 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.242+0000 7ff510980700 1 --2- 192.168.123.105:0/246664560 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff50c107ff0 0x7ff50c083740 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7ff4fc000c00 tx=0x7ff4fc00c9e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:21.244 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.242+0000 7ff509ffb700 1 -- 192.168.123.105:0/246664560 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff4fc00f050 con 0x7ff50c107ff0 2026-03-10T10:15:21.245 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.242+0000 7ff509ffb700 1 -- 192.168.123.105:0/246664560 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff4fc004680 con 0x7ff50c107ff0 2026-03-10T10:15:21.245 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.242+0000 7ff509ffb700 1 -- 192.168.123.105:0/246664560 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff4fc00a6b0 con 0x7ff50c107ff0 2026-03-10T10:15:21.245 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.243+0000 7ff512be4700 1 -- 192.168.123.105:0/246664560 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff50c083d40 con 0x7ff50c107ff0 2026-03-10T10:15:21.245 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.243+0000 7ff512be4700 1 -- 192.168.123.105:0/246664560 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff50c07c990 con 0x7ff50c107ff0 2026-03-10T10:15:21.245 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.243+0000 7ff509ffb700 1 -- 192.168.123.105:0/246664560 <== mon.0 v2:192.168.123.102:3300/0 4 
==== mgrmap(e 17) v1 ==== 45397+0+0 (secure 0 0 0) 0x7ff4fc01a040 con 0x7ff50c107ff0 2026-03-10T10:15:21.245 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.243+0000 7ff509ffb700 1 --2- 192.168.123.105:0/246664560 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff4f40385c0 0x7ff4f403aa80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:21.245 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.243+0000 7ff509ffb700 1 -- 192.168.123.105:0/246664560 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7ff4fc04c070 con 0x7ff50c107ff0 2026-03-10T10:15:21.245 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.244+0000 7ff50bfff700 1 --2- 192.168.123.105:0/246664560 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff4f40385c0 0x7ff4f403aa80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:21.245 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.243+0000 7ff512be4700 1 -- 192.168.123.105:0/246664560 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff50c04fa20 con 0x7ff50c107ff0 2026-03-10T10:15:21.245 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.244+0000 7ff50bfff700 1 --2- 192.168.123.105:0/246664560 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff4f40385c0 0x7ff4f403aa80 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7ff50400ad30 tx=0x7ff5040093f0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:21.247 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.246+0000 7ff509ffb700 1 -- 192.168.123.105:0/246664560 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7ff4fc018080 con 0x7ff50c107ff0 2026-03-10T10:15:21.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.389+0000 7ff512be4700 1 -- 192.168.123.105:0/246664560 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7ff50c0623c0 con 0x7ff50c107ff0 2026-03-10T10:15:21.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.390+0000 7ff509ffb700 1 -- 192.168.123.105:0/246664560 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7ff4fc022030 con 0x7ff50c107ff0 2026-03-10T10:15:21.391 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:21.391 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:15:21.393 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.392+0000 7ff512be4700 1 -- 192.168.123.105:0/246664560 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff4f40385c0 msgr2=0x7ff4f403aa80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:21.393 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.392+0000 7ff512be4700 1 --2- 192.168.123.105:0/246664560 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff4f40385c0 0x7ff4f403aa80 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7ff50400ad30 tx=0x7ff5040093f0 comp rx=0 tx=0).stop 2026-03-10T10:15:21.393 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.392+0000 7ff512be4700 1 -- 192.168.123.105:0/246664560 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff50c107ff0 msgr2=0x7ff50c083740 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:21.393 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.392+0000 7ff512be4700 1 --2- 192.168.123.105:0/246664560 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff50c107ff0 0x7ff50c083740 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7ff4fc000c00 tx=0x7ff4fc00c9e0 comp rx=0 tx=0).stop 2026-03-10T10:15:21.393 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.392+0000 7ff512be4700 1 -- 192.168.123.105:0/246664560 shutdown_connections 2026-03-10T10:15:21.393 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.392+0000 7ff512be4700 1 --2- 192.168.123.105:0/246664560 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff4f40385c0 0x7ff4f403aa80 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:21.394 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.393+0000 7ff512be4700 1 --2- 192.168.123.105:0/246664560 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff50c107ff0 0x7ff50c083740 unknown :-1 s=CLOSED pgs=154 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:21.394 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.393+0000 7ff512be4700 1 -- 192.168.123.105:0/246664560 >> 192.168.123.105:0/246664560 conn(0x7ff50c06cb50 msgr2=0x7ff50c06feb0 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:15:21.395 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.393+0000 7ff512be4700 1 -- 192.168.123.105:0/246664560 shutdown_connections 2026-03-10T10:15:21.395 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:21.394+0000 7ff512be4700 1 -- 192.168.123.105:0/246664560 wait complete. 2026-03-10T10:15:21.396 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:15:21.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:21 vm02 ceph-mon[50200]: Deploying daemon crash.vm05 on vm05 2026-03-10T10:15:21.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:21 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:21.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:21 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:21.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:21 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:21.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:21 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:22.438 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T10:15:22.438 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:22.567 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:15:22.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:22 vm02 ceph-mon[50200]: Deploying daemon node-exporter.vm05 on vm05 2026-03-10T10:15:22.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:22 vm02 ceph-mon[50200]: from='client.? 
192.168.123.105:0/246664560' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:22.820 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.818+0000 7f15614d0700 1 -- 192.168.123.105:0/1955458066 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f155c1069f0 msgr2=0x7f155c106dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:22.820 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.818+0000 7f15614d0700 1 --2- 192.168.123.105:0/1955458066 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f155c1069f0 0x7f155c106dd0 secure :-1 s=READY pgs=155 cs=0 l=1 rev1=1 crypto rx=0x7f1544009b00 tx=0x7f1544009e10 comp rx=0 tx=0).stop 2026-03-10T10:15:22.820 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.819+0000 7f15614d0700 1 -- 192.168.123.105:0/1955458066 shutdown_connections 2026-03-10T10:15:22.820 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.819+0000 7f15614d0700 1 --2- 192.168.123.105:0/1955458066 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f155c1069f0 0x7f155c106dd0 unknown :-1 s=CLOSED pgs=155 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:22.820 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.819+0000 7f15614d0700 1 -- 192.168.123.105:0/1955458066 >> 192.168.123.105:0/1955458066 conn(0x7f155c0753a0 msgr2=0x7f155c0757b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:22.820 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.819+0000 7f15614d0700 1 -- 192.168.123.105:0/1955458066 shutdown_connections 2026-03-10T10:15:22.820 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.819+0000 7f15614d0700 1 -- 192.168.123.105:0/1955458066 wait complete. 
2026-03-10T10:15:22.820 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.819+0000 7f15614d0700 1 Processor -- start 2026-03-10T10:15:22.820 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.819+0000 7f15614d0700 1 -- start start 2026-03-10T10:15:22.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.820+0000 7f15614d0700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f155c1069f0 0x7f155c198090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:22.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.820+0000 7f15614d0700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f155c1985d0 con 0x7f155c1069f0 2026-03-10T10:15:22.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.820+0000 7f155affd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f155c1069f0 0x7f155c198090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:22.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.820+0000 7f155affd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f155c1069f0 0x7f155c198090 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:55074/0 (socket says 192.168.123.105:55074) 2026-03-10T10:15:22.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.820+0000 7f155affd700 1 -- 192.168.123.105:0/3050139833 learned_addr learned my addr 192.168.123.105:0/3050139833 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:22.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.820+0000 7f155affd700 1 -- 192.168.123.105:0/3050139833 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f15440097e0 con 0x7f155c1069f0 2026-03-10T10:15:22.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.820+0000 7f155affd700 1 --2- 192.168.123.105:0/3050139833 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f155c1069f0 0x7f155c198090 secure :-1 s=READY pgs=156 cs=0 l=1 rev1=1 crypto rx=0x7f1544006010 tx=0x7f1544004dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:22.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.821+0000 7f1553fff700 1 -- 192.168.123.105:0/3050139833 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f154401c070 con 0x7f155c1069f0 2026-03-10T10:15:22.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.821+0000 7f1553fff700 1 -- 192.168.123.105:0/3050139833 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1544021470 con 0x7f155c1069f0 2026-03-10T10:15:22.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.821+0000 7f1553fff700 1 -- 192.168.123.105:0/3050139833 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f154400f460 con 0x7f155c1069f0 2026-03-10T10:15:22.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.821+0000 7f15614d0700 1 -- 192.168.123.105:0/3050139833 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f155c198830 con 0x7f155c1069f0 2026-03-10T10:15:22.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.821+0000 7f15614d0700 1 -- 192.168.123.105:0/3050139833 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f155c198cb0 con 0x7f155c1069f0 2026-03-10T10:15:22.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.822+0000 7f1553fff700 1 -- 192.168.123.105:0/3050139833 <== mon.0 
v2:192.168.123.102:3300/0 4 ==== mgrmap(e 17) v1 ==== 45397+0+0 (secure 0 0 0) 0x7f15440215e0 con 0x7f155c1069f0 2026-03-10T10:15:22.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.822+0000 7f15614d0700 1 -- 192.168.123.105:0/3050139833 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f155c04fa20 con 0x7f155c1069f0 2026-03-10T10:15:22.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.822+0000 7f1553fff700 1 --2- 192.168.123.105:0/3050139833 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1548038530 0x7f154803a9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:22.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.822+0000 7f1553fff700 1 -- 192.168.123.105:0/3050139833 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7f154404c420 con 0x7f155c1069f0 2026-03-10T10:15:22.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.825+0000 7f155a7fc700 1 --2- 192.168.123.105:0/3050139833 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1548038530 0x7f154803a9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:22.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.825+0000 7f1553fff700 1 -- 192.168.123.105:0/3050139833 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f1544029770 con 0x7f155c1069f0 2026-03-10T10:15:22.828 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.827+0000 7f155a7fc700 1 --2- 192.168.123.105:0/3050139833 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1548038530 0x7f154803a9f0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto 
rx=0x7f154c006fd0 tx=0x7f154c006e40 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:22.977 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.976+0000 7f15614d0700 1 -- 192.168.123.105:0/3050139833 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f155c0623c0 con 0x7f155c1069f0 2026-03-10T10:15:22.980 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.979+0000 7f1553fff700 1 -- 192.168.123.105:0/3050139833 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f1544026030 con 0x7f155c1069f0 2026-03-10T10:15:22.980 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:22.981 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:15:22.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.981+0000 7f15614d0700 1 -- 192.168.123.105:0/3050139833 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1548038530 msgr2=0x7f154803a9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:22.982 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.981+0000 7f15614d0700 1 --2- 192.168.123.105:0/3050139833 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1548038530 0x7f154803a9f0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f154c006fd0 tx=0x7f154c006e40 comp rx=0 tx=0).stop 2026-03-10T10:15:22.983 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.982+0000 7f15614d0700 1 -- 192.168.123.105:0/3050139833 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f155c1069f0 msgr2=0x7f155c198090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:22.983 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.982+0000 7f15614d0700 1 --2- 192.168.123.105:0/3050139833 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f155c1069f0 0x7f155c198090 secure :-1 s=READY pgs=156 cs=0 l=1 rev1=1 crypto rx=0x7f1544006010 tx=0x7f1544004dc0 comp rx=0 tx=0).stop 2026-03-10T10:15:22.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.983+0000 7f15614d0700 1 -- 192.168.123.105:0/3050139833 shutdown_connections 2026-03-10T10:15:22.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.983+0000 7f15614d0700 1 --2- 192.168.123.105:0/3050139833 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1548038530 0x7f154803a9f0 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:22.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.983+0000 7f15614d0700 1 --2- 192.168.123.105:0/3050139833 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f155c1069f0 0x7f155c198090 unknown :-1 s=CLOSED pgs=156 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:22.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.983+0000 7f15614d0700 1 -- 192.168.123.105:0/3050139833 >> 192.168.123.105:0/3050139833 conn(0x7f155c0753a0 msgr2=0x7f155c102770 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:15:22.985 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.984+0000 7f15614d0700 1 -- 192.168.123.105:0/3050139833 shutdown_connections 2026-03-10T10:15:22.985 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:22.984+0000 7f15614d0700 1 -- 192.168.123.105:0/3050139833 wait complete. 2026-03-10T10:15:22.986 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:15:23.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:23 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/3050139833' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:24.055 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T10:15:24.055 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:24.244 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:15:24.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.534+0000 7fb274d6f700 1 -- 192.168.123.105:0/1611158052 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb270107ff0 msgr2=0x7fb27010edf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:24.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.534+0000 7fb274d6f700 1 --2- 192.168.123.105:0/1611158052 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb270107ff0 0x7fb27010edf0 secure :-1 s=READY pgs=159 cs=0 l=1 rev1=1 crypto rx=0x7fb260009b00 tx=0x7fb260009e10 comp rx=0 tx=0).stop 2026-03-10T10:15:24.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.537+0000 7fb274d6f700 1 -- 192.168.123.105:0/1611158052 shutdown_connections 2026-03-10T10:15:24.538 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.537+0000 7fb274d6f700 1 --2- 192.168.123.105:0/1611158052 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb270107ff0 0x7fb27010edf0 unknown :-1 s=CLOSED pgs=159 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:24.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.537+0000 7fb274d6f700 1 -- 192.168.123.105:0/1611158052 >> 192.168.123.105:0/1611158052 conn(0x7fb27006c970 msgr2=0x7fb27006cd80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:24.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.537+0000 7fb274d6f700 1 -- 192.168.123.105:0/1611158052 shutdown_connections 2026-03-10T10:15:24.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.537+0000 7fb274d6f700 1 -- 192.168.123.105:0/1611158052 wait complete. 2026-03-10T10:15:24.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.537+0000 7fb274d6f700 1 Processor -- start 2026-03-10T10:15:24.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.537+0000 7fb274d6f700 1 -- start start 2026-03-10T10:15:24.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.537+0000 7fb274d6f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb270107ff0 0x7fb270114b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:24.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.537+0000 7fb274d6f700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb2701150c0 con 0x7fb270107ff0 2026-03-10T10:15:24.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.538+0000 7fb26f7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb270107ff0 0x7fb270114b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:15:24.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.538+0000 7fb26f7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb270107ff0 0x7fb270114b80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:55110/0 (socket says 192.168.123.105:55110) 2026-03-10T10:15:24.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.538+0000 7fb26f7fe700 1 -- 192.168.123.105:0/1549980975 learned_addr learned my addr 192.168.123.105:0/1549980975 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:24.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.538+0000 7fb26f7fe700 1 -- 192.168.123.105:0/1549980975 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb2600097e0 con 0x7fb270107ff0 2026-03-10T10:15:24.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.538+0000 7fb26f7fe700 1 --2- 192.168.123.105:0/1549980975 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb270107ff0 0x7fb270114b80 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7fb260005b40 tx=0x7fb260004dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:24.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.538+0000 7fb26cff9700 1 -- 192.168.123.105:0/1549980975 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb26001c070 con 0x7fb270107ff0 2026-03-10T10:15:24.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.538+0000 7fb26cff9700 1 -- 192.168.123.105:0/1549980975 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb2600056f0 con 0x7fb270107ff0 2026-03-10T10:15:24.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.539+0000 7fb26cff9700 1 -- 
192.168.123.105:0/1549980975 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb26000f460 con 0x7fb270107ff0 2026-03-10T10:15:24.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.544+0000 7fb274d6f700 1 -- 192.168.123.105:0/1549980975 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb2701152c0 con 0x7fb270107ff0 2026-03-10T10:15:24.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.544+0000 7fb274d6f700 1 -- 192.168.123.105:0/1549980975 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb2701156e0 con 0x7fb270107ff0 2026-03-10T10:15:24.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.544+0000 7fb26cff9700 1 -- 192.168.123.105:0/1549980975 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 17) v1 ==== 45397+0+0 (secure 0 0 0) 0x7fb260005210 con 0x7fb270107ff0 2026-03-10T10:15:24.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.545+0000 7fb26cff9700 1 --2- 192.168.123.105:0/1549980975 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb258038160 0x7fb25803a620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:24.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.545+0000 7fb26cff9700 1 -- 192.168.123.105:0/1549980975 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7fb26004c5d0 con 0x7fb270107ff0 2026-03-10T10:15:24.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.545+0000 7fb274d6f700 1 -- 192.168.123.105:0/1549980975 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb27010e500 con 0x7fb270107ff0 2026-03-10T10:15:24.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.545+0000 7fb26effd700 1 --2- 192.168.123.105:0/1549980975 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb258038160 0x7fb25803a620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:24.564 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.548+0000 7fb26cff9700 1 -- 192.168.123.105:0/1549980975 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fb260026030 con 0x7fb270107ff0 2026-03-10T10:15:24.564 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.548+0000 7fb26effd700 1 --2- 192.168.123.105:0/1549980975 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb258038160 0x7fb25803a620 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fb264006fd0 tx=0x7fb264006e40 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:24.756 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.754+0000 7fb274d6f700 1 -- 192.168.123.105:0/1549980975 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fb270115930 con 0x7fb270107ff0 2026-03-10T10:15:24.760 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.758+0000 7fb26cff9700 1 -- 192.168.123.105:0/1549980975 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fb26000f690 con 0x7fb270107ff0 2026-03-10T10:15:24.760 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:24.760 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:14:07.630583Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: 
":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T10:15:24.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.761+0000 7fb2567fc700 1 -- 192.168.123.105:0/1549980975 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb258038160 msgr2=0x7fb25803a620 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:24.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.761+0000 7fb2567fc700 1 --2- 192.168.123.105:0/1549980975 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb258038160 0x7fb25803a620 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fb264006fd0 tx=0x7fb264006e40 comp rx=0 tx=0).stop 2026-03-10T10:15:24.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.762+0000 7fb2567fc700 1 -- 192.168.123.105:0/1549980975 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb270107ff0 msgr2=0x7fb270114b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:24.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.762+0000 7fb2567fc700 1 --2- 192.168.123.105:0/1549980975 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb270107ff0 0x7fb270114b80 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7fb260005b40 tx=0x7fb260004dc0 comp rx=0 tx=0).stop 2026-03-10T10:15:24.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.763+0000 7fb2567fc700 1 -- 192.168.123.105:0/1549980975 shutdown_connections 2026-03-10T10:15:24.772 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.763+0000 7fb2567fc700 1 --2- 192.168.123.105:0/1549980975 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb258038160 0x7fb25803a620 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:24.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.763+0000 7fb2567fc700 1 --2- 192.168.123.105:0/1549980975 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb270107ff0 0x7fb270114b80 unknown :-1 s=CLOSED pgs=160 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:24.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.763+0000 7fb2567fc700 1 -- 192.168.123.105:0/1549980975 >> 192.168.123.105:0/1549980975 conn(0x7fb27006c970 msgr2=0x7fb27010b640 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:24.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.763+0000 7fb2567fc700 1 -- 192.168.123.105:0/1549980975 shutdown_connections 2026-03-10T10:15:24.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:24.763+0000 7fb2567fc700 1 -- 192.168.123.105:0/1549980975 wait complete. 
2026-03-10T10:15:24.772 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.coparq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm05.coparq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: Deploying daemon mgr.vm05.coparq on vm05 2026-03-10T10:15:24.780 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:15:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:25.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:25 vm02 ceph-mon[50200]: Deploying daemon mon.vm05 on vm05 2026-03-10T10:15:25.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:25 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:25.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:25 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/1549980975' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:15:25.879 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T10:15:25.879 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mon dump -f json 2026-03-10T10:15:26.039 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm05/config 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.727+0000 7efe9f336700 1 -- 192.168.123.105:0/1549441269 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe98107ff0 msgr2=0x7efe84005610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.727+0000 7efe9f336700 1 --2- 192.168.123.105:0/1549441269 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe98107ff0 0x7efe84005610 secure :-1 s=READY pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7efe94005fa0 tx=0x7efe9400ff70 comp rx=0 tx=0).stop 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.727+0000 7efe9f336700 1 -- 192.168.123.105:0/1549441269 shutdown_connections 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.727+0000 7efe9f336700 1 --2- 192.168.123.105:0/1549441269 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe98107ff0 0x7efe84005610 unknown :-1 s=CLOSED pgs=163 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.727+0000 7efe9f336700 1 -- 192.168.123.105:0/1549441269 >> 192.168.123.105:0/1549441269 conn(0x7efe9806cab0 msgr2=0x7efe9806cec0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.727+0000 7efe9f336700 1 -- 192.168.123.105:0/1549441269 shutdown_connections 2026-03-10T10:15:30.730 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.727+0000 7efe9f336700 1 -- 192.168.123.105:0/1549441269 wait complete. 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.728+0000 7efe9f336700 1 Processor -- start 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.728+0000 7efe9f336700 1 -- start start 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.728+0000 7efe9f336700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe98107ff0 0x7efe981ae140 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.728+0000 7efe9f336700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efe981ae680 0x7efe981b1f40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.728+0000 7efe9f336700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efe981b2480 con 0x7efe98107ff0 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.728+0000 7efe9f336700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efe981b25f0 con 0x7efe981ae680 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.728+0000 7efe9d0d2700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe98107ff0 0x7efe981ae140 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.728+0000 7efe9d0d2700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe98107ff0 0x7efe981ae140 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:46866/0 (socket says 192.168.123.105:46866) 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.728+0000 7efe9d0d2700 1 -- 192.168.123.105:0/1040623875 learned_addr learned my addr 192.168.123.105:0/1040623875 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.729+0000 7efe9d0d2700 1 -- 192.168.123.105:0/1040623875 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efe981ae680 msgr2=0x7efe981b1f40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.729+0000 7efe9d0d2700 1 --2- 192.168.123.105:0/1040623875 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efe981ae680 0x7efe981b1f40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.729+0000 7efe9d0d2700 1 -- 192.168.123.105:0/1040623875 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7efe9400f970 con 0x7efe98107ff0 2026-03-10T10:15:30.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.730+0000 7efe9d0d2700 1 --2- 192.168.123.105:0/1040623875 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe98107ff0 0x7efe981ae140 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7efe940127e0 tx=0x7efe94014f90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:30.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.730+0000 7efe8e7fc700 1 -- 192.168.123.105:0/1040623875 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efe94012a30 con 
0x7efe98107ff0 2026-03-10T10:15:30.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.730+0000 7efe9f336700 1 -- 192.168.123.105:0/1040623875 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7efe981b2870 con 0x7efe98107ff0 2026-03-10T10:15:30.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.730+0000 7efe9f336700 1 -- 192.168.123.105:0/1040623875 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7efe981b2dc0 con 0x7efe98107ff0 2026-03-10T10:15:30.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.730+0000 7efe9f336700 1 -- 192.168.123.105:0/1040623875 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7efe981adf50 con 0x7efe98107ff0 2026-03-10T10:15:30.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.731+0000 7efe8e7fc700 1 -- 192.168.123.105:0/1040623875 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7efe9401dd50 con 0x7efe98107ff0 2026-03-10T10:15:30.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.731+0000 7efe8e7fc700 1 -- 192.168.123.105:0/1040623875 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efe9401d650 con 0x7efe98107ff0 2026-03-10T10:15:30.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.731+0000 7efe8e7fc700 1 -- 192.168.123.105:0/1040623875 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 17) v1 ==== 45397+0+0 (secure 0 0 0) 0x7efe94024070 con 0x7efe98107ff0 2026-03-10T10:15:30.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.731+0000 7efe8e7fc700 1 --2- 192.168.123.105:0/1040623875 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efe7c038650 0x7efe7c03ab10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:30.732 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.732+0000 7efe8e7fc700 1 -- 192.168.123.105:0/1040623875 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7efe9404ea40 con 0x7efe98107ff0 2026-03-10T10:15:30.733 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.732+0000 7efe9c8d1700 1 --2- 192.168.123.105:0/1040623875 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efe7c038650 0x7efe7c03ab10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:30.733 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.732+0000 7efe9c8d1700 1 --2- 192.168.123.105:0/1040623875 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efe7c038650 0x7efe7c03ab10 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7efe880098e0 tx=0x7efe88006dd0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:30.735 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.734+0000 7efe8e7fc700 1 -- 192.168.123.105:0/1040623875 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7efe94011850 con 0x7efe98107ff0 2026-03-10T10:15:30.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.773+0000 7efe8e7fc700 1 -- 192.168.123.105:0/1040623875 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mgrmap(e 18) v1 ==== 90252+0+0 (secure 0 0 0) 0x7efe94004a70 con 0x7efe98107ff0 2026-03-10T10:15:30.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.894+0000 7efe9f336700 1 -- 192.168.123.105:0/1040623875 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7efe980623c0 con 0x7efe98107ff0 2026-03-10T10:15:30.895 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.894+0000 7efe8e7fc700 1 -- 192.168.123.105:0/1040623875 <== mon.0 v2:192.168.123.102:3300/0 8 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 2 v2) v1 ==== 95+0+1032 (secure 0 0 0) 0x7efe9402b420 con 0x7efe98107ff0 2026-03-10T10:15:30.898 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:15:30.898 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":2,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","modified":"2026-03-10T10:15:25.674350Z","created":"2026-03-10T10:14:07.630583Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-10T10:15:30.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.898+0000 7efe9f336700 1 -- 192.168.123.105:0/1040623875 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efe7c038650 msgr2=0x7efe7c03ab10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:30.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.898+0000 7efe9f336700 1 --2- 192.168.123.105:0/1040623875 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efe7c038650 
0x7efe7c03ab10 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7efe880098e0 tx=0x7efe88006dd0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.898+0000 7efe9f336700 1 -- 192.168.123.105:0/1040623875 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe98107ff0 msgr2=0x7efe981ae140 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:30.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.898+0000 7efe9f336700 1 --2- 192.168.123.105:0/1040623875 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe98107ff0 0x7efe981ae140 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7efe940127e0 tx=0x7efe94014f90 comp rx=0 tx=0).stop 2026-03-10T10:15:30.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.898+0000 7efe9f336700 1 -- 192.168.123.105:0/1040623875 shutdown_connections 2026-03-10T10:15:30.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.898+0000 7efe9f336700 1 --2- 192.168.123.105:0/1040623875 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe98107ff0 0x7efe981ae140 unknown :-1 s=CLOSED pgs=164 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.898+0000 7efe9f336700 1 --2- 192.168.123.105:0/1040623875 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efe7c038650 0x7efe7c03ab10 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.898+0000 7efe9f336700 1 --2- 192.168.123.105:0/1040623875 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efe981ae680 0x7efe981b1f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.898+0000 7efe9f336700 1 -- 
192.168.123.105:0/1040623875 >> 192.168.123.105:0/1040623875 conn(0x7efe9806cab0 msgr2=0x7efe9810fdf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:30.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.898+0000 7efe9f336700 1 -- 192.168.123.105:0/1040623875 shutdown_connections 2026-03-10T10:15:30.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:15:30.898+0000 7efe9f336700 1 -- 192.168.123.105:0/1040623875 wait complete. 2026-03-10T10:15:30.904 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 2 2026-03-10T10:15:30.947 INFO:tasks.cephadm:Generating final ceph.conf file... 2026-03-10T10:15:30.947 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph config generate-minimal-conf 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: mon.vm02 calling monitor election 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:15:31.031 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: mon.vm05 calling monitor election 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.? 192.168.123.105:0/614160158' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/crt"}]: dispatch 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: mon.vm02 is new leader, mons vm02,vm05 in quorum (ranks 0,1) 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: monmap e2: 2 mons at {vm02=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0],vm05=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0]} removed_ranks: {} disallowed_leaders: {} 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: fsmap 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: osdmap e5: 0 total, 0 up, 0 in 2026-03-10T10:15:31.031 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: mgrmap e17: vm02.zmavgl(active, since 16s) 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: Standby manager daemon vm05.coparq started 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.? 192.168.123.105:0/614160158' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.? 192.168.123.105:0/614160158' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/key"}]: dispatch 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.? 192.168.123.105:0/614160158' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: overall HEALTH_OK 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:31.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:31.089 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:15:31.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.327+0000 7fe24959c700 1 -- 192.168.123.102:0/2119534052 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe244102ea0 
msgr2=0x7fe244103280 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:31.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.327+0000 7fe24959c700 1 --2- 192.168.123.102:0/2119534052 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe244102ea0 0x7fe244103280 secure :-1 s=READY pgs=165 cs=0 l=1 rev1=1 crypto rx=0x7fe22c009b50 tx=0x7fe22c009e60 comp rx=0 tx=0).stop 2026-03-10T10:15:31.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.328+0000 7fe24959c700 1 -- 192.168.123.102:0/2119534052 shutdown_connections 2026-03-10T10:15:31.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.328+0000 7fe24959c700 1 --2- 192.168.123.102:0/2119534052 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe244102ea0 0x7fe244103280 unknown :-1 s=CLOSED pgs=165 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:31.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.328+0000 7fe24959c700 1 -- 192.168.123.102:0/2119534052 >> 192.168.123.102:0/2119534052 conn(0x7fe2440fe750 msgr2=0x7fe244100b70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:31.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.328+0000 7fe24959c700 1 -- 192.168.123.102:0/2119534052 shutdown_connections 2026-03-10T10:15:31.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.328+0000 7fe24959c700 1 -- 192.168.123.102:0/2119534052 wait complete. 
2026-03-10T10:15:31.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.328+0000 7fe24959c700 1 Processor -- start
2026-03-10T10:15:31.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.328+0000 7fe24959c700 1 -- start start
2026-03-10T10:15:31.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.329+0000 7fe24959c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe244102ea0 0x7fe244198630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:31.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.329+0000 7fe24959c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe244198b70 0x7fe24419cdf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:31.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.329+0000 7fe24959c700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe24419d330 con 0x7fe244198b70
2026-03-10T10:15:31.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.329+0000 7fe24959c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe24419d4a0 con 0x7fe244102ea0
2026-03-10T10:15:31.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.329+0000 7fe2427fc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe244198b70 0x7fe24419cdf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:31.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.329+0000 7fe2427fc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe244198b70 0x7fe24419cdf0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:50580/0 (socket says 192.168.123.102:50580)
2026-03-10T10:15:31.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.329+0000 7fe2427fc700 1 -- 192.168.123.102:0/1618330085 learned_addr learned my addr 192.168.123.102:0/1618330085 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:15:31.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.329+0000 7fe2427fc700 1 -- 192.168.123.102:0/1618330085 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe244102ea0 msgr2=0x7fe244198630 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down
2026-03-10T10:15:31.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.329+0000 7fe2427fc700 1 --2- 192.168.123.102:0/1618330085 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe244102ea0 0x7fe244198630 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:31.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.329+0000 7fe2427fc700 1 -- 192.168.123.102:0/1618330085 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe22c0097e0 con 0x7fe244198b70
2026-03-10T10:15:31.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.329+0000 7fe2427fc700 1 --2- 192.168.123.102:0/1618330085 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe244198b70 0x7fe24419cdf0 secure :-1 s=READY pgs=166 cs=0 l=1 rev1=1 crypto rx=0x7fe23400b770 tx=0x7fe23400bb30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:31.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.330+0000 7fe23bfff700 1 -- 192.168.123.102:0/1618330085 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe23400f820 con 0x7fe244198b70
2026-03-10T10:15:31.332 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.330+0000 7fe23bfff700 1 -- 192.168.123.102:0/1618330085 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fe23400fe60 con 0x7fe244198b70
2026-03-10T10:15:31.332 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.330+0000 7fe23bfff700 1 -- 192.168.123.102:0/1618330085 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe23400d400 con 0x7fe244198b70
2026-03-10T10:15:31.332 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.330+0000 7fe24959c700 1 -- 192.168.123.102:0/1618330085 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe24419d780 con 0x7fe244198b70
2026-03-10T10:15:31.332 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.330+0000 7fe24959c700 1 -- 192.168.123.102:0/1618330085 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe2441a3090 con 0x7fe244198b70
2026-03-10T10:15:31.335 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.331+0000 7fe24959c700 1 -- 192.168.123.102:0/1618330085 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe244198440 con 0x7fe244198b70
2026-03-10T10:15:31.335 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.332+0000 7fe23bfff700 1 -- 192.168.123.102:0/1618330085 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 18) v1 ==== 90252+0+0 (secure 0 0 0) 0x7fe23401e030 con 0x7fe244198b70
2026-03-10T10:15:31.335 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.333+0000 7fe23bfff700 1 --2- 192.168.123.102:0/1618330085 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe23006c5b0 0x7fe23006ea70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:31.335 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.333+0000 7fe23bfff700 1 -- 192.168.123.102:0/1618330085 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7fe23408a180 con 0x7fe244198b70
2026-03-10T10:15:31.335 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.334+0000 7fe23bfff700 1 -- 192.168.123.102:0/1618330085 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fe234055b50 con 0x7fe244198b70
2026-03-10T10:15:31.335 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.335+0000 7fe242ffd700 1 --2- 192.168.123.102:0/1618330085 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe23006c5b0 0x7fe23006ea70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:31.336 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.335+0000 7fe242ffd700 1 --2- 192.168.123.102:0/1618330085 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe23006c5b0 0x7fe23006ea70 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7fe22c005e20 tx=0x7fe22c005fb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:31.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.433+0000 7fe24959c700 1 -- 192.168.123.102:0/1618330085 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) v1 -- 0x7fe24404fa20 con 0x7fe244198b70
2026-03-10T10:15:31.437 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.436+0000 7fe23bfff700 1 -- 192.168.123.102:0/1618330085 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v10) v1 ==== 76+0+235 (secure 0 0 0) 0x7fe234059170 con 0x7fe244198b70
2026-03-10T10:15:31.437 INFO:teuthology.orchestra.run.vm02.stdout:# minimal ceph.conf for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d
2026-03-10T10:15:31.437 INFO:teuthology.orchestra.run.vm02.stdout:[global]
2026-03-10T10:15:31.437 INFO:teuthology.orchestra.run.vm02.stdout: fsid = d0ab5dc6-1c69-11f1-8798-3b5e87c3385d
2026-03-10T10:15:31.437 INFO:teuthology.orchestra.run.vm02.stdout: mon_host = [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0]
2026-03-10T10:15:31.439 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.438+0000 7fe24959c700 1 -- 192.168.123.102:0/1618330085 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe23006c5b0 msgr2=0x7fe23006ea70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:31.439 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.438+0000 7fe24959c700 1 --2- 192.168.123.102:0/1618330085 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe23006c5b0 0x7fe23006ea70 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7fe22c005e20 tx=0x7fe22c005fb0 comp rx=0 tx=0).stop
2026-03-10T10:15:31.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.440+0000 7fe24959c700 1 -- 192.168.123.102:0/1618330085 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe244198b70 msgr2=0x7fe24419cdf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:31.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.440+0000 7fe24959c700 1 --2- 192.168.123.102:0/1618330085 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe244198b70 0x7fe24419cdf0 secure :-1 s=READY pgs=166 cs=0 l=1 rev1=1 crypto rx=0x7fe23400b770 tx=0x7fe23400bb30 comp rx=0 tx=0).stop
2026-03-10T10:15:31.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.441+0000 7fe24959c700 1 -- 192.168.123.102:0/1618330085 shutdown_connections
2026-03-10T10:15:31.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.441+0000 7fe24959c700 1 --2- 192.168.123.102:0/1618330085 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe23006c5b0 0x7fe23006ea70 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:31.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.441+0000 7fe24959c700 1 --2- 192.168.123.102:0/1618330085 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe244102ea0 0x7fe244198630 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:31.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.441+0000 7fe24959c700 1 --2- 192.168.123.102:0/1618330085 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe244198b70 0x7fe24419cdf0 unknown :-1 s=CLOSED pgs=166 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:31.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.441+0000 7fe24959c700 1 -- 192.168.123.102:0/1618330085 >> 192.168.123.102:0/1618330085 conn(0x7fe2440fe750 msgr2=0x7fe244108a00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:31.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.441+0000 7fe24959c700 1 -- 192.168.123.102:0/1618330085 shutdown_connections
2026-03-10T10:15:31.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:31.442+0000 7fe24959c700 1 -- 192.168.123.102:0/1618330085 wait complete.
2026-03-10T10:15:31.498 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
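The "Distributing (final) config" step that follows pushes each file to the remote hosts by piping the content into `sudo dd of=<path>`, as the `sudo dd of=/etc/ceph/ceph.conf` commands below show. A minimal local sketch of that write-file pattern (hypothetical temp-dir path standing in for `/etc/ceph`, no sudo):

```shell
#!/bin/sh
# Sketch of teuthology's remote write-file pattern: pipe content into dd.
# The destination directory here is a hypothetical stand-in for /etc/ceph.
set -e
conf_dir=$(mktemp -d)
printf '[global]\n fsid = 00000000-0000-0000-0000-000000000000\n' \
  | dd of="$conf_dir/ceph.conf" status=none   # teuthology runs dd under sudo
cat "$conf_dir/ceph.conf"
```

Using `dd of=` rather than a shell redirection lets the elevated `sudo dd` process open the destination, so the unprivileged SSH session never needs write access to `/etc/ceph` itself.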
2026-03-10T10:15:31.499 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T10:15:31.499 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T10:15:31.523 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T10:15:31.523 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T10:15:31.587 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T10:15:31.588 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T10:15:31.615 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T10:15:31.616 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T10:15:31.679 INFO:tasks.cephadm:Deploying OSDs...
2026-03-10T10:15:31.679 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T10:15:31.679 DEBUG:teuthology.orchestra.run.vm02:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T10:15:31.702 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:15:31.702 DEBUG:teuthology.orchestra.run.vm02:> ls /dev/[sv]d?
2026-03-10T10:15:31.759 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vda
2026-03-10T10:15:31.759 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdb
2026-03-10T10:15:31.759 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdc
2026-03-10T10:15:31.759 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdd
2026-03-10T10:15:31.759 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vde
2026-03-10T10:15:31.759 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T10:15:31.759 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T10:15:31.759 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdb
2026-03-10T10:15:31.822 INFO:teuthology.orchestra.run.vm02.stdout:  File: /dev/vdb
2026-03-10T10:15:31.822 INFO:teuthology.orchestra.run.vm02.stdout:  Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T10:15:31.822 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10
2026-03-10T10:15:31.822 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:15:31.822 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T10:15:31.822 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 10:14:41.804525793 +0000
2026-03-10T10:15:31.822 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 10:09:27.318000000 +0000
2026-03-10T10:15:31.822 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 10:09:27.318000000 +0000
2026-03-10T10:15:31.822 INFO:teuthology.orchestra.run.vm02.stdout:  Birth: 2026-03-10 10:09:25.290000000 +0000
2026-03-10T10:15:31.822 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T10:15:31.892 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in
2026-03-10T10:15:31.892 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out
2026-03-10T10:15:31.892 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000160601 s, 3.2 MB/s
2026-03-10T10:15:31.893 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T10:15:31.950 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdc
2026-03-10T10:15:32.016 INFO:teuthology.orchestra.run.vm02.stdout:  File: /dev/vdc
2026-03-10T10:15:32.017 INFO:teuthology.orchestra.run.vm02.stdout:  Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T10:15:32.017 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20
2026-03-10T10:15:32.017 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:15:32.017 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T10:15:32.017 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 10:14:41.862525873 +0000
2026-03-10T10:15:32.017 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 10:09:27.290000000 +0000
2026-03-10T10:15:32.017 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 10:09:27.290000000 +0000
2026-03-10T10:15:32.017 INFO:teuthology.orchestra.run.vm02.stdout:  Birth: 2026-03-10 10:09:25.294000000 +0000
2026-03-10T10:15:32.017 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T10:15:32.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:31 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:15:32.089 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in
2026-03-10T10:15:32.090 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out
2026-03-10T10:15:32.090 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000199685 s, 2.6 MB/s
2026-03-10T10:15:32.091 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T10:15:32.097 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:31 vm02 ceph-mon[50200]: mgrmap e18: vm02.zmavgl(active, since 16s), standbys: vm05.coparq
2026-03-10T10:15:32.097 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:31 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": "vm05.coparq", "id": "vm05.coparq"}]: dispatch
2026-03-10T10:15:32.097 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:31 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:15:32.098 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:31 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/1040623875' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T10:15:32.098 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:31 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/1618330085' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:15:32.098 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:31 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T10:15:32.098 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:31 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:32.098 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:31 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:32.098 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:31 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:15:32.098 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:31 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:15:32.119 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdd
2026-03-10T10:15:32.177 INFO:teuthology.orchestra.run.vm02.stdout:  File: /dev/vdd
2026-03-10T10:15:32.177 INFO:teuthology.orchestra.run.vm02.stdout:  Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T10:15:32.177 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T10:15:32.177 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:15:32.177 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T10:15:32.177 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 10:14:41.921525954 +0000
2026-03-10T10:15:32.177 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 10:09:27.292000000 +0000
2026-03-10T10:15:32.177 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 10:09:27.292000000 +0000
2026-03-10T10:15:32.177 INFO:teuthology.orchestra.run.vm02.stdout:  Birth: 2026-03-10 10:09:25.299000000 +0000
2026-03-10T10:15:32.177 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T10:15:32.246 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in
2026-03-10T10:15:32.246 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out
2026-03-10T10:15:32.246 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000126407 s, 4.1 MB/s
2026-03-10T10:15:32.247 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T10:15:32.304 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vde
2026-03-10T10:15:32.361 INFO:teuthology.orchestra.run.vm02.stdout:  File: /dev/vde
2026-03-10T10:15:32.361 INFO:teuthology.orchestra.run.vm02.stdout:  Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T10:15:32.361 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T10:15:32.361 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:15:32.361 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T10:15:32.361 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 10:14:41.975526029 +0000
2026-03-10T10:15:32.361 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 10:09:27.282000000 +0000
2026-03-10T10:15:32.361 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 10:09:27.282000000 +0000
2026-03-10T10:15:32.361 INFO:teuthology.orchestra.run.vm02.stdout:  Birth: 2026-03-10 10:09:25.326000000 +0000
2026-03-10T10:15:32.361 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T10:15:32.427 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in
2026-03-10T10:15:32.427 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out
2026-03-10T10:15:32.427 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000234719 s, 2.2 MB/s
2026-03-10T10:15:32.428 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T10:15:32.486 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T10:15:32.486 DEBUG:teuthology.orchestra.run.vm05:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T10:15:32.499 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:15:32.500 DEBUG:teuthology.orchestra.run.vm05:> ls /dev/[sv]d?
2026-03-10T10:15:32.556 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vda
2026-03-10T10:15:32.556 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdb
2026-03-10T10:15:32.556 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdc
2026-03-10T10:15:32.556 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdd
2026-03-10T10:15:32.556 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vde
2026-03-10T10:15:32.556 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T10:15:32.556 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T10:15:32.556 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdb
2026-03-10T10:15:32.615 INFO:teuthology.orchestra.run.vm05.stdout:  File: /dev/vdb
2026-03-10T10:15:32.615 INFO:teuthology.orchestra.run.vm05.stdout:  Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T10:15:32.615 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 223 Links: 1 Device type: fc,10
2026-03-10T10:15:32.615 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:15:32.615 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T10:15:32.615 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 10:15:16.924650176 +0000
2026-03-10T10:15:32.615 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 10:09:59.077000000 +0000
2026-03-10T10:15:32.615 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 10:09:59.077000000 +0000
2026-03-10T10:15:32.615 INFO:teuthology.orchestra.run.vm05.stdout:  Birth: 2026-03-10 10:09:57.249000000 +0000
2026-03-10T10:15:32.615 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T10:15:32.677 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-10T10:15:32.677 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-10T10:15:32.677 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000143549 s, 3.6 MB/s
2026-03-10T10:15:32.678 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T10:15:32.734 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdc
2026-03-10T10:15:32.792 INFO:teuthology.orchestra.run.vm05.stdout:  File: /dev/vdc
2026-03-10T10:15:32.792 INFO:teuthology.orchestra.run.vm05.stdout:  Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T10:15:32.792 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 244 Links: 1 Device type: fc,20
2026-03-10T10:15:32.792 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:15:32.792 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T10:15:32.792 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 10:15:16.991650282 +0000
2026-03-10T10:15:32.792 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 10:09:59.078000000 +0000
2026-03-10T10:15:32.792 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 10:09:59.078000000 +0000
2026-03-10T10:15:32.792 INFO:teuthology.orchestra.run.vm05.stdout:  Birth: 2026-03-10 10:09:57.259000000 +0000
2026-03-10T10:15:32.792 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T10:15:32.857 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-10T10:15:32.857 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-10T10:15:32.857 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000164207 s, 3.1 MB/s
2026-03-10T10:15:32.858 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T10:15:32.914 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdd
2026-03-10T10:15:32.972 INFO:teuthology.orchestra.run.vm05.stdout:  File: /dev/vdd
2026-03-10T10:15:32.972 INFO:teuthology.orchestra.run.vm05.stdout:  Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T10:15:32.972 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T10:15:32.972 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:15:32.972 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T10:15:32.972 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 10:15:17.039650358 +0000
2026-03-10T10:15:32.972 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 10:09:59.082000000 +0000
2026-03-10T10:15:32.972 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 10:09:59.082000000 +0000
2026-03-10T10:15:32.972 INFO:teuthology.orchestra.run.vm05.stdout:  Birth: 2026-03-10 10:09:57.267000000 +0000
2026-03-10T10:15:32.972 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T10:15:33.037 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-10T10:15:33.037 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-10T10:15:33.037 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000128159 s, 4.0 MB/s
2026-03-10T10:15:33.038 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T10:15:33.095 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vde
2026-03-10T10:15:33.152 INFO:teuthology.orchestra.run.vm05.stdout:  File: /dev/vde
2026-03-10T10:15:33.153 INFO:teuthology.orchestra.run.vm05.stdout:  Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T10:15:33.153 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T10:15:33.153 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:15:33.153 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T10:15:33.153 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 10:15:17.096650448 +0000
2026-03-10T10:15:33.153 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 10:09:59.092000000 +0000
2026-03-10T10:15:33.153 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 10:09:59.092000000 +0000
2026-03-10T10:15:33.153 INFO:teuthology.orchestra.run.vm05.stdout:  Birth: 2026-03-10 10:09:57.304000000 +0000
2026-03-10T10:15:33.153 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T10:15:33.216 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-10T10:15:33.216 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-10T10:15:33.216 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000184195 s, 2.8 MB/s
2026-03-10T10:15:33.217 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T10:15:33.278 INFO:tasks.cephadm:Deploying osd.0 on vm02 with /dev/vde...
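The scratch-device scan above follows a fixed recipe on each host: list `/dev/[sv]d?`, drop the root device, then for each survivor run `stat`, read one sector with `dd`, and confirm the device is not mounted. A condensed sketch of the filtering step, with the device list hard-coded so it runs anywhere (the real code gets the list from the remote host):

```shell
#!/bin/sh
# Sketch of teuthology's scratch-device vetting seen in the log.
# Assumption: the candidate list and root device are hard-coded here;
# teuthology derives them from `ls /dev/[sv]d?` and the mounted root fs.
root_dev=/dev/vda
candidates="/dev/vda /dev/vdb /dev/vdc /dev/vdd /dev/vde"
devs=""
for d in $candidates; do
    # "WARNING:teuthology.misc:Removing root device" step:
    [ "$d" = "$root_dev" ] && continue
    devs="$devs $d"
    # On a real host, each survivor is then sanity-checked with:
    #   stat "$d"                                    # exists, block special
    #   sudo dd if="$d" of=/dev/null count=1         # first sector readable
    #   ! mount | grep -v devtmpfs | grep -q "$d"    # not mounted anywhere
done
echo "devs=$devs"
```

The `grep -v devtmpfs` in the mount check excludes the devtmpfs mount itself, so only real filesystem mounts on the candidate device disqualify it.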
2026-03-10T10:15:33.278 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- lvm zap /dev/vde
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: Updating vm02:/etc/ceph/ceph.conf
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: Updating vm05:/etc/ceph/ceph.conf
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: Reconfiguring mon.vm02 (unknown last config time)...
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T10:15:33.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:15:33.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: Reconfiguring daemon mon.vm02 on vm02
2026-03-10T10:15:33.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm02.zmavgl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T10:15:33.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T10:15:33.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:15:33.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T10:15:33.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: Updating vm02:/etc/ceph/ceph.conf
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: Updating vm05:/etc/ceph/ceph.conf
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: Reconfiguring mon.vm02 (unknown last config time)...
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: Reconfiguring daemon mon.vm02 on vm02
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm02.zmavgl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T10:15:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T10:15:33.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:15:33.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:33.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T10:15:33.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:15:33.465 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config
2026-03-10T10:15:33.986 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:15:34.003 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph orch daemon add osd vm02:/dev/vde
2026-03-10T10:15:34.219 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config
2026-03-10T10:15:34.497 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:34 vm02 ceph-mon[50200]: Reconfiguring mgr.vm02.zmavgl (unknown last config time)...
2026-03-10T10:15:34.498 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:34 vm02 ceph-mon[50200]: Reconfiguring daemon mgr.vm02.zmavgl on vm02
2026-03-10T10:15:34.498 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:34 vm02 ceph-mon[50200]: Reconfiguring ceph-exporter.vm02 (monmap changed)...
2026-03-10T10:15:34.498 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:34 vm02 ceph-mon[50200]: Reconfiguring daemon ceph-exporter.vm02 on vm02
2026-03-10T10:15:34.498 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:34 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:34.498 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:34 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:15:34.498 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:34 vm02 ceph-mon[50200]: Reconfiguring crash.vm02 (monmap changed)...
2026-03-10T10:15:34.498 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:34 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm02", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:15:34.498 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:34 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:34.498 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:34 vm02 ceph-mon[50200]: Reconfiguring daemon crash.vm02 on vm02 2026-03-10T10:15:34.498 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:34 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:34.498 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:34 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:34.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.503+0000 7ff518e6e700 1 -- 192.168.123.102:0/3674427917 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514101a90 msgr2=0x7ff514103e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:34.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.503+0000 7ff518e6e700 1 --2- 192.168.123.102:0/3674427917 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514101a90 0x7ff514103e80 secure :-1 s=READY pgs=167 cs=0 l=1 rev1=1 crypto rx=0x7ff504009b50 tx=0x7ff504009e60 comp rx=0 tx=0).stop 2026-03-10T10:15:34.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.507+0000 7ff518e6e700 1 -- 192.168.123.102:0/3674427917 shutdown_connections 2026-03-10T10:15:34.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.507+0000 7ff518e6e700 1 --2- 192.168.123.102:0/3674427917 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5141043c0 0x7ff5141067b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.507+0000 7ff518e6e700 1 --2- 192.168.123.102:0/3674427917 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514101a90 0x7ff514103e80 unknown :-1 s=CLOSED pgs=167 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.507+0000 7ff518e6e700 1 -- 192.168.123.102:0/3674427917 >> 192.168.123.102:0/3674427917 conn(0x7ff5140fb3c0 msgr2=0x7ff5140fd840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:34.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.507+0000 7ff518e6e700 1 -- 192.168.123.102:0/3674427917 shutdown_connections 2026-03-10T10:15:34.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.507+0000 7ff518e6e700 1 -- 192.168.123.102:0/3674427917 wait complete. 
2026-03-10T10:15:34.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.508+0000 7ff518e6e700 1 Processor -- start 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.508+0000 7ff518e6e700 1 -- start start 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.508+0000 7ff518e6e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514101a90 0x7ff51410a650 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.508+0000 7ff518e6e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5141043c0 0x7ff51410ab90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.508+0000 7ff518e6e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff51410b1b0 con 0x7ff514101a90 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.508+0000 7ff518e6e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff5141070f0 con 0x7ff5141043c0 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.508+0000 7ff511d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5141043c0 0x7ff51410ab90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.508+0000 7ff51259c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514101a90 0x7ff51410a650 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.508+0000 7ff511d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5141043c0 0x7ff51410ab90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:42336/0 (socket says 192.168.123.102:42336) 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.508+0000 7ff511d9b700 1 -- 192.168.123.102:0/37633917 learned_addr learned my addr 192.168.123.102:0/37633917 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.509+0000 7ff511d9b700 1 -- 192.168.123.102:0/37633917 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5141043c0 msgr2=0x7ff51410ab90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 12 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.509+0000 7ff511d9b700 1 -- 192.168.123.102:0/37633917 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5141043c0 msgr2=0x7ff51410ab90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.509+0000 7ff511d9b700 1 --2- 192.168.123.102:0/37633917 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5141043c0 0x7ff51410ab90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.509+0000 7ff511d9b700 1 --2- 192.168.123.102:0/37633917 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5141043c0 0x7ff51410ab90 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 
0.200000 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.509+0000 7ff51259c700 1 -- 192.168.123.102:0/37633917 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5141043c0 msgr2=0x7ff51410ab90 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:15:34.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.509+0000 7ff51259c700 1 --2- 192.168.123.102:0/37633917 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5141043c0 0x7ff51410ab90 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.510 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.509+0000 7ff51259c700 1 -- 192.168.123.102:0/37633917 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff5040097e0 con 0x7ff514101a90 2026-03-10T10:15:34.510 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.509+0000 7ff51259c700 1 --2- 192.168.123.102:0/37633917 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514101a90 0x7ff51410a650 secure :-1 s=READY pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7ff504009b20 tx=0x7ff504004f70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:34.510 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.509+0000 7ff5037fe700 1 -- 192.168.123.102:0/37633917 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff50401d070 con 0x7ff514101a90 2026-03-10T10:15:34.510 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.510+0000 7ff518e6e700 1 -- 192.168.123.102:0/37633917 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff514107370 con 0x7ff514101a90 2026-03-10T10:15:34.512 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.510+0000 7ff518e6e700 1 -- 192.168.123.102:0/37633917 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff514107860 con 0x7ff514101a90 2026-03-10T10:15:34.512 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.512+0000 7ff5037fe700 1 -- 192.168.123.102:0/37633917 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff50400bc10 con 0x7ff514101a90 2026-03-10T10:15:34.512 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.512+0000 7ff5037fe700 1 -- 192.168.123.102:0/37633917 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff50400f700 con 0x7ff514101a90 2026-03-10T10:15:34.512 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.512+0000 7ff5037fe700 1 -- 192.168.123.102:0/37633917 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 18) v1 ==== 90252+0+0 (secure 0 0 0) 0x7ff50400f940 con 0x7ff514101a90 2026-03-10T10:15:34.513 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.512+0000 7ff5037fe700 1 --2- 192.168.123.102:0/37633917 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff4fc06c650 0x7ff4fc06eb10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:34.513 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.512+0000 7ff5037fe700 1 -- 192.168.123.102:0/37633917 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7ff50408ce40 con 0x7ff514101a90 2026-03-10T10:15:34.513 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.512+0000 7ff511d9b700 1 --2- 192.168.123.102:0/37633917 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff4fc06c650 0x7ff4fc06eb10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:34.514 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.513+0000 7ff511d9b700 1 --2- 
192.168.123.102:0/37633917 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff4fc06c650 0x7ff4fc06eb10 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7ff508005d90 tx=0x7ff50800b2c0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:34.514 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.513+0000 7ff5017fa700 1 -- 192.168.123.102:0/37633917 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff4f4005320 con 0x7ff514101a90 2026-03-10T10:15:34.518 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.518+0000 7ff5037fe700 1 -- 192.168.123.102:0/37633917 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7ff504027080 con 0x7ff514101a90 2026-03-10T10:15:34.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:34.668+0000 7ff5017fa700 1 -- 192.168.123.102:0/37633917 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}) v1 -- 0x7ff4f4000bf0 con 0x7ff4fc06c650 2026-03-10T10:15:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:34 vm05 ceph-mon[59051]: Reconfiguring mgr.vm02.zmavgl (unknown last config time)... 2026-03-10T10:15:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:34 vm05 ceph-mon[59051]: Reconfiguring daemon mgr.vm02.zmavgl on vm02 2026-03-10T10:15:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:34 vm05 ceph-mon[59051]: Reconfiguring ceph-exporter.vm02 (monmap changed)... 
2026-03-10T10:15:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:34 vm05 ceph-mon[59051]: Reconfiguring daemon ceph-exporter.vm02 on vm02 2026-03-10T10:15:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:34 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:34 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:34 vm05 ceph-mon[59051]: Reconfiguring crash.vm02 (monmap changed)... 2026-03-10T10:15:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:34 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm02", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:15:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:34 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:34 vm05 ceph-mon[59051]: Reconfiguring daemon crash.vm02 on vm02 2026-03-10T10:15:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:34 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:34 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:35.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:35 vm02 ceph-mon[50200]: Reconfiguring alertmanager.vm02 (dependencies changed)... 
2026-03-10T10:15:35.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:35 vm02 ceph-mon[50200]: Reconfiguring daemon alertmanager.vm02 on vm02 2026-03-10T10:15:35.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:35 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:15:35.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:35 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:15:35.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:35 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:35.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:35 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:35.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:35 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:35.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:35 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:35 vm05 ceph-mon[59051]: Reconfiguring alertmanager.vm02 (dependencies changed)... 
2026-03-10T10:15:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:35 vm05 ceph-mon[59051]: Reconfiguring daemon alertmanager.vm02 on vm02 2026-03-10T10:15:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:35 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:15:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:35 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:15:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:35 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:35 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:35 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:35 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:36 vm02 ceph-mon[50200]: from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:15:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:36 vm02 ceph-mon[50200]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:36 vm02 ceph-mon[50200]: Reconfiguring grafana.vm02 (dependencies changed)... 
2026-03-10T10:15:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:36 vm02 ceph-mon[50200]: Reconfiguring daemon grafana.vm02 on vm02 2026-03-10T10:15:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:36 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3827769065' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f90b5cc0-11ce-4915-a46a-c23fb52a4ba2"}]: dispatch 2026-03-10T10:15:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:36 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3827769065' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f90b5cc0-11ce-4915-a46a-c23fb52a4ba2"}]': finished 2026-03-10T10:15:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:36 vm02 ceph-mon[50200]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T10:15:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:36 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:15:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:36 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/2352302081' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:15:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:36 vm05 ceph-mon[59051]: from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:15:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:36 vm05 ceph-mon[59051]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:36 vm05 ceph-mon[59051]: Reconfiguring grafana.vm02 (dependencies changed)... 
2026-03-10T10:15:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:36 vm05 ceph-mon[59051]: Reconfiguring daemon grafana.vm02 on vm02 2026-03-10T10:15:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:36 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/3827769065' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f90b5cc0-11ce-4915-a46a-c23fb52a4ba2"}]: dispatch 2026-03-10T10:15:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:36 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/3827769065' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f90b5cc0-11ce-4915-a46a-c23fb52a4ba2"}]': finished 2026-03-10T10:15:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:36 vm05 ceph-mon[59051]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T10:15:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:36 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:15:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:36 vm05 ceph-mon[59051]: from='client.? 
192.168.123.102:0/2352302081' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:15:38.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:38 vm02 ceph-mon[50200]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:38.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:38 vm05 ceph-mon[59051]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:40.697 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:40 vm02 ceph-mon[50200]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:40.697 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:40 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:40.697 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:40 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:40.697 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:40 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T10:15:40.697 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:40 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:40.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:40 vm05 ceph-mon[59051]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:40.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:40 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:40.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:40 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:40.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:40 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' 
entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T10:15:40.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:40 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:41.496 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:41 vm02 ceph-mon[50200]: Reconfiguring prometheus.vm02 (dependencies changed)... 2026-03-10T10:15:41.496 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:41 vm02 ceph-mon[50200]: Reconfiguring daemon prometheus.vm02 on vm02 2026-03-10T10:15:41.496 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:41 vm02 ceph-mon[50200]: Deploying daemon osd.0 on vm02 2026-03-10T10:15:41.496 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:41 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:41.497 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:41 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:41.497 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:41 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:15:41.497 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:41 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:41.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:41 vm05 ceph-mon[59051]: Reconfiguring prometheus.vm02 (dependencies changed)... 
2026-03-10T10:15:41.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:41 vm05 ceph-mon[59051]: Reconfiguring daemon prometheus.vm02 on vm02 2026-03-10T10:15:41.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:41 vm05 ceph-mon[59051]: Deploying daemon osd.0 on vm02 2026-03-10T10:15:41.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:41 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:41.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:41 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:41.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:41 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:15:41.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:41 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: Reconfiguring ceph-exporter.vm05 (monmap changed)... 
2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: Reconfiguring daemon ceph-exporter.vm05 on vm05 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.coparq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 
192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:15:42.487 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:42 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:42.511 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: Reconfiguring ceph-exporter.vm05 (monmap changed)... 
2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: Reconfiguring daemon ceph-exporter.vm05 on vm05 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.coparq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 
192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:15:42.512 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:42 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:43.504 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: Reconfiguring crash.vm05 (monmap changed)... 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: Reconfiguring daemon crash.vm05 on vm05 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: Reconfiguring mgr.vm05.coparq (monmap changed)... 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: Reconfiguring daemon mgr.vm05.coparq on vm05 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: Reconfiguring mon.vm05 (monmap changed)... 
2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: Reconfiguring daemon mon.vm05 on vm05 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-10T10:15:43.505 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.505 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:43 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: Reconfiguring crash.vm05 (monmap changed)... 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: Reconfiguring daemon crash.vm05 on vm05 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: Reconfiguring mgr.vm05.coparq (monmap changed)... 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: Reconfiguring daemon mgr.vm05.coparq on vm05 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: Reconfiguring mon.vm05 (monmap changed)... 
2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: Reconfiguring daemon mon.vm05 on vm05 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-10T10:15:43.681 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-10T10:15:43.681 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:43.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:43 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:43.684 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 0 on host 'vm02' 2026-03-10T10:15:43.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:43.680+0000 7ff5037fe700 1 -- 192.168.123.102:0/37633917 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7ff4f4000bf0 con 0x7ff4fc06c650 2026-03-10T10:15:43.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:43.683+0000 7ff5017fa700 1 -- 192.168.123.102:0/37633917 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff4fc06c650 msgr2=0x7ff4fc06eb10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:43.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:43.683+0000 7ff5017fa700 1 --2- 192.168.123.102:0/37633917 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff4fc06c650 0x7ff4fc06eb10 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto 
rx=0x7ff508005d90 tx=0x7ff50800b2c0 comp rx=0 tx=0).stop 2026-03-10T10:15:43.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:43.683+0000 7ff5017fa700 1 -- 192.168.123.102:0/37633917 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514101a90 msgr2=0x7ff51410a650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:43.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:43.683+0000 7ff5017fa700 1 --2- 192.168.123.102:0/37633917 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514101a90 0x7ff51410a650 secure :-1 s=READY pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7ff504009b20 tx=0x7ff504004f70 comp rx=0 tx=0).stop 2026-03-10T10:15:43.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:43.683+0000 7ff5017fa700 1 -- 192.168.123.102:0/37633917 shutdown_connections 2026-03-10T10:15:43.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:43.683+0000 7ff5017fa700 1 --2- 192.168.123.102:0/37633917 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514101a90 0x7ff51410a650 unknown :-1 s=CLOSED pgs=168 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:43.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:43.683+0000 7ff5017fa700 1 --2- 192.168.123.102:0/37633917 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff4fc06c650 0x7ff4fc06eb10 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:43.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:43.683+0000 7ff5017fa700 1 --2- 192.168.123.102:0/37633917 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5141043c0 0x7ff51410ab90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:43.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:43.683+0000 7ff5017fa700 1 -- 192.168.123.102:0/37633917 >> 192.168.123.102:0/37633917 conn(0x7ff5140fb3c0 
msgr2=0x7ff5140fd840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:43.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:43.683+0000 7ff5017fa700 1 -- 192.168.123.102:0/37633917 shutdown_connections 2026-03-10T10:15:43.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:43.683+0000 7ff5017fa700 1 -- 192.168.123.102:0/37633917 wait complete. 2026-03-10T10:15:43.741 DEBUG:teuthology.orchestra.run.vm02:osd.0> sudo journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.0.service 2026-03-10T10:15:43.742 INFO:tasks.cephadm:Deploying osd.1 on vm02 with /dev/vdd... 2026-03-10T10:15:43.742 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- lvm zap /dev/vdd 2026-03-10T10:15:43.950 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:15:44.535 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:15:44.548 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph orch daemon add osd vm02:/dev/vdd 2026-03-10T10:15:44.569 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:44.569 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:44.569 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-10T10:15:44.569 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:44.570 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-10T10:15:44.570 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:44.570 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-10T10:15:44.570 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:44.570 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:44.570 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:44.570 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:44.570 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:44.570 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:44 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:44.732 
INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:44.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:44 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:45.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.062+0000 7f056bfff700 1 -- 192.168.123.102:0/774304361 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f056c10a700 msgr2=0x7f056c10cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:45.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.062+0000 7f056bfff700 1 --2- 192.168.123.102:0/774304361 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f056c10a700 0x7f056c10cb90 secure :-1 s=READY pgs=175 cs=0 l=1 rev1=1 crypto rx=0x7f0560009b50 tx=0x7f0560009e60 comp rx=0 tx=0).stop 2026-03-10T10:15:45.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.065+0000 7f056bfff700 1 -- 192.168.123.102:0/774304361 shutdown_connections 2026-03-10T10:15:45.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.065+0000 
7f056bfff700 1 --2- 192.168.123.102:0/774304361 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f056c10a700 0x7f056c10cb90 unknown :-1 s=CLOSED pgs=175 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:45.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.065+0000 7f056bfff700 1 --2- 192.168.123.102:0/774304361 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f056c107d90 0x7f056c10a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:45.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.065+0000 7f056bfff700 1 -- 192.168.123.102:0/774304361 >> 192.168.123.102:0/774304361 conn(0x7f056c06daa0 msgr2=0x7f056c06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:45.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.066+0000 7f056bfff700 1 -- 192.168.123.102:0/774304361 shutdown_connections 2026-03-10T10:15:45.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.066+0000 7f056bfff700 1 -- 192.168.123.102:0/774304361 wait complete. 
2026-03-10T10:15:45.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.067+0000 7f056bfff700 1 Processor -- start 2026-03-10T10:15:45.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.067+0000 7f056bfff700 1 -- start start 2026-03-10T10:15:45.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.067+0000 7f056bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f056c107d90 0x7f056c1a5660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:45.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.067+0000 7f056bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f056c10a700 0x7f056c1a5ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:45.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.067+0000 7f056bfff700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f056c1a61c0 con 0x7f056c107d90 2026-03-10T10:15:45.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.067+0000 7f056bfff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f056c1a6300 con 0x7f056c10a700 2026-03-10T10:15:45.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.068+0000 7f056affd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f056c107d90 0x7f056c1a5660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:45.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.068+0000 7f056affd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f056c107d90 0x7f056c1a5660 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:37642/0 (socket says 192.168.123.102:37642) 2026-03-10T10:15:45.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.068+0000 7f056affd700 1 -- 192.168.123.102:0/1862778128 learned_addr learned my addr 192.168.123.102:0/1862778128 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:15:45.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.068+0000 7f056affd700 1 -- 192.168.123.102:0/1862778128 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f056c10a700 msgr2=0x7f056c1a5ba0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:15:45.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.068+0000 7f056affd700 1 --2- 192.168.123.102:0/1862778128 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f056c10a700 0x7f056c1a5ba0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:45.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.068+0000 7f056affd700 1 -- 192.168.123.102:0/1862778128 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f05600097e0 con 0x7f056c107d90 2026-03-10T10:15:45.073 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.068+0000 7f056affd700 1 --2- 192.168.123.102:0/1862778128 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f056c107d90 0x7f056c1a5660 secure :-1 s=READY pgs=176 cs=0 l=1 rev1=1 crypto rx=0x7f055c00ba70 tx=0x7f055c00be30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:45.073 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.068+0000 7f0553fff700 1 -- 192.168.123.102:0/1862778128 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f055c00c7e0 con 0x7f056c107d90 2026-03-10T10:15:45.073 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.069+0000 7f0553fff700 1 -- 
192.168.123.102:0/1862778128 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f055c00ce20 con 0x7f056c107d90 2026-03-10T10:15:45.073 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.069+0000 7f056bfff700 1 -- 192.168.123.102:0/1862778128 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f056c10f660 con 0x7f056c107d90 2026-03-10T10:15:45.073 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.069+0000 7f056bfff700 1 -- 192.168.123.102:0/1862778128 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f056c10fbb0 con 0x7f056c107d90 2026-03-10T10:15:45.073 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.069+0000 7f0553fff700 1 -- 192.168.123.102:0/1862778128 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f055c012550 con 0x7f056c107d90 2026-03-10T10:15:45.073 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.070+0000 7f0553fff700 1 -- 192.168.123.102:0/1862778128 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 18) v1 ==== 90252+0+0 (secure 0 0 0) 0x7f055c00c940 con 0x7f056c107d90 2026-03-10T10:15:45.073 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.070+0000 7f056bfff700 1 -- 192.168.123.102:0/1862778128 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f056c04ea90 con 0x7f056c107d90 2026-03-10T10:15:45.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.071+0000 7f0553fff700 1 --2- 192.168.123.102:0/1862778128 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f055406c360 0x7f055406e820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:45.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.071+0000 7f0553fff700 1 -- 192.168.123.102:0/1862778128 <== mon.0 
v2:192.168.123.102:3300/0 5 ==== osd_map(6..6 src has 1..6) v4 ==== 1313+0+0 (secure 0 0 0) 0x7f055c0899e0 con 0x7f056c107d90 2026-03-10T10:15:45.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.074+0000 7f0553fff700 1 -- 192.168.123.102:0/1862778128 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f055c058dc0 con 0x7f056c107d90 2026-03-10T10:15:45.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.077+0000 7f056a7fc700 1 --2- 192.168.123.102:0/1862778128 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f055406c360 0x7f055406e820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:45.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.077+0000 7f056a7fc700 1 --2- 192.168.123.102:0/1862778128 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f055406c360 0x7f055406e820 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f0560005fd0 tx=0x7f0560005bc0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:45.211 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:45.210+0000 7f056bfff700 1 -- 192.168.123.102:0/1862778128 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}) v1 -- 0x7f056c10f7f0 con 0x7f055406c360 2026-03-10T10:15:45.411 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:15:45 vm02 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[67686]: 2026-03-10T10:15:45.409+0000 7f71a06ae640 -1 osd.0 0 log_to_monitors true 2026-03-10T10:15:45.665 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:45 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-10T10:15:45.665 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:45 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:15:45.665 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:45 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:15:45.665 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:45 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:45.665 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:45 vm02 ceph-mon[50200]: from='osd.0 [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T10:15:45.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:45 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:45.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:45 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:15:45.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:45 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:15:45.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:45 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:45.787 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:45 vm05 ceph-mon[59051]: from='osd.0 [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='osd.0 [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='osd.0 [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: Detected new or changed devices on vm02 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:46.537 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8bd56e09-7dad-4b23-847e-c7afae0d2f41"}]: dispatch 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='client.? 
192.168.123.102:0/3076662680' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8bd56e09-7dad-4b23-847e-c7afae0d2f41"}]: dispatch 2026-03-10T10:15:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='osd.0 [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T10:15:46.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8bd56e09-7dad-4b23-847e-c7afae0d2f41"}]': finished 2026-03-10T10:15:46.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: osdmap e8: 2 total, 0 up, 2 in 2026-03-10T10:15:46.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:15:46.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:46.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:46 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 
ceph-mon[50200]: from='osd.0 [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='osd.0 [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: Detected new or changed devices on vm02 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' 
entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8bd56e09-7dad-4b23-847e-c7afae0d2f41"}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3076662680' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8bd56e09-7dad-4b23-847e-c7afae0d2f41"}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='osd.0 [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8bd56e09-7dad-4b23-847e-c7afae0d2f41"}]': finished 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: osdmap e8: 2 total, 0 up, 2 in 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:46 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:15:46.682 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:15:46 vm02 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[67686]: 2026-03-10T10:15:46.287+0000 7f7196d14700 -1 osd.0 0 waiting for initial osdmap 2026-03-10T10:15:46.682 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:15:46 vm02 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[67686]: 2026-03-10T10:15:46.300+0000 7f7191306700 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 ceph-mon[50200]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 
ceph-mon[50200]: from='client.? 192.168.123.102:0/3670364500' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 ceph-mon[50200]: osd.0 [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558] boot 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 ceph-mon[50200]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:48.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:47 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' 
entity='mgr.vm02.zmavgl' 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/3670364500' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: osd.0 [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558] boot 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: 
from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:47 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:49.704 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:49 vm05 ceph-mon[59051]: purged_snaps scrub starts 2026-03-10T10:15:49.704 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:49 vm05 ceph-mon[59051]: purged_snaps scrub ok 2026-03-10T10:15:49.704 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:49 vm05 ceph-mon[59051]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T10:15:49.704 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:49 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:49.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:49 vm02 ceph-mon[50200]: purged_snaps scrub starts 2026-03-10T10:15:49.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:49 vm02 ceph-mon[50200]: purged_snaps scrub ok 2026-03-10T10:15:49.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:49 vm02 ceph-mon[50200]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T10:15:49.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:49 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:50.644 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:50 vm02 ceph-mon[50200]: pgmap v15: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:15:50.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:50 vm05 ceph-mon[59051]: pgmap v15: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:15:51.528 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:51 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' 
entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T10:15:51.528 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:51 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:51.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:51 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T10:15:51.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:51 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:52.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:52 vm02 ceph-mon[50200]: Deploying daemon osd.1 on vm02 2026-03-10T10:15:52.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:52 vm02 ceph-mon[50200]: pgmap v16: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:15:52.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:52 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:52.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:52 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:52.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:52 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:52.640 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:52 vm05 ceph-mon[59051]: Deploying daemon osd.1 on vm02 2026-03-10T10:15:52.640 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:52 vm05 ceph-mon[59051]: pgmap v16: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:15:52.640 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:15:52 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:52.640 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:52 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:52.640 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:52 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 1 on host 'vm02' 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:53.247+0000 7f0553fff700 1 -- 192.168.123.102:0/1862778128 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7f056c10f7f0 con 0x7f055406c360 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:53.250+0000 7f0551ffb700 1 -- 192.168.123.102:0/1862778128 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f055406c360 msgr2=0x7f055406e820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:53.250+0000 7f0551ffb700 1 --2- 192.168.123.102:0/1862778128 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f055406c360 0x7f055406e820 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f0560005fd0 tx=0x7f0560005bc0 comp rx=0 tx=0).stop 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:53.250+0000 7f0551ffb700 1 -- 192.168.123.102:0/1862778128 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f056c107d90 msgr2=0x7f056c1a5660 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:53.250+0000 7f0551ffb700 1 --2- 192.168.123.102:0/1862778128 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f056c107d90 0x7f056c1a5660 secure :-1 s=READY pgs=176 cs=0 l=1 rev1=1 crypto rx=0x7f055c00ba70 tx=0x7f055c00be30 comp rx=0 tx=0).stop 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:53.250+0000 7f0551ffb700 1 -- 192.168.123.102:0/1862778128 shutdown_connections 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:53.250+0000 7f0551ffb700 1 --2- 192.168.123.102:0/1862778128 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f056c107d90 0x7f056c1a5660 unknown :-1 s=CLOSED pgs=176 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:53.250+0000 7f0551ffb700 1 --2- 192.168.123.102:0/1862778128 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f055406c360 0x7f055406e820 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:53.250+0000 7f0551ffb700 1 --2- 192.168.123.102:0/1862778128 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f056c10a700 0x7f056c1a5ba0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:53.250+0000 7f0551ffb700 1 -- 192.168.123.102:0/1862778128 >> 192.168.123.102:0/1862778128 conn(0x7f056c06daa0 msgr2=0x7f056c06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:53.250+0000 7f0551ffb700 1 -- 192.168.123.102:0/1862778128 shutdown_connections 2026-03-10T10:15:53.251 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:53.250+0000 7f0551ffb700 1 -- 192.168.123.102:0/1862778128 wait complete. 
2026-03-10T10:15:53.315 DEBUG:teuthology.orchestra.run.vm02:osd.1> sudo journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.1.service 2026-03-10T10:15:53.358 INFO:tasks.cephadm:Deploying osd.2 on vm02 with /dev/vdc... 2026-03-10T10:15:53.358 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- lvm zap /dev/vdc 2026-03-10T10:15:53.584 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:15:54.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:53 vm02 ceph-mon[50200]: pgmap v17: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:15:54.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:53 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:54.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:53 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:54.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:53 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:54.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:53 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:54.187 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:15:54.201 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph orch daemon add osd vm02:/dev/vdc 2026-03-10T10:15:54.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:53 vm05 ceph-mon[59051]: pgmap v17: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 
GiB avail 2026-03-10T10:15:54.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:53 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:54.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:53 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:54.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:53 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:54.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:53 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:54.382 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:15:54.437 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:15:54 vm02 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[74265]: 2026-03-10T10:15:54.413+0000 7faffce4f640 -1 osd.1 0 log_to_monitors true 2026-03-10T10:15:54.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.688+0000 7f5a0ca80700 1 -- 192.168.123.102:0/2870796166 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a08104060 msgr2=0x7f5a081044e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:54.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.688+0000 7f5a0ca80700 1 --2- 192.168.123.102:0/2870796166 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a08104060 0x7f5a081044e0 secure :-1 s=READY pgs=182 cs=0 l=1 rev1=1 crypto rx=0x7f59f0009b00 tx=0x7f59f0009e10 comp rx=0 tx=0).stop 2026-03-10T10:15:54.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.690+0000 7f5a0ca80700 1 -- 192.168.123.102:0/2870796166 shutdown_connections 2026-03-10T10:15:54.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.690+0000 7f5a0ca80700 1 --2- 192.168.123.102:0/2870796166 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a08104060 0x7f5a081044e0 unknown :-1 s=CLOSED pgs=182 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:54.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.690+0000 7f5a0ca80700 1 --2- 192.168.123.102:0/2870796166 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a08102e70 0x7f5a08103290 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:54.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.690+0000 7f5a0ca80700 1 -- 192.168.123.102:0/2870796166 >> 192.168.123.102:0/2870796166 conn(0x7f5a080fe440 msgr2=0x7f5a081008a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:54.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.690+0000 7f5a0ca80700 1 -- 192.168.123.102:0/2870796166 shutdown_connections 2026-03-10T10:15:54.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.690+0000 7f5a0ca80700 1 -- 192.168.123.102:0/2870796166 wait complete. 
2026-03-10T10:15:54.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a0ca80700 1 Processor -- start 2026-03-10T10:15:54.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a0ca80700 1 -- start start 2026-03-10T10:15:54.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a0ca80700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a08102e70 0x7f5a08198850 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:54.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a0ca80700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a08104060 0x7f5a08198d90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:54.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a0ca80700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5a081993b0 con 0x7f5a08102e70 2026-03-10T10:15:54.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a0ca80700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5a081994f0 con 0x7f5a08104060 2026-03-10T10:15:54.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a077fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a08102e70 0x7f5a08198850 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:54.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a077fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a08102e70 0x7f5a08198850 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:35774/0 (socket says 192.168.123.102:35774) 2026-03-10T10:15:54.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a077fe700 1 -- 192.168.123.102:0/1502981128 learned_addr learned my addr 192.168.123.102:0/1502981128 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:15:54.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a06ffd700 1 --2- 192.168.123.102:0/1502981128 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a08104060 0x7f5a08198d90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:54.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a077fe700 1 -- 192.168.123.102:0/1502981128 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a08104060 msgr2=0x7f5a08198d90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:54.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a077fe700 1 --2- 192.168.123.102:0/1502981128 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a08104060 0x7f5a08198d90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:54.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a077fe700 1 -- 192.168.123.102:0/1502981128 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f59f00097e0 con 0x7f5a08102e70 2026-03-10T10:15:54.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a077fe700 1 --2- 192.168.123.102:0/1502981128 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a08102e70 0x7f5a08198850 secure :-1 s=READY pgs=183 cs=0 l=1 rev1=1 crypto rx=0x7f59f800ba70 tx=0x7f59f800be30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:15:54.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.691+0000 7f5a04ff9700 1 -- 192.168.123.102:0/1502981128 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f59f800c760 con 0x7f5a08102e70 2026-03-10T10:15:54.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.692+0000 7f5a0ca80700 1 -- 192.168.123.102:0/1502981128 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5a0819dfa0 con 0x7f5a08102e70 2026-03-10T10:15:54.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.692+0000 7f5a0ca80700 1 -- 192.168.123.102:0/1502981128 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5a0819e460 con 0x7f5a08102e70 2026-03-10T10:15:54.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.693+0000 7f5a0ca80700 1 -- 192.168.123.102:0/1502981128 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5a0804ea90 con 0x7f5a08102e70 2026-03-10T10:15:54.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.696+0000 7f5a04ff9700 1 -- 192.168.123.102:0/1502981128 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f59f800cda0 con 0x7f5a08102e70 2026-03-10T10:15:54.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.696+0000 7f5a04ff9700 1 -- 192.168.123.102:0/1502981128 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f59f8012550 con 0x7f5a08102e70 2026-03-10T10:15:54.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.696+0000 7f5a04ff9700 1 -- 192.168.123.102:0/1502981128 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 18) v1 ==== 90252+0+0 (secure 0 0 0) 0x7f59f8012770 con 0x7f5a08102e70 2026-03-10T10:15:54.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.696+0000 
7f5a04ff9700 1 --2- 192.168.123.102:0/1502981128 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f59f406c650 0x7f59f406eb10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:54.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.696+0000 7f5a04ff9700 1 -- 192.168.123.102:0/1502981128 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(10..10 src has 1..10) v4 ==== 1915+0+0 (secure 0 0 0) 0x7f59f808ac90 con 0x7f5a08102e70 2026-03-10T10:15:54.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.696+0000 7f5a04ff9700 1 -- 192.168.123.102:0/1502981128 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f59f808e2a0 con 0x7f5a08102e70 2026-03-10T10:15:54.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.699+0000 7f5a06ffd700 1 --2- 192.168.123.102:0/1502981128 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f59f406c650 0x7f59f406eb10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:54.705 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.703+0000 7f5a06ffd700 1 --2- 192.168.123.102:0/1502981128 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f59f406c650 0x7f59f406eb10 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f59f00052d0 tx=0x7f59f0005c00 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:54.830 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:15:54.829+0000 7f5a0ca80700 1 -- 192.168.123.102:0/1502981128 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}) v1 -- 0x7f5a08108a40 con 0x7f59f406c650 2026-03-10T10:15:55.446 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:55 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:55.446 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:55 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:55.446 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:55 vm02 ceph-mon[50200]: from='osd.1 [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:15:55.446 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:55 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:15:55.446 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:55 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:15:55.446 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:55 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:55.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:55 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:55.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:55 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:55.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:55 vm05 ceph-mon[59051]: from='osd.1 [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:15:55.787 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:55 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:15:55.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:55 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:15:55.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:55 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:56.280 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:15:56 vm02 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[74265]: 2026-03-10T10:15:56.030+0000 7faff1cb2700 -1 osd.1 0 waiting for initial osdmap 2026-03-10T10:15:56.280 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:15:56 vm02 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[74265]: 2026-03-10T10:15:56.061+0000 7fafee2a8700 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T10:15:56.693 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: pgmap v18: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:15:56.693 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='client.14300 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:15:56.693 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='osd.1 [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 
vm02 ceph-mon[50200]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='osd.1 [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: Detected new or changed devices on vm02 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:56.694 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1ccdc548-a0cb-41e0-bc7a-21b41198ffea"}]: dispatch 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/657477165' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1ccdc548-a0cb-41e0-bc7a-21b41198ffea"}]: dispatch 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='osd.1 [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1ccdc548-a0cb-41e0-bc7a-21b41198ffea"}]': finished 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: osdmap e12: 3 total, 1 up, 3 in 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:56.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: pgmap v18: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='client.14300 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='osd.1 [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T10:15:56.787 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='osd.1 [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: Detected new or changed devices on vm02 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' 
entity='mgr.vm02.zmavgl' 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1ccdc548-a0cb-41e0-bc7a-21b41198ffea"}]: dispatch 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/657477165' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1ccdc548-a0cb-41e0-bc7a-21b41198ffea"}]: dispatch 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='osd.1 [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1ccdc548-a0cb-41e0-bc7a-21b41198ffea"}]': finished 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: osdmap e12: 3 total, 1 up, 3 in 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:57.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:57 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/428633899' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:15:57.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:57 vm02 ceph-mon[50200]: osd.1 [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977] boot 2026-03-10T10:15:57.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:57 vm02 ceph-mon[50200]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T10:15:57.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:57.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:15:57.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:57.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:57.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:57.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:57.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:57.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:57 vm05 ceph-mon[59051]: from='client.? 
192.168.123.102:0/428633899' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:15:57.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:57 vm05 ceph-mon[59051]: osd.1 [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977] boot 2026-03-10T10:15:57.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:57 vm05 ceph-mon[59051]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T10:15:57.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:15:57.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:15:57.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:57.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:57.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:57.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:57.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:15:58.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:58 vm02 ceph-mon[50200]: purged_snaps scrub starts 2026-03-10T10:15:58.780 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:58 vm02 ceph-mon[50200]: purged_snaps scrub ok 2026-03-10T10:15:58.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:58 vm02 ceph-mon[50200]: pgmap v21: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:15:58.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:58 vm02 ceph-mon[50200]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T10:15:58.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:15:58 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:15:58.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:58 vm05 ceph-mon[59051]: purged_snaps scrub starts 2026-03-10T10:15:58.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:58 vm05 ceph-mon[59051]: purged_snaps scrub ok 2026-03-10T10:15:58.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:58 vm05 ceph-mon[59051]: pgmap v21: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:15:58.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:58 vm05 ceph-mon[59051]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T10:15:58.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:15:58 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:16:00.639 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:00 vm02 ceph-mon[50200]: pgmap v24: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-10T10:16:00.639 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:00 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:00.639 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:00 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", 
"entity": "osd.2"}]: dispatch 2026-03-10T10:16:00.639 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:00 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:00.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:00 vm05 ceph-mon[59051]: pgmap v24: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-10T10:16:00.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:00 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:00.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:00 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T10:16:00.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:00 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:01.522 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:01 vm02 ceph-mon[50200]: Deploying daemon osd.2 on vm02 2026-03-10T10:16:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:01 vm05 ceph-mon[59051]: Deploying daemon osd.2 on vm02 2026-03-10T10:16:02.568 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:02 vm02 ceph-mon[50200]: pgmap v25: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-10T10:16:02.568 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:02 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:02.568 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:02 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:02.568 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:02 vm02 ceph-mon[50200]: from='mgr.14225 
192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:02.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:02 vm05 ceph-mon[59051]: pgmap v25: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-10T10:16:02.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:02 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:02.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:02 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:02.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:02 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:03.038 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 2 on host 'vm02' 2026-03-10T10:16:03.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:03.035+0000 7f5a04ff9700 1 -- 192.168.123.102:0/1502981128 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7f5a08108a40 con 0x7f59f406c650 2026-03-10T10:16:03.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:03.038+0000 7f59fe7fc700 1 -- 192.168.123.102:0/1502981128 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f59f406c650 msgr2=0x7f59f406eb10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:03.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:03.038+0000 7f59fe7fc700 1 --2- 192.168.123.102:0/1502981128 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f59f406c650 0x7f59f406eb10 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f59f00052d0 tx=0x7f59f0005c00 comp rx=0 tx=0).stop 2026-03-10T10:16:03.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:03.038+0000 7f59fe7fc700 1 -- 192.168.123.102:0/1502981128 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a08102e70 msgr2=0x7f5a08198850 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:03.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:03.038+0000 7f59fe7fc700 1 --2- 192.168.123.102:0/1502981128 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a08102e70 0x7f5a08198850 secure :-1 s=READY pgs=183 cs=0 l=1 rev1=1 crypto rx=0x7f59f800ba70 tx=0x7f59f800be30 comp rx=0 tx=0).stop 2026-03-10T10:16:03.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:03.038+0000 7f59fe7fc700 1 -- 192.168.123.102:0/1502981128 shutdown_connections 2026-03-10T10:16:03.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:03.038+0000 7f59fe7fc700 1 --2- 192.168.123.102:0/1502981128 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a08102e70 0x7f5a08198850 unknown :-1 s=CLOSED pgs=183 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:03.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:03.038+0000 7f59fe7fc700 1 --2- 192.168.123.102:0/1502981128 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f59f406c650 0x7f59f406eb10 unknown :-1 s=CLOSED pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:03.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:03.038+0000 7f59fe7fc700 1 --2- 192.168.123.102:0/1502981128 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a08104060 0x7f5a08198d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:03.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:03.038+0000 7f59fe7fc700 1 -- 192.168.123.102:0/1502981128 >> 192.168.123.102:0/1502981128 conn(0x7f5a080fe440 msgr2=0x7f5a08107320 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:03.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:03.038+0000 7f59fe7fc700 1 -- 
192.168.123.102:0/1502981128 shutdown_connections 2026-03-10T10:16:03.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:03.038+0000 7f59fe7fc700 1 -- 192.168.123.102:0/1502981128 wait complete. 2026-03-10T10:16:03.099 DEBUG:teuthology.orchestra.run.vm02:osd.2> sudo journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.2.service 2026-03-10T10:16:03.101 INFO:tasks.cephadm:Deploying osd.3 on vm05 with /dev/vde... 2026-03-10T10:16:03.101 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- lvm zap /dev/vde 2026-03-10T10:16:03.241 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm05/config 2026-03-10T10:16:03.742 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:16:03.755 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph orch daemon add osd vm05:/dev/vde 2026-03-10T10:16:03.895 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm05/config 2026-03-10T10:16:04.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.145+0000 7fa780a47700 1 -- 192.168.123.105:0/887174195 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa77c0ffe60 msgr2=0x7fa77c100280 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:04.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.145+0000 7fa780a47700 1 --2- 192.168.123.105:0/887174195 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa77c0ffe60 0x7fa77c100280 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7fa764009b00 tx=0x7fa764009e10 comp rx=0 tx=0).stop 
2026-03-10T10:16:04.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.146+0000 7fa780a47700 1 -- 192.168.123.105:0/887174195 shutdown_connections 2026-03-10T10:16:04.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.146+0000 7fa780a47700 1 --2- 192.168.123.105:0/887174195 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa77c100fc0 0x7fa77c101440 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:04.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.146+0000 7fa780a47700 1 --2- 192.168.123.105:0/887174195 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa77c0ffe60 0x7fa77c100280 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:04.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.146+0000 7fa780a47700 1 -- 192.168.123.105:0/887174195 >> 192.168.123.105:0/887174195 conn(0x7fa77c0fb3c0 msgr2=0x7fa77c0fd840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:04.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.146+0000 7fa780a47700 1 -- 192.168.123.105:0/887174195 shutdown_connections 2026-03-10T10:16:04.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.146+0000 7fa780a47700 1 -- 192.168.123.105:0/887174195 wait complete. 
2026-03-10T10:16:04.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.146+0000 7fa780a47700 1 Processor -- start 2026-03-10T10:16:04.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa780a47700 1 -- start start 2026-03-10T10:16:04.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa780a47700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa77c0ffe60 0x7fa77c1945d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:04.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa780a47700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa77c100fc0 0x7fa77c194b10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:04.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa780a47700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa77c195130 con 0x7fa77c100fc0 2026-03-10T10:16:04.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa780a47700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa77c195270 con 0x7fa77c0ffe60 2026-03-10T10:16:04.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa77a59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa77c0ffe60 0x7fa77c1945d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:04.149 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa77a59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa77c0ffe60 0x7fa77c1945d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:60820/0 (socket says 192.168.123.105:60820) 2026-03-10T10:16:04.149 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa77a59c700 1 -- 192.168.123.105:0/1456139142 learned_addr learned my addr 192.168.123.105:0/1456139142 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:16:04.149 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa77a59c700 1 -- 192.168.123.105:0/1456139142 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa77c100fc0 msgr2=0x7fa77c194b10 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:16:04.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa77a59c700 1 --2- 192.168.123.105:0/1456139142 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa77c100fc0 0x7fa77c194b10 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:04.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa77a59c700 1 -- 192.168.123.105:0/1456139142 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa7640097e0 con 0x7fa77c0ffe60 2026-03-10T10:16:04.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa77a59c700 1 --2- 192.168.123.105:0/1456139142 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa77c0ffe60 0x7fa77c1945d0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fa764000c00 tx=0x7fa764005dc0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:04.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.147+0000 7fa7737fe700 1 -- 192.168.123.105:0/1456139142 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa7640050d0 con 0x7fa77c0ffe60 2026-03-10T10:16:04.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.148+0000 7fa780a47700 1 -- 
192.168.123.105:0/1456139142 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa77c199cc0 con 0x7fa77c0ffe60 2026-03-10T10:16:04.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.148+0000 7fa780a47700 1 -- 192.168.123.105:0/1456139142 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa77c19a210 con 0x7fa77c0ffe60 2026-03-10T10:16:04.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.148+0000 7fa7737fe700 1 -- 192.168.123.105:0/1456139142 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa764005230 con 0x7fa77c0ffe60 2026-03-10T10:16:04.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.148+0000 7fa7737fe700 1 -- 192.168.123.105:0/1456139142 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa7640295e0 con 0x7fa77c0ffe60 2026-03-10T10:16:04.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.149+0000 7fa7737fe700 1 -- 192.168.123.105:0/1456139142 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 18) v1 ==== 90252+0+0 (secure 0 0 0) 0x7fa764027020 con 0x7fa77c0ffe60 2026-03-10T10:16:04.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.149+0000 7fa7737fe700 1 --2- 192.168.123.105:0/1456139142 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa76806c580 0x7fa76806ea40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:04.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.150+0000 7fa7737fe700 1 -- 192.168.123.105:0/1456139142 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(14..14 src has 1..14) v4 ==== 2347+0+0 (secure 0 0 0) 0x7fa7640952a0 con 0x7fa77c0ffe60 2026-03-10T10:16:04.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.150+0000 7fa779d9b700 1 --2- 192.168.123.105:0/1456139142 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa76806c580 0x7fa76806ea40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:04.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.150+0000 7fa780a47700 1 -- 192.168.123.105:0/1456139142 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa75c005320 con 0x7fa77c0ffe60 2026-03-10T10:16:04.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.151+0000 7fa779d9b700 1 --2- 192.168.123.105:0/1456139142 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa76806c580 0x7fa76806ea40 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fa76c00ba10 tx=0x7fa76c00b3f0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:04.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.153+0000 7fa7737fe700 1 -- 192.168.123.105:0/1456139142 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fa76402f080 con 0x7fa77c0ffe60 2026-03-10T10:16:04.263 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:04 vm05 ceph-mon[59051]: pgmap v26: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-10T10:16:04.263 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:04 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:04.263 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:04 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:04.263 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:04 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:04.263 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:16:04 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:04.263 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:04.260+0000 7fa780a47700 1 -- 192.168.123.105:0/1456139142 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}) v1 -- 0x7fa75c000bf0 con 0x7fa76806c580 2026-03-10T10:16:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:04 vm02 ceph-mon[50200]: pgmap v26: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-10T10:16:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:04 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:04 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:04 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:04 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:04.783 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:16:04 vm02 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[81491]: 2026-03-10T10:16:04.458+0000 7f586d235640 -1 osd.2 0 log_to_monitors true 2026-03-10T10:16:05.462 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:16:05 vm02 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[81491]: 2026-03-10T10:16:05.146+0000 7f5862098700 -1 osd.2 0 waiting for initial osdmap 2026-03-10T10:16:05.462 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:16:05 vm02 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[81491]: 2026-03-10T10:16:05.152+0000 7f585e68e700 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or 
directory 2026-03-10T10:16:05.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:05 vm02 ceph-mon[50200]: from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:16:05.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:05 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:16:05.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:05 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:16:05.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:05 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:05.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:05 vm02 ceph-mon[50200]: from='osd.2 [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T10:16:05.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:05 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:05.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:05 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:05.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:05 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:16:05.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:05 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' 
entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:05.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:05 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:05.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:05 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:05.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:05 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:05.476 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:05 vm05 ceph-mon[59051]: from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:16:05.476 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:05 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:16:05.476 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:05 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:16:05.476 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:05 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:05.476 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:05 vm05 ceph-mon[59051]: from='osd.2 [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T10:16:05.476 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:05 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:05.476 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:05 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:05.476 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:05 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:16:05.476 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:05 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:05.476 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:05 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:05.476 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:05 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:05.476 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:05 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: pgmap v27: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: Detected new or changed devices on vm02 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: from='osd.2 [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", 
"ids": ["2"]}]': finished 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: from='osd.2 [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/2129365918' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "70fa78db-d544-4037-a4e5-e2b601b924d7"}]: dispatch 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "70fa78db-d544-4037-a4e5-e2b601b924d7"}]: dispatch 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: from='osd.2 [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "70fa78db-d544-4037-a4e5-e2b601b924d7"}]': finished 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: osdmap e16: 4 total, 2 up, 4 in 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:16:06.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:06 vm02 ceph-mon[50200]: from='client.? 
192.168.123.105:0/1436776121' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: pgmap v27: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: Detected new or changed devices on vm02 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: from='osd.2 [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: from='osd.2 [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: from='client.? 192.168.123.105:0/2129365918' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "70fa78db-d544-4037-a4e5-e2b601b924d7"}]: dispatch 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "70fa78db-d544-4037-a4e5-e2b601b924d7"}]: dispatch 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: from='osd.2 [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "70fa78db-d544-4037-a4e5-e2b601b924d7"}]': finished 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: osdmap e16: 4 total, 2 up, 4 in 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:16:06.614 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:06 vm05 ceph-mon[59051]: from='client.? 
192.168.123.105:0/1436776121' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:16:07.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:07 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:07.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:07 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:07.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:07 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:07.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:07 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:07.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:07 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:07.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:07 vm02 ceph-mon[50200]: osd.2 [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302] boot 2026-03-10T10:16:07.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:07 vm02 ceph-mon[50200]: osdmap e17: 4 total, 3 up, 4 in 2026-03-10T10:16:07.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:07 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:16:07.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:07 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:07.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:07 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": 
"osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:07 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:07 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:07 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:07 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:07 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:07 vm05 ceph-mon[59051]: osd.2 [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302] boot 2026-03-10T10:16:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:07 vm05 ceph-mon[59051]: osdmap e17: 4 total, 3 up, 4 in 2026-03-10T10:16:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:07 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:16:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:07 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:07 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' 
entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:08.478 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:08 vm02 ceph-mon[50200]: purged_snaps scrub starts 2026-03-10T10:16:08.479 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:08 vm02 ceph-mon[50200]: purged_snaps scrub ok 2026-03-10T10:16:08.479 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:08 vm02 ceph-mon[50200]: pgmap v31: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:16:08.479 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:08 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:08.479 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:08 vm02 ceph-mon[50200]: osdmap e18: 4 total, 3 up, 4 in 2026-03-10T10:16:08.479 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:08 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:08.479 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:08 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:08 vm05 ceph-mon[59051]: purged_snaps scrub starts 2026-03-10T10:16:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:08 vm05 ceph-mon[59051]: purged_snaps scrub ok 2026-03-10T10:16:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:08 vm05 ceph-mon[59051]: pgmap v31: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB 
/ 60 GiB avail 2026-03-10T10:16:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:08 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:08 vm05 ceph-mon[59051]: osdmap e18: 4 total, 3 up, 4 in 2026-03-10T10:16:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:08 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:08 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:08.780 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:16:08 vm02 sudo[86802]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-10T10:16:08.780 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:16:08 vm02 sudo[86802]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T10:16:08.780 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:16:08 vm02 sudo[86802]: pam_unix(sudo:session): session opened for user root by (uid=167) 2026-03-10T10:16:08.780 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:16:08 vm02 sudo[86802]: pam_unix(sudo:session): session closed for user root 2026-03-10T10:16:09.215 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:16:09 vm02 sudo[86808]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdc 2026-03-10T10:16:09.215 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:16:09 vm02 sudo[86808]: pam_systemd(sudo:session): Failed to connect to system bus: No 
such file or directory 2026-03-10T10:16:09.215 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:16:09 vm02 sudo[86808]: pam_unix(sudo:session): session opened for user root by (uid=167) 2026-03-10T10:16:09.215 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:16:09 vm02 sudo[86808]: pam_unix(sudo:session): session closed for user root 2026-03-10T10:16:09.215 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:16:08 vm02 sudo[86805]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdd 2026-03-10T10:16:09.215 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:16:08 vm02 sudo[86805]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T10:16:09.215 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:16:08 vm02 sudo[86805]: pam_unix(sudo:session): session opened for user root by (uid=167) 2026-03-10T10:16:09.215 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:16:08 vm02 sudo[86805]: pam_unix(sudo:session): session closed for user root 2026-03-10T10:16:09.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:09 vm05 sudo[64392]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T10:16:09.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:09 vm05 sudo[64392]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T10:16:09.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:09 vm05 sudo[64392]: pam_unix(sudo:session): session opened for user root by (uid=167) 2026-03-10T10:16:09.435 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:09 vm05 sudo[64392]: pam_unix(sudo:session): session closed for user root 2026-03-10T10:16:09.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:09.438+0000 7fa7737fe700 1 -- 192.168.123.105:0/1456139142 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fa764060bc0 con 0x7fa77c0ffe60 2026-03-10T10:16:09.529 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:09 vm02 sudo[86811]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T10:16:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:09 vm02 sudo[86811]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T10:16:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:09 vm02 sudo[86811]: pam_unix(sudo:session): session opened for user root by (uid=167) 2026-03-10T10:16:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:09 vm02 sudo[86811]: pam_unix(sudo:session): session closed for user root 2026-03-10T10:16:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:09 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:09 vm02 ceph-mon[50200]: osdmap e19: 4 total, 3 up, 4 in 2026-03-10T10:16:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:09 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:09 vm02 ceph-mon[50200]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:16:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:09 vm02 ceph-mon[50200]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:16:09.723 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:09 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:09.723 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:09 vm05 ceph-mon[59051]: osdmap e19: 4 total, 3 up, 4 in 2026-03-10T10:16:09.723 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:09 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:09.723 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:09 vm05 ceph-mon[59051]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:16:09.723 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:09 vm05 ceph-mon[59051]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:16:10.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:10 vm05 ceph-mon[59051]: pgmap v34: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:16:10.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:10 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:16:10.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:10 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:16:10.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:10 vm05 ceph-mon[59051]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:16:10.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:10 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:16:10.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:10 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:16:10.538 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:10 vm05 ceph-mon[59051]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:16:10.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:10 vm05 ceph-mon[59051]: osdmap e20: 4 total, 3 up, 4 in 2026-03-10T10:16:10.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:10 vm05 ceph-mon[59051]: mgrmap e19: vm02.zmavgl(active, since 54s), standbys: vm05.coparq 2026-03-10T10:16:10.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:10 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:10.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:10 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T10:16:10.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:10 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:10.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:10 vm02 ceph-mon[50200]: pgmap v34: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:16:10.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:10 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:16:10.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:10 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:16:10.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:10 vm02 ceph-mon[50200]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:16:10.780 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:10 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:16:10.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:10 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:16:10.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:10 vm02 ceph-mon[50200]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:16:10.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:10 vm02 ceph-mon[50200]: osdmap e20: 4 total, 3 up, 4 in 2026-03-10T10:16:10.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:10 vm02 ceph-mon[50200]: mgrmap e19: vm02.zmavgl(active, since 54s), standbys: vm05.coparq 2026-03-10T10:16:10.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:10 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:10.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:10 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T10:16:10.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:10 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:11.443 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:11 vm05 ceph-mon[59051]: Deploying daemon osd.3 on vm05 2026-03-10T10:16:11.443 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:11 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:11.443 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:11 vm05 ceph-mon[59051]: from='mgr.14225 
192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:11.443 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:11 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:11.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:11 vm02 ceph-mon[50200]: Deploying daemon osd.3 on vm05 2026-03-10T10:16:11.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:11 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:11.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:11 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:11.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:11 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 3 on host 'vm05' 2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:12.141+0000 7fa7737fe700 1 -- 192.168.123.105:0/1456139142 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7fa75c000bf0 con 0x7fa76806c580 2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:12.144+0000 7fa780a47700 1 -- 192.168.123.105:0/1456139142 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa76806c580 msgr2=0x7fa76806ea40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:12.144+0000 7fa780a47700 1 --2- 192.168.123.105:0/1456139142 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa76806c580 0x7fa76806ea40 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fa76c00ba10 tx=0x7fa76c00b3f0 comp rx=0 tx=0).stop 
2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:12.144+0000 7fa780a47700 1 -- 192.168.123.105:0/1456139142 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa77c0ffe60 msgr2=0x7fa77c1945d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:12.144+0000 7fa780a47700 1 --2- 192.168.123.105:0/1456139142 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa77c0ffe60 0x7fa77c1945d0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fa764000c00 tx=0x7fa764005dc0 comp rx=0 tx=0).stop 2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:12.144+0000 7fa780a47700 1 -- 192.168.123.105:0/1456139142 shutdown_connections 2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:12.144+0000 7fa780a47700 1 --2- 192.168.123.105:0/1456139142 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa76806c580 0x7fa76806ea40 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:12.144+0000 7fa780a47700 1 --2- 192.168.123.105:0/1456139142 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa77c0ffe60 0x7fa77c1945d0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:12.144+0000 7fa780a47700 1 --2- 192.168.123.105:0/1456139142 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa77c100fc0 0x7fa77c194b10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:12.144+0000 7fa780a47700 1 -- 192.168.123.105:0/1456139142 >> 192.168.123.105:0/1456139142 conn(0x7fa77c0fb3c0 msgr2=0x7fa77c104280 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:12.144+0000 7fa780a47700 1 -- 192.168.123.105:0/1456139142 shutdown_connections 2026-03-10T10:16:12.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:12.144+0000 7fa780a47700 1 -- 192.168.123.105:0/1456139142 wait complete. 2026-03-10T10:16:12.197 DEBUG:teuthology.orchestra.run.vm05:osd.3> sudo journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.3.service 2026-03-10T10:16:12.199 INFO:tasks.cephadm:Deploying osd.4 on vm05 with /dev/vdd... 2026-03-10T10:16:12.199 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- lvm zap /dev/vdd 2026-03-10T10:16:12.401 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm05/config 2026-03-10T10:16:12.700 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:12 vm05 ceph-mon[59051]: pgmap v36: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:16:12.700 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:12 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:12.700 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:12 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:12.700 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:12 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:12.700 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:12 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:12.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:12 vm02 ceph-mon[50200]: pgmap v36: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 
GiB avail 2026-03-10T10:16:12.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:12 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:12.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:12 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:12.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:12 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:12.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:12 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:12.966 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:16:12.981 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph orch daemon add osd vm05:/dev/vdd 2026-03-10T10:16:13.155 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm05/config 2026-03-10T10:16:13.433 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.431+0000 7fbb38dd8700 1 -- 192.168.123.105:0/39234143 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb2c0a4cb0 msgr2=0x7fbb2c0a50d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:13.433 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.431+0000 7fbb38dd8700 1 --2- 192.168.123.105:0/39234143 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb2c0a4cb0 0x7fbb2c0a50d0 secure :-1 s=READY pgs=191 cs=0 l=1 rev1=1 crypto rx=0x7fbb28009a50 tx=0x7fbb28009d60 comp rx=0 tx=0).stop 2026-03-10T10:16:13.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.434+0000 7fbb38dd8700 1 -- 192.168.123.105:0/39234143 shutdown_connections 2026-03-10T10:16:13.441 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.434+0000 7fbb38dd8700 1 --2- 192.168.123.105:0/39234143 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbb2c0a5df0 0x7fbb2c0a6270 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:13.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.434+0000 7fbb38dd8700 1 --2- 192.168.123.105:0/39234143 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb2c0a4cb0 0x7fbb2c0a50d0 unknown :-1 s=CLOSED pgs=191 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:13.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.434+0000 7fbb38dd8700 1 -- 192.168.123.105:0/39234143 >> 192.168.123.105:0/39234143 conn(0x7fbb2c0a0170 msgr2=0x7fbb2c0a25d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.440+0000 7fbb38dd8700 1 -- 192.168.123.105:0/39234143 shutdown_connections 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.440+0000 7fbb38dd8700 1 -- 192.168.123.105:0/39234143 wait complete. 
2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.440+0000 7fbb38dd8700 1 Processor -- start 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.440+0000 7fbb38dd8700 1 -- start start 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.440+0000 7fbb38dd8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbb2c0a4cb0 0x7fbb2c142f70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.440+0000 7fbb38dd8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb2c0a5df0 0x7fbb2c1434b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.440+0000 7fbb38dd8700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbb2c143ad0 con 0x7fbb2c0a5df0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.440+0000 7fbb38dd8700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbb2c143c10 con 0x7fbb2c0a4cb0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.441+0000 7fbb32ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb2c0a5df0 0x7fbb2c1434b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.441+0000 7fbb32ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb2c0a5df0 0x7fbb2c1434b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.105:44486/0 (socket says 192.168.123.105:44486) 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.441+0000 7fbb32ffd700 1 -- 192.168.123.105:0/1613512628 learned_addr learned my addr 192.168.123.105:0/1613512628 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.441+0000 7fbb32ffd700 1 -- 192.168.123.105:0/1613512628 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbb2c0a4cb0 msgr2=0x7fbb2c142f70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.441+0000 7fbb32ffd700 1 --2- 192.168.123.105:0/1613512628 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbb2c0a4cb0 0x7fbb2c142f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.441+0000 7fbb32ffd700 1 -- 192.168.123.105:0/1613512628 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbb280096b0 con 0x7fbb2c0a5df0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.441+0000 7fbb32ffd700 1 --2- 192.168.123.105:0/1613512628 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb2c0a5df0 0x7fbb2c1434b0 secure :-1 s=READY pgs=192 cs=0 l=1 rev1=1 crypto rx=0x7fbb2000ea30 tx=0x7fbb2000edf0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.441+0000 7fbb30ff9700 1 -- 192.168.123.105:0/1613512628 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbb2000cc40 con 0x7fbb2c0a5df0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.441+0000 7fbb38dd8700 1 -- 
192.168.123.105:0/1613512628 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbb2c009b30 con 0x7fbb2c0a5df0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.442+0000 7fbb38dd8700 1 -- 192.168.123.105:0/1613512628 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbb2c00a080 con 0x7fbb2c0a5df0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.442+0000 7fbb30ff9700 1 -- 192.168.123.105:0/1613512628 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fbb2000cda0 con 0x7fbb2c0a5df0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.442+0000 7fbb30ff9700 1 -- 192.168.123.105:0/1613512628 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbb20010430 con 0x7fbb2c0a5df0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.444+0000 7fbb30ff9700 1 -- 192.168.123.105:0/1613512628 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fbb20004750 con 0x7fbb2c0a5df0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.445+0000 7fbb30ff9700 1 --2- 192.168.123.105:0/1613512628 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbb2406c530 0x7fbb2406e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.445+0000 7fbb30ff9700 1 -- 192.168.123.105:0/1613512628 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(20..20 src has 1..20) v4 ==== 3165+0+0 (secure 0 0 0) 0x7fbb20014070 con 0x7fbb2c0a5df0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.448+0000 7fbb337fe700 1 --2- 192.168.123.105:0/1613512628 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbb2406c530 0x7fbb2406e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:13.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.449+0000 7fbb337fe700 1 --2- 192.168.123.105:0/1613512628 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbb2406c530 0x7fbb2406e9f0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fbb28009a50 tx=0x7fbb2800b560 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:13.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.451+0000 7fbb38dd8700 1 -- 192.168.123.105:0/1613512628 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbb18005320 con 0x7fbb2c0a5df0 2026-03-10T10:16:13.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.455+0000 7fbb30ff9700 1 -- 192.168.123.105:0/1613512628 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fbb20059fe0 con 0x7fbb2c0a5df0 2026-03-10T10:16:13.592 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:13.591+0000 7fbb38dd8700 1 -- 192.168.123.105:0/1613512628 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}) v1 -- 0x7fbb18000bf0 con 0x7fbb2406c530 2026-03-10T10:16:14.451 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:16:14 vm05 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[64918]: 2026-03-10T10:16:14.181+0000 7fd9658af640 -1 osd.3 0 log_to_monitors true 2026-03-10T10:16:14.451 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 
GiB avail 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:14.727 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='osd.3 [v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T10:16:14.727 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:14 vm05 ceph-mon[59051]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: 
dispatch 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 
ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='osd.3 [v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T10:16:14.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:14 vm02 ceph-mon[50200]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: from='client.14330 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: Detected new or changed devices on vm05 2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d0b95380-36d0-4fea-a134-f6abcd77b2ee"}]: dispatch 2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: from='client.? 
192.168.123.105:0/157022048' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d0b95380-36d0-4fea-a134-f6abcd77b2ee"}]: dispatch 2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d0b95380-36d0-4fea-a134-f6abcd77b2ee"}]': finished 2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: osdmap e21: 5 total, 3 up, 5 in 2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: from='osd.3 [v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-10T10:16:15.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:15 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/1697913053' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: from='client.14330 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: Detected new or changed devices on vm05 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d0b95380-36d0-4fea-a134-f6abcd77b2ee"}]: dispatch 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: from='client.? 192.168.123.105:0/157022048' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d0b95380-36d0-4fea-a134-f6abcd77b2ee"}]: dispatch 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d0b95380-36d0-4fea-a134-f6abcd77b2ee"}]': finished 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: osdmap e21: 5 total, 3 up, 5 in 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: from='osd.3 [v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:15 vm05 ceph-mon[59051]: from='client.? 
192.168.123.105:0/1697913053' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:16:15.787 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:16:15 vm05 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[64918]: 2026-03-10T10:16:15.465+0000 7fd95bf15700 -1 osd.3 0 waiting for initial osdmap 2026-03-10T10:16:15.787 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:16:15 vm05 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[64918]: 2026-03-10T10:16:15.480+0000 7fd954d04700 -1 osd.3 22 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T10:16:16.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:16 vm02 ceph-mon[50200]: pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:16:16.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:16 vm02 ceph-mon[50200]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T10:16:16.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:16 vm02 ceph-mon[50200]: osdmap e22: 5 total, 3 up, 5 in 2026-03-10T10:16:16.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:16 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:16.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:16 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:16.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:16 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:16.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:16 vm02 ceph-mon[50200]: osd.3 
[v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589] boot 2026-03-10T10:16:16.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:16 vm02 ceph-mon[50200]: osdmap e23: 5 total, 4 up, 5 in 2026-03-10T10:16:16.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:16 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:16.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:16 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:16.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:16 vm05 ceph-mon[59051]: pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:16:16.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:16 vm05 ceph-mon[59051]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T10:16:16.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:16 vm05 ceph-mon[59051]: osdmap e22: 5 total, 3 up, 5 in 2026-03-10T10:16:16.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:16 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:16.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:16 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:16.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:16 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:16.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:16 vm05 ceph-mon[59051]: osd.3 
[v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589] boot 2026-03-10T10:16:16.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:16 vm05 ceph-mon[59051]: osdmap e23: 5 total, 4 up, 5 in 2026-03-10T10:16:16.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:16 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:16:16.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:16 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:18 vm05 ceph-mon[59051]: purged_snaps scrub starts 2026-03-10T10:16:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:18 vm05 ceph-mon[59051]: purged_snaps scrub ok 2026-03-10T10:16:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:18 vm05 ceph-mon[59051]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:16:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:18 vm05 ceph-mon[59051]: osdmap e24: 5 total, 4 up, 5 in 2026-03-10T10:16:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:18 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:18.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:18 vm02 ceph-mon[50200]: purged_snaps scrub starts 2026-03-10T10:16:18.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:18 vm02 ceph-mon[50200]: purged_snaps scrub ok 2026-03-10T10:16:18.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:18 vm02 ceph-mon[50200]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:16:18.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:18 vm02 ceph-mon[50200]: 
osdmap e24: 5 total, 4 up, 5 in 2026-03-10T10:16:18.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:18 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:19.370 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:19 vm05 ceph-mon[59051]: osdmap e25: 5 total, 4 up, 5 in 2026-03-10T10:16:19.370 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:19 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:19.370 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:19 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T10:16:19.370 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:19 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:19.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:19 vm02 ceph-mon[50200]: osdmap e25: 5 total, 4 up, 5 in 2026-03-10T10:16:19.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:19 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:19.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:19 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T10:16:19.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:19 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:20.159 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:20 vm05 ceph-mon[59051]: pgmap v45: 1 pgs: 1 remapped+peering; 
449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:16:20.159 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:20 vm05 ceph-mon[59051]: Deploying daemon osd.4 on vm05 2026-03-10T10:16:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:20 vm02 ceph-mon[50200]: pgmap v45: 1 pgs: 1 remapped+peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:16:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:20 vm02 ceph-mon[50200]: Deploying daemon osd.4 on vm05 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 4 on host 'vm05' 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:21.368+0000 7fbb30ff9700 1 -- 192.168.123.105:0/1613512628 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7fbb18000bf0 con 0x7fbb2406c530 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:21.371+0000 7fbb38dd8700 1 -- 192.168.123.105:0/1613512628 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbb2406c530 msgr2=0x7fbb2406e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:21.371+0000 7fbb38dd8700 1 --2- 192.168.123.105:0/1613512628 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbb2406c530 0x7fbb2406e9f0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fbb28009a50 tx=0x7fbb2800b560 comp rx=0 tx=0).stop 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:21.371+0000 7fbb38dd8700 1 -- 192.168.123.105:0/1613512628 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb2c0a5df0 msgr2=0x7fbb2c1434b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:21.371+0000 7fbb38dd8700 1 --2- 192.168.123.105:0/1613512628 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb2c0a5df0 0x7fbb2c1434b0 secure :-1 s=READY pgs=192 cs=0 l=1 rev1=1 crypto rx=0x7fbb2000ea30 tx=0x7fbb2000edf0 comp rx=0 tx=0).stop 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:21.371+0000 7fbb38dd8700 1 -- 192.168.123.105:0/1613512628 shutdown_connections 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:21.371+0000 7fbb38dd8700 1 --2- 192.168.123.105:0/1613512628 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbb2406c530 0x7fbb2406e9f0 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:21.371+0000 7fbb38dd8700 1 --2- 192.168.123.105:0/1613512628 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbb2c0a4cb0 0x7fbb2c142f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:21.371+0000 7fbb38dd8700 1 --2- 192.168.123.105:0/1613512628 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb2c0a5df0 0x7fbb2c1434b0 unknown :-1 s=CLOSED pgs=192 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:21.371+0000 7fbb38dd8700 1 -- 192.168.123.105:0/1613512628 >> 192.168.123.105:0/1613512628 conn(0x7fbb2c0a0170 msgr2=0x7fbb2c0a9020 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:21.371+0000 7fbb38dd8700 1 -- 192.168.123.105:0/1613512628 shutdown_connections 2026-03-10T10:16:21.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:21.371+0000 7fbb38dd8700 1 -- 192.168.123.105:0/1613512628 wait complete. 
2026-03-10T10:16:21.437 DEBUG:teuthology.orchestra.run.vm05:osd.4> sudo journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.4.service 2026-03-10T10:16:21.442 INFO:tasks.cephadm:Deploying osd.5 on vm05 with /dev/vdc... 2026-03-10T10:16:21.442 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- lvm zap /dev/vdc 2026-03-10T10:16:21.647 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm05/config 2026-03-10T10:16:21.682 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:21 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:21.682 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:21 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:21.682 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:21 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:21.682 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:21 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:21.682 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:21 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:21.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:21 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:21.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:21 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:21.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:21 vm02 ceph-mon[50200]: from='mgr.14225 
192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:21.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:21 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:21.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:21 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:22.195 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:16:22.214 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph orch daemon add osd vm05:/dev/vdc 2026-03-10T10:16:22.405 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:22 vm05 ceph-mon[59051]: pgmap v46: 1 pgs: 1 remapped+peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:16:22.406 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:22 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:22.406 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:22 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:22.507 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm05/config 2026-03-10T10:16:22.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:22 vm02 ceph-mon[50200]: pgmap v46: 1 pgs: 1 remapped+peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:16:22.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:22 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:22.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:22 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 
2026-03-10T10:16:22.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.823+0000 7f1974de0700 1 -- 192.168.123.105:0/1248310575 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1970102e70 msgr2=0x7f1970103290 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:22.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.823+0000 7f1974de0700 1 --2- 192.168.123.105:0/1248310575 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1970102e70 0x7f1970103290 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f1958009b00 tx=0x7f1958009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:22.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.824+0000 7f1974de0700 1 -- 192.168.123.105:0/1248310575 shutdown_connections 2026-03-10T10:16:22.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.824+0000 7f1974de0700 1 --2- 192.168.123.105:0/1248310575 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1970104060 0x7f19701044e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:22.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.824+0000 7f1974de0700 1 --2- 192.168.123.105:0/1248310575 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1970102e70 0x7f1970103290 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:22.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.824+0000 7f1974de0700 1 -- 192.168.123.105:0/1248310575 >> 192.168.123.105:0/1248310575 conn(0x7f19700fe440 msgr2=0x7f19701008a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:22.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.824+0000 7f1974de0700 1 -- 192.168.123.105:0/1248310575 shutdown_connections 2026-03-10T10:16:22.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.824+0000 7f1974de0700 1 -- 192.168.123.105:0/1248310575 
wait complete. 2026-03-10T10:16:22.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.824+0000 7f1974de0700 1 Processor -- start 2026-03-10T10:16:22.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.825+0000 7f1974de0700 1 -- start start 2026-03-10T10:16:22.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.825+0000 7f1974de0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1970102e70 0x7f1970198780 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:22.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.825+0000 7f1974de0700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1970104060 0x7f1970198cc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:22.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.825+0000 7f1974de0700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f19701992e0 con 0x7f1970104060 2026-03-10T10:16:22.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.825+0000 7f1974de0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1970199420 con 0x7f1970102e70 2026-03-10T10:16:22.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.825+0000 7f196f7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1970102e70 0x7f1970198780 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:22.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.825+0000 7f196f7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1970102e70 0x7f1970198780 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.105:53910/0 (socket says 192.168.123.105:53910) 2026-03-10T10:16:22.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.825+0000 7f196f7fe700 1 -- 192.168.123.105:0/1478994016 learned_addr learned my addr 192.168.123.105:0/1478994016 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:16:22.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.825+0000 7f196f7fe700 1 -- 192.168.123.105:0/1478994016 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1970104060 msgr2=0x7f1970198cc0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:16:22.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.825+0000 7f196f7fe700 1 --2- 192.168.123.105:0/1478994016 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1970104060 0x7f1970198cc0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:22.827 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.825+0000 7f196f7fe700 1 -- 192.168.123.105:0/1478994016 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f19580097e0 con 0x7f1970102e70 2026-03-10T10:16:22.827 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.825+0000 7f196f7fe700 1 --2- 192.168.123.105:0/1478994016 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1970102e70 0x7f1970198780 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f1958000c00 tx=0x7f19580056c0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:22.827 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.826+0000 7f196cff9700 1 -- 192.168.123.105:0/1478994016 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f195800bcd0 con 0x7f1970102e70 2026-03-10T10:16:22.827 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.826+0000 7f1974de0700 1 -- 
192.168.123.105:0/1478994016 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f19700751d0 con 0x7f1970102e70 2026-03-10T10:16:22.828 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.826+0000 7f1974de0700 1 -- 192.168.123.105:0/1478994016 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f19700756c0 con 0x7f1970102e70 2026-03-10T10:16:22.828 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.826+0000 7f196cff9700 1 -- 192.168.123.105:0/1478994016 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1958021d30 con 0x7f1970102e70 2026-03-10T10:16:22.828 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.826+0000 7f196cff9700 1 -- 192.168.123.105:0/1478994016 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f195800fe20 con 0x7f1970102e70 2026-03-10T10:16:22.828 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.827+0000 7f196cff9700 1 -- 192.168.123.105:0/1478994016 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f1958034430 con 0x7f1970102e70 2026-03-10T10:16:22.828 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.827+0000 7f196cff9700 1 --2- 192.168.123.105:0/1478994016 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f195c06c600 0x7f195c06eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:22.828 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.827+0000 7f196cff9700 1 -- 192.168.123.105:0/1478994016 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(25..25 src has 1..25) v4 ==== 3697+0+0 (secure 0 0 0) 0x7f1958096330 con 0x7f1970102e70 2026-03-10T10:16:22.828 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.827+0000 7f196effd700 1 --2- 192.168.123.105:0/1478994016 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f195c06c600 0x7f195c06eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:22.829 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.828+0000 7f1974de0700 1 -- 192.168.123.105:0/1478994016 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1950005320 con 0x7f1970102e70 2026-03-10T10:16:22.831 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.828+0000 7f196effd700 1 --2- 192.168.123.105:0/1478994016 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f195c06c600 0x7f195c06eac0 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f1970102ba0 tx=0x7f1960005c30 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:22.834 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.832+0000 7f196cff9700 1 -- 192.168.123.105:0/1478994016 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f1958064ce0 con 0x7f1970102e70 2026-03-10T10:16:22.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:22.944+0000 7f1974de0700 1 -- 192.168.123.105:0/1478994016 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}) v1 -- 0x7f1950000bf0 con 0x7f195c06c600 2026-03-10T10:16:23.291 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:16:23 vm05 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[70837]: 2026-03-10T10:16:23.059+0000 7f4f86590640 -1 osd.4 0 log_to_monitors true 2026-03-10T10:16:23.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:23 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 
cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:16:23.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:23 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:16:23.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:23 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:23.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:23 vm02 ceph-mon[50200]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T10:16:23.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:23 vm02 ceph-mon[50200]: from='osd.4 [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T10:16:23.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:23 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:23.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:23 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:23.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:23 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:16:23.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:23 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:23.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:23 vm02 ceph-mon[50200]: from='mgr.14225 
192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:23.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:23 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:23.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:23 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:16:23.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:23 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:16:23.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:23 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:23.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:23 vm05 ceph-mon[59051]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T10:16:23.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:23 vm05 ceph-mon[59051]: from='osd.4 [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T10:16:23.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:23 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:23.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:23 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:23.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:23 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 
cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:16:23.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:23 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:23.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:23 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:23.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:23 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:24.280 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:16:23 vm05 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[70837]: 2026-03-10T10:16:23.916+0000 7f4f7b3f3700 -1 osd.4 0 waiting for initial osdmap 2026-03-10T10:16:24.280 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:16:23 vm05 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[70837]: 2026-03-10T10:16:23.928+0000 7f4f779e9700 -1 osd.4 27 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='client.24169 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: Detected new or changed devices on vm05 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": 
"osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: osdmap e26: 5 total, 4 up, 5 in 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='osd.4 [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='client.? 
192.168.123.105:0/3197403261' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bf16e555-2559-41cf-b9cc-38646188d928"}]: dispatch 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='client.? 192.168.123.105:0/3197403261' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bf16e555-2559-41cf-b9cc-38646188d928"}]': finished 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: osdmap e27: 6 total, 4 up, 6 in 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:24 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='client.24169 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:16:24.780 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: Detected new or changed devices on vm05 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: osdmap e26: 5 total, 4 up, 5 in 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='osd.4 [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:24.780 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/3197403261' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bf16e555-2559-41cf-b9cc-38646188d928"}]: dispatch 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/3197403261' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bf16e555-2559-41cf-b9cc-38646188d928"}]': finished 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: osdmap e27: 6 total, 4 up, 6 in 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:24.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:24 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:25.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:25 vm02 ceph-mon[50200]: from='client.? 
192.168.123.105:0/275342444' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:16:25.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:25 vm02 ceph-mon[50200]: osd.4 [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333] boot 2026-03-10T10:16:25.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:25 vm02 ceph-mon[50200]: osdmap e28: 6 total, 5 up, 6 in 2026-03-10T10:16:25.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:25 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:25.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:25 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:25.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:25 vm05 ceph-mon[59051]: from='client.? 192.168.123.105:0/275342444' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:16:25.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:25 vm05 ceph-mon[59051]: osd.4 [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333] boot 2026-03-10T10:16:25.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:25 vm05 ceph-mon[59051]: osdmap e28: 6 total, 5 up, 6 in 2026-03-10T10:16:25.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:25 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:16:25.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:25 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:26.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:26 vm02 ceph-mon[50200]: purged_snaps scrub starts 2026-03-10T10:16:26.780 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:26 vm02 ceph-mon[50200]: purged_snaps scrub ok 2026-03-10T10:16:26.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:26 vm02 ceph-mon[50200]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-10T10:16:26.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:26 vm02 ceph-mon[50200]: osdmap e29: 6 total, 5 up, 6 in 2026-03-10T10:16:26.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:26 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:26.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:26 vm05 ceph-mon[59051]: purged_snaps scrub starts 2026-03-10T10:16:26.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:26 vm05 ceph-mon[59051]: purged_snaps scrub ok 2026-03-10T10:16:26.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:26 vm05 ceph-mon[59051]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-10T10:16:26.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:26 vm05 ceph-mon[59051]: osdmap e29: 6 total, 5 up, 6 in 2026-03-10T10:16:26.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:26 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:28.480 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:28 vm05 ceph-mon[59051]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:16:28.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:28 vm02 ceph-mon[50200]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:16:29.701 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:29 vm05 ceph-mon[59051]: 
from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T10:16:29.701 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:29 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:29.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:29 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T10:16:29.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:29 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:30.807 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:30 vm05 ceph-mon[59051]: Deploying daemon osd.5 on vm05 2026-03-10T10:16:30.807 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:30 vm05 ceph-mon[59051]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:16:30.807 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:30 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:30.807 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:30 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:30.807 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:30 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:30.807 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:30 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:31.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:30 vm02 
ceph-mon[50200]: Deploying daemon osd.5 on vm05 2026-03-10T10:16:31.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:30 vm02 ceph-mon[50200]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:16:31.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:31.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:31.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:31.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:30 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 5 on host 'vm05' 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:31.225+0000 7f196cff9700 1 -- 192.168.123.105:0/1478994016 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7f1950000bf0 con 0x7f195c06c600 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:31.228+0000 7f19667fc700 1 -- 192.168.123.105:0/1478994016 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f195c06c600 msgr2=0x7f195c06eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:31.228+0000 7f19667fc700 1 --2- 192.168.123.105:0/1478994016 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f195c06c600 0x7f195c06eac0 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto 
rx=0x7f1970102ba0 tx=0x7f1960005c30 comp rx=0 tx=0).stop 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:31.228+0000 7f19667fc700 1 -- 192.168.123.105:0/1478994016 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1970102e70 msgr2=0x7f1970198780 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:31.228+0000 7f19667fc700 1 --2- 192.168.123.105:0/1478994016 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1970102e70 0x7f1970198780 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f1958000c00 tx=0x7f19580056c0 comp rx=0 tx=0).stop 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:31.228+0000 7f19667fc700 1 -- 192.168.123.105:0/1478994016 shutdown_connections 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:31.228+0000 7f19667fc700 1 --2- 192.168.123.105:0/1478994016 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f195c06c600 0x7f195c06eac0 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:31.228+0000 7f19667fc700 1 --2- 192.168.123.105:0/1478994016 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1970102e70 0x7f1970198780 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:31.228+0000 7f19667fc700 1 --2- 192.168.123.105:0/1478994016 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1970104060 0x7f1970198cc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:31.228+0000 7f19667fc700 1 -- 192.168.123.105:0/1478994016 >> 192.168.123.105:0/1478994016 
conn(0x7f19700fe440 msgr2=0x7f1970107320 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:31.228+0000 7f19667fc700 1 -- 192.168.123.105:0/1478994016 shutdown_connections 2026-03-10T10:16:31.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:31.228+0000 7f19667fc700 1 -- 192.168.123.105:0/1478994016 wait complete. 2026-03-10T10:16:31.319 DEBUG:teuthology.orchestra.run.vm05:osd.5> sudo journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.5.service 2026-03-10T10:16:31.321 INFO:tasks.cephadm:Waiting for 6 OSDs to come up... 2026-03-10T10:16:31.321 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd stat -f json 2026-03-10T10:16:31.486 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:31.845 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.842+0000 7f977da4a700 1 -- 192.168.123.102:0/31277890 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9778102e70 msgr2=0x7f9778103290 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:31.845 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.842+0000 7f977da4a700 1 --2- 192.168.123.102:0/31277890 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9778102e70 0x7f9778103290 secure :-1 s=READY pgs=195 cs=0 l=1 rev1=1 crypto rx=0x7f9768009b00 tx=0x7f9768009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:31.845 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.843+0000 7f977da4a700 1 -- 192.168.123.102:0/31277890 shutdown_connections 2026-03-10T10:16:31.845 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.843+0000 7f977da4a700 1 --2- 192.168.123.102:0/31277890 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9778104060 0x7f97781044e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:31.845 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.843+0000 7f977da4a700 1 --2- 192.168.123.102:0/31277890 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9778102e70 0x7f9778103290 unknown :-1 s=CLOSED pgs=195 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:31.845 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.844+0000 7f977da4a700 1 -- 192.168.123.102:0/31277890 >> 192.168.123.102:0/31277890 conn(0x7f97780fe440 msgr2=0x7f97781008a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:31.845 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.844+0000 7f977da4a700 1 -- 192.168.123.102:0/31277890 shutdown_connections 2026-03-10T10:16:31.846 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.844+0000 7f977da4a700 1 -- 192.168.123.102:0/31277890 wait complete. 
2026-03-10T10:16:31.846 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.844+0000 7f977da4a700 1 Processor -- start 2026-03-10T10:16:31.846 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.845+0000 7f977da4a700 1 -- start start 2026-03-10T10:16:31.846 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.845+0000 7f977da4a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9778104060 0x7f9778198a40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:31.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.845+0000 7f977da4a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9778198f80 0x7f977819dff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:31.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.845+0000 7f977da4a700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9778199490 con 0x7f9778198f80 2026-03-10T10:16:31.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.845+0000 7f977da4a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9778199600 con 0x7f9778104060 2026-03-10T10:16:31.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.845+0000 7f9777fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9778198f80 0x7f977819dff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:31.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.845+0000 7f9777fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9778198f80 0x7f977819dff0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:60150/0 (socket says 192.168.123.102:60150) 2026-03-10T10:16:31.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.845+0000 7f9777fff700 1 -- 192.168.123.102:0/1325853993 learned_addr learned my addr 192.168.123.102:0/1325853993 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:31.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.846+0000 7f9777fff700 1 -- 192.168.123.102:0/1325853993 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9778104060 msgr2=0x7f9778198a40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:31.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.846+0000 7f9777fff700 1 --2- 192.168.123.102:0/1325853993 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9778104060 0x7f9778198a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:31.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.846+0000 7f9777fff700 1 -- 192.168.123.102:0/1325853993 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f97680097e0 con 0x7f9778198f80 2026-03-10T10:16:31.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.846+0000 7f9777fff700 1 --2- 192.168.123.102:0/1325853993 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9778198f80 0x7f977819dff0 secure :-1 s=READY pgs=196 cs=0 l=1 rev1=1 crypto rx=0x7f976c00ba70 tx=0x7f976c00be30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:31.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.846+0000 7f9775ffb700 1 -- 192.168.123.102:0/1325853993 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f976c00c760 con 0x7f9778198f80 2026-03-10T10:16:31.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.846+0000 7f9775ffb700 1 -- 
192.168.123.102:0/1325853993 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f976c00cda0 con 0x7f9778198f80 2026-03-10T10:16:31.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.846+0000 7f9775ffb700 1 -- 192.168.123.102:0/1325853993 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f976c012550 con 0x7f9778198f80 2026-03-10T10:16:31.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.846+0000 7f977da4a700 1 -- 192.168.123.102:0/1325853993 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f977819e590 con 0x7f9778198f80 2026-03-10T10:16:31.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.846+0000 7f977da4a700 1 -- 192.168.123.102:0/1325853993 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f977819eb60 con 0x7f9778198f80 2026-03-10T10:16:31.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.847+0000 7f977da4a700 1 -- 192.168.123.102:0/1325853993 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f977804ea90 con 0x7f9778198f80 2026-03-10T10:16:31.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.850+0000 7f9775ffb700 1 -- 192.168.123.102:0/1325853993 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f976c014440 con 0x7f9778198f80 2026-03-10T10:16:31.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.851+0000 7f9775ffb700 1 --2- 192.168.123.102:0/1325853993 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f976006c5b0 0x7f976006ea70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:31.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.851+0000 7f9775ffb700 1 -- 192.168.123.102:0/1325853993 <== mon.0 
v2:192.168.123.102:3300/0 5 ==== osd_map(29..29 src has 1..29) v4 ==== 4129+0+0 (secure 0 0 0) 0x7f976c08ad20 con 0x7f9778198f80 2026-03-10T10:16:31.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.851+0000 7f9775ffb700 1 -- 192.168.123.102:0/1325853993 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f976c08b1a0 con 0x7f9778198f80 2026-03-10T10:16:31.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.852+0000 7f977ca48700 1 --2- 192.168.123.102:0/1325853993 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f976006c5b0 0x7f976006ea70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:31.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.853+0000 7f977ca48700 1 --2- 192.168.123.102:0/1325853993 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f976006c5b0 0x7f976006ea70 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f976800b5c0 tx=0x7f9768005fd0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:31.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.963+0000 7f977da4a700 1 -- 192.168.123.102:0/1325853993 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) v1 -- 0x7f977819e720 con 0x7f9778198f80 2026-03-10T10:16:31.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.964+0000 7f9775ffb700 1 -- 192.168.123.102:0/1325853993 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v29) v1 ==== 74+0+130 (secure 0 0 0) 0x7f977819e720 con 0x7f9778198f80 2026-03-10T10:16:31.965 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:31.968 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.966+0000 7f977da4a700 1 -- 192.168.123.102:0/1325853993 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f976006c5b0 msgr2=0x7f976006ea70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:31.968 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.966+0000 7f977da4a700 1 --2- 192.168.123.102:0/1325853993 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f976006c5b0 0x7f976006ea70 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f976800b5c0 tx=0x7f9768005fd0 comp rx=0 tx=0).stop 2026-03-10T10:16:31.968 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.966+0000 7f977da4a700 1 -- 192.168.123.102:0/1325853993 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9778198f80 msgr2=0x7f977819dff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:31.968 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.967+0000 7f977da4a700 1 --2- 192.168.123.102:0/1325853993 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9778198f80 0x7f977819dff0 secure :-1 s=READY pgs=196 cs=0 l=1 rev1=1 crypto rx=0x7f976c00ba70 tx=0x7f976c00be30 comp rx=0 tx=0).stop 2026-03-10T10:16:31.968 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.967+0000 7f977da4a700 1 -- 192.168.123.102:0/1325853993 shutdown_connections 2026-03-10T10:16:31.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.967+0000 7f977da4a700 1 --2- 192.168.123.102:0/1325853993 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f976006c5b0 0x7f976006ea70 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:31.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.967+0000 7f977da4a700 1 --2- 192.168.123.102:0/1325853993 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9778104060 0x7f9778198a40 unknown :-1 s=CLOSED pgs=0 
cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:31.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.967+0000 7f977da4a700 1 --2- 192.168.123.102:0/1325853993 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9778198f80 0x7f977819dff0 unknown :-1 s=CLOSED pgs=196 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:31.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.967+0000 7f977da4a700 1 -- 192.168.123.102:0/1325853993 >> 192.168.123.102:0/1325853993 conn(0x7f97780fe440 msgr2=0x7f9778107320 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:31.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.967+0000 7f977da4a700 1 -- 192.168.123.102:0/1325853993 shutdown_connections 2026-03-10T10:16:31.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:31.967+0000 7f977da4a700 1 -- 192.168.123.102:0/1325853993 wait complete. 2026-03-10T10:16:32.033 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":29,"num_osds":6,"num_up_osds":5,"osd_up_since":1773137784,"num_in_osds":6,"osd_in_since":1773137783,"num_remapped_pgs":0} 2026-03-10T10:16:32.225 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:32 vm02 ceph-mon[50200]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:16:32.225 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:32 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:32.225 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:32 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:32.225 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:32 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:32.225 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:32 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 
2026-03-10T10:16:32.225 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:32 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/1325853993' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:16:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:32 vm05 ceph-mon[59051]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:16:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:32 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:32 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:32 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:32 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:32 vm05 ceph-mon[59051]: from='client.? 
192.168.123.102:0/1325853993' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:16:33.035 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd stat -f json 2026-03-10T10:16:33.037 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:16:32 vm05 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[77624]: 2026-03-10T10:16:32.612+0000 7fbfa51df640 -1 osd.5 0 log_to_monitors true 2026-03-10T10:16:33.174 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:33.325 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:33 vm02 ceph-mon[50200]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T10:16:33.325 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:33 vm02 ceph-mon[50200]: from='osd.5 [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T10:16:33.325 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:33.325 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:33.325 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:16:33.325 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' 
entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:33.325 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:33.325 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:33.325 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:33.325 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:33.325 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:33.325 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:33 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:33.431 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.429+0000 7f3379462700 1 -- 192.168.123.102:0/2524863518 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3374103140 msgr2=0x7f3374103560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:33.431 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.429+0000 7f3379462700 1 --2- 192.168.123.102:0/2524863518 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3374103140 0x7f3374103560 secure :-1 s=READY pgs=197 cs=0 l=1 rev1=1 crypto rx=0x7f335c009b50 tx=0x7f335c009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:33.431 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.430+0000 7f3379462700 1 -- 192.168.123.102:0/2524863518 shutdown_connections 2026-03-10T10:16:33.431 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.430+0000 7f3379462700 1 --2- 192.168.123.102:0/2524863518 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3374104340 0x7f33741047a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:33.431 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.430+0000 7f3379462700 1 --2- 192.168.123.102:0/2524863518 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3374103140 0x7f3374103560 unknown :-1 s=CLOSED pgs=197 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:33.431 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.430+0000 7f3379462700 1 -- 192.168.123.102:0/2524863518 >> 192.168.123.102:0/2524863518 conn(0x7f33740fe6c0 msgr2=0x7f3374100b20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:33.431 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.430+0000 7f3379462700 1 -- 192.168.123.102:0/2524863518 shutdown_connections 2026-03-10T10:16:33.432 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.430+0000 7f3379462700 1 -- 192.168.123.102:0/2524863518 wait complete. 
2026-03-10T10:16:33.432 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.430+0000 7f3379462700 1 Processor -- start 2026-03-10T10:16:33.432 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.431+0000 7f3379462700 1 -- start start 2026-03-10T10:16:33.432 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.431+0000 7f3379462700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3374104340 0x7f3374198cf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:33.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.431+0000 7f3379462700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3374199230 0x7f337419e260 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:33.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.431+0000 7f3379462700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f33741996b0 con 0x7f3374104340 2026-03-10T10:16:33.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.431+0000 7f3379462700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3374199820 con 0x7f3374199230 2026-03-10T10:16:33.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.431+0000 7f33727fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3374199230 0x7f337419e260 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:33.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.431+0000 7f33727fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3374199230 0x7f337419e260 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:39086/0 (socket says 192.168.123.102:39086) 2026-03-10T10:16:33.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.431+0000 7f33727fc700 1 -- 192.168.123.102:0/3758223322 learned_addr learned my addr 192.168.123.102:0/3758223322 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:33.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.431+0000 7f33727fc700 1 -- 192.168.123.102:0/3758223322 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3374104340 msgr2=0x7f3374198cf0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:16:33.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.432+0000 7f3372ffd700 1 --2- 192.168.123.102:0/3758223322 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3374104340 0x7f3374198cf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:33.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.432+0000 7f33727fc700 1 --2- 192.168.123.102:0/3758223322 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3374104340 0x7f3374198cf0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:33.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.432+0000 7f33727fc700 1 -- 192.168.123.102:0/3758223322 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f335c0097e0 con 0x7f3374199230 2026-03-10T10:16:33.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.432+0000 7f3372ffd700 1 --2- 192.168.123.102:0/3758223322 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3374104340 0x7f3374198cf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:16:33.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.432+0000 7f33727fc700 1 --2- 192.168.123.102:0/3758223322 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3374199230 0x7f337419e260 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f336400d8d0 tx=0x7f336400dc90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:33.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.432+0000 7f336bfff700 1 -- 192.168.123.102:0/3758223322 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3364009940 con 0x7f3374199230 2026-03-10T10:16:33.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.432+0000 7f336bfff700 1 -- 192.168.123.102:0/3758223322 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3364010460 con 0x7f3374199230 2026-03-10T10:16:33.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.432+0000 7f336bfff700 1 -- 192.168.123.102:0/3758223322 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f336400f5d0 con 0x7f3374199230 2026-03-10T10:16:33.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.432+0000 7f3379462700 1 -- 192.168.123.102:0/3758223322 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f337419e800 con 0x7f3374199230 2026-03-10T10:16:33.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.432+0000 7f3379462700 1 -- 192.168.123.102:0/3758223322 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f337419ed20 con 0x7f3374199230 2026-03-10T10:16:33.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.433+0000 7f3379462700 1 -- 192.168.123.102:0/3758223322 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f3374066e80 con 0x7f3374199230 2026-03-10T10:16:33.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.434+0000 7f336bfff700 1 -- 192.168.123.102:0/3758223322 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f33640105d0 con 0x7f3374199230 2026-03-10T10:16:33.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.434+0000 7f336bfff700 1 --2- 192.168.123.102:0/3758223322 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f336006c4e0 0x7f336006e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:33.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.434+0000 7f336bfff700 1 -- 192.168.123.102:0/3758223322 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(30..30 src has 1..30) v4 ==== 4150+0+0 (secure 0 0 0) 0x7f336408b7b0 con 0x7f3374199230 2026-03-10T10:16:33.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.435+0000 7f3372ffd700 1 --2- 192.168.123.102:0/3758223322 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f336006c4e0 0x7f336006e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:33.437 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.435+0000 7f3372ffd700 1 --2- 192.168.123.102:0/3758223322 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f336006c4e0 0x7f336006e9a0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f337419a1d0 tx=0x7f335c0058e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:33.439 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.437+0000 7f336bfff700 1 -- 192.168.123.102:0/3758223322 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 
0 0) 0x7f3364059fa0 con 0x7f3374199230 2026-03-10T10:16:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:33 vm05 ceph-mon[59051]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T10:16:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:33 vm05 ceph-mon[59051]: from='osd.5 [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T10:16:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:16:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-10T10:16:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:33 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:33.542 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.541+0000 7f3379462700 1 -- 192.168.123.102:0/3758223322 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) v1 -- 0x7f337419f000 con 0x7f3374199230 2026-03-10T10:16:33.543 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.541+0000 7f336bfff700 1 -- 192.168.123.102:0/3758223322 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v30) v1 ==== 74+0+130 (secure 0 0 0) 0x7f3364059b30 con 0x7f3374199230 2026-03-10T10:16:33.543 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:33.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.545+0000 7f3369ffb700 1 -- 192.168.123.102:0/3758223322 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f336006c4e0 msgr2=0x7f336006e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:33.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.545+0000 7f3369ffb700 1 --2- 192.168.123.102:0/3758223322 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f336006c4e0 0x7f336006e9a0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f337419a1d0 tx=0x7f335c0058e0 comp rx=0 tx=0).stop 2026-03-10T10:16:33.546 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.545+0000 7f3369ffb700 1 -- 192.168.123.102:0/3758223322 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3374199230 msgr2=0x7f337419e260 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:33.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.545+0000 7f3369ffb700 1 --2- 192.168.123.102:0/3758223322 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3374199230 0x7f337419e260 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f336400d8d0 tx=0x7f336400dc90 comp rx=0 tx=0).stop 2026-03-10T10:16:33.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.545+0000 7f3369ffb700 1 -- 192.168.123.102:0/3758223322 shutdown_connections 2026-03-10T10:16:33.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.545+0000 7f3369ffb700 1 --2- 192.168.123.102:0/3758223322 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3374104340 0x7f3374198cf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:33.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.545+0000 7f3369ffb700 1 --2- 192.168.123.102:0/3758223322 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f336006c4e0 0x7f336006e9a0 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:33.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.545+0000 7f3369ffb700 1 --2- 192.168.123.102:0/3758223322 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3374199230 0x7f337419e260 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:33.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.545+0000 7f3369ffb700 1 -- 192.168.123.102:0/3758223322 >> 192.168.123.102:0/3758223322 conn(0x7f33740fe6c0 msgr2=0x7f3374107570 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-10T10:16:33.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.545+0000 7f3369ffb700 1 -- 192.168.123.102:0/3758223322 shutdown_connections 2026-03-10T10:16:33.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:33.545+0000 7f3369ffb700 1 -- 192.168.123.102:0/3758223322 wait complete. 2026-03-10T10:16:33.597 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":30,"num_osds":6,"num_up_osds":5,"osd_up_since":1773137784,"num_in_osds":6,"osd_in_since":1773137783,"num_remapped_pgs":0} 2026-03-10T10:16:34.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:34 vm02 ceph-mon[50200]: Detected new or changed devices on vm05 2026-03-10T10:16:34.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:34 vm02 ceph-mon[50200]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:16:34.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:34 vm02 ceph-mon[50200]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T10:16:34.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:34 vm02 ceph-mon[50200]: osdmap e30: 6 total, 5 up, 6 in 2026-03-10T10:16:34.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:34 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:34.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:34 vm02 ceph-mon[50200]: from='osd.5 [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:16:34.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:34 vm02 ceph-mon[50200]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: 
dispatch 2026-03-10T10:16:34.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:34 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3758223322' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:16:34.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:34 vm02 ceph-mon[50200]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T10:16:34.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:34 vm02 ceph-mon[50200]: osdmap e31: 6 total, 5 up, 6 in 2026-03-10T10:16:34.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:34 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:34 vm05 ceph-mon[59051]: Detected new or changed devices on vm05 2026-03-10T10:16:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:34 vm05 ceph-mon[59051]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:16:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:34 vm05 ceph-mon[59051]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T10:16:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:34 vm05 ceph-mon[59051]: osdmap e30: 6 total, 5 up, 6 in 2026-03-10T10:16:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:34 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:34 vm05 ceph-mon[59051]: from='osd.5 [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979]' entity='osd.5' cmd=[{"prefix": "osd crush 
create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:16:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:34 vm05 ceph-mon[59051]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:16:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:34 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/3758223322' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:16:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:34 vm05 ceph-mon[59051]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T10:16:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:34 vm05 ceph-mon[59051]: osdmap e31: 6 total, 5 up, 6 in 2026-03-10T10:16:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:34 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:34.537 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:16:34 vm05 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[77624]: 2026-03-10T10:16:34.233+0000 7fbf9a042700 -1 osd.5 0 waiting for initial osdmap 2026-03-10T10:16:34.537 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:16:34 vm05 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[77624]: 2026-03-10T10:16:34.238+0000 7fbf93e33700 -1 osd.5 31 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T10:16:34.598 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd stat -f json 2026-03-10T10:16:34.741 
INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:34.968 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.966+0000 7efdd45df700 1 -- 192.168.123.102:0/3866477297 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efdcc074dc0 msgr2=0x7efdcc073220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:34.968 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.966+0000 7efdd45df700 1 --2- 192.168.123.102:0/3866477297 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efdcc074dc0 0x7efdcc073220 secure :-1 s=READY pgs=198 cs=0 l=1 rev1=1 crypto rx=0x7efdbc009b50 tx=0x7efdbc009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:34.968 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.967+0000 7efdd45df700 1 -- 192.168.123.102:0/3866477297 shutdown_connections 2026-03-10T10:16:34.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.967+0000 7efdd45df700 1 --2- 192.168.123.102:0/3866477297 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efdcc0737f0 0x7efdcc073c70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:34.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.967+0000 7efdd45df700 1 --2- 192.168.123.102:0/3866477297 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efdcc074dc0 0x7efdcc073220 unknown :-1 s=CLOSED pgs=198 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:34.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.967+0000 7efdd45df700 1 -- 192.168.123.102:0/3866477297 >> 192.168.123.102:0/3866477297 conn(0x7efdcc0fc4d0 msgr2=0x7efdcc0fe930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:34.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.967+0000 7efdd45df700 1 -- 192.168.123.102:0/3866477297 shutdown_connections 
2026-03-10T10:16:34.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.967+0000 7efdd45df700 1 -- 192.168.123.102:0/3866477297 wait complete. 2026-03-10T10:16:34.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.968+0000 7efdd45df700 1 Processor -- start 2026-03-10T10:16:34.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.968+0000 7efdd45df700 1 -- start start 2026-03-10T10:16:34.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.968+0000 7efdd45df700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efdcc0737f0 0x7efdcc198a80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:34.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.968+0000 7efdd237b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efdcc0737f0 0x7efdcc198a80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:34.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.968+0000 7efdd237b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efdcc0737f0 0x7efdcc198a80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:60192/0 (socket says 192.168.123.102:60192) 2026-03-10T10:16:34.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.969+0000 7efdd45df700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efdcc074dc0 0x7efdcc198fc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:34.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.969+0000 7efdd45df700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efdcc1995e0 con 0x7efdcc0737f0 
2026-03-10T10:16:34.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.969+0000 7efdd237b700 1 -- 192.168.123.102:0/3585827454 learned_addr learned my addr 192.168.123.102:0/3585827454 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:34.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.969+0000 7efdd1b7a700 1 --2- 192.168.123.102:0/3585827454 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efdcc074dc0 0x7efdcc198fc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:34.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.969+0000 7efdd45df700 1 -- 192.168.123.102:0/3585827454 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efdcc199720 con 0x7efdcc074dc0 2026-03-10T10:16:34.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.969+0000 7efdd237b700 1 -- 192.168.123.102:0/3585827454 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efdcc074dc0 msgr2=0x7efdcc198fc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:34.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.969+0000 7efdd237b700 1 --2- 192.168.123.102:0/3585827454 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efdcc074dc0 0x7efdcc198fc0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:34.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.969+0000 7efdd237b700 1 -- 192.168.123.102:0/3585827454 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7efdbc0097e0 con 0x7efdcc0737f0 2026-03-10T10:16:34.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.969+0000 7efdd237b700 1 --2- 192.168.123.102:0/3585827454 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7efdcc0737f0 0x7efdcc198a80 secure :-1 s=READY pgs=199 cs=0 l=1 rev1=1 crypto rx=0x7efdbc004ce0 tx=0x7efdbc005740 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:34.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.970+0000 7efdc37fe700 1 -- 192.168.123.102:0/3585827454 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efdbc01d070 con 0x7efdcc0737f0 2026-03-10T10:16:34.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.970+0000 7efdc37fe700 1 -- 192.168.123.102:0/3585827454 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7efdbc022470 con 0x7efdcc0737f0 2026-03-10T10:16:34.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.970+0000 7efdc37fe700 1 -- 192.168.123.102:0/3585827454 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efdbc00f650 con 0x7efdcc0737f0 2026-03-10T10:16:34.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.970+0000 7efdd45df700 1 -- 192.168.123.102:0/3585827454 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7efdcc19e170 con 0x7efdcc0737f0 2026-03-10T10:16:34.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.970+0000 7efdd45df700 1 -- 192.168.123.102:0/3585827454 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7efdcc19e610 con 0x7efdcc0737f0 2026-03-10T10:16:34.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.971+0000 7efdd45df700 1 -- 192.168.123.102:0/3585827454 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7efdcc066e80 con 0x7efdcc0737f0 2026-03-10T10:16:34.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.971+0000 7efdc37fe700 1 -- 192.168.123.102:0/3585827454 <== mon.0 
v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7efdbc022a60 con 0x7efdcc0737f0 2026-03-10T10:16:34.976 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.974+0000 7efdc37fe700 1 --2- 192.168.123.102:0/3585827454 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efdb806c5b0 0x7efdb806ea70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:34.976 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.974+0000 7efdc37fe700 1 -- 192.168.123.102:0/3585827454 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(31..31 src has 1..31) v4 ==== 4166+0+0 (secure 0 0 0) 0x7efdbc08ca40 con 0x7efdcc0737f0 2026-03-10T10:16:34.976 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.974+0000 7efdd1b7a700 1 --2- 192.168.123.102:0/3585827454 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efdb806c5b0 0x7efdb806ea70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:34.976 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.974+0000 7efdd1b7a700 1 --2- 192.168.123.102:0/3585827454 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efdb806c5b0 0x7efdb806ea70 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7efdcc074af0 tx=0x7efdc8007400 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:34.976 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:34.974+0000 7efdc37fe700 1 -- 192.168.123.102:0/3585827454 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7efdbc05b340 con 0x7efdcc0737f0 2026-03-10T10:16:35.080 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.077+0000 7efdd45df700 1 -- 192.168.123.102:0/3585827454 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) v1 -- 0x7efdcc19e9e0 con 0x7efdcc0737f0 2026-03-10T10:16:35.080 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.078+0000 7efdc37fe700 1 -- 192.168.123.102:0/3585827454 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v31) v1 ==== 74+0+130 (secure 0 0 0) 0x7efdbc027090 con 0x7efdcc0737f0 2026-03-10T10:16:35.080 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:35.081 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.080+0000 7efdd45df700 1 -- 192.168.123.102:0/3585827454 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efdb806c5b0 msgr2=0x7efdb806ea70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:35.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.080+0000 7efdd45df700 1 --2- 192.168.123.102:0/3585827454 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efdb806c5b0 0x7efdb806ea70 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7efdcc074af0 tx=0x7efdc8007400 comp rx=0 tx=0).stop 2026-03-10T10:16:35.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.080+0000 7efdd45df700 1 -- 192.168.123.102:0/3585827454 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efdcc0737f0 msgr2=0x7efdcc198a80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:35.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.080+0000 7efdd45df700 1 --2- 192.168.123.102:0/3585827454 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efdcc0737f0 0x7efdcc198a80 secure :-1 s=READY pgs=199 cs=0 l=1 rev1=1 crypto rx=0x7efdbc004ce0 tx=0x7efdbc005740 comp rx=0 tx=0).stop 2026-03-10T10:16:35.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.080+0000 7efdd45df700 1 -- 192.168.123.102:0/3585827454 shutdown_connections 
2026-03-10T10:16:35.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.080+0000 7efdd45df700 1 --2- 192.168.123.102:0/3585827454 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efdcc0737f0 0x7efdcc198a80 unknown :-1 s=CLOSED pgs=199 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:35.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.080+0000 7efdd45df700 1 --2- 192.168.123.102:0/3585827454 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efdb806c5b0 0x7efdb806ea70 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:35.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.080+0000 7efdd45df700 1 --2- 192.168.123.102:0/3585827454 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efdcc074dc0 0x7efdcc198fc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:35.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.080+0000 7efdd45df700 1 -- 192.168.123.102:0/3585827454 >> 192.168.123.102:0/3585827454 conn(0x7efdcc0fc4d0 msgr2=0x7efdcc106d30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:35.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.080+0000 7efdd45df700 1 -- 192.168.123.102:0/3585827454 shutdown_connections 2026-03-10T10:16:35.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:35.080+0000 7efdd45df700 1 -- 192.168.123.102:0/3585827454 wait complete. 
2026-03-10T10:16:35.144 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":31,"num_osds":6,"num_up_osds":5,"osd_up_since":1773137784,"num_in_osds":6,"osd_in_since":1773137783,"num_remapped_pgs":0} 2026-03-10T10:16:35.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:35 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:35.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:35 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3585827454' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:16:35.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:35 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:35 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:35 vm05 ceph-mon[59051]: from='client.? 
192.168.123.102:0/3585827454' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:16:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:35 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:36.145 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd stat -f json 2026-03-10T10:16:36.296 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:36.440 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:36 vm02 ceph-mon[50200]: purged_snaps scrub starts 2026-03-10T10:16:36.440 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:36 vm02 ceph-mon[50200]: purged_snaps scrub ok 2026-03-10T10:16:36.440 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:36 vm02 ceph-mon[50200]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:16:36.440 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:36 vm02 ceph-mon[50200]: osd.5 [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979] boot 2026-03-10T10:16:36.440 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:36 vm02 ceph-mon[50200]: osdmap e32: 6 total, 6 up, 6 in 2026-03-10T10:16:36.440 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:36 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:36.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.545+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3334904061 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5c34103140 msgr2=0x7f5c34103560 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:36.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.545+0000 7f5c3b9eb700 1 --2- 192.168.123.102:0/3334904061 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5c34103140 0x7f5c34103560 secure :-1 s=READY pgs=200 cs=0 l=1 rev1=1 crypto rx=0x7f5c24009b50 tx=0x7f5c24009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:36.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.546+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3334904061 shutdown_connections 2026-03-10T10:16:36.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.546+0000 7f5c3b9eb700 1 --2- 192.168.123.102:0/3334904061 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5c34104340 0x7f5c341047a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:36.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.546+0000 7f5c3b9eb700 1 --2- 192.168.123.102:0/3334904061 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5c34103140 0x7f5c34103560 unknown :-1 s=CLOSED pgs=200 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:36.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.546+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3334904061 >> 192.168.123.102:0/3334904061 conn(0x7f5c340fe6c0 msgr2=0x7f5c34100b20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:36.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.547+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3334904061 shutdown_connections 2026-03-10T10:16:36.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.547+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3334904061 wait complete. 
2026-03-10T10:16:36.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.547+0000 7f5c3b9eb700 1 Processor -- start 2026-03-10T10:16:36.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.548+0000 7f5c3b9eb700 1 -- start start 2026-03-10T10:16:36.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.548+0000 7f5c3b9eb700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5c34104340 0x7f5c34198d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:36.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.548+0000 7f5c3b9eb700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5c34199290 0x7f5c3419e300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:36.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.548+0000 7f5c3b9eb700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5c341997a0 con 0x7f5c34199290 2026-03-10T10:16:36.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.548+0000 7f5c3b9eb700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5c34199910 con 0x7f5c34104340 2026-03-10T10:16:36.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.548+0000 7f5c38f86700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5c34199290 0x7f5c3419e300 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:36.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.548+0000 7f5c38f86700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5c34199290 0x7f5c3419e300 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:60210/0 (socket says 192.168.123.102:60210) 2026-03-10T10:16:36.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.548+0000 7f5c38f86700 1 -- 192.168.123.102:0/3079847771 learned_addr learned my addr 192.168.123.102:0/3079847771 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:36.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.549+0000 7f5c38f86700 1 -- 192.168.123.102:0/3079847771 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5c34104340 msgr2=0x7f5c34198d50 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:16:36.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.549+0000 7f5c39787700 1 --2- 192.168.123.102:0/3079847771 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5c34104340 0x7f5c34198d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:36.551 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.549+0000 7f5c38f86700 1 --2- 192.168.123.102:0/3079847771 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5c34104340 0x7f5c34198d50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:36.551 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.549+0000 7f5c38f86700 1 -- 192.168.123.102:0/3079847771 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5c240097e0 con 0x7f5c34199290 2026-03-10T10:16:36.551 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.549+0000 7f5c39787700 1 --2- 192.168.123.102:0/3079847771 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5c34104340 0x7f5c34198d50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:16:36.551 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.549+0000 7f5c38f86700 1 --2- 192.168.123.102:0/3079847771 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5c34199290 0x7f5c3419e300 secure :-1 s=READY pgs=201 cs=0 l=1 rev1=1 crypto rx=0x7f5c3000b700 tx=0x7f5c3000bac0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:36.551 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.549+0000 7f5c2a7fc700 1 -- 192.168.123.102:0/3079847771 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5c30010840 con 0x7f5c34199290 2026-03-10T10:16:36.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.549+0000 7f5c2a7fc700 1 -- 192.168.123.102:0/3079847771 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5c30010e80 con 0x7f5c34199290 2026-03-10T10:16:36.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.549+0000 7f5c2a7fc700 1 -- 192.168.123.102:0/3079847771 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5c3000d590 con 0x7f5c34199290 2026-03-10T10:16:36.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.550+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3079847771 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5c3419e8a0 con 0x7f5c34199290 2026-03-10T10:16:36.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.550+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3079847771 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5c3419edf0 con 0x7f5c34199290 2026-03-10T10:16:36.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.550+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3079847771 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f5c34066e80 con 0x7f5c34199290 2026-03-10T10:16:36.556 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.553+0000 7f5c2a7fc700 1 -- 192.168.123.102:0/3079847771 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f5c3000f3e0 con 0x7f5c34199290 2026-03-10T10:16:36.556 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.554+0000 7f5c2a7fc700 1 --2- 192.168.123.102:0/3079847771 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5c2006c4e0 0x7f5c2006e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:36.556 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.554+0000 7f5c2a7fc700 1 -- 192.168.123.102:0/3079847771 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f5c3008b700 con 0x7f5c34199290 2026-03-10T10:16:36.556 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.554+0000 7f5c2a7fc700 1 -- 192.168.123.102:0/3079847771 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f5c3008bae0 con 0x7f5c34199290 2026-03-10T10:16:36.556 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.554+0000 7f5c39787700 1 --2- 192.168.123.102:0/3079847771 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5c2006c4e0 0x7f5c2006e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:36.556 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.555+0000 7f5c39787700 1 --2- 192.168.123.102:0/3079847771 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5c2006c4e0 0x7f5c2006e9a0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f5c24005340 tx=0x7f5c240058e0 comp rx=0 tx=0).ready entity=mgr.14225 
client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:36.662 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.660+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3079847771 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) v1 -- 0x7f5c3419f1c0 con 0x7f5c34199290 2026-03-10T10:16:36.662 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.661+0000 7f5c2a7fc700 1 -- 192.168.123.102:0/3079847771 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v33) v1 ==== 74+0+130 (secure 0 0 0) 0x7f5c300140a0 con 0x7f5c34199290 2026-03-10T10:16:36.663 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:36.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.663+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3079847771 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5c2006c4e0 msgr2=0x7f5c2006e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:36.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.663+0000 7f5c3b9eb700 1 --2- 192.168.123.102:0/3079847771 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5c2006c4e0 0x7f5c2006e9a0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f5c24005340 tx=0x7f5c240058e0 comp rx=0 tx=0).stop 2026-03-10T10:16:36.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.663+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3079847771 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5c34199290 msgr2=0x7f5c3419e300 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:36.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.663+0000 7f5c3b9eb700 1 --2- 192.168.123.102:0/3079847771 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5c34199290 0x7f5c3419e300 secure :-1 s=READY pgs=201 cs=0 l=1 rev1=1 crypto rx=0x7f5c3000b700 tx=0x7f5c3000bac0 comp 
rx=0 tx=0).stop 2026-03-10T10:16:36.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.664+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3079847771 shutdown_connections 2026-03-10T10:16:36.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.664+0000 7f5c3b9eb700 1 --2- 192.168.123.102:0/3079847771 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5c2006c4e0 0x7f5c2006e9a0 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:36.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.664+0000 7f5c3b9eb700 1 --2- 192.168.123.102:0/3079847771 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5c34104340 0x7f5c34198d50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:36.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.664+0000 7f5c3b9eb700 1 --2- 192.168.123.102:0/3079847771 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5c34199290 0x7f5c3419e300 unknown :-1 s=CLOSED pgs=201 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:36.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.664+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3079847771 >> 192.168.123.102:0/3079847771 conn(0x7f5c340fe6c0 msgr2=0x7f5c34107570 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:36.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.664+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3079847771 shutdown_connections 2026-03-10T10:16:36.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:36.664+0000 7f5c3b9eb700 1 -- 192.168.123.102:0/3079847771 wait complete. 
2026-03-10T10:16:36.729 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":33,"num_osds":6,"num_up_osds":6,"osd_up_since":1773137795,"num_in_osds":6,"osd_in_since":1773137783,"num_remapped_pgs":0} 2026-03-10T10:16:36.729 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd dump --format=json 2026-03-10T10:16:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:36 vm05 ceph-mon[59051]: purged_snaps scrub starts 2026-03-10T10:16:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:36 vm05 ceph-mon[59051]: purged_snaps scrub ok 2026-03-10T10:16:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:36 vm05 ceph-mon[59051]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:16:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:36 vm05 ceph-mon[59051]: osd.5 [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979] boot 2026-03-10T10:16:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:36 vm05 ceph-mon[59051]: osdmap e32: 6 total, 6 up, 6 in 2026-03-10T10:16:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:36 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:16:36.873 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:37.122 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.120+0000 7fb9bb78b700 1 -- 192.168.123.102:0/1697289208 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb9b4069380 msgr2=0x7fb9b4069760 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:37.122 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.120+0000 7fb9bb78b700 1 --2- 192.168.123.102:0/1697289208 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb9b4069380 0x7fb9b4069760 secure :-1 s=READY pgs=202 cs=0 l=1 rev1=1 crypto rx=0x7fb9b0009b00 tx=0x7fb9b0009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:37.123 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.121+0000 7fb9bb78b700 1 -- 192.168.123.102:0/1697289208 shutdown_connections 2026-03-10T10:16:37.123 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.121+0000 7fb9bb78b700 1 --2- 192.168.123.102:0/1697289208 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b4069d30 0x7fb9b4105ba0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.123 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.121+0000 7fb9bb78b700 1 --2- 192.168.123.102:0/1697289208 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb9b4069380 0x7fb9b4069760 unknown :-1 s=CLOSED pgs=202 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.123 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.121+0000 7fb9bb78b700 1 -- 192.168.123.102:0/1697289208 >> 192.168.123.102:0/1697289208 conn(0x7fb9b4076c60 msgr2=0x7fb9b4077070 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:37.123 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.121+0000 7fb9bb78b700 1 -- 192.168.123.102:0/1697289208 shutdown_connections 2026-03-10T10:16:37.123 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.121+0000 7fb9bb78b700 1 -- 192.168.123.102:0/1697289208 wait complete. 
2026-03-10T10:16:37.123 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.121+0000 7fb9bb78b700 1 Processor -- start 2026-03-10T10:16:37.123 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.121+0000 7fb9bb78b700 1 -- start start 2026-03-10T10:16:37.123 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.122+0000 7fb9bb78b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b4069380 0x7fb9b419a900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:37.123 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.122+0000 7fb9bb78b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb9b4069d30 0x7fb9b419ae40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:37.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.122+0000 7fb9bb78b700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb9b419b4d0 con 0x7fb9b4069d30 2026-03-10T10:16:37.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.122+0000 7fb9bb78b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb9b4194980 con 0x7fb9b4069380 2026-03-10T10:16:37.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.122+0000 7fb9b9527700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b4069380 0x7fb9b419a900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:37.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.122+0000 7fb9b9527700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b4069380 0x7fb9b419a900 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:38132/0 (socket says 192.168.123.102:38132) 2026-03-10T10:16:37.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.122+0000 7fb9b9527700 1 -- 192.168.123.102:0/2915814921 learned_addr learned my addr 192.168.123.102:0/2915814921 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:37.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.122+0000 7fb9b8d26700 1 --2- 192.168.123.102:0/2915814921 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb9b4069d30 0x7fb9b419ae40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:37.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.122+0000 7fb9b9527700 1 -- 192.168.123.102:0/2915814921 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb9b4069d30 msgr2=0x7fb9b419ae40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:37.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.122+0000 7fb9b9527700 1 --2- 192.168.123.102:0/2915814921 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb9b4069d30 0x7fb9b419ae40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.122+0000 7fb9b9527700 1 -- 192.168.123.102:0/2915814921 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb9b00097e0 con 0x7fb9b4069380 2026-03-10T10:16:37.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.123+0000 7fb9b8d26700 1 --2- 192.168.123.102:0/2915814921 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb9b4069d30 0x7fb9b419ae40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:16:37.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.123+0000 7fb9b9527700 1 --2- 192.168.123.102:0/2915814921 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b4069380 0x7fb9b419a900 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7fb9b0004930 tx=0x7fb9b0004a10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:37.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.123+0000 7fb9aa7fc700 1 -- 192.168.123.102:0/2915814921 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb9b001d070 con 0x7fb9b4069380 2026-03-10T10:16:37.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.123+0000 7fb9bb78b700 1 -- 192.168.123.102:0/2915814921 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb9b4194c00 con 0x7fb9b4069380 2026-03-10T10:16:37.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.123+0000 7fb9bb78b700 1 -- 192.168.123.102:0/2915814921 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb9b41950f0 con 0x7fb9b4069380 2026-03-10T10:16:37.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.123+0000 7fb9aa7fc700 1 -- 192.168.123.102:0/2915814921 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb9b000bc50 con 0x7fb9b4069380 2026-03-10T10:16:37.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.123+0000 7fb9aa7fc700 1 -- 192.168.123.102:0/2915814921 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb9b000f700 con 0x7fb9b4069380 2026-03-10T10:16:37.126 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.124+0000 7fb9bb78b700 1 -- 192.168.123.102:0/2915814921 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7fb998005320 con 0x7fb9b4069380 2026-03-10T10:16:37.126 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.125+0000 7fb9aa7fc700 1 -- 192.168.123.102:0/2915814921 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fb9b000f920 con 0x7fb9b4069380 2026-03-10T10:16:37.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.125+0000 7fb9aa7fc700 1 --2- 192.168.123.102:0/2915814921 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb9a00708f0 0x7fb9a0072db0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:37.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.125+0000 7fb9aa7fc700 1 -- 192.168.123.102:0/2915814921 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fb9b008ca60 con 0x7fb9b4069380 2026-03-10T10:16:37.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.125+0000 7fb9b8d26700 1 --2- 192.168.123.102:0/2915814921 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb9a00708f0 0x7fb9a0072db0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:37.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.126+0000 7fb9b8d26700 1 --2- 192.168.123.102:0/2915814921 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb9a00708f0 0x7fb9a0072db0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fb9b4107b30 tx=0x7fb9a4008040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:37.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.128+0000 7fb9aa7fc700 1 -- 192.168.123.102:0/2915814921 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 
0 0) 0x7fb9b005b160 con 0x7fb9b4069380 2026-03-10T10:16:37.238 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.235+0000 7fb9bb78b700 1 -- 192.168.123.102:0/2915814921 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) v1 -- 0x7fb998005190 con 0x7fb9b4069380 2026-03-10T10:16:37.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.237+0000 7fb9aa7fc700 1 -- 192.168.123.102:0/2915814921 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v33) v1 ==== 74+0+11285 (secure 0 0 0) 0x7fb9b005acf0 con 0x7fb9b4069380 2026-03-10T10:16:37.239 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:37.239 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":33,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","created":"2026-03-10T10:14:08.583559+0000","modified":"2026-03-10T10:16:36.329347+0000","last_up_change":"2026-03-10T10:16:35.321460+0000","last_in_change":"2026-03-10T10:16:23.907084+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":14,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"reef","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T10:16:06.912779+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid"
:"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_acting":6,"score_stable":6,"optimal_score":0.5,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"f90b5cc0-11ce-4915-a46a-c23fb52a4ba2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":2756332558},{"type":"v1","addr":"192.168.123.102:6803","nonce":2756332558}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":2756332558},{"type":"v1","addr":"192.168.123.102:6805","nonce":2756332558}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":2756332558},{"type":"v1","addr":"192.168.123.102:6809","nonce":2756332558}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","ad
dr":"192.168.123.102:6806","nonce":2756332558},{"type":"v1","addr":"192.168.123.102:6807","nonce":2756332558}]},"public_addr":"192.168.123.102:6803/2756332558","cluster_addr":"192.168.123.102:6805/2756332558","heartbeat_back_addr":"192.168.123.102:6809/2756332558","heartbeat_front_addr":"192.168.123.102:6807/2756332558","state":["exists","up"]},{"osd":1,"uuid":"8bd56e09-7dad-4b23-847e-c7afae0d2f41","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":24,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":1060043977},{"type":"v1","addr":"192.168.123.102:6811","nonce":1060043977}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":1060043977},{"type":"v1","addr":"192.168.123.102:6813","nonce":1060043977}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":1060043977},{"type":"v1","addr":"192.168.123.102:6817","nonce":1060043977}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":1060043977},{"type":"v1","addr":"192.168.123.102:6815","nonce":1060043977}]},"public_addr":"192.168.123.102:6811/1060043977","cluster_addr":"192.168.123.102:6813/1060043977","heartbeat_back_addr":"192.168.123.102:6817/1060043977","heartbeat_front_addr":"192.168.123.102:6815/1060043977","state":["exists","up"]},{"osd":2,"uuid":"1ccdc548-a0cb-41e0-bc7a-21b41198ffea","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6818","nonce":3838117302},{"type":"v1","addr":"192.168.123.102:6819","nonce":3838117302}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6820","nonce":3838117302},{"type":"v1","addr":"192.168.123.102:6821","nonce":3838117302}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6824","nonce":38
38117302},{"type":"v1","addr":"192.168.123.102:6825","nonce":3838117302}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6822","nonce":3838117302},{"type":"v1","addr":"192.168.123.102:6823","nonce":3838117302}]},"public_addr":"192.168.123.102:6819/3838117302","cluster_addr":"192.168.123.102:6821/3838117302","heartbeat_back_addr":"192.168.123.102:6825/3838117302","heartbeat_front_addr":"192.168.123.102:6823/3838117302","state":["exists","up"]},{"osd":3,"uuid":"70fa78db-d544-4037-a4e5-e2b601b924d7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":27,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":4254210589},{"type":"v1","addr":"192.168.123.105:6801","nonce":4254210589}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":4254210589},{"type":"v1","addr":"192.168.123.105:6803","nonce":4254210589}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":4254210589},{"type":"v1","addr":"192.168.123.105:6807","nonce":4254210589}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":4254210589},{"type":"v1","addr":"192.168.123.105:6805","nonce":4254210589}]},"public_addr":"192.168.123.105:6801/4254210589","cluster_addr":"192.168.123.105:6803/4254210589","heartbeat_back_addr":"192.168.123.105:6807/4254210589","heartbeat_front_addr":"192.168.123.105:6805/4254210589","state":["exists","up"]},{"osd":4,"uuid":"d0b95380-36d0-4fea-a134-f6abcd77b2ee","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":4051935333},{"type":"v1","addr":"192.168.123.105:6809","nonce":4051935333}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":4051935333},{"type":"v1","addr":"192.1
68.123.105:6811","nonce":4051935333}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":4051935333},{"type":"v1","addr":"192.168.123.105:6815","nonce":4051935333}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":4051935333},{"type":"v1","addr":"192.168.123.105:6813","nonce":4051935333}]},"public_addr":"192.168.123.105:6809/4051935333","cluster_addr":"192.168.123.105:6811/4051935333","heartbeat_back_addr":"192.168.123.105:6815/4051935333","heartbeat_front_addr":"192.168.123.105:6813/4051935333","state":["exists","up"]},{"osd":5,"uuid":"bf16e555-2559-41cf-b9cc-38646188d928","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":1475090979},{"type":"v1","addr":"192.168.123.105:6817","nonce":1475090979}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":1475090979},{"type":"v1","addr":"192.168.123.105:6819","nonce":1475090979}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":1475090979},{"type":"v1","addr":"192.168.123.105:6823","nonce":1475090979}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":1475090979},{"type":"v1","addr":"192.168.123.105:6821","nonce":1475090979}]},"public_addr":"192.168.123.105:6817/1475090979","cluster_addr":"192.168.123.105:6819/1475090979","heartbeat_back_addr":"192.168.123.105:6823/1475090979","heartbeat_front_addr":"192.168.123.105:6821/1475090979","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:15:46.411665+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_
purged_snaps_scrub":"2026-03-10T10:15:55.401808+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:16:05.469558+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:16:15.184930+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:16:24.070276+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:16:33.661652+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:0/1508780527":"2026-03-11T10:15:14.708009+0000","192.168.123.102:0/1164545653":"2026-03-11T10:15:14.708009+0000","192.168.123.102:0/2365816117":"2026-03-11T10:15:14.708009+0000","192.168.123.102:0/1117450327":"2026-03-11T10:14:37.093481+0000","192.168.123.102:6800/2":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/1091112719":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/2700080577":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/3931430898":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/1517189708":"2026-03-11T10:14:37.093481+0000","192.168.123.102:6801/2":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/982532372":"2026-03-11T10:14:37.093481+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"de
graded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T10:16:37.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.240+0000 7fb9bb78b700 1 -- 192.168.123.102:0/2915814921 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb9a00708f0 msgr2=0x7fb9a0072db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:37.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.240+0000 7fb9bb78b700 1 --2- 192.168.123.102:0/2915814921 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb9a00708f0 0x7fb9a0072db0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fb9b4107b30 tx=0x7fb9a4008040 comp rx=0 tx=0).stop 2026-03-10T10:16:37.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.240+0000 7fb9bb78b700 1 -- 192.168.123.102:0/2915814921 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b4069380 msgr2=0x7fb9b419a900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:37.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.240+0000 7fb9bb78b700 1 --2- 192.168.123.102:0/2915814921 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b4069380 0x7fb9b419a900 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7fb9b0004930 tx=0x7fb9b0004a10 comp rx=0 tx=0).stop 2026-03-10T10:16:37.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.240+0000 7fb9bb78b700 1 -- 192.168.123.102:0/2915814921 shutdown_connections 2026-03-10T10:16:37.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.240+0000 7fb9bb78b700 1 --2- 192.168.123.102:0/2915814921 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb9a00708f0 0x7fb9a0072db0 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.240+0000 7fb9bb78b700 1 --2- 192.168.123.102:0/2915814921 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b4069380 0x7fb9b419a900 unknown :-1 s=CLOSED pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.240+0000 7fb9bb78b700 1 --2- 192.168.123.102:0/2915814921 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb9b4069d30 0x7fb9b419ae40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.240+0000 7fb9bb78b700 1 -- 192.168.123.102:0/2915814921 >> 192.168.123.102:0/2915814921 conn(0x7fb9b4076c60 msgr2=0x7fb9b40fe9b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:37.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.240+0000 7fb9bb78b700 1 -- 192.168.123.102:0/2915814921 shutdown_connections 2026-03-10T10:16:37.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.240+0000 7fb9bb78b700 1 -- 192.168.123.102:0/2915814921 wait complete. 
2026-03-10T10:16:37.305 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T10:16:06.912779+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6, 'score_stable': 6, 'optimal_score': 0.5, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T10:16:37.305 
DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd pool get .mgr pg_num 2026-03-10T10:16:37.470 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:37.499 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:37 vm02 ceph-mon[50200]: osdmap e33: 6 total, 6 up, 6 in 2026-03-10T10:16:37.499 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:37 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3079847771' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:16:37.499 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:37 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/2915814921' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:16:37.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.732+0000 7f98ddb0d700 1 -- 192.168.123.102:0/2170947088 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98d8073130 msgr2=0x7f98d8073510 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:37.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.732+0000 7f98ddb0d700 1 --2- 192.168.123.102:0/2170947088 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98d8073130 0x7f98d8073510 secure :-1 s=READY pgs=203 cs=0 l=1 rev1=1 crypto rx=0x7f98c0009b50 tx=0x7f98c0009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:37.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.732+0000 7f98ddb0d700 1 -- 192.168.123.102:0/2170947088 shutdown_connections 2026-03-10T10:16:37.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.732+0000 7f98ddb0d700 1 --2- 192.168.123.102:0/2170947088 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98d8073a50 0x7f98d8111960 unknown :-1 s=CLOSED pgs=0 
cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.732+0000 7f98ddb0d700 1 --2- 192.168.123.102:0/2170947088 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98d8073130 0x7f98d8073510 unknown :-1 s=CLOSED pgs=203 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.732+0000 7f98ddb0d700 1 -- 192.168.123.102:0/2170947088 >> 192.168.123.102:0/2170947088 conn(0x7f98d80fc920 msgr2=0x7f98d80fed40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:37.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.732+0000 7f98ddb0d700 1 -- 192.168.123.102:0/2170947088 shutdown_connections 2026-03-10T10:16:37.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.732+0000 7f98ddb0d700 1 -- 192.168.123.102:0/2170947088 wait complete. 2026-03-10T10:16:37.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.733+0000 7f98ddb0d700 1 Processor -- start 2026-03-10T10:16:37.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.733+0000 7f98ddb0d700 1 -- start start 2026-03-10T10:16:37.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.734+0000 7f98ddb0d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98d8073a50 0x7f98d819d480 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:37.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.734+0000 7f98ddb0d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98d819d9c0 0x7f98d81a1e30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:37.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.734+0000 7f98ddb0d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f98d819dfe0 con 
0x7f98d819d9c0 2026-03-10T10:16:37.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.734+0000 7f98ddb0d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f98d819e150 con 0x7f98d8073a50 2026-03-10T10:16:37.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.734+0000 7f98d6ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98d819d9c0 0x7f98d81a1e30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:37.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.734+0000 7f98d6ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98d819d9c0 0x7f98d81a1e30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:44922/0 (socket says 192.168.123.102:44922) 2026-03-10T10:16:37.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.734+0000 7f98d6ffd700 1 -- 192.168.123.102:0/137612842 learned_addr learned my addr 192.168.123.102:0/137612842 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:37.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.734+0000 7f98d77fe700 1 --2- 192.168.123.102:0/137612842 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98d8073a50 0x7f98d819d480 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:37.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.735+0000 7f98d6ffd700 1 -- 192.168.123.102:0/137612842 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98d8073a50 msgr2=0x7f98d819d480 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:37.737 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.735+0000 7f98d6ffd700 1 --2- 192.168.123.102:0/137612842 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98d8073a50 0x7f98d819d480 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.735+0000 7f98d6ffd700 1 -- 192.168.123.102:0/137612842 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f98c00097e0 con 0x7f98d819d9c0 2026-03-10T10:16:37.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.735+0000 7f98d6ffd700 1 --2- 192.168.123.102:0/137612842 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98d819d9c0 0x7f98d81a1e30 secure :-1 s=READY pgs=204 cs=0 l=1 rev1=1 crypto rx=0x7f98c800d900 tx=0x7f98c800dc10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:37.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.735+0000 7f98d4ff9700 1 -- 192.168.123.102:0/137612842 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f98c80041d0 con 0x7f98d819d9c0 2026-03-10T10:16:37.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.735+0000 7f98d4ff9700 1 -- 192.168.123.102:0/137612842 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f98c8004d10 con 0x7f98d819d9c0 2026-03-10T10:16:37.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.735+0000 7f98d4ff9700 1 -- 192.168.123.102:0/137612842 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f98c800b750 con 0x7f98d819d9c0 2026-03-10T10:16:37.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.735+0000 7f98ddb0d700 1 -- 192.168.123.102:0/137612842 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({mgrmap=0+}) v3 -- 0x7f98d81a2430 con 0x7f98d819d9c0 2026-03-10T10:16:37.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.735+0000 7f98ddb0d700 1 -- 192.168.123.102:0/137612842 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f98d81a2980 con 0x7f98d819d9c0 2026-03-10T10:16:37.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.736+0000 7f98ddb0d700 1 -- 192.168.123.102:0/137612842 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f98d810f0e0 con 0x7f98d819d9c0 2026-03-10T10:16:37.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.739+0000 7f98d4ff9700 1 -- 192.168.123.102:0/137612842 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f98c8004e80 con 0x7f98d819d9c0 2026-03-10T10:16:37.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.739+0000 7f98d4ff9700 1 --2- 192.168.123.102:0/137612842 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f98c406c4e0 0x7f98c406e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:37.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.740+0000 7f98d4ff9700 1 -- 192.168.123.102:0/137612842 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f98c801f030 con 0x7f98d819d9c0 2026-03-10T10:16:37.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.740+0000 7f98d4ff9700 1 -- 192.168.123.102:0/137612842 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f98c808b990 con 0x7f98d819d9c0 2026-03-10T10:16:37.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.740+0000 7f98d77fe700 1 --2- 192.168.123.102:0/137612842 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f98c406c4e0 0x7f98c406e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:37.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.740+0000 7f98d77fe700 1 --2- 192.168.123.102:0/137612842 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f98c406c4e0 0x7f98c406e9a0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f98c000b5c0 tx=0x7f98c0005fb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:37 vm05 ceph-mon[59051]: osdmap e33: 6 total, 6 up, 6 in 2026-03-10T10:16:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:37 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/3079847771' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:16:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:37 vm05 ceph-mon[59051]: from='client.? 
192.168.123.102:0/2915814921' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:16:37.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.848+0000 7f98ddb0d700 1 -- 192.168.123.102:0/137612842 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"} v 0) v1 -- 0x7f98d81a25c0 con 0x7f98d819d9c0 2026-03-10T10:16:37.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.848+0000 7f98d4ff9700 1 -- 192.168.123.102:0/137612842 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]=0 v33) v1 ==== 93+0+10 (secure 0 0 0) 0x7f98d81a25c0 con 0x7f98d819d9c0 2026-03-10T10:16:37.850 INFO:teuthology.orchestra.run.vm02.stdout:pg_num: 1 2026-03-10T10:16:37.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.851+0000 7f98ddb0d700 1 -- 192.168.123.102:0/137612842 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f98c406c4e0 msgr2=0x7f98c406e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:37.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.851+0000 7f98ddb0d700 1 --2- 192.168.123.102:0/137612842 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f98c406c4e0 0x7f98c406e9a0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f98c000b5c0 tx=0x7f98c0005fb0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.851+0000 7f98ddb0d700 1 -- 192.168.123.102:0/137612842 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98d819d9c0 msgr2=0x7f98d81a1e30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:37.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.851+0000 7f98ddb0d700 1 --2- 192.168.123.102:0/137612842 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98d819d9c0 
0x7f98d81a1e30 secure :-1 s=READY pgs=204 cs=0 l=1 rev1=1 crypto rx=0x7f98c800d900 tx=0x7f98c800dc10 comp rx=0 tx=0).stop 2026-03-10T10:16:37.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.851+0000 7f98ddb0d700 1 -- 192.168.123.102:0/137612842 shutdown_connections 2026-03-10T10:16:37.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.851+0000 7f98ddb0d700 1 --2- 192.168.123.102:0/137612842 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f98c406c4e0 0x7f98c406e9a0 secure :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f98c000b5c0 tx=0x7f98c0005fb0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.851+0000 7f98ddb0d700 1 --2- 192.168.123.102:0/137612842 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98d8073a50 0x7f98d819d480 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.851+0000 7f98ddb0d700 1 --2- 192.168.123.102:0/137612842 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98d819d9c0 0x7f98d81a1e30 unknown :-1 s=CLOSED pgs=204 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:37.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.851+0000 7f98ddb0d700 1 -- 192.168.123.102:0/137612842 >> 192.168.123.102:0/137612842 conn(0x7f98d80fc920 msgr2=0x7f98d8103510 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:37.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.851+0000 7f98ddb0d700 1 -- 192.168.123.102:0/137612842 shutdown_connections 2026-03-10T10:16:37.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:37.851+0000 7f98ddb0d700 1 -- 192.168.123.102:0/137612842 wait complete. 2026-03-10T10:16:37.911 INFO:tasks.cephadm:Setting up client nodes... 
2026-03-10T10:16:37.911 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T10:16:38.048 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:38.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.300+0000 7f23318cd700 1 -- 192.168.123.102:0/1265330188 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f232c075740 msgr2=0x7f232c075b60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:38.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.300+0000 7f23318cd700 1 --2- 192.168.123.102:0/1265330188 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f232c075740 0x7f232c075b60 secure :-1 s=READY pgs=205 cs=0 l=1 rev1=1 crypto rx=0x7f2314009b50 tx=0x7f2314009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:38.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.301+0000 7f23318cd700 1 -- 192.168.123.102:0/1265330188 shutdown_connections 2026-03-10T10:16:38.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.301+0000 7f23318cd700 1 --2- 192.168.123.102:0/1265330188 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f232c076990 0x7f232c076e10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:38.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.301+0000 7f23318cd700 1 --2- 192.168.123.102:0/1265330188 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f232c075740 0x7f232c075b60 unknown :-1 s=CLOSED pgs=205 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:38.302 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.301+0000 7f23318cd700 1 -- 192.168.123.102:0/1265330188 >> 192.168.123.102:0/1265330188 conn(0x7f232c0fe6c0 msgr2=0x7f232c100b00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:38.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.301+0000 7f23318cd700 1 -- 192.168.123.102:0/1265330188 shutdown_connections 2026-03-10T10:16:38.303 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.301+0000 7f23318cd700 1 -- 192.168.123.102:0/1265330188 wait complete. 2026-03-10T10:16:38.303 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.301+0000 7f23318cd700 1 Processor -- start 2026-03-10T10:16:38.303 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.301+0000 7f23318cd700 1 -- start start 2026-03-10T10:16:38.303 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f23318cd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f232c075740 0x7f232c071da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:38.303 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f23318cd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f232c076990 0x7f232c0722e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:38.303 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f23318cd700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f232c072900 con 0x7f232c075740 2026-03-10T10:16:38.303 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f23318cd700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f232c1aabf0 con 0x7f232c076990 2026-03-10T10:16:38.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f232a7fc700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f232c076990 0x7f232c0722e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:38.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f232affd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f232c075740 0x7f232c071da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:38.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f232a7fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f232c076990 0x7f232c0722e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38176/0 (socket says 192.168.123.102:38176) 2026-03-10T10:16:38.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f232a7fc700 1 -- 192.168.123.102:0/1735955893 learned_addr learned my addr 192.168.123.102:0/1735955893 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:38.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f232affd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f232c075740 0x7f232c071da0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:44954/0 (socket says 192.168.123.102:44954) 2026-03-10T10:16:38.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f232a7fc700 1 -- 192.168.123.102:0/1735955893 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f232c075740 msgr2=0x7f232c071da0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:38.304 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f232a7fc700 1 --2- 192.168.123.102:0/1735955893 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f232c075740 0x7f232c071da0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:38.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f232a7fc700 1 -- 192.168.123.102:0/1735955893 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f23140097e0 con 0x7f232c076990 2026-03-10T10:16:38.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.302+0000 7f232affd700 1 --2- 192.168.123.102:0/1735955893 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f232c075740 0x7f232c071da0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-10T10:16:38.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.303+0000 7f232a7fc700 1 --2- 192.168.123.102:0/1735955893 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f232c076990 0x7f232c0722e0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f231c00eb10 tx=0x7f231c00eed0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:38.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.303+0000 7f23308cb700 1 -- 192.168.123.102:0/1735955893 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f231c00cca0 con 0x7f232c076990 2026-03-10T10:16:38.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.303+0000 7f23308cb700 1 -- 192.168.123.102:0/1735955893 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f231c00ce00 con 0x7f232c076990 2026-03-10T10:16:38.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.303+0000 7f23308cb700 1 -- 
192.168.123.102:0/1735955893 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f231c018910 con 0x7f232c076990 2026-03-10T10:16:38.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.303+0000 7f23318cd700 1 -- 192.168.123.102:0/1735955893 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f232c1aadf0 con 0x7f232c076990 2026-03-10T10:16:38.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.303+0000 7f23318cd700 1 -- 192.168.123.102:0/1735955893 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f232c1ab2c0 con 0x7f232c076990 2026-03-10T10:16:38.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.304+0000 7f23318cd700 1 -- 192.168.123.102:0/1735955893 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f232c066e80 con 0x7f232c076990 2026-03-10T10:16:38.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.305+0000 7f23308cb700 1 -- 192.168.123.102:0/1735955893 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f231c018a70 con 0x7f232c076990 2026-03-10T10:16:38.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.306+0000 7f23308cb700 1 --2- 192.168.123.102:0/1735955893 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f231806c2e0 0x7f231806e7a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:38.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.306+0000 7f232affd700 1 --2- 192.168.123.102:0/1735955893 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f231806c2e0 0x7f231806e7a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:38.308 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.306+0000 7f23308cb700 1 -- 192.168.123.102:0/1735955893 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f231c014070 con 0x7f232c076990 2026-03-10T10:16:38.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.307+0000 7f232affd700 1 --2- 192.168.123.102:0/1735955893 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f231806c2e0 0x7f231806e7a0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f2314006010 tx=0x7f231400b540 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:38.309 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.307+0000 7f23308cb700 1 -- 192.168.123.102:0/1735955893 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f231c05a680 con 0x7f232c076990 2026-03-10T10:16:38.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.452+0000 7f23318cd700 1 -- 192.168.123.102:0/1735955893 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) v1 -- 0x7f232c1ab660 con 0x7f232c076990 2026-03-10T10:16:38.454 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:38 vm02 ceph-mon[50200]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:38.454 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:38 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/137612842' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T10:16:38.459 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.457+0000 7f23308cb700 1 -- 192.168.123.102:0/1735955893 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v16) v1 ==== 170+0+59 (secure 0 0 0) 0x7f231c05a210 con 0x7f232c076990 2026-03-10T10:16:38.459 INFO:teuthology.orchestra.run.vm02.stdout:[client.0] 2026-03-10T10:16:38.459 INFO:teuthology.orchestra.run.vm02.stdout: key = AQCG769pY1soGxAAF2vz6W2EJ5pnGotfBd+Cfw== 2026-03-10T10:16:38.462 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.460+0000 7f23318cd700 1 -- 192.168.123.102:0/1735955893 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f231806c2e0 msgr2=0x7f231806e7a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:38.462 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.460+0000 7f23318cd700 1 --2- 192.168.123.102:0/1735955893 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f231806c2e0 0x7f231806e7a0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f2314006010 tx=0x7f231400b540 comp rx=0 tx=0).stop 2026-03-10T10:16:38.462 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.460+0000 7f23318cd700 1 -- 192.168.123.102:0/1735955893 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f232c076990 msgr2=0x7f232c0722e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:38.462 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.460+0000 7f23318cd700 1 --2- 192.168.123.102:0/1735955893 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f232c076990 0x7f232c0722e0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f231c00eb10 tx=0x7f231c00eed0 comp rx=0 
tx=0).stop 2026-03-10T10:16:38.462 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.461+0000 7f23318cd700 1 -- 192.168.123.102:0/1735955893 shutdown_connections 2026-03-10T10:16:38.462 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.461+0000 7f23318cd700 1 --2- 192.168.123.102:0/1735955893 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f232c075740 0x7f232c071da0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:38.462 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.461+0000 7f23318cd700 1 --2- 192.168.123.102:0/1735955893 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f231806c2e0 0x7f231806e7a0 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:38.462 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.461+0000 7f23318cd700 1 --2- 192.168.123.102:0/1735955893 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f232c076990 0x7f232c0722e0 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:38.463 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.461+0000 7f23318cd700 1 -- 192.168.123.102:0/1735955893 >> 192.168.123.102:0/1735955893 conn(0x7f232c0fe6c0 msgr2=0x7f232c10d380 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:38.463 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.461+0000 7f23318cd700 1 -- 192.168.123.102:0/1735955893 shutdown_connections 2026-03-10T10:16:38.463 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:38.461+0000 7f23318cd700 1 -- 192.168.123.102:0/1735955893 wait complete. 
2026-03-10T10:16:38.503 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:16:38.503 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T10:16:38.503 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T10:16:38.541 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T10:16:38.688 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm05/config 2026-03-10T10:16:38.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:38 vm05 ceph-mon[59051]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:38.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:38 vm05 ceph-mon[59051]: from='client.? 
192.168.123.102:0/137612842' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T10:16:38.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.953+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/1714717472 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a440737f0 msgr2=0x7f2a44073c70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:38.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.953+0000 7f2a4b1d3700 1 --2- 192.168.123.105:0/1714717472 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a440737f0 0x7f2a44073c70 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f2a38009b00 tx=0x7f2a38009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:38.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.954+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/1714717472 shutdown_connections 2026-03-10T10:16:38.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.954+0000 7f2a4b1d3700 1 --2- 192.168.123.105:0/1714717472 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a440737f0 0x7f2a44073c70 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:38.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.954+0000 7f2a4b1d3700 1 --2- 192.168.123.105:0/1714717472 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a44074dc0 0x7f2a44073220 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:38.956 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.954+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/1714717472 >> 192.168.123.105:0/1714717472 conn(0x7f2a440fc4c0 msgr2=0x7f2a440fe920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:38.956 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.954+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/1714717472 shutdown_connections 
2026-03-10T10:16:38.956 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.955+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/1714717472 wait complete. 2026-03-10T10:16:38.956 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.955+0000 7f2a4b1d3700 1 Processor -- start 2026-03-10T10:16:38.956 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.955+0000 7f2a4b1d3700 1 -- start start 2026-03-10T10:16:38.956 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.955+0000 7f2a4b1d3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a440737f0 0x7f2a4419ce30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.955+0000 7f2a4b1d3700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a44074dc0 0x7f2a4419d370 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.955+0000 7f2a4b1d3700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2a4419d900 con 0x7f2a44074dc0 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.955+0000 7f2a4b1d3700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2a4419da40 con 0x7f2a440737f0 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a43fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a44074dc0 0x7f2a4419d370 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a43fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a44074dc0 
0x7f2a4419d370 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.105:53790/0 (socket says 192.168.123.105:53790) 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a43fff700 1 -- 192.168.123.105:0/3467704230 learned_addr learned my addr 192.168.123.105:0/3467704230 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a48f6f700 1 --2- 192.168.123.105:0/3467704230 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a440737f0 0x7f2a4419ce30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a43fff700 1 -- 192.168.123.105:0/3467704230 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a440737f0 msgr2=0x7f2a4419ce30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a43fff700 1 --2- 192.168.123.105:0/3467704230 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a440737f0 0x7f2a4419ce30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a43fff700 1 -- 192.168.123.105:0/3467704230 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2a380097e0 con 0x7f2a44074dc0 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a48f6f700 1 --2- 192.168.123.105:0/3467704230 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a440737f0 0x7f2a4419ce30 unknown :-1 s=CLOSED 
pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a43fff700 1 --2- 192.168.123.105:0/3467704230 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a44074dc0 0x7f2a4419d370 secure :-1 s=READY pgs=206 cs=0 l=1 rev1=1 crypto rx=0x7f2a38009fd0 tx=0x7f2a38004ab0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:38.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a41ffb700 1 -- 192.168.123.105:0/3467704230 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2a3801d070 con 0x7f2a44074dc0 2026-03-10T10:16:38.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a41ffb700 1 -- 192.168.123.105:0/3467704230 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f2a3800bc50 con 0x7f2a44074dc0 2026-03-10T10:16:38.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a41ffb700 1 -- 192.168.123.105:0/3467704230 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2a3800f7e0 con 0x7f2a44074dc0 2026-03-10T10:16:38.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.956+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/3467704230 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2a441a24a0 con 0x7f2a44074dc0 2026-03-10T10:16:38.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.957+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/3467704230 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2a441a2960 con 0x7f2a44074dc0 2026-03-10T10:16:38.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.957+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/3467704230 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2a44066e80 con 0x7f2a44074dc0 2026-03-10T10:16:38.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.958+0000 7f2a41ffb700 1 -- 192.168.123.105:0/3467704230 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f2a3800f940 con 0x7f2a44074dc0 2026-03-10T10:16:38.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.959+0000 7f2a41ffb700 1 --2- 192.168.123.105:0/3467704230 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2a2c06c4e0 0x7f2a2c06e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:38.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.959+0000 7f2a41ffb700 1 -- 192.168.123.105:0/3467704230 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f2a3808d820 con 0x7f2a44074dc0 2026-03-10T10:16:38.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.959+0000 7f2a48f6f700 1 --2- 192.168.123.105:0/3467704230 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2a2c06c4e0 0x7f2a2c06e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:38.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.961+0000 7f2a41ffb700 1 -- 192.168.123.105:0/3467704230 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f2a3805bf90 con 0x7f2a44074dc0 2026-03-10T10:16:38.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:38.961+0000 7f2a48f6f700 1 --2- 192.168.123.105:0/3467704230 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2a2c06c4e0 0x7f2a2c06e9a0 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto 
rx=0x7f2a34005fd0 tx=0x7f2a34005e20 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:39.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.102+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/3467704230 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) v1 -- 0x7f2a441a2ce0 con 0x7f2a44074dc0 2026-03-10T10:16:39.109 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.108+0000 7f2a41ffb700 1 -- 192.168.123.105:0/3467704230 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v17) v1 ==== 170+0+59 (secure 0 0 0) 0x7f2a380270f0 con 0x7f2a44074dc0 2026-03-10T10:16:39.109 INFO:teuthology.orchestra.run.vm05.stdout:[client.1] 2026-03-10T10:16:39.109 INFO:teuthology.orchestra.run.vm05.stdout: key = AQCH769pTLpBBhAAcUGmFYBPNLcMEUSC0ckCeg== 2026-03-10T10:16:39.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.111+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/3467704230 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2a2c06c4e0 msgr2=0x7f2a2c06e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:39.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.111+0000 7f2a4b1d3700 1 --2- 192.168.123.105:0/3467704230 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2a2c06c4e0 0x7f2a2c06e9a0 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f2a34005fd0 tx=0x7f2a34005e20 comp rx=0 tx=0).stop 2026-03-10T10:16:39.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.111+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/3467704230 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a44074dc0 
msgr2=0x7f2a4419d370 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:39.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.111+0000 7f2a4b1d3700 1 --2- 192.168.123.105:0/3467704230 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a44074dc0 0x7f2a4419d370 secure :-1 s=READY pgs=206 cs=0 l=1 rev1=1 crypto rx=0x7f2a38009fd0 tx=0x7f2a38004ab0 comp rx=0 tx=0).stop 2026-03-10T10:16:39.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.111+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/3467704230 shutdown_connections 2026-03-10T10:16:39.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.111+0000 7f2a4b1d3700 1 --2- 192.168.123.105:0/3467704230 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2a2c06c4e0 0x7f2a2c06e9a0 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:39.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.111+0000 7f2a4b1d3700 1 --2- 192.168.123.105:0/3467704230 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a440737f0 0x7f2a4419ce30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:39.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.111+0000 7f2a4b1d3700 1 --2- 192.168.123.105:0/3467704230 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a44074dc0 0x7f2a4419d370 unknown :-1 s=CLOSED pgs=206 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:39.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.111+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/3467704230 >> 192.168.123.105:0/3467704230 conn(0x7f2a440fc4c0 msgr2=0x7f2a441027c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:39.113 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.112+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/3467704230 shutdown_connections 2026-03-10T10:16:39.113 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:16:39.112+0000 7f2a4b1d3700 1 -- 192.168.123.105:0/3467704230 wait complete. 2026-03-10T10:16:39.174 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T10:16:39.174 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-10T10:16:39.174 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-10T10:16:39.208 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-10T10:16:39.208 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T10:16:39.208 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mgr dump --format=json 2026-03-10T10:16:39.373 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:39.519 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:39 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/1735955893' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:16:39.519 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:39 vm02 ceph-mon[50200]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:16:39.519 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:39 vm02 ceph-mon[50200]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T10:16:39.519 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:39 vm02 ceph-mon[50200]: from='client.? 
192.168.123.105:0/3467704230' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:16:39.519 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:39 vm02 ceph-mon[50200]: from='client.? 192.168.123.105:0/3467704230' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T10:16:39.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.632+0000 7f20eb25c700 1 -- 192.168.123.102:0/1731734375 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20e4107500 msgr2=0x7f20e4107980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:39.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.632+0000 7f20eb25c700 1 --2- 192.168.123.102:0/1731734375 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20e4107500 0x7f20e4107980 secure :-1 s=READY pgs=207 cs=0 l=1 rev1=1 crypto rx=0x7f20d8009b00 tx=0x7f20d8009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:39.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.633+0000 7f20eb25c700 1 -- 192.168.123.102:0/1731734375 shutdown_connections 2026-03-10T10:16:39.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.633+0000 7f20eb25c700 1 --2- 192.168.123.102:0/1731734375 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20e4107500 0x7f20e4107980 secure :-1 s=CLOSED pgs=207 cs=0 l=1 rev1=1 crypto rx=0x7f20d8009b00 tx=0x7f20d8009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:39.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.633+0000 7f20eb25c700 1 --2- 192.168.123.102:0/1731734375 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f20e410d5b0 0x7f20e410d990 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:16:39.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.633+0000 7f20eb25c700 1 -- 192.168.123.102:0/1731734375 >> 192.168.123.102:0/1731734375 conn(0x7f20e4075450 msgr2=0x7f20e4077870 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:39.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.634+0000 7f20eb25c700 1 -- 192.168.123.102:0/1731734375 shutdown_connections 2026-03-10T10:16:39.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.634+0000 7f20eb25c700 1 -- 192.168.123.102:0/1731734375 wait complete. 2026-03-10T10:16:39.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.634+0000 7f20eb25c700 1 Processor -- start 2026-03-10T10:16:39.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.634+0000 7f20eb25c700 1 -- start start 2026-03-10T10:16:39.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.634+0000 7f20eb25c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f20e410d5b0 0x7f20e419d300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:39.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.635+0000 7f20eb25c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20e419d840 0x7f20e41a1cb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:39.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.635+0000 7f20eb25c700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f20e419ddd0 con 0x7f20e419d840 2026-03-10T10:16:39.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.635+0000 7f20eb25c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f20e419df40 con 0x7f20e410d5b0 2026-03-10T10:16:39.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.635+0000 7f20e8ff8700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f20e410d5b0 0x7f20e419d300 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:39.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.635+0000 7f20e8ff8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f20e410d5b0 0x7f20e419d300 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38194/0 (socket says 192.168.123.102:38194) 2026-03-10T10:16:39.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.635+0000 7f20e8ff8700 1 -- 192.168.123.102:0/3658960079 learned_addr learned my addr 192.168.123.102:0/3658960079 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:39.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.635+0000 7f20e8ff8700 1 -- 192.168.123.102:0/3658960079 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20e419d840 msgr2=0x7f20e41a1cb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:39.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.635+0000 7f20e3fff700 1 --2- 192.168.123.102:0/3658960079 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20e419d840 0x7f20e41a1cb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:39.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.635+0000 7f20e8ff8700 1 --2- 192.168.123.102:0/3658960079 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20e419d840 0x7f20e41a1cb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:39.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.635+0000 7f20e8ff8700 1 -- 
192.168.123.102:0/3658960079 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f20d80097e0 con 0x7f20e410d5b0 2026-03-10T10:16:39.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.635+0000 7f20e8ff8700 1 --2- 192.168.123.102:0/3658960079 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f20e410d5b0 0x7f20e419d300 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f20d400c930 tx=0x7f20d400ccf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:39.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.636+0000 7f20e1ffb700 1 -- 192.168.123.102:0/3658960079 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f20d4007ab0 con 0x7f20e410d5b0 2026-03-10T10:16:39.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.636+0000 7f20e1ffb700 1 -- 192.168.123.102:0/3658960079 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f20d4007c10 con 0x7f20e410d5b0 2026-03-10T10:16:39.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.636+0000 7f20eb25c700 1 -- 192.168.123.102:0/3658960079 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f20e41a2250 con 0x7f20e410d5b0 2026-03-10T10:16:39.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.636+0000 7f20eb25c700 1 -- 192.168.123.102:0/3658960079 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f20e41a27a0 con 0x7f20e410d5b0 2026-03-10T10:16:39.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.637+0000 7f20e1ffb700 1 -- 192.168.123.102:0/3658960079 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f20d40186e0 con 0x7f20e410d5b0 2026-03-10T10:16:39.640 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.637+0000 7f20e3fff700 1 --2- 192.168.123.102:0/3658960079 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20e419d840 0x7f20e41a1cb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:16:39.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.637+0000 7f20e1ffb700 1 -- 192.168.123.102:0/3658960079 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f20d4018990 con 0x7f20e410d5b0 2026-03-10T10:16:39.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.637+0000 7f20eb25c700 1 -- 192.168.123.102:0/3658960079 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f20e410f9e0 con 0x7f20e410d5b0 2026-03-10T10:16:39.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.637+0000 7f20e1ffb700 1 --2- 192.168.123.102:0/3658960079 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f20cc06c4e0 0x7f20cc06e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:39.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.637+0000 7f20e1ffb700 1 -- 192.168.123.102:0/3658960079 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f20d408b950 con 0x7f20e410d5b0 2026-03-10T10:16:39.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.638+0000 7f20e3fff700 1 --2- 192.168.123.102:0/3658960079 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f20cc06c4e0 0x7f20cc06e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:39.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.638+0000 7f20e3fff700 1 --2- 
192.168.123.102:0/3658960079 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f20cc06c4e0 0x7f20cc06e9a0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f20d80059d0 tx=0x7f20d8005960 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:39.642 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.640+0000 7f20e1ffb700 1 -- 192.168.123.102:0/3658960079 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f20d4056570 con 0x7f20e410d5b0 2026-03-10T10:16:39.777 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.775+0000 7f20eb25c700 1 -- 192.168.123.102:0/3658960079 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mgr dump", "format": "json"} v 0) v1 -- 0x7f20e41a2a80 con 0x7f20e410d5b0 2026-03-10T10:16:39.780 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.779+0000 7f20e1ffb700 1 -- 192.168.123.102:0/3658960079 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mgr dump", "format": "json"}]=0 v19) v1 ==== 74+0+173029 (secure 0 0 0) 0x7f20d4059b90 con 0x7f20e410d5b0 2026-03-10T10:16:39.781 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:39.785 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.783+0000 7f20eb25c700 1 -- 192.168.123.102:0/3658960079 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f20cc06c4e0 msgr2=0x7f20cc06e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:39.785 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.783+0000 7f20eb25c700 1 --2- 192.168.123.102:0/3658960079 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f20cc06c4e0 0x7f20cc06e9a0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f20d80059d0 tx=0x7f20d8005960 comp rx=0 tx=0).stop 2026-03-10T10:16:39.785 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.783+0000 7f20eb25c700 1 -- 192.168.123.102:0/3658960079 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f20e410d5b0 msgr2=0x7f20e419d300 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:39.785 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.783+0000 7f20eb25c700 1 --2- 192.168.123.102:0/3658960079 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f20e410d5b0 0x7f20e419d300 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f20d400c930 tx=0x7f20d400ccf0 comp rx=0 tx=0).stop 2026-03-10T10:16:39.785 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.784+0000 7f20eb25c700 1 -- 192.168.123.102:0/3658960079 shutdown_connections 2026-03-10T10:16:39.786 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.784+0000 7f20eb25c700 1 --2- 192.168.123.102:0/3658960079 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f20cc06c4e0 0x7f20cc06e9a0 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:39.786 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.784+0000 7f20eb25c700 1 --2- 192.168.123.102:0/3658960079 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f20e410d5b0 0x7f20e419d300 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:39.786 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.784+0000 7f20eb25c700 1 --2- 192.168.123.102:0/3658960079 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20e419d840 0x7f20e41a1cb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:39.786 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.784+0000 7f20eb25c700 1 -- 192.168.123.102:0/3658960079 >> 192.168.123.102:0/3658960079 conn(0x7f20e4075450 msgr2=0x7f20e4076f50 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-10T10:16:39.786 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.784+0000 7f20eb25c700 1 -- 192.168.123.102:0/3658960079 shutdown_connections 2026-03-10T10:16:39.786 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:39.784+0000 7f20eb25c700 1 -- 192.168.123.102:0/3658960079 wait complete. 2026-03-10T10:16:39.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:39 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/1735955893' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:16:39.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:39 vm05 ceph-mon[59051]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:16:39.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:39 vm05 ceph-mon[59051]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T10:16:39.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:39 vm05 ceph-mon[59051]: from='client.? 192.168.123.105:0/3467704230' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:16:39.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:39 vm05 ceph-mon[59051]: from='client.? 
192.168.123.105:0/3467704230' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T10:16:39.855 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":19,"active_gid":14225,"active_name":"vm02.zmavgl","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":2},{"type":"v1","addr":"192.168.123.102:6801","nonce":2}]},"active_addr":"192.168.123.102:6801/2","active_change":"2026-03-10T10:15:14.708100+0000","active_mgr_features":4540138322906710015,"available":true,"standbys":[{"gid":14250,"name":"vm05.coparq","mgr_features":4540138322906710015,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_a
llowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically 
balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which 
no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the 
tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:9.4.7","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:2.4.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.5.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:0.0.2","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.43.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:2.4.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"secs","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"docker.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host 
check","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the 
hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"defa
ult_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","fl
ags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":
[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level 
for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also
":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds 
upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"
log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"def
ault_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str
","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_a
llowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied 
drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"",
"min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","mi
n":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_t
o_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options"
:{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay 
seconds","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"defau
lt_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP 
server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which 
no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the 
tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:9.4.7","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:2.4.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.5.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:0.0.2","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.43.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:2.4.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"secs","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"docker.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host 
check","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the 
hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"defa
ult_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","fl
ags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":
[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level 
for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also
":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds 
upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"
log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"def
ault_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str
","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_a
llowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied 
drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"",
"min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","mi
n":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_t
o_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options"
:{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay 
seconds","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"defau
lt_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.102:8443/","prometheus":"http://192.168.123.102:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"last_failure_osd_epoch":5,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":1808407342}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":3564367406}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":2663851614}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":3947397798}]}]} 2026-03-10T10:16:39.857 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 
2026-03-10T10:16:39.857 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T10:16:39.857 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd dump --format=json 2026-03-10T10:16:40.013 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:40.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.277+0000 7f24bdf49700 1 -- 192.168.123.102:0/376250339 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f24b8103cf0 msgr2=0x7f24b8107d40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:40.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.277+0000 7f24bdf49700 1 --2- 192.168.123.102:0/376250339 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f24b8103cf0 0x7f24b8107d40 secure :-1 s=READY pgs=208 cs=0 l=1 rev1=1 crypto rx=0x7f24a8009b00 tx=0x7f24a8009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:40.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.277+0000 7f24bdf49700 1 -- 192.168.123.102:0/376250339 shutdown_connections 2026-03-10T10:16:40.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.277+0000 7f24bdf49700 1 --2- 192.168.123.102:0/376250339 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f24b8103cf0 0x7f24b8107d40 unknown :-1 s=CLOSED pgs=208 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:40.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.277+0000 7f24bdf49700 1 --2- 192.168.123.102:0/376250339 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24b8103340 0x7f24b8103720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:40.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.277+0000 7f24bdf49700 1 -- 
192.168.123.102:0/376250339 >> 192.168.123.102:0/376250339 conn(0x7f24b80feb90 msgr2=0x7f24b8100fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:40.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.278+0000 7f24bdf49700 1 -- 192.168.123.102:0/376250339 shutdown_connections 2026-03-10T10:16:40.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.278+0000 7f24bdf49700 1 -- 192.168.123.102:0/376250339 wait complete. 2026-03-10T10:16:40.280 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.278+0000 7f24bdf49700 1 Processor -- start 2026-03-10T10:16:40.280 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.278+0000 7f24bdf49700 1 -- start start 2026-03-10T10:16:40.281 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.279+0000 7f24bdf49700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f24b8103340 0x7f24b8075260 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:40.281 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.279+0000 7f24bdf49700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24b8103cf0 0x7f24b80757a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:40.281 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.279+0000 7f24bdf49700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f24b80793f0 con 0x7f24b8103340 2026-03-10T10:16:40.281 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.279+0000 7f24bdf49700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f24b8075ce0 con 0x7f24b8103cf0 2026-03-10T10:16:40.281 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.279+0000 7f24b77fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f24b8103340 0x7f24b8075260 unknown :-1 s=BANNER_CONNECTING 
pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:40.281 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.279+0000 7f24b77fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f24b8103340 0x7f24b8075260 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:44984/0 (socket says 192.168.123.102:44984) 2026-03-10T10:16:40.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.279+0000 7f24b77fe700 1 -- 192.168.123.102:0/2249727427 learned_addr learned my addr 192.168.123.102:0/2249727427 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:40.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.279+0000 7f24b6ffd700 1 --2- 192.168.123.102:0/2249727427 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24b8103cf0 0x7f24b80757a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:40.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.279+0000 7f24b77fe700 1 -- 192.168.123.102:0/2249727427 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24b8103cf0 msgr2=0x7f24b80757a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:40.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.279+0000 7f24b77fe700 1 --2- 192.168.123.102:0/2249727427 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24b8103cf0 0x7f24b80757a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:40.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.279+0000 7f24b77fe700 1 -- 192.168.123.102:0/2249727427 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 
-- 0x7f24a80097e0 con 0x7f24b8103340 2026-03-10T10:16:40.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.279+0000 7f24b77fe700 1 --2- 192.168.123.102:0/2249727427 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f24b8103340 0x7f24b8075260 secure :-1 s=READY pgs=209 cs=0 l=1 rev1=1 crypto rx=0x7f24a000b700 tx=0x7f24a000ba10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:40.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.280+0000 7f24b4ff9700 1 -- 192.168.123.102:0/2249727427 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f24a00107c0 con 0x7f24b8103340 2026-03-10T10:16:40.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.280+0000 7f24b4ff9700 1 -- 192.168.123.102:0/2249727427 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f24a0010e00 con 0x7f24b8103340 2026-03-10T10:16:40.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.280+0000 7f24b4ff9700 1 -- 192.168.123.102:0/2249727427 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f24a000f360 con 0x7f24b8103340 2026-03-10T10:16:40.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.280+0000 7f24bdf49700 1 -- 192.168.123.102:0/2249727427 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f24b8075f40 con 0x7f24b8103340 2026-03-10T10:16:40.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.281+0000 7f24bdf49700 1 -- 192.168.123.102:0/2249727427 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f24b80719f0 con 0x7f24b8103340 2026-03-10T10:16:40.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.282+0000 7f24bdf49700 1 -- 192.168.123.102:0/2249727427 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f24b8072e20 con 0x7f24b8103340 2026-03-10T10:16:40.287 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.282+0000 7f24b4ff9700 1 -- 192.168.123.102:0/2249727427 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f24a0017360 con 0x7f24b8103340 2026-03-10T10:16:40.287 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.283+0000 7f24b4ff9700 1 --2- 192.168.123.102:0/2249727427 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f24a406c490 0x7f24a406e950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:40.287 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.283+0000 7f24b4ff9700 1 -- 192.168.123.102:0/2249727427 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f24a0059100 con 0x7f24b8103340 2026-03-10T10:16:40.287 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.285+0000 7f24b6ffd700 1 --2- 192.168.123.102:0/2249727427 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f24a406c490 0x7f24a406e950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:40.287 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.285+0000 7f24b4ff9700 1 -- 192.168.123.102:0/2249727427 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f24b8072e20 con 0x7f24b8103340 2026-03-10T10:16:40.287 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.286+0000 7f24b6ffd700 1 --2- 192.168.123.102:0/2249727427 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f24a406c490 0x7f24a406e950 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f24b8076a50 tx=0x7f24a800b540 comp rx=0 tx=0).ready 
entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:40.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.395+0000 7f24bdf49700 1 -- 192.168.123.102:0/2249727427 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) v1 -- 0x7f24b804ea90 con 0x7f24b8103340 2026-03-10T10:16:40.400 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.398+0000 7f24b4ff9700 1 -- 192.168.123.102:0/2249727427 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v33) v1 ==== 74+0+11285 (secure 0 0 0) 0x7f24a0054d40 con 0x7f24b8103340 2026-03-10T10:16:40.400 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:40.400 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":33,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","created":"2026-03-10T10:14:08.583559+0000","modified":"2026-03-10T10:16:36.329347+0000","last_up_change":"2026-03-10T10:16:35.321460+0000","last_in_change":"2026-03-10T10:16:23.907084+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":14,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"reef","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T10:16:06.912779+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_
pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_acting":6,"score_stable":6,"optimal_score":0.5,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"f90b5cc0-11ce-4915-a46a-c23fb52a4ba2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":2756332558},{"type":"v1","addr":"192.168.123.102:6803","nonce":2756332558}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":2756332558},{"type":"v1","addr":"192.168.123.102:6805","nonce":2756332558}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":2756332558},{"type":"v1","addr":"192.168.123.102:6809","nonce":2756332558}]},"heartbeat_front_addrs"
:{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":2756332558},{"type":"v1","addr":"192.168.123.102:6807","nonce":2756332558}]},"public_addr":"192.168.123.102:6803/2756332558","cluster_addr":"192.168.123.102:6805/2756332558","heartbeat_back_addr":"192.168.123.102:6809/2756332558","heartbeat_front_addr":"192.168.123.102:6807/2756332558","state":["exists","up"]},{"osd":1,"uuid":"8bd56e09-7dad-4b23-847e-c7afae0d2f41","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":24,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":1060043977},{"type":"v1","addr":"192.168.123.102:6811","nonce":1060043977}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":1060043977},{"type":"v1","addr":"192.168.123.102:6813","nonce":1060043977}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":1060043977},{"type":"v1","addr":"192.168.123.102:6817","nonce":1060043977}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":1060043977},{"type":"v1","addr":"192.168.123.102:6815","nonce":1060043977}]},"public_addr":"192.168.123.102:6811/1060043977","cluster_addr":"192.168.123.102:6813/1060043977","heartbeat_back_addr":"192.168.123.102:6817/1060043977","heartbeat_front_addr":"192.168.123.102:6815/1060043977","state":["exists","up"]},{"osd":2,"uuid":"1ccdc548-a0cb-41e0-bc7a-21b41198ffea","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6818","nonce":3838117302},{"type":"v1","addr":"192.168.123.102:6819","nonce":3838117302}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6820","nonce":3838117302},{"type":"v1","addr":"192.168.123.102:6821","nonce":3838117302}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192
.168.123.102:6824","nonce":3838117302},{"type":"v1","addr":"192.168.123.102:6825","nonce":3838117302}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6822","nonce":3838117302},{"type":"v1","addr":"192.168.123.102:6823","nonce":3838117302}]},"public_addr":"192.168.123.102:6819/3838117302","cluster_addr":"192.168.123.102:6821/3838117302","heartbeat_back_addr":"192.168.123.102:6825/3838117302","heartbeat_front_addr":"192.168.123.102:6823/3838117302","state":["exists","up"]},{"osd":3,"uuid":"70fa78db-d544-4037-a4e5-e2b601b924d7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":27,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":4254210589},{"type":"v1","addr":"192.168.123.105:6801","nonce":4254210589}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":4254210589},{"type":"v1","addr":"192.168.123.105:6803","nonce":4254210589}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":4254210589},{"type":"v1","addr":"192.168.123.105:6807","nonce":4254210589}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":4254210589},{"type":"v1","addr":"192.168.123.105:6805","nonce":4254210589}]},"public_addr":"192.168.123.105:6801/4254210589","cluster_addr":"192.168.123.105:6803/4254210589","heartbeat_back_addr":"192.168.123.105:6807/4254210589","heartbeat_front_addr":"192.168.123.105:6805/4254210589","state":["exists","up"]},{"osd":4,"uuid":"d0b95380-36d0-4fea-a134-f6abcd77b2ee","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":4051935333},{"type":"v1","addr":"192.168.123.105:6809","nonce":4051935333}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":405193533
3},{"type":"v1","addr":"192.168.123.105:6811","nonce":4051935333}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":4051935333},{"type":"v1","addr":"192.168.123.105:6815","nonce":4051935333}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":4051935333},{"type":"v1","addr":"192.168.123.105:6813","nonce":4051935333}]},"public_addr":"192.168.123.105:6809/4051935333","cluster_addr":"192.168.123.105:6811/4051935333","heartbeat_back_addr":"192.168.123.105:6815/4051935333","heartbeat_front_addr":"192.168.123.105:6813/4051935333","state":["exists","up"]},{"osd":5,"uuid":"bf16e555-2559-41cf-b9cc-38646188d928","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":1475090979},{"type":"v1","addr":"192.168.123.105:6817","nonce":1475090979}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":1475090979},{"type":"v1","addr":"192.168.123.105:6819","nonce":1475090979}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":1475090979},{"type":"v1","addr":"192.168.123.105:6823","nonce":1475090979}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":1475090979},{"type":"v1","addr":"192.168.123.105:6821","nonce":1475090979}]},"public_addr":"192.168.123.105:6817/1475090979","cluster_addr":"192.168.123.105:6819/1475090979","heartbeat_back_addr":"192.168.123.105:6823/1475090979","heartbeat_front_addr":"192.168.123.105:6821/1475090979","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:15:46.411665+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":454013832290
6710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:15:55.401808+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:16:05.469558+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:16:15.184930+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:16:24.070276+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:16:33.661652+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:0/1508780527":"2026-03-11T10:15:14.708009+0000","192.168.123.102:0/1164545653":"2026-03-11T10:15:14.708009+0000","192.168.123.102:0/2365816117":"2026-03-11T10:15:14.708009+0000","192.168.123.102:0/1117450327":"2026-03-11T10:14:37.093481+0000","192.168.123.102:6800/2":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/1091112719":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/2700080577":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/3931430898":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/1517189708":"2026-03-11T10:14:37.093481+0000","192.168.123.102:6801/2":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/982532372":"2026-03-11T10:14:37.093481+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false
,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T10:16:40.405 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.403+0000 7f24bdf49700 1 -- 192.168.123.102:0/2249727427 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f24a406c490 msgr2=0x7f24a406e950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:40.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.403+0000 7f24bdf49700 1 --2- 192.168.123.102:0/2249727427 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f24a406c490 0x7f24a406e950 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f24b8076a50 tx=0x7f24a800b540 comp rx=0 tx=0).stop 2026-03-10T10:16:40.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.403+0000 7f24bdf49700 1 -- 192.168.123.102:0/2249727427 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f24b8103340 msgr2=0x7f24b8075260 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:40.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.403+0000 7f24bdf49700 1 --2- 192.168.123.102:0/2249727427 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f24b8103340 0x7f24b8075260 secure :-1 s=READY pgs=209 cs=0 l=1 rev1=1 crypto rx=0x7f24a000b700 tx=0x7f24a000ba10 comp rx=0 tx=0).stop 2026-03-10T10:16:40.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.404+0000 7f24bdf49700 1 -- 192.168.123.102:0/2249727427 shutdown_connections 2026-03-10T10:16:40.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.404+0000 7f24bdf49700 1 --2- 192.168.123.102:0/2249727427 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f24b8103340 0x7f24b8075260 unknown :-1 s=CLOSED pgs=209 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:40.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.404+0000 7f24bdf49700 1 --2- 
192.168.123.102:0/2249727427 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f24a406c490 0x7f24a406e950 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:40.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.404+0000 7f24bdf49700 1 --2- 192.168.123.102:0/2249727427 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24b8103cf0 0x7f24b80757a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:40.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.404+0000 7f24bdf49700 1 -- 192.168.123.102:0/2249727427 >> 192.168.123.102:0/2249727427 conn(0x7f24b80feb90 msgr2=0x7f24b8100f50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:40.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.404+0000 7f24bdf49700 1 -- 192.168.123.102:0/2249727427 shutdown_connections 2026-03-10T10:16:40.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.404+0000 7f24bdf49700 1 -- 192.168.123.102:0/2249727427 wait complete. 2026-03-10T10:16:40.572 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T10:16:40.572 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd dump --format=json 2026-03-10T10:16:40.727 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:40.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:40 vm02 ceph-mon[50200]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:40.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:40 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/3658960079' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T10:16:40.756 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:40 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/2249727427' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:16:40.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:40 vm05 ceph-mon[59051]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:40.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:40 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/3658960079' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T10:16:40.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:40 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/2249727427' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:16:40.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.982+0000 7f6ce002c700 1 -- 192.168.123.102:0/2377029436 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6cd8103d70 msgr2=0x7f6cd8107dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:40.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.982+0000 7f6ce002c700 1 --2- 192.168.123.102:0/2377029436 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6cd8103d70 0x7f6cd8107dc0 secure :-1 s=READY pgs=210 cs=0 l=1 rev1=1 crypto rx=0x7f6cd4009b50 tx=0x7f6cd4009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:40.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.982+0000 7f6ce002c700 1 -- 192.168.123.102:0/2377029436 shutdown_connections 2026-03-10T10:16:40.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.982+0000 7f6ce002c700 1 --2- 192.168.123.102:0/2377029436 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7f6cd8103d70 0x7f6cd8107dc0 unknown :-1 s=CLOSED pgs=210 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:40.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.982+0000 7f6ce002c700 1 --2- 192.168.123.102:0/2377029436 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6cd81033c0 0x7f6cd81037a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:40.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.982+0000 7f6ce002c700 1 -- 192.168.123.102:0/2377029436 >> 192.168.123.102:0/2377029436 conn(0x7f6cd80fec30 msgr2=0x7f6cd8101050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:40.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.982+0000 7f6ce002c700 1 -- 192.168.123.102:0/2377029436 shutdown_connections 2026-03-10T10:16:40.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.983+0000 7f6ce002c700 1 -- 192.168.123.102:0/2377029436 wait complete. 
2026-03-10T10:16:40.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.983+0000 7f6ce002c700 1 Processor -- start 2026-03-10T10:16:40.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.983+0000 7f6ce002c700 1 -- start start 2026-03-10T10:16:40.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.983+0000 7f6ce002c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6cd81033c0 0x7f6cd8198e30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:40.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.983+0000 7f6ce002c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6cd8103d70 0x7f6cd8199370 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:40.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.983+0000 7f6ce002c700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6cd8199a50 con 0x7f6cd8103d70 2026-03-10T10:16:40.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.983+0000 7f6ce002c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6cd819d7e0 con 0x7f6cd81033c0 2026-03-10T10:16:40.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.984+0000 7f6cdd5c7700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6cd8103d70 0x7f6cd8199370 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:40.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.984+0000 7f6cdd5c7700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6cd8103d70 0x7f6cd8199370 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:44998/0 (socket says 192.168.123.102:44998) 2026-03-10T10:16:40.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.984+0000 7f6cdd5c7700 1 -- 192.168.123.102:0/2839459104 learned_addr learned my addr 192.168.123.102:0/2839459104 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:40.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.984+0000 7f6cdddc8700 1 --2- 192.168.123.102:0/2839459104 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6cd81033c0 0x7f6cd8198e30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:40.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.984+0000 7f6cdd5c7700 1 -- 192.168.123.102:0/2839459104 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6cd81033c0 msgr2=0x7f6cd8198e30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:40.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.984+0000 7f6cdd5c7700 1 --2- 192.168.123.102:0/2839459104 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6cd81033c0 0x7f6cd8198e30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:40.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.984+0000 7f6cdd5c7700 1 -- 192.168.123.102:0/2839459104 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6cd40097e0 con 0x7f6cd8103d70 2026-03-10T10:16:40.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.984+0000 7f6cdd5c7700 1 --2- 192.168.123.102:0/2839459104 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6cd8103d70 0x7f6cd8199370 secure :-1 s=READY pgs=211 cs=0 l=1 rev1=1 crypto rx=0x7f6cd4005950 tx=0x7f6cd4004e80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:16:40.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.984+0000 7f6cceffd700 1 -- 192.168.123.102:0/2839459104 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6cd401d070 con 0x7f6cd8103d70 2026-03-10T10:16:40.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.984+0000 7f6cceffd700 1 -- 192.168.123.102:0/2839459104 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f6cd4022470 con 0x7f6cd8103d70 2026-03-10T10:16:40.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.984+0000 7f6ce002c700 1 -- 192.168.123.102:0/2839459104 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6cd819da60 con 0x7f6cd8103d70 2026-03-10T10:16:40.988 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.984+0000 7f6cceffd700 1 -- 192.168.123.102:0/2839459104 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6cd400f650 con 0x7f6cd8103d70 2026-03-10T10:16:40.988 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.985+0000 7f6ce002c700 1 -- 192.168.123.102:0/2839459104 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6cd819df50 con 0x7f6cd8103d70 2026-03-10T10:16:40.990 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.986+0000 7f6cccff9700 1 -- 192.168.123.102:0/2839459104 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6cc00052f0 con 0x7f6cd8103d70 2026-03-10T10:16:40.991 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.989+0000 7f6cceffd700 1 -- 192.168.123.102:0/2839459104 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f6cd40225e0 con 0x7f6cd8103d70 2026-03-10T10:16:40.991 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.989+0000 
7f6cceffd700 1 --2- 192.168.123.102:0/2839459104 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6cc406c600 0x7f6cc406eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:40.991 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.989+0000 7f6cceffd700 1 -- 192.168.123.102:0/2839459104 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f6cd408d580 con 0x7f6cd8103d70 2026-03-10T10:16:40.991 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.990+0000 7f6cceffd700 1 -- 192.168.123.102:0/2839459104 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f6cd408d960 con 0x7f6cd8103d70 2026-03-10T10:16:40.992 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.990+0000 7f6cdddc8700 1 --2- 192.168.123.102:0/2839459104 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6cc406c600 0x7f6cc406eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:40.992 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:40.990+0000 7f6cdddc8700 1 --2- 192.168.123.102:0/2839459104 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6cc406c600 0x7f6cc406eac0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f6cc8005950 tx=0x7f6cc80058e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:41.102 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.100+0000 7f6cccff9700 1 -- 192.168.123.102:0/2839459104 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) v1 -- 0x7f6cc0005160 con 0x7f6cd8103d70 2026-03-10T10:16:41.102 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.101+0000 7f6cceffd700 1 -- 192.168.123.102:0/2839459104 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v33) v1 ==== 74+0+11285 (secure 0 0 0) 0x7f6cd4027090 con 0x7f6cd8103d70 2026-03-10T10:16:41.103 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:41.103 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":33,"fsid":"d0ab5dc6-1c69-11f1-8798-3b5e87c3385d","created":"2026-03-10T10:14:08.583559+0000","modified":"2026-03-10T10:16:36.329347+0000","last_up_change":"2026-03-10T10:16:35.321460+0000","last_in_change":"2026-03-10T10:16:23.907084+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":14,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"reef","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T10:16:06.912779+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota
_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_acting":6,"score_stable":6,"optimal_score":0.5,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"f90b5cc0-11ce-4915-a46a-c23fb52a4ba2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":2756332558},{"type":"v1","addr":"192.168.123.102:6803","nonce":2756332558}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":2756332558},{"type":"v1","addr":"192.168.123.102:6805","nonce":2756332558}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":2756332558},{"type":"v1","addr":"192.168.123.102:6809","nonce":2756332558}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":2756332558},{"type":"v1","addr":"192.168.123.102:6807","nonce":2756332558}]},"public_addr":"192.168.123.102:6803/2756332558","cluster_addr":"192.168.123.102:6805/2756332558","heartbeat_back_addr":"192.168.123.102:6809/2756332558","heartbeat_front_addr":"192.168.123.102:6807/2756332558","state":["exists","up"]},{"osd":1,"uuid":"
8bd56e09-7dad-4b23-847e-c7afae0d2f41","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":24,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":1060043977},{"type":"v1","addr":"192.168.123.102:6811","nonce":1060043977}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":1060043977},{"type":"v1","addr":"192.168.123.102:6813","nonce":1060043977}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":1060043977},{"type":"v1","addr":"192.168.123.102:6817","nonce":1060043977}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":1060043977},{"type":"v1","addr":"192.168.123.102:6815","nonce":1060043977}]},"public_addr":"192.168.123.102:6811/1060043977","cluster_addr":"192.168.123.102:6813/1060043977","heartbeat_back_addr":"192.168.123.102:6817/1060043977","heartbeat_front_addr":"192.168.123.102:6815/1060043977","state":["exists","up"]},{"osd":2,"uuid":"1ccdc548-a0cb-41e0-bc7a-21b41198ffea","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6818","nonce":3838117302},{"type":"v1","addr":"192.168.123.102:6819","nonce":3838117302}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6820","nonce":3838117302},{"type":"v1","addr":"192.168.123.102:6821","nonce":3838117302}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6824","nonce":3838117302},{"type":"v1","addr":"192.168.123.102:6825","nonce":3838117302}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6822","nonce":3838117302},{"type":"v1","addr":"192.168.123.102:6823","nonce":3838117302}]},"public_addr":"192.168.123.102:6819/3838117302","cluster_addr":"192.168.123.102:6821/3838117302","heartbeat_back_addr":"192.1
68.123.102:6825/3838117302","heartbeat_front_addr":"192.168.123.102:6823/3838117302","state":["exists","up"]},{"osd":3,"uuid":"70fa78db-d544-4037-a4e5-e2b601b924d7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":27,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":4254210589},{"type":"v1","addr":"192.168.123.105:6801","nonce":4254210589}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":4254210589},{"type":"v1","addr":"192.168.123.105:6803","nonce":4254210589}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":4254210589},{"type":"v1","addr":"192.168.123.105:6807","nonce":4254210589}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":4254210589},{"type":"v1","addr":"192.168.123.105:6805","nonce":4254210589}]},"public_addr":"192.168.123.105:6801/4254210589","cluster_addr":"192.168.123.105:6803/4254210589","heartbeat_back_addr":"192.168.123.105:6807/4254210589","heartbeat_front_addr":"192.168.123.105:6805/4254210589","state":["exists","up"]},{"osd":4,"uuid":"d0b95380-36d0-4fea-a134-f6abcd77b2ee","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":4051935333},{"type":"v1","addr":"192.168.123.105:6809","nonce":4051935333}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":4051935333},{"type":"v1","addr":"192.168.123.105:6811","nonce":4051935333}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":4051935333},{"type":"v1","addr":"192.168.123.105:6815","nonce":4051935333}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":4051935333},{"type":"v1","addr":"192.168.123.105:6813","nonce":4051935333}]
},"public_addr":"192.168.123.105:6809/4051935333","cluster_addr":"192.168.123.105:6811/4051935333","heartbeat_back_addr":"192.168.123.105:6815/4051935333","heartbeat_front_addr":"192.168.123.105:6813/4051935333","state":["exists","up"]},{"osd":5,"uuid":"bf16e555-2559-41cf-b9cc-38646188d928","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":1475090979},{"type":"v1","addr":"192.168.123.105:6817","nonce":1475090979}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":1475090979},{"type":"v1","addr":"192.168.123.105:6819","nonce":1475090979}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":1475090979},{"type":"v1","addr":"192.168.123.105:6823","nonce":1475090979}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":1475090979},{"type":"v1","addr":"192.168.123.105:6821","nonce":1475090979}]},"public_addr":"192.168.123.105:6817/1475090979","cluster_addr":"192.168.123.105:6819/1475090979","heartbeat_back_addr":"192.168.123.105:6823/1475090979","heartbeat_front_addr":"192.168.123.105:6821/1475090979","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:15:46.411665+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:15:55.401808+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:16:05.469558+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906
710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:16:15.184930+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:16:24.070276+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:16:33.661652+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:0/1508780527":"2026-03-11T10:15:14.708009+0000","192.168.123.102:0/1164545653":"2026-03-11T10:15:14.708009+0000","192.168.123.102:0/2365816117":"2026-03-11T10:15:14.708009+0000","192.168.123.102:0/1117450327":"2026-03-11T10:14:37.093481+0000","192.168.123.102:6800/2":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/1091112719":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/2700080577":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/3931430898":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/1517189708":"2026-03-11T10:14:37.093481+0000","192.168.123.102:6801/2":"2026-03-11T10:14:24.165331+0000","192.168.123.102:0/982532372":"2026-03-11T10:14:37.093481+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T10:16:41.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.104+0000 7f6cccff9700 1 -- 192.168.123.102:0/2839459104 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6cc406c600 msgr2=0x7f6cc406eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-10T10:16:41.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.104+0000 7f6cccff9700 1 --2- 192.168.123.102:0/2839459104 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6cc406c600 0x7f6cc406eac0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f6cc8005950 tx=0x7f6cc80058e0 comp rx=0 tx=0).stop 2026-03-10T10:16:41.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.104+0000 7f6cccff9700 1 -- 192.168.123.102:0/2839459104 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6cd8103d70 msgr2=0x7f6cd8199370 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:41.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.104+0000 7f6cccff9700 1 --2- 192.168.123.102:0/2839459104 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6cd8103d70 0x7f6cd8199370 secure :-1 s=READY pgs=211 cs=0 l=1 rev1=1 crypto rx=0x7f6cd4005950 tx=0x7f6cd4004e80 comp rx=0 tx=0).stop 2026-03-10T10:16:41.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.104+0000 7f6cccff9700 1 -- 192.168.123.102:0/2839459104 shutdown_connections 2026-03-10T10:16:41.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.104+0000 7f6cccff9700 1 --2- 192.168.123.102:0/2839459104 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6cc406c600 0x7f6cc406eac0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:41.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.104+0000 7f6cccff9700 1 --2- 192.168.123.102:0/2839459104 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6cd81033c0 0x7f6cd8198e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:41.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.104+0000 7f6cccff9700 1 --2- 192.168.123.102:0/2839459104 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7f6cd8103d70 0x7f6cd8199370 unknown :-1 s=CLOSED pgs=211 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:41.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.104+0000 7f6cccff9700 1 -- 192.168.123.102:0/2839459104 >> 192.168.123.102:0/2839459104 conn(0x7f6cd80fec30 msgr2=0x7f6cd81001e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:41.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.104+0000 7f6cccff9700 1 -- 192.168.123.102:0/2839459104 shutdown_connections 2026-03-10T10:16:41.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:41.105+0000 7f6cccff9700 1 -- 192.168.123.102:0/2839459104 wait complete. 2026-03-10T10:16:41.177 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph tell osd.0 flush_pg_stats 2026-03-10T10:16:41.177 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph tell osd.1 flush_pg_stats 2026-03-10T10:16:41.177 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph tell osd.2 flush_pg_stats 2026-03-10T10:16:41.177 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph tell osd.3 flush_pg_stats 2026-03-10T10:16:41.177 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph tell osd.4 flush_pg_stats 2026-03-10T10:16:41.177 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph tell osd.5 flush_pg_stats 
2026-03-10T10:16:41.674 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:41.684 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:41.771 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:41 vm02 ceph-mon[50200]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:41.771 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:41 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/2839459104' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:16:41.778 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:41.781 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:41.785 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:41.786 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:41.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:41 vm05 ceph-mon[59051]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:41.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:41 vm05 ceph-mon[59051]: from='client.? 
192.168.123.102:0/2839459104' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:16:42.362 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.358+0000 7f9f09f43700 1 -- 192.168.123.102:0/3164527893 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f04101950 msgr2=0x7f9f04101d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.363 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.358+0000 7f9f09f43700 1 --2- 192.168.123.102:0/3164527893 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f04101950 0x7f9f04101d30 secure :-1 s=READY pgs=212 cs=0 l=1 rev1=1 crypto rx=0x7f9ef8009b00 tx=0x7f9ef8009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:42.364 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.363+0000 7f9f09f43700 1 -- 192.168.123.102:0/3164527893 shutdown_connections 2026-03-10T10:16:42.364 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.363+0000 7f9f09f43700 1 --2- 192.168.123.102:0/3164527893 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f040ff410 0x7f9f040ff890 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.364 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.363+0000 7f9f09f43700 1 --2- 192.168.123.102:0/3164527893 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f04101950 0x7f9f04101d30 unknown :-1 s=CLOSED pgs=212 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.364 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.363+0000 7f9f09f43700 1 -- 192.168.123.102:0/3164527893 >> 192.168.123.102:0/3164527893 conn(0x7f9f04074df0 msgr2=0x7f9f04075200 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:42.366 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.363+0000 7f9f09f43700 1 -- 192.168.123.102:0/3164527893 shutdown_connections 2026-03-10T10:16:42.371 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.368+0000 7f9f09f43700 1 -- 192.168.123.102:0/3164527893 wait complete. 2026-03-10T10:16:42.371 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.369+0000 7f9f09f43700 1 Processor -- start 2026-03-10T10:16:42.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.379+0000 7f9f09f43700 1 -- start start 2026-03-10T10:16:42.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.380+0000 7f9f09f43700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f040ff410 0x7f9f0419cf50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.380+0000 7f9f09f43700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f04101950 0x7f9f0419d490 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.380+0000 7f9f09f43700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9f0419db70 con 0x7f9f04101950 2026-03-10T10:16:42.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.380+0000 7f9f09f43700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9f041a1900 con 0x7f9f040ff410 2026-03-10T10:16:42.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.381+0000 7f9f08f41700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f040ff410 0x7f9f0419cf50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.381+0000 7f9f08f41700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f040ff410 0x7f9f0419cf50 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38230/0 (socket says 192.168.123.102:38230) 2026-03-10T10:16:42.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.381+0000 7f9f08f41700 1 -- 192.168.123.102:0/4120202228 learned_addr learned my addr 192.168.123.102:0/4120202228 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:42.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.381+0000 7f9f03fff700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f04101950 0x7f9f0419d490 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.382+0000 7f9f08f41700 1 -- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f04101950 msgr2=0x7f9f0419d490 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.382+0000 7f9f08f41700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f04101950 0x7f9f0419d490 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.382+0000 7f9f08f41700 1 -- 192.168.123.102:0/4120202228 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9ef80097e0 con 0x7f9f040ff410 2026-03-10T10:16:42.384 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.382+0000 7f9f08f41700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f040ff410 0x7f9f0419cf50 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto 
rx=0x7f9ef8009fd0 tx=0x7f9ef80048c0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:42.388 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.384+0000 7f9f01ffb700 1 -- 192.168.123.102:0/4120202228 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9ef801d070 con 0x7f9f040ff410 2026-03-10T10:16:42.388 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.384+0000 7f9f01ffb700 1 -- 192.168.123.102:0/4120202228 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9ef8022470 con 0x7f9f040ff410 2026-03-10T10:16:42.388 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.384+0000 7f9f01ffb700 1 -- 192.168.123.102:0/4120202228 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9ef800f670 con 0x7f9f040ff410 2026-03-10T10:16:42.388 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.384+0000 7f9f09f43700 1 -- 192.168.123.102:0/4120202228 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9f041a1b80 con 0x7f9f040ff410 2026-03-10T10:16:42.388 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.385+0000 7f9f09f43700 1 -- 192.168.123.102:0/4120202228 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9f041a2070 con 0x7f9f040ff410 2026-03-10T10:16:42.389 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.387+0000 7f9f09f43700 1 -- 192.168.123.102:0/4120202228 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_get_version(what=osdmap handle=1) v1 -- 0x7f9ee8000ff0 con 0x7f9f040ff410 2026-03-10T10:16:42.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.390+0000 7f9f01ffb700 1 -- 192.168.123.102:0/4120202228 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f9ef800ba40 con 
0x7f9f040ff410 2026-03-10T10:16:42.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.391+0000 7f9f01ffb700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9ef406c600 0x7f9ef406eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.391+0000 7f9f01ffb700 1 -- 192.168.123.102:0/4120202228 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f9ef808d2d0 con 0x7f9f040ff410 2026-03-10T10:16:42.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.391+0000 7f9f01ffb700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302] conn(0x7f9ef40721a0 0x7f9ef40745c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.395 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.391+0000 7f9f01ffb700 1 -- 192.168.123.102:0/4120202228 --> [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7f9ef4074c70 con 0x7f9ef40721a0 2026-03-10T10:16:42.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.392+0000 7f9f09742700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302] conn(0x7f9ef40721a0 0x7f9ef40745c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.392+0000 7f9f09742700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302] conn(0x7f9ef40721a0 0x7f9ef40745c0 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready 
entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:42.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.393+0000 7f9f01ffb700 1 -- 192.168.123.102:0/4120202228 <== osd.2 v2:192.168.123.102:6818/3838117302 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+25106 (crc 0 0 0) 0x7f9ef4074c70 con 0x7f9ef40721a0 2026-03-10T10:16:42.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.406+0000 7f9f03fff700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9ef406c600 0x7f9ef406eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.409+0000 7f9f01ffb700 1 -- 192.168.123.102:0/4120202228 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_get_version_reply(handle=1 version=33) v2 ==== 24+0+0 (secure 0 0 0) 0x7f9ef808d680 con 0x7f9f040ff410 2026-03-10T10:16:42.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.409+0000 7f9f03fff700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9ef406c600 0x7f9ef406eac0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f9f0419e570 tx=0x7f9ef000b410 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:42.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.438+0000 7f9f09f43700 1 -- 192.168.123.102:0/4120202228 --> [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f9ee8002d70 con 0x7f9ef40721a0 2026-03-10T10:16:42.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.438+0000 7f9f01ffb700 1 -- 192.168.123.102:0/4120202228 <== osd.2 v2:192.168.123.102:6818/3838117302 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+11 (crc 0 0 0) 0x7f9ee8002d70 con 
0x7f9ef40721a0 2026-03-10T10:16:42.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.439+0000 7f9f09f43700 1 -- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302] conn(0x7f9ef40721a0 msgr2=0x7f9ef40745c0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.439+0000 7f9f09f43700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302] conn(0x7f9ef40721a0 0x7f9ef40745c0 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.441+0000 7f9f09f43700 1 -- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9ef406c600 msgr2=0x7f9ef406eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.441+0000 7f9f09f43700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9ef406c600 0x7f9ef406eac0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f9f0419e570 tx=0x7f9ef000b410 comp rx=0 tx=0).stop 2026-03-10T10:16:42.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.441+0000 7f9f09f43700 1 -- 192.168.123.102:0/4120202228 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f040ff410 msgr2=0x7f9f0419cf50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.441+0000 7f9f09f43700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f040ff410 0x7f9f0419cf50 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f9ef8009fd0 tx=0x7f9ef80048c0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.445 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.441+0000 7f9f09f43700 1 -- 192.168.123.102:0/4120202228 shutdown_connections 2026-03-10T10:16:42.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.441+0000 7f9f09f43700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:6818/3838117302,v1:192.168.123.102:6819/3838117302] conn(0x7f9ef40721a0 0x7f9ef40745c0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.441+0000 7f9f09f43700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9ef406c600 0x7f9ef406eac0 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.441+0000 7f9f09f43700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f040ff410 0x7f9f0419cf50 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.441+0000 7f9f09f43700 1 --2- 192.168.123.102:0/4120202228 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f04101950 0x7f9f0419d490 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.441+0000 7f9f09f43700 1 -- 192.168.123.102:0/4120202228 >> 192.168.123.102:0/4120202228 conn(0x7f9f04074df0 msgr2=0x7f9f040fd380 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:42.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.442+0000 7f9f09f43700 1 -- 192.168.123.102:0/4120202228 shutdown_connections 2026-03-10T10:16:42.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.442+0000 7f9f09f43700 1 -- 192.168.123.102:0/4120202228 wait 
complete. 2026-03-10T10:16:42.514 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.511+0000 7f41e359e700 1 -- 192.168.123.102:0/2079833972 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f41e410d0f0 msgr2=0x7f41e410d570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.514 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.511+0000 7f41e359e700 1 --2- 192.168.123.102:0/2079833972 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f41e410d0f0 0x7f41e410d570 secure :-1 s=READY pgs=213 cs=0 l=1 rev1=1 crypto rx=0x7f41d4009b50 tx=0x7f41d4009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:42.514 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.511+0000 7f41e359e700 1 -- 192.168.123.102:0/2079833972 shutdown_connections 2026-03-10T10:16:42.514 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.511+0000 7f41e359e700 1 --2- 192.168.123.102:0/2079833972 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f41e410d0f0 0x7f41e410d570 secure :-1 s=CLOSED pgs=213 cs=0 l=1 rev1=1 crypto rx=0x7f41d4009b50 tx=0x7f41d4009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:42.514 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.511+0000 7f41e359e700 1 --2- 192.168.123.102:0/2079833972 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41e410f340 0x7f41e410f720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.514 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.511+0000 7f41e359e700 1 -- 192.168.123.102:0/2079833972 >> 192.168.123.102:0/2079833972 conn(0x7f41e406ce20 msgr2=0x7f41e406d230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:42.514 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.511+0000 7f41e359e700 1 -- 192.168.123.102:0/2079833972 shutdown_connections 2026-03-10T10:16:42.514 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.511+0000 7f41e359e700 
1 -- 192.168.123.102:0/2079833972 wait complete. 2026-03-10T10:16:42.514 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.512+0000 7f41e359e700 1 Processor -- start 2026-03-10T10:16:42.514 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.512+0000 7f41e359e700 1 -- start start 2026-03-10T10:16:42.515 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.512+0000 7f41e359e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41e410f340 0x7f41e4118360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.515 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.512+0000 7f41e359e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f41e41188a0 0x7f41e4118d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.515 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.512+0000 7f41e359e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f41e411c3e0 con 0x7f41e41188a0 2026-03-10T10:16:42.515 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.512+0000 7f41e359e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f41e41b82e0 con 0x7f41e410f340 2026-03-10T10:16:42.515 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.513+0000 7f41e259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41e410f340 0x7f41e4118360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.515 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.513+0000 7f41e259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41e410f340 0x7f41e4118360 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello 
peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38242/0 (socket says 192.168.123.102:38242) 2026-03-10T10:16:42.515 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.513+0000 7f41e259c700 1 -- 192.168.123.102:0/2129452910 learned_addr learned my addr 192.168.123.102:0/2129452910 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:42.515 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.513+0000 7f41e259c700 1 -- 192.168.123.102:0/2129452910 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f41e41188a0 msgr2=0x7f41e4118d20 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:16:42.515 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.513+0000 7f41e259c700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f41e41188a0 0x7f41e4118d20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.516 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.513+0000 7f41e259c700 1 -- 192.168.123.102:0/2129452910 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f41dc009e30 con 0x7f41e410f340 2026-03-10T10:16:42.516 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.513+0000 7f41e259c700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41e410f340 0x7f41e4118360 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f41dc00e3f0 tx=0x7f41dc00e7b0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:42.516 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.513+0000 7f41d37fe700 1 -- 192.168.123.102:0/2129452910 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f41dc019070 con 0x7f41e410f340 2026-03-10T10:16:42.516 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.514+0000 7f41e359e700 1 -- 192.168.123.102:0/2129452910 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f41d40097e0 con 0x7f41e410f340 2026-03-10T10:16:42.516 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.514+0000 7f41e359e700 1 -- 192.168.123.102:0/2129452910 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f41e41b8840 con 0x7f41e410f340 2026-03-10T10:16:42.516 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.514+0000 7f41d37fe700 1 -- 192.168.123.102:0/2129452910 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f41dc00f040 con 0x7f41e410f340 2026-03-10T10:16:42.516 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.514+0000 7f41d37fe700 1 -- 192.168.123.102:0/2129452910 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f41dc01e6b0 con 0x7f41e410f340 2026-03-10T10:16:42.518 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.515+0000 7f41d37fe700 1 -- 192.168.123.102:0/2129452910 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f41dc015410 con 0x7f41e410f340 2026-03-10T10:16:42.518 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.515+0000 7f41d37fe700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f41cc06c4e0 0x7f41cc06e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.518 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.515+0000 7f41d37fe700 1 -- 192.168.123.102:0/2129452910 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f41dc08c840 con 0x7f41e410f340 2026-03-10T10:16:42.518 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.515+0000 7f41e359e700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589] conn(0x7f41e40616e0 0x7f41e4061ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.518 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.515+0000 7f41e359e700 1 -- 192.168.123.102:0/2129452910 --> [v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7f41e404f2e0 con 0x7f41e40616e0 2026-03-10T10:16:42.518 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.517+0000 7f41e2d9d700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589] conn(0x7f41e40616e0 0x7f41e4061ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.518 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.517+0000 7f41e1d9b700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f41cc06c4e0 0x7f41cc06e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.519 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.518+0000 7f41e2d9d700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589] conn(0x7f41e40616e0 0x7f41e4061ae0 crc :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.3 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:42.519 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.518+0000 7f41e1d9b700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] 
conn(0x7f41cc06c4e0 0x7f41cc06e9a0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f41d400b5c0 tx=0x7f41d4005fb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:42.523 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.519+0000 7f41d37fe700 1 -- 192.168.123.102:0/2129452910 <== osd.3 v2:192.168.123.105:6800/4254210589 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+25106 (crc 0 0 0) 0x7f41e404f2e0 con 0x7f41e40616e0 2026-03-10T10:16:42.564 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.556+0000 7f41e359e700 1 -- 192.168.123.102:0/2129452910 --> [v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f41e404ea90 con 0x7f41e40616e0 2026-03-10T10:16:42.579 INFO:teuthology.orchestra.run.vm02.stdout:73014444041 2026-03-10T10:16:42.579 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd last-stat-seq osd.2 2026-03-10T10:16:42.596 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.594+0000 7f41d37fe700 1 -- 192.168.123.102:0/2129452910 <== osd.3 v2:192.168.123.105:6800/4254210589 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+11 (crc 0 0 0) 0x7f41e404ea90 con 0x7f41e40616e0 2026-03-10T10:16:42.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.595+0000 7f41d17fa700 1 -- 192.168.123.102:0/2129452910 >> [v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589] conn(0x7f41e40616e0 msgr2=0x7f41e4061ae0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.596+0000 7f41d17fa700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589] conn(0x7f41e40616e0 0x7f41e4061ae0 crc :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-10T10:16:42.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.596+0000 7f41d17fa700 1 -- 192.168.123.102:0/2129452910 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f41cc06c4e0 msgr2=0x7f41cc06e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.596+0000 7f41d17fa700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f41cc06c4e0 0x7f41cc06e9a0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f41d400b5c0 tx=0x7f41d4005fb0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.596+0000 7f41d17fa700 1 -- 192.168.123.102:0/2129452910 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41e410f340 msgr2=0x7f41e4118360 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.596+0000 7f41d17fa700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41e410f340 0x7f41e4118360 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f41dc00e3f0 tx=0x7f41dc00e7b0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.596+0000 7f41d17fa700 1 -- 192.168.123.102:0/2129452910 shutdown_connections 2026-03-10T10:16:42.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.596+0000 7f41d17fa700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.105:6800/4254210589,v1:192.168.123.105:6801/4254210589] conn(0x7f41e40616e0 0x7f41e4061ae0 unknown :-1 s=CLOSED pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.596+0000 7f41d17fa700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] 
conn(0x7f41cc06c4e0 0x7f41cc06e9a0 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.599 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.596+0000 7f41d17fa700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41e410f340 0x7f41e4118360 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.599 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.596+0000 7f41d17fa700 1 --2- 192.168.123.102:0/2129452910 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f41e41188a0 0x7f41e4118d20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.599 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.596+0000 7f41d17fa700 1 -- 192.168.123.102:0/2129452910 >> 192.168.123.102:0/2129452910 conn(0x7f41e406ce20 msgr2=0x7f41e4070770 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:42.599 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.597+0000 7f41d17fa700 1 -- 192.168.123.102:0/2129452910 shutdown_connections 2026-03-10T10:16:42.600 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.597+0000 7f41d17fa700 1 -- 192.168.123.102:0/2129452910 wait complete. 
2026-03-10T10:16:42.724 INFO:teuthology.orchestra.run.vm02.stdout:98784247815 2026-03-10T10:16:42.724 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd last-stat-seq osd.3 2026-03-10T10:16:42.836 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.832+0000 7f9258dd0700 1 -- 192.168.123.102:0/3455745442 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f925410f340 msgr2=0x7f925410f720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.841 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.832+0000 7f9258dd0700 1 --2- 192.168.123.102:0/3455745442 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f925410f340 0x7f925410f720 secure :-1 s=READY pgs=214 cs=0 l=1 rev1=1 crypto rx=0x7f9244009b50 tx=0x7f9244009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:42.841 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.836+0000 7f9258dd0700 1 -- 192.168.123.102:0/3455745442 shutdown_connections 2026-03-10T10:16:42.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.836+0000 7f9258dd0700 1 --2- 192.168.123.102:0/3455745442 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f925410d0f0 0x7f925410d570 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.836+0000 7f9258dd0700 1 --2- 192.168.123.102:0/3455745442 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f925410f340 0x7f925410f720 unknown :-1 s=CLOSED pgs=214 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.836+0000 7f9258dd0700 1 -- 192.168.123.102:0/3455745442 >> 192.168.123.102:0/3455745442 conn(0x7f925406ce20 msgr2=0x7f925406d230 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-10T10:16:42.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.845+0000 7f9258dd0700 1 -- 192.168.123.102:0/3455745442 shutdown_connections 2026-03-10T10:16:42.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.845+0000 7f9258dd0700 1 -- 192.168.123.102:0/3455745442 wait complete. 2026-03-10T10:16:42.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.845+0000 7f9258dd0700 1 Processor -- start 2026-03-10T10:16:42.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.846+0000 7f9258dd0700 1 -- start start 2026-03-10T10:16:42.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.846+0000 7f9258dd0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f925410d0f0 0x7f925411bef0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.846+0000 7f9258dd0700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9254116ef0 0x7f9254117370 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.846+0000 7f9258dd0700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f92541178b0 con 0x7f9254116ef0 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.846+0000 7f9258dd0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9254117a20 con 0x7f925410d0f0 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.853+0000 7f9252ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9254116ef0 0x7f9254117370 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.859 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.853+0000 7f9252ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9254116ef0 0x7f9254117370 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:45046/0 (socket says 192.168.123.102:45046) 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.853+0000 7f9252ffd700 1 -- 192.168.123.102:0/3324203767 learned_addr learned my addr 192.168.123.102:0/3324203767 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.854+0000 7f9252ffd700 1 -- 192.168.123.102:0/3324203767 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f925410d0f0 msgr2=0x7f925411bef0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.854+0000 7f9252ffd700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f925410d0f0 0x7f925411bef0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.854+0000 7f9252ffd700 1 -- 192.168.123.102:0/3324203767 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f92440097e0 con 0x7f9254116ef0 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.854+0000 7f9252ffd700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9254116ef0 0x7f9254117370 secure :-1 s=READY pgs=215 cs=0 l=1 rev1=1 crypto rx=0x7f924800ed70 tx=0x7f924800c5b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:42.859 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.854+0000 7f9250ff9700 1 -- 192.168.123.102:0/3324203767 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9248009980 con 0x7f9254116ef0 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.854+0000 7f9258dd0700 1 -- 192.168.123.102:0/3324203767 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9254117d00 con 0x7f9254116ef0 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.854+0000 7f9258dd0700 1 -- 192.168.123.102:0/3324203767 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f92541b86e0 con 0x7f9254116ef0 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.856+0000 7f9250ff9700 1 -- 192.168.123.102:0/3324203767 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f924800cd70 con 0x7f9254116ef0 2026-03-10T10:16:42.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.856+0000 7f9250ff9700 1 -- 192.168.123.102:0/3324203767 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f92480189c0 con 0x7f9254116ef0 2026-03-10T10:16:42.865 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.857+0000 7f9250ff9700 1 -- 192.168.123.102:0/3324203767 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f9248018be0 con 0x7f9254116ef0 2026-03-10T10:16:42.877 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.866+0000 7f9250ff9700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f923c06c6d0 0x7f923c06eb90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.877 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.871+0000 
7f92537fe700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f923c06c6d0 0x7f923c06eb90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.877 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.871+0000 7f92537fe700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f923c06c6d0 0x7f923c06eb90 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f9244009b20 tx=0x7f9244005bc0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:42.877 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.875+0000 7f9250ff9700 1 -- 192.168.123.102:0/3324203767 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f9248014070 con 0x7f9254116ef0 2026-03-10T10:16:42.881 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.876+0000 7f9258dd0700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977] conn(0x7f9240001610 0x7f9240003ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.881 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.876+0000 7f9253fff700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977] conn(0x7f9240001610 0x7f9240003ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.881 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.877+0000 7f9253fff700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977] conn(0x7f9240001610 0x7f9240003ad0 crc :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:42.881 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.877+0000 7f9258dd0700 1 -- 192.168.123.102:0/3324203767 --> [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7f9240006c00 con 0x7f9240001610 2026-03-10T10:16:42.881 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.877+0000 7f9250ff9700 1 -- 192.168.123.102:0/3324203767 <== osd.1 v2:192.168.123.102:6810/1060043977 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+25106 (crc 0 0 0) 0x7f9240006c00 con 0x7f9240001610 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.890+0000 7f9258dd0700 1 -- 192.168.123.102:0/3324203767 --> [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f9240005ce0 con 0x7f9240001610 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.891+0000 7f9250ff9700 1 -- 192.168.123.102:0/3324203767 <== osd.1 v2:192.168.123.102:6810/1060043977 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+11 (crc 0 0 0) 0x7f9240005ce0 con 0x7f9240001610 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.891+0000 7f923a7fc700 1 -- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977] conn(0x7f9240001610 msgr2=0x7f9240003ad0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.891+0000 7f923a7fc700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977] conn(0x7f9240001610 0x7f9240003ad0 crc :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.896 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.892+0000 7f923a7fc700 1 -- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f923c06c6d0 msgr2=0x7f923c06eb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.892+0000 7f923a7fc700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f923c06c6d0 0x7f923c06eb90 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f9244009b20 tx=0x7f9244005bc0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.892+0000 7f923a7fc700 1 -- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9254116ef0 msgr2=0x7f9254117370 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.892+0000 7f923a7fc700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9254116ef0 0x7f9254117370 secure :-1 s=READY pgs=215 cs=0 l=1 rev1=1 crypto rx=0x7f924800ed70 tx=0x7f924800c5b0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.892+0000 7f923a7fc700 1 -- 192.168.123.102:0/3324203767 shutdown_connections 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.892+0000 7f923a7fc700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:6810/1060043977,v1:192.168.123.102:6811/1060043977] conn(0x7f9240001610 0x7f9240003ad0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.892+0000 7f923a7fc700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f923c06c6d0 0x7f923c06eb90 unknown 
:-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.892+0000 7f923a7fc700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f925410d0f0 0x7f925411bef0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.892+0000 7f923a7fc700 1 --2- 192.168.123.102:0/3324203767 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9254116ef0 0x7f9254117370 unknown :-1 s=CLOSED pgs=215 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.892+0000 7f923a7fc700 1 -- 192.168.123.102:0/3324203767 >> 192.168.123.102:0/3324203767 conn(0x7f925406ce20 msgr2=0x7f925410ae50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.893+0000 7f923a7fc700 1 -- 192.168.123.102:0/3324203767 shutdown_connections 2026-03-10T10:16:42.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.893+0000 7f923a7fc700 1 -- 192.168.123.102:0/3324203767 wait complete. 
2026-03-10T10:16:42.930 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.927+0000 7f0c28b3f700 1 -- 192.168.123.102:0/97735114 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0c2410d0f0 msgr2=0x7f0c2410d570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.927+0000 7f0c28b3f700 1 --2- 192.168.123.102:0/97735114 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0c2410d0f0 0x7f0c2410d570 secure :-1 s=READY pgs=216 cs=0 l=1 rev1=1 crypto rx=0x7f0c18009b00 tx=0x7f0c18009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:42.934 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.931+0000 7f0c28b3f700 1 -- 192.168.123.102:0/97735114 shutdown_connections 2026-03-10T10:16:42.934 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.931+0000 7f0c28b3f700 1 --2- 192.168.123.102:0/97735114 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0c2410d0f0 0x7f0c2410d570 unknown :-1 s=CLOSED pgs=216 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.934 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.931+0000 7f0c28b3f700 1 --2- 192.168.123.102:0/97735114 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c2410f340 0x7f0c2410f720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.934 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.931+0000 7f0c28b3f700 1 -- 192.168.123.102:0/97735114 >> 192.168.123.102:0/97735114 conn(0x7f0c2406ce20 msgr2=0x7f0c2406d230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:42.945 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.942+0000 7f0c28b3f700 1 -- 192.168.123.102:0/97735114 shutdown_connections 2026-03-10T10:16:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.947+0000 7f0c28b3f700 1 -- 192.168.123.102:0/97735114 wait complete. 
2026-03-10T10:16:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.947+0000 7f0c28b3f700 1 Processor -- start 2026-03-10T10:16:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.947+0000 7f0c28b3f700 1 -- start start 2026-03-10T10:16:42.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.948+0000 7f0c28b3f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0c2410d0f0 0x7f0c241180e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.948+0000 7f0c28b3f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c2410f340 0x7f0c24118620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.948+0000 7f0c28b3f700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0c2411c270 con 0x7f0c2410d0f0 2026-03-10T10:16:42.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.948+0000 7f0c28b3f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0c2411c3e0 con 0x7f0c2410f340 2026-03-10T10:16:42.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.950+0000 7f0c237fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0c2410d0f0 0x7f0c241180e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.951+0000 7f0c22ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c2410f340 0x7f0c24118620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:16:42.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.951+0000 7f0c22ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c2410f340 0x7f0c24118620 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38260/0 (socket says 192.168.123.102:38260) 2026-03-10T10:16:42.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.951+0000 7f0c22ffd700 1 -- 192.168.123.102:0/1557492134 learned_addr learned my addr 192.168.123.102:0/1557492134 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:42.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.952+0000 7f0c22ffd700 1 -- 192.168.123.102:0/1557492134 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0c2410d0f0 msgr2=0x7f0c241180e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:42.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.952+0000 7f0c22ffd700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0c2410d0f0 0x7f0c241180e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:42.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.952+0000 7f0c22ffd700 1 -- 192.168.123.102:0/1557492134 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0c180097e0 con 0x7f0c2410f340 2026-03-10T10:16:42.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.954+0000 7f0c22ffd700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c2410f340 0x7f0c24118620 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f0c18009ad0 tx=0x7f0c1800ba00 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:42.961 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.955+0000 7f0c20ff9700 1 -- 192.168.123.102:0/1557492134 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0c1801d070 con 0x7f0c2410f340 2026-03-10T10:16:42.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.955+0000 7f0c20ff9700 1 -- 192.168.123.102:0/1557492134 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f0c1800f460 con 0x7f0c2410f340 2026-03-10T10:16:42.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.955+0000 7f0c20ff9700 1 -- 192.168.123.102:0/1557492134 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0c18021620 con 0x7f0c2410f340 2026-03-10T10:16:42.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.955+0000 7f0c28b3f700 1 -- 192.168.123.102:0/1557492134 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0c24118c20 con 0x7f0c2410f340 2026-03-10T10:16:42.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.955+0000 7f0c28b3f700 1 -- 192.168.123.102:0/1557492134 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0c241b82e0 con 0x7f0c2410f340 2026-03-10T10:16:42.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.956+0000 7f0c28b3f700 1 -- 192.168.123.102:0/1557492134 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_get_version(what=osdmap handle=1) v1 -- 0x7f0c241107a0 con 0x7f0c2410f340 2026-03-10T10:16:42.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.960+0000 7f0c20ff9700 1 -- 192.168.123.102:0/1557492134 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f0c1802b430 con 0x7f0c2410f340 2026-03-10T10:16:42.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.960+0000 7f0c20ff9700 1 --2- 
192.168.123.102:0/1557492134 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0c0c06c600 0x7f0c0c06eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.962+0000 7f0c237fe700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0c0c06c600 0x7f0c0c06eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.964+0000 7f0c237fe700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0c0c06c600 0x7f0c0c06eac0 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f0c14009e50 tx=0x7f0c14009450 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:42.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.964+0000 7f0c20ff9700 1 -- 192.168.123.102:0/1557492134 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f0c1808d650 con 0x7f0c2410f340 2026-03-10T10:16:42.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.964+0000 7f0c20ff9700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979] conn(0x7f0c0c0721a0 0x7f0c0c0745c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:42.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.964+0000 7f0c20ff9700 1 -- 192.168.123.102:0/1557492134 --> [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7f0c0c074c70 con 0x7f0c0c0721a0 2026-03-10T10:16:42.969 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.964+0000 7f0c20ff9700 1 -- 192.168.123.102:0/1557492134 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_get_version_reply(handle=1 version=33) v2 ==== 24+0+0 (secure 0 0 0) 0x7f0c1808fc70 con 0x7f0c2410f340 2026-03-10T10:16:42.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.966+0000 7f0c23fff700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979] conn(0x7f0c0c0721a0 0x7f0c0c0745c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:42.988 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.983+0000 7f0c23fff700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979] conn(0x7f0c0c0721a0 0x7f0c0c0745c0 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.5 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:42.988 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:42.983+0000 7f0c20ff9700 1 -- 192.168.123.102:0/1557492134 <== osd.5 v2:192.168.123.105:6816/1475090979 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+25106 (crc 0 0 0) 0x7f0c0c074c70 con 0x7f0c0c0721a0 2026-03-10T10:16:43.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.013+0000 7f0c28b3f700 1 -- 192.168.123.102:0/1557492134 --> [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f0c2404f2e0 con 0x7f0c0c0721a0 2026-03-10T10:16:43.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.013+0000 7f0c20ff9700 1 -- 192.168.123.102:0/1557492134 <== osd.5 v2:192.168.123.105:6816/1475090979 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+12 (crc 0 0 0) 0x7f0c2404f2e0 con 0x7f0c0c0721a0 2026-03-10T10:16:43.032 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.027+0000 7f0c0a7fc700 1 -- 192.168.123.102:0/1557492134 >> [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979] conn(0x7f0c0c0721a0 msgr2=0x7f0c0c0745c0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.027+0000 7f0c0a7fc700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979] conn(0x7f0c0c0721a0 0x7f0c0c0745c0 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.027+0000 7f0c0a7fc700 1 -- 192.168.123.102:0/1557492134 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0c0c06c600 msgr2=0x7f0c0c06eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.027+0000 7f0c0a7fc700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0c0c06c600 0x7f0c0c06eac0 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f0c14009e50 tx=0x7f0c14009450 comp rx=0 tx=0).stop 2026-03-10T10:16:43.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.027+0000 7f0c0a7fc700 1 -- 192.168.123.102:0/1557492134 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c2410f340 msgr2=0x7f0c24118620 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.027+0000 7f0c0a7fc700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c2410f340 0x7f0c24118620 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f0c18009ad0 tx=0x7f0c1800ba00 comp rx=0 tx=0).stop 2026-03-10T10:16:43.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.033+0000 
7f0c0a7fc700 1 -- 192.168.123.102:0/1557492134 shutdown_connections 2026-03-10T10:16:43.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.033+0000 7f0c0a7fc700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.105:6816/1475090979,v1:192.168.123.105:6817/1475090979] conn(0x7f0c0c0721a0 0x7f0c0c0745c0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.033+0000 7f0c0a7fc700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0c2410d0f0 0x7f0c241180e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.033+0000 7f0c0a7fc700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0c0c06c600 0x7f0c0c06eac0 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.033+0000 7f0c0a7fc700 1 --2- 192.168.123.102:0/1557492134 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c2410f340 0x7f0c24118620 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.033+0000 7f0c0a7fc700 1 -- 192.168.123.102:0/1557492134 >> 192.168.123.102:0/1557492134 conn(0x7f0c2406ce20 msgr2=0x7f0c24109ce0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:43.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.034+0000 7f0c0a7fc700 1 -- 192.168.123.102:0/1557492134 shutdown_connections 2026-03-10T10:16:43.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.035+0000 7f0c0a7fc700 1 -- 192.168.123.102:0/1557492134 wait complete. 
2026-03-10T10:16:43.043 INFO:teuthology.orchestra.run.vm02.stdout:55834574859 2026-03-10T10:16:43.043 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd last-stat-seq osd.1 2026-03-10T10:16:43.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.057+0000 7f12f4d5f700 1 -- 192.168.123.102:0/1405578770 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f12f010f340 msgr2=0x7f12f010f720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.057+0000 7f12f4d5f700 1 --2- 192.168.123.102:0/1405578770 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f12f010f340 0x7f12f010f720 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f12e0009b00 tx=0x7f12e0009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:43.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.057+0000 7f12f4d5f700 1 -- 192.168.123.102:0/1405578770 shutdown_connections 2026-03-10T10:16:43.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.057+0000 7f12f4d5f700 1 --2- 192.168.123.102:0/1405578770 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12f010d0f0 0x7f12f010d570 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.057+0000 7f12f4d5f700 1 --2- 192.168.123.102:0/1405578770 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f12f010f340 0x7f12f010f720 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.057+0000 7f12f4d5f700 1 -- 192.168.123.102:0/1405578770 >> 192.168.123.102:0/1405578770 conn(0x7f12f006ce20 msgr2=0x7f12f006d230 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-10T10:16:43.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.062+0000 7f12f4d5f700 1 -- 192.168.123.102:0/1405578770 shutdown_connections 2026-03-10T10:16:43.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.062+0000 7f12f4d5f700 1 -- 192.168.123.102:0/1405578770 wait complete. 2026-03-10T10:16:43.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.062+0000 7f12f4d5f700 1 Processor -- start 2026-03-10T10:16:43.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.062+0000 7f12f4d5f700 1 -- start start 2026-03-10T10:16:43.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.065+0000 7f12f4d5f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12f010d0f0 0x7f12f011bff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:43.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.065+0000 7f12f4d5f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f12f0116fa0 0x7f12f0117420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:43.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.065+0000 7f12f4d5f700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f12f0117ab0 con 0x7f12f010d0f0 2026-03-10T10:16:43.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.065+0000 7f12f4d5f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f12f0117c20 con 0x7f12f0116fa0 2026-03-10T10:16:43.073 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.068+0000 7f12eeffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f12f0116fa0 0x7f12f0117420 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:43.073 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.068+0000 7f12eeffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f12f0116fa0 0x7f12f0117420 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38274/0 (socket says 192.168.123.102:38274) 2026-03-10T10:16:43.073 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.068+0000 7f12eeffd700 1 -- 192.168.123.102:0/4005353060 learned_addr learned my addr 192.168.123.102:0/4005353060 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:43.073 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.068+0000 7f12ef7fe700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12f010d0f0 0x7f12f011bff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:43.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.072+0000 7f12eeffd700 1 -- 192.168.123.102:0/4005353060 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12f010d0f0 msgr2=0x7f12f011bff0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.072+0000 7f12eeffd700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12f010d0f0 0x7f12f011bff0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.072+0000 7f12eeffd700 1 -- 192.168.123.102:0/4005353060 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f12e00097e0 con 0x7f12f0116fa0 2026-03-10T10:16:43.077 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.072+0000 7f12eeffd700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f12f0116fa0 0x7f12f0117420 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f12e400eb10 tx=0x7f12e400ee20 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:43.078 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.076+0000 7f12ecff9700 1 -- 192.168.123.102:0/4005353060 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f12e400cc40 con 0x7f12f0116fa0 2026-03-10T10:16:43.078 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.076+0000 7f12f4d5f700 1 -- 192.168.123.102:0/4005353060 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f12f01b8400 con 0x7f12f0116fa0 2026-03-10T10:16:43.078 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.076+0000 7f12f4d5f700 1 -- 192.168.123.102:0/4005353060 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f12f01b87b0 con 0x7f12f0116fa0 2026-03-10T10:16:43.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.076+0000 7f12ecff9700 1 -- 192.168.123.102:0/4005353060 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f12e400cda0 con 0x7f12f0116fa0 2026-03-10T10:16:43.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.076+0000 7f12ecff9700 1 -- 192.168.123.102:0/4005353060 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f12e4018800 con 0x7f12f0116fa0 2026-03-10T10:16:43.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.079+0000 7f12ecff9700 1 -- 192.168.123.102:0/4005353060 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f12e4018960 con 0x7f12f0116fa0 
2026-03-10T10:16:43.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.079+0000 7f12ecff9700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f12d806c600 0x7f12d806eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:43.085 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.081+0000 7f12ef7fe700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f12d806c600 0x7f12d806eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:43.086 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.081+0000 7f12ef7fe700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f12d806c600 0x7f12d806eac0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f12e0000c00 tx=0x7f12e0019040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:43.086 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.081+0000 7f12ecff9700 1 -- 192.168.123.102:0/4005353060 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f12e4014070 con 0x7f12f0116fa0 2026-03-10T10:16:43.086 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.081+0000 7f12d67fc700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333] conn(0x7f12dc001610 0x7f12dc003ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:43.086 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.082+0000 7f12d67fc700 1 -- 192.168.123.102:0/4005353060 --> [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 
-- 0x7f12dc006c00 con 0x7f12dc001610 2026-03-10T10:16:43.089 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.085+0000 7f12effff700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333] conn(0x7f12dc001610 0x7f12dc003ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:43.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.089+0000 7f12effff700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333] conn(0x7f12dc001610 0x7f12dc003ad0 crc :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.4 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:43.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.090+0000 7f12ecff9700 1 -- 192.168.123.102:0/4005353060 <== osd.4 v2:192.168.123.105:6808/4051935333 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+25106 (crc 0 0 0) 0x7f12dc006c00 con 0x7f12dc001610 2026-03-10T10:16:43.109 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.107+0000 7f148579e700 1 -- 192.168.123.102:0/1319789491 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1480107d90 msgr2=0x7f1480108170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.109 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.107+0000 7f148579e700 1 --2- 192.168.123.102:0/1319789491 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1480107d90 0x7f1480108170 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f1470009a60 tx=0x7f1470009d70 comp rx=0 tx=0).stop 2026-03-10T10:16:43.121 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.116+0000 7f148579e700 1 -- 192.168.123.102:0/1319789491 shutdown_connections 2026-03-10T10:16:43.121 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.117+0000 7f148579e700 1 --2- 192.168.123.102:0/1319789491 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f14801086b0 0x7f14801139a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.121 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.117+0000 7f148579e700 1 --2- 192.168.123.102:0/1319789491 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1480107d90 0x7f1480108170 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.121 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.117+0000 7f148579e700 1 -- 192.168.123.102:0/1319789491 >> 192.168.123.102:0/1319789491 conn(0x7f148006ce20 msgr2=0x7f148006d230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:43.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.119+0000 7f12d67fc700 1 -- 192.168.123.102:0/4005353060 --> [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f12dc005ce0 con 0x7f12dc001610 2026-03-10T10:16:43.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.121+0000 7f12ecff9700 1 -- 192.168.123.102:0/4005353060 <== osd.4 v2:192.168.123.105:6808/4051935333 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+12 (crc 0 0 0) 0x7f12dc005ce0 con 0x7f12dc001610 2026-03-10T10:16:43.126 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.124+0000 7f148579e700 1 -- 192.168.123.102:0/1319789491 shutdown_connections 2026-03-10T10:16:43.128 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.126+0000 7f148579e700 1 -- 192.168.123.102:0/1319789491 wait complete. 
2026-03-10T10:16:43.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.122+0000 7f12d67fc700 1 -- 192.168.123.102:0/4005353060 >> [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333] conn(0x7f12dc001610 msgr2=0x7f12dc003ad0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.122+0000 7f12d67fc700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333] conn(0x7f12dc001610 0x7f12dc003ad0 crc :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.127+0000 7f148579e700 1 Processor -- start 2026-03-10T10:16:43.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.127+0000 7f148579e700 1 -- start start 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.127+0000 7f148579e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1480107d90 0x7f14801a5600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.127+0000 7f148579e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f14801086b0 0x7f14801a5b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.127+0000 7f148579e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f14801a6220 con 0x7f1480107d90 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.127+0000 7f148579e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f14801a9fb0 con 0x7f14801086b0 2026-03-10T10:16:43.131 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.128+0000 7f147e7fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f14801086b0 0x7f14801a5b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.128+0000 7f147e7fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f14801086b0 0x7f14801a5b40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38306/0 (socket says 192.168.123.102:38306) 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.128+0000 7f147e7fc700 1 -- 192.168.123.102:0/3493830598 learned_addr learned my addr 192.168.123.102:0/3493830598 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.128+0000 7f147effd700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1480107d90 0x7f14801a5600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.129+0000 7f147e7fc700 1 -- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1480107d90 msgr2=0x7f14801a5600 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.129+0000 7f147e7fc700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1480107d90 0x7f14801a5600 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.131 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.129+0000 7f147e7fc700 1 -- 192.168.123.102:0/3493830598 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1470009710 con 0x7f14801086b0 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.129+0000 7f147e7fc700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f14801086b0 0x7f14801a5b40 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f147400ea30 tx=0x7f147400edf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.125+0000 7f12d67fc700 1 -- 192.168.123.102:0/4005353060 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f12d806c600 msgr2=0x7f12d806eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.125+0000 7f12d67fc700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f12d806c600 0x7f12d806eac0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f12e0000c00 tx=0x7f12e0019040 comp rx=0 tx=0).stop 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.125+0000 7f12d67fc700 1 -- 192.168.123.102:0/4005353060 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f12f0116fa0 msgr2=0x7f12f0117420 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.125+0000 7f12d67fc700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f12f0116fa0 0x7f12f0117420 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f12e400eb10 tx=0x7f12e400ee20 comp rx=0 tx=0).stop 2026-03-10T10:16:43.136 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.130+0000 7f1467fff700 1 -- 192.168.123.102:0/3493830598 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f147400cc40 con 0x7f14801086b0 2026-03-10T10:16:43.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.130+0000 7f148579e700 1 -- 192.168.123.102:0/3493830598 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f14801aa290 con 0x7f14801086b0 2026-03-10T10:16:43.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.130+0000 7f148579e700 1 -- 192.168.123.102:0/3493830598 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f14801aa7e0 con 0x7f14801086b0 2026-03-10T10:16:43.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.130+0000 7f1467fff700 1 -- 192.168.123.102:0/3493830598 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f147400cda0 con 0x7f14801086b0 2026-03-10T10:16:43.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.130+0000 7f1467fff700 1 -- 192.168.123.102:0/3493830598 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1474010430 con 0x7f14801086b0 2026-03-10T10:16:43.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.130+0000 7f12d67fc700 1 -- 192.168.123.102:0/4005353060 shutdown_connections 2026-03-10T10:16:43.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.130+0000 7f12d67fc700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.105:6808/4051935333,v1:192.168.123.105:6809/4051935333] conn(0x7f12dc001610 0x7f12dc003ad0 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.130+0000 7f12d67fc700 1 --2- 192.168.123.102:0/4005353060 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12f010d0f0 0x7f12f011bff0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.130+0000 7f12d67fc700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f12d806c600 0x7f12d806eac0 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.130+0000 7f12d67fc700 1 --2- 192.168.123.102:0/4005353060 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f12f0116fa0 0x7f12f0117420 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.130+0000 7f12d67fc700 1 -- 192.168.123.102:0/4005353060 >> 192.168.123.102:0/4005353060 conn(0x7f12f006ce20 msgr2=0x7f12f010ae50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:43.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.131+0000 7f12d67fc700 1 -- 192.168.123.102:0/4005353060 shutdown_connections 2026-03-10T10:16:43.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.131+0000 7f12d67fc700 1 -- 192.168.123.102:0/4005353060 wait complete. 
2026-03-10T10:16:43.143 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.140+0000 7f148579e700 1 -- 192.168.123.102:0/3493830598 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_get_version(what=osdmap handle=1) v1 -- 0x7f146c000ff0 con 0x7f14801086b0 2026-03-10T10:16:43.144 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.142+0000 7f1467fff700 1 -- 192.168.123.102:0/3493830598 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f14740105f0 con 0x7f14801086b0 2026-03-10T10:16:43.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.143+0000 7f1467fff700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f146806c600 0x7f146806eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:43.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.143+0000 7f1467fff700 1 -- 192.168.123.102:0/3493830598 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f1474014070 con 0x7f14801086b0 2026-03-10T10:16:43.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.143+0000 7f1467fff700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558] conn(0x7f14680721a0 0x7f14680745c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:43.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.143+0000 7f1467fff700 1 -- 192.168.123.102:0/3493830598 --> [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7f1468074c70 con 0x7f14680721a0 2026-03-10T10:16:43.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.144+0000 7f147f7fe700 1 --2- 192.168.123.102:0/3493830598 >> 
[v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558] conn(0x7f14680721a0 0x7f14680745c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:43.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.145+0000 7f147effd700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f146806c600 0x7f146806eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:43.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.146+0000 7f147f7fe700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558] conn(0x7f14680721a0 0x7f14680745c0 crc :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:43.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.146+0000 7f1467fff700 1 -- 192.168.123.102:0/3493830598 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_get_version_reply(handle=1 version=33) v2 ==== 24+0+0 (secure 0 0 0) 0x7f1474057540 con 0x7f14801086b0 2026-03-10T10:16:43.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.146+0000 7f147effd700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f146806c600 0x7f146806eac0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f147000b5c0 tx=0x7f14700058e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:43.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.148+0000 7f1467fff700 1 -- 192.168.123.102:0/3493830598 <== osd.0 v2:192.168.123.102:6802/2756332558 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+25106 (crc 0 0 0) 0x7f1468074c70 con 0x7f14680721a0 
2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.195+0000 7f148579e700 1 -- 192.168.123.102:0/3493830598 --> [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f146c002da0 con 0x7f14680721a0 2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.199+0000 7f1467fff700 1 -- 192.168.123.102:0/3493830598 <== osd.0 v2:192.168.123.102:6802/2756332558 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+11 (crc 0 0 0) 0x7f146c002da0 con 0x7f14680721a0 2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.199+0000 7f1465ffb700 1 -- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558] conn(0x7f14680721a0 msgr2=0x7f14680745c0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.199+0000 7f1465ffb700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558] conn(0x7f14680721a0 0x7f14680745c0 crc :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.200+0000 7f1465ffb700 1 -- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f146806c600 msgr2=0x7f146806eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.200+0000 7f1465ffb700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f146806c600 0x7f146806eac0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f147000b5c0 tx=0x7f14700058e0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.200+0000 7f1465ffb700 1 -- 
192.168.123.102:0/3493830598 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f14801086b0 msgr2=0x7f14801a5b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.200+0000 7f1465ffb700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f14801086b0 0x7f14801a5b40 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f147400ea30 tx=0x7f147400edf0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.200+0000 7f1465ffb700 1 -- 192.168.123.102:0/3493830598 shutdown_connections 2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.200+0000 7f1465ffb700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:6802/2756332558,v1:192.168.123.102:6803/2756332558] conn(0x7f14680721a0 0x7f14680745c0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.200+0000 7f1465ffb700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1480107d90 0x7f14801a5600 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.200+0000 7f1465ffb700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f146806c600 0x7f146806eac0 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.200+0000 7f1465ffb700 1 --2- 192.168.123.102:0/3493830598 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f14801086b0 0x7f14801a5b40 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:16:43.204 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.200+0000 7f1465ffb700 1 -- 192.168.123.102:0/3493830598 >> 192.168.123.102:0/3493830598 conn(0x7f148006ce20 msgr2=0x7f148010e1e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:43.209 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.202+0000 7f1465ffb700 1 -- 192.168.123.102:0/3493830598 shutdown_connections 2026-03-10T10:16:43.209 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.203+0000 7f1465ffb700 1 -- 192.168.123.102:0/3493830598 wait complete. 2026-03-10T10:16:43.250 INFO:teuthology.orchestra.run.vm02.stdout:137438953476 2026-03-10T10:16:43.250 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd last-stat-seq osd.5 2026-03-10T10:16:43.284 INFO:teuthology.orchestra.run.vm02.stdout:38654705676 2026-03-10T10:16:43.284 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd last-stat-seq osd.0 2026-03-10T10:16:43.296 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:43.343 INFO:teuthology.orchestra.run.vm02.stdout:120259084293 2026-03-10T10:16:43.343 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd last-stat-seq osd.4 2026-03-10T10:16:43.357 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:43.836 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:43.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.878+0000 
7f6500be1700 1 -- 192.168.123.102:0/857863437 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64fc068730 msgr2=0x7f64fc068b10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.881 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.878+0000 7f6500be1700 1 --2- 192.168.123.102:0/857863437 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64fc068730 0x7f64fc068b10 secure :-1 s=READY pgs=217 cs=0 l=1 rev1=1 crypto rx=0x7f64e4009b50 tx=0x7f64e4009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:43.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.880+0000 7f6500be1700 1 -- 192.168.123.102:0/857863437 shutdown_connections 2026-03-10T10:16:43.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.880+0000 7f6500be1700 1 --2- 192.168.123.102:0/857863437 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64fc0690e0 0x7f64fc105b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.880+0000 7f6500be1700 1 --2- 192.168.123.102:0/857863437 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64fc068730 0x7f64fc068b10 unknown :-1 s=CLOSED pgs=217 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.880+0000 7f6500be1700 1 -- 192.168.123.102:0/857863437 >> 192.168.123.102:0/857863437 conn(0x7f64fc075960 msgr2=0x7f64fc075d70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:43.883 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.880+0000 7f6500be1700 1 -- 192.168.123.102:0/857863437 shutdown_connections 2026-03-10T10:16:43.883 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.880+0000 7f6500be1700 1 -- 192.168.123.102:0/857863437 wait complete. 
2026-03-10T10:16:43.883 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.881+0000 7f6500be1700 1 Processor -- start 2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.885+0000 7f6500be1700 1 -- start start 2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.885+0000 7f6500be1700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64fc068730 0x7f64fc198eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.885+0000 7f6500be1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64fc0690e0 0x7f64fc1993f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.885+0000 7f6500be1700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f64fc199ad0 con 0x7f64fc068730 2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.885+0000 7f6500be1700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f64fc19d860 con 0x7f64fc0690e0 2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.885+0000 7f64fa59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64fc068730 0x7f64fc198eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.886+0000 7f64f9d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64fc0690e0 0x7f64fc1993f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.886+0000 7f64f9d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64fc0690e0 0x7f64fc1993f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38316/0 (socket says 192.168.123.102:38316) 2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.886+0000 7f64f9d9b700 1 -- 192.168.123.102:0/2534227542 learned_addr learned my addr 192.168.123.102:0/2534227542 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.887+0000 7f64f9d9b700 1 -- 192.168.123.102:0/2534227542 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64fc068730 msgr2=0x7f64fc198eb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.888+0000 7f64f9d9b700 1 --2- 192.168.123.102:0/2534227542 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64fc068730 0x7f64fc198eb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.888+0000 7f64f9d9b700 1 -- 192.168.123.102:0/2534227542 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f64e40097e0 con 0x7f64fc0690e0 2026-03-10T10:16:43.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.889+0000 7f64f9d9b700 1 --2- 192.168.123.102:0/2534227542 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64fc0690e0 0x7f64fc1993f0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f64ec00b700 tx=0x7f64ec00bac0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:43.890 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.889+0000 7f64f37fe700 1 -- 192.168.123.102:0/2534227542 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f64ec010820 con 0x7f64fc0690e0 2026-03-10T10:16:43.891 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.889+0000 7f64f37fe700 1 -- 192.168.123.102:0/2534227542 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f64ec010e60 con 0x7f64fc0690e0 2026-03-10T10:16:43.891 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.890+0000 7f64f37fe700 1 -- 192.168.123.102:0/2534227542 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f64ec017570 con 0x7f64fc0690e0 2026-03-10T10:16:43.896 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:43 vm02 ceph-mon[50200]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.891+0000 7f6500be1700 1 -- 192.168.123.102:0/2534227542 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f64fc19da60 con 0x7f64fc0690e0 2026-03-10T10:16:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.891+0000 7f6500be1700 1 -- 192.168.123.102:0/2534227542 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f64fc19df30 con 0x7f64fc0690e0 2026-03-10T10:16:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.892+0000 7f64fa59c700 1 --2- 192.168.123.102:0/2534227542 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64fc068730 0x7f64fc198eb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:16:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.894+0000 7f6500be1700 1 -- 192.168.123.102:0/2534227542 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f64fc04ea90 con 0x7f64fc0690e0 2026-03-10T10:16:43.900 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.897+0000 7f64f37fe700 1 -- 192.168.123.102:0/2534227542 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f64ec010980 con 0x7f64fc0690e0 2026-03-10T10:16:43.900 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.898+0000 7f64f37fe700 1 --2- 192.168.123.102:0/2534227542 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f64e806c330 0x7f64e806e7f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:43.900 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.898+0000 7f64f37fe700 1 -- 192.168.123.102:0/2534227542 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f64ec08b6e0 con 0x7f64fc0690e0 2026-03-10T10:16:43.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.898+0000 7f64fa59c700 1 --2- 192.168.123.102:0/2534227542 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f64e806c330 0x7f64e806e7f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:43.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.902+0000 7f64fa59c700 1 --2- 192.168.123.102:0/2534227542 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f64e806c330 0x7f64e806e7f0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f64e4005cd0 tx=0x7f64e4005c20 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:43.919 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:43.917+0000 7f64f37fe700 1 -- 192.168.123.102:0/2534227542 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f64ec056220 con 0x7f64fc0690e0 2026-03-10T10:16:44.029 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:44.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:43 vm05 ceph-mon[59051]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:44.047 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.043+0000 7f6500be1700 1 -- 192.168.123.102:0/2534227542 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) v1 -- 0x7f64fc066e80 con 0x7f64fc0690e0 2026-03-10T10:16:44.048 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.044+0000 7f64f37fe700 1 -- 192.168.123.102:0/2534227542 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) v1 ==== 74+0+12 (secure 0 0 0) 0x7f64ec059840 con 0x7f64fc0690e0 2026-03-10T10:16:44.048 INFO:teuthology.orchestra.run.vm02.stdout:73014444041 2026-03-10T10:16:44.051 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.048+0000 7f6500be1700 1 -- 192.168.123.102:0/2534227542 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f64e806c330 msgr2=0x7f64e806e7f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.051 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.048+0000 7f6500be1700 1 --2- 192.168.123.102:0/2534227542 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f64e806c330 0x7f64e806e7f0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f64e4005cd0 tx=0x7f64e4005c20 comp rx=0 tx=0).stop 2026-03-10T10:16:44.051 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.048+0000 7f6500be1700 1 -- 192.168.123.102:0/2534227542 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64fc0690e0 msgr2=0x7f64fc1993f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.051 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.048+0000 7f6500be1700 1 --2- 192.168.123.102:0/2534227542 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64fc0690e0 0x7f64fc1993f0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f64ec00b700 tx=0x7f64ec00bac0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.054+0000 7f6500be1700 1 -- 192.168.123.102:0/2534227542 shutdown_connections 2026-03-10T10:16:44.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.054+0000 7f6500be1700 1 --2- 192.168.123.102:0/2534227542 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64fc068730 0x7f64fc198eb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.054+0000 7f6500be1700 1 --2- 192.168.123.102:0/2534227542 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f64e806c330 0x7f64e806e7f0 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.054+0000 7f6500be1700 1 --2- 192.168.123.102:0/2534227542 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64fc0690e0 0x7f64fc1993f0 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.054+0000 7f6500be1700 1 -- 192.168.123.102:0/2534227542 >> 192.168.123.102:0/2534227542 conn(0x7f64fc075960 msgr2=0x7f64fc0feac0 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-10T10:16:44.063 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.059+0000 7f6500be1700 1 -- 192.168.123.102:0/2534227542 shutdown_connections 2026-03-10T10:16:44.065 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.059+0000 7f6500be1700 1 -- 192.168.123.102:0/2534227542 wait complete. 2026-03-10T10:16:44.145 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.143+0000 7f615aa34700 1 -- 192.168.123.102:0/4171779458 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f615410d310 msgr2=0x7f615410d6f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.145 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.143+0000 7f615aa34700 1 --2- 192.168.123.102:0/4171779458 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f615410d310 0x7f615410d6f0 secure :-1 s=READY pgs=218 cs=0 l=1 rev1=1 crypto rx=0x7f615000bc70 tx=0x7f615000bf80 comp rx=0 tx=0).stop 2026-03-10T10:16:44.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.143+0000 7f615aa34700 1 -- 192.168.123.102:0/4171779458 shutdown_connections 2026-03-10T10:16:44.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.143+0000 7f615aa34700 1 --2- 192.168.123.102:0/4171779458 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6154107d90 0x7f61541081f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.143+0000 7f615aa34700 1 --2- 192.168.123.102:0/4171779458 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f615410d310 0x7f615410d6f0 unknown :-1 s=CLOSED pgs=218 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.144+0000 7f615aa34700 1 -- 192.168.123.102:0/4171779458 >> 192.168.123.102:0/4171779458 conn(0x7f615406ce20 msgr2=0x7f615406d230 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:16:44.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.144+0000 7f615aa34700 1 -- 192.168.123.102:0/4171779458 shutdown_connections 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.144+0000 7f615aa34700 1 -- 192.168.123.102:0/4171779458 wait complete. 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.144+0000 7f615aa34700 1 Processor -- start 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.144+0000 7f615aa34700 1 -- start start 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.144+0000 7f615aa34700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6154107d90 0x7f615407ce50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.144+0000 7f615aa34700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f615407d390 0x7f615407d810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.144+0000 7f615aa34700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6154083e70 con 0x7f6154107d90 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.144+0000 7f615aa34700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6154081980 con 0x7f615407d390 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.146+0000 7f6159231700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f615407d390 0x7f615407d810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.146+0000 7f6159231700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f615407d390 0x7f615407d810 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38334/0 (socket says 192.168.123.102:38334) 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.146+0000 7f6159231700 1 -- 192.168.123.102:0/3148833375 learned_addr learned my addr 192.168.123.102:0/3148833375 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.146+0000 7f6159a32700 1 --2- 192.168.123.102:0/3148833375 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6154107d90 0x7f615407ce50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.146+0000 7f6159231700 1 -- 192.168.123.102:0/3148833375 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6154107d90 msgr2=0x7f615407ce50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.146+0000 7f6159231700 1 --2- 192.168.123.102:0/3148833375 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6154107d90 0x7f615407ce50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.146+0000 7f6159231700 1 -- 192.168.123.102:0/3148833375 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f615000b920 con 0x7f615407d390 2026-03-10T10:16:44.152 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.147+0000 7f6159231700 1 --2- 192.168.123.102:0/3148833375 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f615407d390 0x7f615407d810 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f614c00e9d0 tx=0x7f614c00ed90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.147+0000 7f614affd700 1 -- 192.168.123.102:0/3148833375 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f614c00c4f0 con 0x7f615407d390 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.147+0000 7f615aa34700 1 -- 192.168.123.102:0/3148833375 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6154081c60 con 0x7f615407d390 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.147+0000 7f615aa34700 1 -- 192.168.123.102:0/3148833375 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f61540821b0 con 0x7f615407d390 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.147+0000 7f614affd700 1 -- 192.168.123.102:0/3148833375 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f614c013070 con 0x7f615407d390 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.147+0000 7f614affd700 1 -- 192.168.123.102:0/3148833375 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f614c00faa0 con 0x7f615407d390 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.148+0000 7f615aa34700 1 -- 192.168.123.102:0/3148833375 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 
0x7f6138005320 con 0x7f615407d390 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.150+0000 7f614affd700 1 -- 192.168.123.102:0/3148833375 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f614c00fc00 con 0x7f615407d390 2026-03-10T10:16:44.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.151+0000 7f614affd700 1 --2- 192.168.123.102:0/3148833375 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f614006c530 0x7f614006e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.153 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.151+0000 7f6159a32700 1 --2- 192.168.123.102:0/3148833375 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f614006c530 0x7f614006e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:44.153 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.152+0000 7f6159a32700 1 --2- 192.168.123.102:0/3148833375 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f614006c530 0x7f614006e9f0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f615000bc70 tx=0x7f615000d370 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:44.153 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.152+0000 7f614affd700 1 -- 192.168.123.102:0/3148833375 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f614c015070 con 0x7f615407d390 2026-03-10T10:16:44.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.153+0000 7f614affd700 1 -- 192.168.123.102:0/3148833375 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f614c056ca0 con 0x7f615407d390 
2026-03-10T10:16:44.279 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:44.322 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444041 got 73014444041 for osd.2 2026-03-10T10:16:44.322 DEBUG:teuthology.parallel:result is None 2026-03-10T10:16:44.352 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:44.364 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.362+0000 7f615aa34700 1 -- 192.168.123.102:0/3148833375 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 3} v 0) v1 -- 0x7f6138005190 con 0x7f615407d390 2026-03-10T10:16:44.367 INFO:teuthology.orchestra.run.vm02.stdout:98784247815 2026-03-10T10:16:44.367 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.362+0000 7f614affd700 1 -- 192.168.123.102:0/3148833375 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 3}]=0 v0) v1 ==== 74+0+12 (secure 0 0 0) 0x7f614c05a2c0 con 0x7f615407d390 2026-03-10T10:16:44.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.368+0000 7f615aa34700 1 -- 192.168.123.102:0/3148833375 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f614006c530 msgr2=0x7f614006e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.368+0000 7f615aa34700 1 --2- 192.168.123.102:0/3148833375 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f614006c530 0x7f614006e9f0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f615000bc70 tx=0x7f615000d370 comp rx=0 tx=0).stop 2026-03-10T10:16:44.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.368+0000 7f615aa34700 1 -- 192.168.123.102:0/3148833375 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f615407d390 msgr2=0x7f615407d810 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.368+0000 7f615aa34700 1 --2- 192.168.123.102:0/3148833375 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f615407d390 0x7f615407d810 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f614c00e9d0 tx=0x7f614c00ed90 comp rx=0 tx=0).stop 2026-03-10T10:16:44.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.368+0000 7f615aa34700 1 -- 192.168.123.102:0/3148833375 shutdown_connections 2026-03-10T10:16:44.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.368+0000 7f615aa34700 1 --2- 192.168.123.102:0/3148833375 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6154107d90 0x7f615407ce50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.368+0000 7f615aa34700 1 --2- 192.168.123.102:0/3148833375 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f614006c530 0x7f614006e9f0 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.368+0000 7f615aa34700 1 --2- 192.168.123.102:0/3148833375 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f615407d390 0x7f615407d810 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.368+0000 7f615aa34700 1 -- 192.168.123.102:0/3148833375 >> 192.168.123.102:0/3148833375 conn(0x7f615406ce20 msgr2=0x7f6154071e20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:44.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.368+0000 7f615aa34700 1 -- 
192.168.123.102:0/3148833375 shutdown_connections 2026-03-10T10:16:44.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.368+0000 7f615aa34700 1 -- 192.168.123.102:0/3148833375 wait complete. 2026-03-10T10:16:44.481 INFO:tasks.cephadm.ceph_manager.ceph:need seq 98784247815 got 98784247815 for osd.3 2026-03-10T10:16:44.481 DEBUG:teuthology.parallel:result is None 2026-03-10T10:16:44.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.726+0000 7ff51c706700 1 -- 192.168.123.102:0/3403382048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514103cf0 msgr2=0x7ff514107d40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.727+0000 7ff51c706700 1 --2- 192.168.123.102:0/3403382048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514103cf0 0x7ff514107d40 secure :-1 s=READY pgs=219 cs=0 l=1 rev1=1 crypto rx=0x7ff504009b00 tx=0x7ff504009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:44.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.732+0000 7ff51c706700 1 -- 192.168.123.102:0/3403382048 shutdown_connections 2026-03-10T10:16:44.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.732+0000 7ff51c706700 1 --2- 192.168.123.102:0/3403382048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514103cf0 0x7ff514107d40 unknown :-1 s=CLOSED pgs=219 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.732+0000 7ff51c706700 1 --2- 192.168.123.102:0/3403382048 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff514103340 0x7ff514103720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.732+0000 7ff51c706700 1 -- 192.168.123.102:0/3403382048 >> 192.168.123.102:0/3403382048 
conn(0x7ff5140feb90 msgr2=0x7ff514100fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:44.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.736+0000 7ff51c706700 1 -- 192.168.123.102:0/3403382048 shutdown_connections 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.739+0000 7ff51c706700 1 -- 192.168.123.102:0/3403382048 wait complete. 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.739+0000 7ff51c706700 1 Processor -- start 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.739+0000 7ff51c706700 1 -- start start 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.739+0000 7ff51c706700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff514103340 0x7ff514072a50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.739+0000 7ff51c706700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514103cf0 0x7ff5140752a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.739+0000 7ff51c706700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff5140793a0 con 0x7ff514103cf0 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.739+0000 7ff51c706700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff5140757e0 con 0x7ff514103340 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.739+0000 7ff519ca1700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514103cf0 0x7ff5140752a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.740+0000 7ff519ca1700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514103cf0 0x7ff5140752a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:45140/0 (socket says 192.168.123.102:45140) 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.740+0000 7ff519ca1700 1 -- 192.168.123.102:0/3575594793 learned_addr learned my addr 192.168.123.102:0/3575594793 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.740+0000 7ff519ca1700 1 -- 192.168.123.102:0/3575594793 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff514103340 msgr2=0x7ff514072a50 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.740+0000 7ff519ca1700 1 --2- 192.168.123.102:0/3575594793 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff514103340 0x7ff514072a50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.740+0000 7ff519ca1700 1 -- 192.168.123.102:0/3575594793 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff5040097e0 con 0x7ff514103cf0 2026-03-10T10:16:44.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.740+0000 7ff519ca1700 1 --2- 192.168.123.102:0/3575594793 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514103cf0 0x7ff5140752a0 secure :-1 s=READY pgs=220 cs=0 l=1 rev1=1 crypto rx=0x7ff504005850 tx=0x7ff50400f7f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-10T10:16:44.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.741+0000 7ff50b7fe700 1 -- 192.168.123.102:0/3575594793 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff50401d070 con 0x7ff514103cf0 2026-03-10T10:16:44.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.741+0000 7ff50b7fe700 1 -- 192.168.123.102:0/3575594793 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff50400fe80 con 0x7ff514103cf0 2026-03-10T10:16:44.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.741+0000 7ff50b7fe700 1 -- 192.168.123.102:0/3575594793 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff504017a20 con 0x7ff514103cf0 2026-03-10T10:16:44.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.742+0000 7ff51c706700 1 -- 192.168.123.102:0/3575594793 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff514075980 con 0x7ff514103cf0 2026-03-10T10:16:44.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.742+0000 7ff51c706700 1 -- 192.168.123.102:0/3575594793 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff514075e40 con 0x7ff514103cf0 2026-03-10T10:16:44.751 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.745+0000 7ff50b7fe700 1 -- 192.168.123.102:0/3575594793 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7ff504017b80 con 0x7ff514103cf0 2026-03-10T10:16:44.767 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.747+0000 7ff50b7fe700 1 --2- 192.168.123.102:0/3575594793 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff50006c600 0x7ff50006eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.767 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.747+0000 7ff51a4a2700 1 --2- 192.168.123.102:0/3575594793 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff50006c600 0x7ff50006eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:44.767 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.747+0000 7ff50b7fe700 1 -- 192.168.123.102:0/3575594793 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7ff50408cb80 con 0x7ff514103cf0 2026-03-10T10:16:44.767 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.748+0000 7ff5097fa700 1 -- 192.168.123.102:0/3575594793 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff514076120 con 0x7ff514103cf0 2026-03-10T10:16:44.767 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.751+0000 7ff51a4a2700 1 --2- 192.168.123.102:0/3575594793 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff50006c600 0x7ff50006eac0 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7ff510009710 tx=0x7ff510006c60 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:44.771 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.769+0000 7ff50b7fe700 1 -- 192.168.123.102:0/3575594793 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7ff5040578e0 con 0x7ff514103cf0 2026-03-10T10:16:44.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.776+0000 7f3b8ba28700 1 -- 192.168.123.102:0/591092036 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3b84107ff0 msgr2=0x7f3b841083d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.778 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.776+0000 7f3b8ba28700 1 --2- 192.168.123.102:0/591092036 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3b84107ff0 0x7f3b841083d0 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f3b80007780 tx=0x7f3b8000c050 comp rx=0 tx=0).stop 2026-03-10T10:16:44.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.777+0000 7f3b8ba28700 1 -- 192.168.123.102:0/591092036 shutdown_connections 2026-03-10T10:16:44.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.777+0000 7f3b8ba28700 1 --2- 192.168.123.102:0/591092036 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3b841089a0 0x7f3b8410be70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.777+0000 7f3b8ba28700 1 --2- 192.168.123.102:0/591092036 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3b84107ff0 0x7f3b841083d0 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.777+0000 7f3b8ba28700 1 -- 192.168.123.102:0/591092036 >> 192.168.123.102:0/591092036 conn(0x7f3b8406ce20 msgr2=0x7f3b8406d230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:44.780 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.777+0000 7f3b8ba28700 1 -- 192.168.123.102:0/591092036 shutdown_connections 2026-03-10T10:16:44.780 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.777+0000 7f3b8ba28700 1 -- 192.168.123.102:0/591092036 wait complete. 
2026-03-10T10:16:44.780 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.777+0000 7f3b8ba28700 1 Processor -- start 2026-03-10T10:16:44.780 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.777+0000 7f3b8ba28700 1 -- start start 2026-03-10T10:16:44.780 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.777+0000 7f3b8ba28700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3b841089a0 0x7f3b8407cf50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.780 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.778+0000 7f3b8ba28700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3b8407d490 0x7f3b8407d910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.780 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.778+0000 7f3b8ba28700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3b84081ad0 con 0x7f3b8407d490 2026-03-10T10:16:44.780 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.778+0000 7f3b8ba28700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3b84081c40 con 0x7f3b841089a0 2026-03-10T10:16:44.780 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.778+0000 7f3b897c4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3b841089a0 0x7f3b8407cf50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:44.781 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.778+0000 7f3b897c4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3b841089a0 0x7f3b8407cf50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:38374/0 (socket says 192.168.123.102:38374) 2026-03-10T10:16:44.781 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.778+0000 7f3b897c4700 1 -- 192.168.123.102:0/1096533199 learned_addr learned my addr 192.168.123.102:0/1096533199 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:44.781 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.778+0000 7f3b88fc3700 1 --2- 192.168.123.102:0/1096533199 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3b8407d490 0x7f3b8407d910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:44.781 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.779+0000 7f3b897c4700 1 -- 192.168.123.102:0/1096533199 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3b8407d490 msgr2=0x7f3b8407d910 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.783 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.779+0000 7f3b897c4700 1 --2- 192.168.123.102:0/1096533199 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3b8407d490 0x7f3b8407d910 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.783 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.779+0000 7f3b897c4700 1 -- 192.168.123.102:0/1096533199 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3b80007430 con 0x7f3b841089a0 2026-03-10T10:16:44.783 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.780+0000 7f3b897c4700 1 --2- 192.168.123.102:0/1096533199 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3b841089a0 0x7f3b8407cf50 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f3b80000c00 tx=0x7f3b80015870 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:16:44.783 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.781+0000 7f3b7a7fc700 1 -- 192.168.123.102:0/1096533199 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3b8000f040 con 0x7f3b841089a0 2026-03-10T10:16:44.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.781+0000 7f3b8ba28700 1 -- 192.168.123.102:0/1096533199 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3b84081ec0 con 0x7f3b841089a0 2026-03-10T10:16:44.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.781+0000 7f3b8ba28700 1 -- 192.168.123.102:0/1096533199 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3b84082410 con 0x7f3b841089a0 2026-03-10T10:16:44.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.783+0000 7f3b7a7fc700 1 -- 192.168.123.102:0/1096533199 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3b80007e70 con 0x7f3b841089a0 2026-03-10T10:16:44.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.783+0000 7f3b7a7fc700 1 -- 192.168.123.102:0/1096533199 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3b8000aa40 con 0x7f3b841089a0 2026-03-10T10:16:44.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.783+0000 7f3b8ba28700 1 -- 192.168.123.102:0/1096533199 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3b68005320 con 0x7f3b841089a0 2026-03-10T10:16:44.800 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.797+0000 7f3b7a7fc700 1 -- 192.168.123.102:0/1096533199 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f3b8000a1d0 con 0x7f3b841089a0 2026-03-10T10:16:44.801 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.797+0000 
7f3b7a7fc700 1 --2- 192.168.123.102:0/1096533199 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3b7006c600 0x7f3b7006eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.801 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.797+0000 7f3b7a7fc700 1 -- 192.168.123.102:0/1096533199 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f3b8008e830 con 0x7f3b841089a0 2026-03-10T10:16:44.801 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.797+0000 7f3b7a7fc700 1 -- 192.168.123.102:0/1096533199 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f3b800bb180 con 0x7f3b841089a0 2026-03-10T10:16:44.801 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.798+0000 7f3b88fc3700 1 --2- 192.168.123.102:0/1096533199 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3b7006c600 0x7f3b7006eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:44.801 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.798+0000 7f3b88fc3700 1 --2- 192.168.123.102:0/1096533199 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3b7006c600 0x7f3b7006eac0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f3b8406a180 tx=0x7f3b7c00d040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:44.898 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.896+0000 7f16da2c0700 1 -- 192.168.123.102:0/39417271 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f16d410d0f0 msgr2=0x7f16d410d570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.898 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.896+0000 
7f16da2c0700 1 --2- 192.168.123.102:0/39417271 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f16d410d0f0 0x7f16d410d570 secure :-1 s=READY pgs=221 cs=0 l=1 rev1=1 crypto rx=0x7f16c4009b00 tx=0x7f16c4009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:44.900 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.898+0000 7f16da2c0700 1 -- 192.168.123.102:0/39417271 shutdown_connections 2026-03-10T10:16:44.900 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.898+0000 7f16da2c0700 1 --2- 192.168.123.102:0/39417271 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f16d410d0f0 0x7f16d410d570 unknown :-1 s=CLOSED pgs=221 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.900 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.898+0000 7f16da2c0700 1 --2- 192.168.123.102:0/39417271 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f16d410f340 0x7f16d410f720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.900 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.898+0000 7f16da2c0700 1 -- 192.168.123.102:0/39417271 >> 192.168.123.102:0/39417271 conn(0x7f16d406ce20 msgr2=0x7f16d406d230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:44.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.902+0000 7f16da2c0700 1 -- 192.168.123.102:0/39417271 shutdown_connections 2026-03-10T10:16:44.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.902+0000 7f16da2c0700 1 -- 192.168.123.102:0/39417271 wait complete. 
2026-03-10T10:16:44.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.902+0000 7f16da2c0700 1 Processor -- start 2026-03-10T10:16:44.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.902+0000 7f16da2c0700 1 -- start start 2026-03-10T10:16:44.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.903+0000 7f16da2c0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f16d410d0f0 0x7f16d41180b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.903+0000 7f16da2c0700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f16d410f340 0x7f16d41185f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.903+0000 7f16da2c0700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f16d411c240 con 0x7f16d410f340 2026-03-10T10:16:44.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.903+0000 7f16da2c0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f16d411c3b0 con 0x7f16d410d0f0 2026-03-10T10:16:44.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.903+0000 7f16d8abd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f16d410f340 0x7f16d41185f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:44.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.903+0000 7f16d92be700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f16d410d0f0 0x7f16d41180b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:16:44.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.903+0000 7f16d8abd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f16d410f340 0x7f16d41185f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:45168/0 (socket says 192.168.123.102:45168) 2026-03-10T10:16:44.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.903+0000 7f16d8abd700 1 -- 192.168.123.102:0/3245840666 learned_addr learned my addr 192.168.123.102:0/3245840666 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:44.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.904+0000 7f16d8abd700 1 -- 192.168.123.102:0/3245840666 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f16d410d0f0 msgr2=0x7f16d41180b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.904+0000 7f16d8abd700 1 --2- 192.168.123.102:0/3245840666 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f16d410d0f0 0x7f16d41180b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.904+0000 7f16d8abd700 1 -- 192.168.123.102:0/3245840666 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f16c40097e0 con 0x7f16d410f340 2026-03-10T10:16:44.906 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.904+0000 7f16d8abd700 1 --2- 192.168.123.102:0/3245840666 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f16d410f340 0x7f16d41185f0 secure :-1 s=READY pgs=222 cs=0 l=1 rev1=1 crypto rx=0x7f16c4000c00 tx=0x7f16c400ba00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:44.906 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.904+0000 7f16ca7fc700 1 -- 192.168.123.102:0/3245840666 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f16c401d070 con 0x7f16d410f340 2026-03-10T10:16:44.906 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.904+0000 7f16ca7fc700 1 -- 192.168.123.102:0/3245840666 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f16c400f460 con 0x7f16d410f340 2026-03-10T10:16:44.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.905+0000 7f16ca7fc700 1 -- 192.168.123.102:0/3245840666 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f16c4021620 con 0x7f16d410f340 2026-03-10T10:16:44.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.909+0000 7f16da2c0700 1 -- 192.168.123.102:0/3245840666 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f16d4118b90 con 0x7f16d410f340 2026-03-10T10:16:44.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.909+0000 7f16da2c0700 1 -- 192.168.123.102:0/3245840666 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f16d41b82e0 con 0x7f16d410f340 2026-03-10T10:16:44.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.910+0000 7f16ca7fc700 1 -- 192.168.123.102:0/3245840666 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f16c4003a40 con 0x7f16d410f340 2026-03-10T10:16:44.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.911+0000 7f16ca7fc700 1 --2- 192.168.123.102:0/3245840666 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f16c006c380 0x7f16c006e840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.911+0000 
7f16d92be700 1 --2- 192.168.123.102:0/3245840666 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f16c006c380 0x7f16c006e840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:44.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.911+0000 7f16ca7fc700 1 -- 192.168.123.102:0/3245840666 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f16c408c0c0 con 0x7f16d410f340 2026-03-10T10:16:44.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.911+0000 7f16d92be700 1 --2- 192.168.123.102:0/3245840666 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f16c006c380 0x7f16c006e840 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f16d0009e50 tx=0x7f16d0009450 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:44.916 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.912+0000 7f16da2c0700 1 -- 192.168.123.102:0/3245840666 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f16d4112620 con 0x7f16d410f340 2026-03-10T10:16:44.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.915+0000 7f16ca7fc700 1 -- 192.168.123.102:0/3245840666 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f16c4056d00 con 0x7f16d410f340 2026-03-10T10:16:44.965 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:44 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/2534227542' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T10:16:44.965 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:44 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/3148833375' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T10:16:44.965 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:44 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:44.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.962+0000 7f35b65e7700 1 -- 192.168.123.102:0/3144826483 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f35b0103cd0 msgr2=0x7f35b0107d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.962+0000 7f35b65e7700 1 --2- 192.168.123.102:0/3144826483 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f35b0103cd0 0x7f35b0107d20 secure :-1 s=READY pgs=223 cs=0 l=1 rev1=1 crypto rx=0x7f3598009b00 tx=0x7f3598009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:44.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.970+0000 7f35b65e7700 1 -- 192.168.123.102:0/3144826483 shutdown_connections 2026-03-10T10:16:44.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.970+0000 7f35b65e7700 1 --2- 192.168.123.102:0/3144826483 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f35b0103cd0 0x7f35b0107d20 unknown :-1 s=CLOSED pgs=223 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.970+0000 7f35b65e7700 1 --2- 192.168.123.102:0/3144826483 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f35b0103320 0x7f35b0103700 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.970+0000 7f35b65e7700 1 -- 192.168.123.102:0/3144826483 >> 192.168.123.102:0/3144826483 conn(0x7f35b00feb90 msgr2=0x7f35b0100fb0 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:44.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.972+0000 7f35b65e7700 1 -- 192.168.123.102:0/3144826483 shutdown_connections 2026-03-10T10:16:44.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.972+0000 7f35b65e7700 1 -- 192.168.123.102:0/3144826483 wait complete. 2026-03-10T10:16:44.974 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.972+0000 7f35b65e7700 1 Processor -- start 2026-03-10T10:16:44.976 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.975+0000 7f35b65e7700 1 -- start start 2026-03-10T10:16:44.976 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.975+0000 7f35b65e7700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f35b0103320 0x7f35b0198d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.978 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.977+0000 7f35affff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f35b0103320 0x7f35b0198d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:44.978 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.977+0000 7f35affff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f35b0103320 0x7f35b0198d50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:45180/0 (socket says 192.168.123.102:45180) 2026-03-10T10:16:44.978 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.977+0000 7f35b65e7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f35b0103cd0 0x7f35b0199290 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.979 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.977+0000 7f35b65e7700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f35b0199970 con 0x7f35b0103320 2026-03-10T10:16:44.979 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.977+0000 7f35b65e7700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f35b019d700 con 0x7f35b0103cd0 2026-03-10T10:16:44.979 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.977+0000 7f35affff700 1 -- 192.168.123.102:0/3485429625 learned_addr learned my addr 192.168.123.102:0/3485429625 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:44.979 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.977+0000 7f35af7fe700 1 --2- 192.168.123.102:0/3485429625 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f35b0103cd0 0x7f35b0199290 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:44.980 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.978+0000 7f35af7fe700 1 -- 192.168.123.102:0/3485429625 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f35b0103320 msgr2=0x7f35b0198d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:44.980 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.978+0000 7f35af7fe700 1 --2- 192.168.123.102:0/3485429625 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f35b0103320 0x7f35b0198d50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:44.980 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.978+0000 7f35af7fe700 1 -- 192.168.123.102:0/3485429625 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f35980097e0 con 0x7f35b0103cd0 2026-03-10T10:16:44.980 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.978+0000 7f35af7fe700 1 --2- 192.168.123.102:0/3485429625 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f35b0103cd0 0x7f35b0199290 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f3598009fd0 tx=0x7f3598004970 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:44.980 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.978+0000 7f35ad7fa700 1 -- 192.168.123.102:0/3485429625 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f359801d070 con 0x7f35b0103cd0 2026-03-10T10:16:44.980 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.978+0000 7f35ad7fa700 1 -- 192.168.123.102:0/3485429625 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3598004b90 con 0x7f35b0103cd0 2026-03-10T10:16:44.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.979+0000 7f35ad7fa700 1 -- 192.168.123.102:0/3485429625 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3598021740 con 0x7f35b0103cd0 2026-03-10T10:16:44.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.979+0000 7f35b65e7700 1 -- 192.168.123.102:0/3485429625 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f35b019d980 con 0x7f35b0103cd0 2026-03-10T10:16:44.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.983+0000 7f35b65e7700 1 -- 192.168.123.102:0/3485429625 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f35b019ddf0 con 0x7f35b0103cd0 2026-03-10T10:16:44.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.984+0000 7f35b65e7700 1 -- 192.168.123.102:0/3485429625 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 
0x7f35b010b630 con 0x7f35b0103cd0 2026-03-10T10:16:44.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.985+0000 7f35ad7fa700 1 -- 192.168.123.102:0/3485429625 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f359800bc50 con 0x7f35b0103cd0 2026-03-10T10:16:44.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.985+0000 7f35ad7fa700 1 --2- 192.168.123.102:0/3485429625 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f359c06c2e0 0x7f359c06e7a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:44.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.985+0000 7f35ad7fa700 1 -- 192.168.123.102:0/3485429625 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f359808cee0 con 0x7f35b0103cd0 2026-03-10T10:16:44.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.986+0000 7f35affff700 1 --2- 192.168.123.102:0/3485429625 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f359c06c2e0 0x7f359c06e7a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:44.989 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.987+0000 7f35ad7fa700 1 -- 192.168.123.102:0/3485429625 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f3598057b20 con 0x7f35b0103cd0 2026-03-10T10:16:44.989 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:44.988+0000 7f35affff700 1 --2- 192.168.123.102:0/3485429625 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f359c06c2e0 0x7f359c06e7a0 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f35a0005950 tx=0x7f35a0004080 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:16:45.087 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.084+0000 7ff5097fa700 1 -- 192.168.123.102:0/3575594793 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 5} v 0) v1 -- 0x7ff514066e80 con 0x7ff514103cf0 2026-03-10T10:16:45.088 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.087+0000 7ff50b7fe700 1 -- 192.168.123.102:0/3575594793 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 5}]=0 v0) v1 ==== 74+0+13 (secure 0 0 0) 0x7ff504027020 con 0x7ff514103cf0 2026-03-10T10:16:45.088 INFO:teuthology.orchestra.run.vm02.stdout:137438953476 2026-03-10T10:16:45.092 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.091+0000 7ff51c706700 1 -- 192.168.123.102:0/3575594793 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff50006c600 msgr2=0x7ff50006eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:45.092 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.091+0000 7ff51c706700 1 --2- 192.168.123.102:0/3575594793 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff50006c600 0x7ff50006eac0 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7ff510009710 tx=0x7ff510006c60 comp rx=0 tx=0).stop 2026-03-10T10:16:45.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.091+0000 7ff51c706700 1 -- 192.168.123.102:0/3575594793 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514103cf0 msgr2=0x7ff5140752a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:45.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.091+0000 7ff51c706700 1 --2- 192.168.123.102:0/3575594793 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514103cf0 0x7ff5140752a0 secure :-1 s=READY pgs=220 cs=0 l=1 rev1=1 crypto rx=0x7ff504005850 tx=0x7ff50400f7f0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.093 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.092+0000 7ff51c706700 1 -- 192.168.123.102:0/3575594793 shutdown_connections 2026-03-10T10:16:45.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.092+0000 7ff51c706700 1 --2- 192.168.123.102:0/3575594793 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff50006c600 0x7ff50006eac0 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.094 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.092+0000 7ff51c706700 1 --2- 192.168.123.102:0/3575594793 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff514103340 0x7ff514072a50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.094 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.092+0000 7ff51c706700 1 --2- 192.168.123.102:0/3575594793 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff514103cf0 0x7ff5140752a0 unknown :-1 s=CLOSED pgs=220 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.094 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.092+0000 7ff51c706700 1 -- 192.168.123.102:0/3575594793 >> 192.168.123.102:0/3575594793 conn(0x7ff5140feb90 msgr2=0x7ff5141003b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:45.094 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.093+0000 7ff51c706700 1 -- 192.168.123.102:0/3575594793 shutdown_connections 2026-03-10T10:16:45.094 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.093+0000 7ff51c706700 1 -- 192.168.123.102:0/3575594793 wait complete. 
2026-03-10T10:16:45.113 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.112+0000 7f3b8ba28700 1 -- 192.168.123.102:0/1096533199 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) v1 -- 0x7f3b68005190 con 0x7f3b841089a0 2026-03-10T10:16:45.117 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.115+0000 7f3b7a7fc700 1 -- 192.168.123.102:0/1096533199 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) v1 ==== 74+0+12 (secure 0 0 0) 0x7f3b80059470 con 0x7f3b841089a0 2026-03-10T10:16:45.117 INFO:teuthology.orchestra.run.vm02.stdout:55834574859 2026-03-10T10:16:45.126 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.124+0000 7f16da2c0700 1 -- 192.168.123.102:0/3245840666 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 0} v 0) v1 -- 0x7f16d404ea90 con 0x7f16d410f340 2026-03-10T10:16:45.126 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.124+0000 7f16ca7fc700 1 -- 192.168.123.102:0/3245840666 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 0}]=0 v0) v1 ==== 74+0+12 (secure 0 0 0) 0x7f16c4026020 con 0x7f16d410f340 2026-03-10T10:16:45.126 INFO:teuthology.orchestra.run.vm02.stdout:38654705677 2026-03-10T10:16:45.128 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.126+0000 7f3b6ffff700 1 -- 192.168.123.102:0/1096533199 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3b7006c600 msgr2=0x7f3b7006eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:45.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.128+0000 7f3b6ffff700 1 --2- 192.168.123.102:0/1096533199 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3b7006c600 0x7f3b7006eac0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f3b8406a180 
tx=0x7f3b7c00d040 comp rx=0 tx=0).stop 2026-03-10T10:16:45.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.128+0000 7f3b6ffff700 1 -- 192.168.123.102:0/1096533199 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3b841089a0 msgr2=0x7f3b8407cf50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:45.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.128+0000 7f3b6ffff700 1 --2- 192.168.123.102:0/1096533199 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3b841089a0 0x7f3b8407cf50 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f3b80000c00 tx=0x7f3b80015870 comp rx=0 tx=0).stop 2026-03-10T10:16:45.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.128+0000 7f3b6ffff700 1 -- 192.168.123.102:0/1096533199 shutdown_connections 2026-03-10T10:16:45.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.128+0000 7f3b6ffff700 1 --2- 192.168.123.102:0/1096533199 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3b7006c600 0x7f3b7006eac0 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.128+0000 7f3b6ffff700 1 --2- 192.168.123.102:0/1096533199 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3b841089a0 0x7f3b8407cf50 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.129+0000 7f3b6ffff700 1 --2- 192.168.123.102:0/1096533199 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3b8407d490 0x7f3b8407d910 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.129+0000 7f3b6ffff700 1 -- 192.168.123.102:0/1096533199 >> 192.168.123.102:0/1096533199 conn(0x7f3b8406ce20 
msgr2=0x7f3b840705d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:45.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.129+0000 7f16bffff700 1 -- 192.168.123.102:0/3245840666 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f16c006c380 msgr2=0x7f16c006e840 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:45.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.129+0000 7f16bffff700 1 --2- 192.168.123.102:0/3245840666 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f16c006c380 0x7f16c006e840 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f16d0009e50 tx=0x7f16d0009450 comp rx=0 tx=0).stop 2026-03-10T10:16:45.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.129+0000 7f16bffff700 1 -- 192.168.123.102:0/3245840666 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f16d410f340 msgr2=0x7f16d41185f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:45.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.129+0000 7f16bffff700 1 --2- 192.168.123.102:0/3245840666 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f16d410f340 0x7f16d41185f0 secure :-1 s=READY pgs=222 cs=0 l=1 rev1=1 crypto rx=0x7f16c4000c00 tx=0x7f16c400ba00 comp rx=0 tx=0).stop 2026-03-10T10:16:45.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.129+0000 7f3b6ffff700 1 -- 192.168.123.102:0/1096533199 shutdown_connections 2026-03-10T10:16:45.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.129+0000 7f16bffff700 1 -- 192.168.123.102:0/3245840666 shutdown_connections 2026-03-10T10:16:45.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.129+0000 7f16bffff700 1 --2- 192.168.123.102:0/3245840666 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f16c006c380 0x7f16c006e840 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:16:45.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.129+0000 7f16bffff700 1 --2- 192.168.123.102:0/3245840666 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f16d410d0f0 0x7f16d41180b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.129+0000 7f16bffff700 1 --2- 192.168.123.102:0/3245840666 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f16d410f340 0x7f16d41185f0 unknown :-1 s=CLOSED pgs=222 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.129+0000 7f16bffff700 1 -- 192.168.123.102:0/3245840666 >> 192.168.123.102:0/3245840666 conn(0x7f16d406ce20 msgr2=0x7f16d4070450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:45.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.130+0000 7f16bffff700 1 -- 192.168.123.102:0/3245840666 shutdown_connections 2026-03-10T10:16:45.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.130+0000 7f3b6ffff700 1 -- 192.168.123.102:0/1096533199 wait complete. 2026-03-10T10:16:45.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.130+0000 7f16bffff700 1 -- 192.168.123.102:0/3245840666 wait complete. 
2026-03-10T10:16:45.159 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.157+0000 7f35b65e7700 1 -- 192.168.123.102:0/3485429625 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 4} v 0) v1 -- 0x7f35b019a0b0 con 0x7f35b0103cd0 2026-03-10T10:16:45.177 INFO:teuthology.orchestra.run.vm02.stdout:120259084293 2026-03-10T10:16:45.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.168+0000 7f35ad7fa700 1 -- 192.168.123.102:0/3485429625 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 4}]=0 v0) v1 ==== 74+0+13 (secure 0 0 0) 0x7f35980218a0 con 0x7f35b0103cd0 2026-03-10T10:16:45.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.175+0000 7f35a6ffd700 1 -- 192.168.123.102:0/3485429625 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f359c06c2e0 msgr2=0x7f359c06e7a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:45.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.175+0000 7f35a6ffd700 1 --2- 192.168.123.102:0/3485429625 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f359c06c2e0 0x7f359c06e7a0 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f35a0005950 tx=0x7f35a0004080 comp rx=0 tx=0).stop 2026-03-10T10:16:45.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.175+0000 7f35a6ffd700 1 -- 192.168.123.102:0/3485429625 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f35b0103cd0 msgr2=0x7f35b0199290 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:45.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.175+0000 7f35a6ffd700 1 --2- 192.168.123.102:0/3485429625 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f35b0103cd0 0x7f35b0199290 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f3598009fd0 tx=0x7f3598004970 comp rx=0 tx=0).stop 2026-03-10T10:16:45.179 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.175+0000 7f35a6ffd700 1 -- 192.168.123.102:0/3485429625 shutdown_connections 2026-03-10T10:16:45.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.175+0000 7f35a6ffd700 1 --2- 192.168.123.102:0/3485429625 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f35b0103320 0x7f35b0198d50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.175+0000 7f35a6ffd700 1 --2- 192.168.123.102:0/3485429625 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f359c06c2e0 0x7f359c06e7a0 secure :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f35a0005950 tx=0x7f35a0004080 comp rx=0 tx=0).stop 2026-03-10T10:16:45.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.175+0000 7f35a6ffd700 1 --2- 192.168.123.102:0/3485429625 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f35b0103cd0 0x7f35b0199290 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.175+0000 7f35a6ffd700 1 -- 192.168.123.102:0/3485429625 >> 192.168.123.102:0/3485429625 conn(0x7f35b00feb90 msgr2=0x7f35b0107590 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:45.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.177+0000 7f35a6ffd700 1 -- 192.168.123.102:0/3485429625 shutdown_connections 2026-03-10T10:16:45.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.178+0000 7f35a6ffd700 1 -- 192.168.123.102:0/3485429625 wait complete. 
2026-03-10T10:16:45.198 INFO:tasks.cephadm.ceph_manager.ceph:need seq 38654705676 got 38654705677 for osd.0 2026-03-10T10:16:45.198 DEBUG:teuthology.parallel:result is None 2026-03-10T10:16:45.238 INFO:tasks.cephadm.ceph_manager.ceph:need seq 120259084293 got 120259084293 for osd.4 2026-03-10T10:16:45.238 DEBUG:teuthology.parallel:result is None 2026-03-10T10:16:45.273 INFO:tasks.cephadm.ceph_manager.ceph:need seq 137438953476 got 137438953476 for osd.5 2026-03-10T10:16:45.273 DEBUG:teuthology.parallel:result is None 2026-03-10T10:16:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:44 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/2534227542' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T10:16:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:44 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/3148833375' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T10:16:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:44 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:45.294 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574859 got 55834574859 for osd.1 2026-03-10T10:16:45.294 DEBUG:teuthology.parallel:result is None 2026-03-10T10:16:45.294 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T10:16:45.294 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph pg dump --format=json 2026-03-10T10:16:45.496 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:45.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.901+0000 7f3c50e15700 1 -- 192.168.123.102:0/778441106 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c4c101120 msgr2=0x7f3c4c101500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:45.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.901+0000 7f3c50e15700 1 --2- 192.168.123.102:0/778441106 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c4c101120 0x7f3c4c101500 secure :-1 s=READY pgs=224 cs=0 l=1 rev1=1 crypto rx=0x7f3c34009b00 tx=0x7f3c34009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:45.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.902+0000 7f3c50e15700 1 -- 192.168.123.102:0/778441106 shutdown_connections 2026-03-10T10:16:45.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.902+0000 7f3c50e15700 1 --2- 192.168.123.102:0/778441106 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3c4c101ad0 0x7f3c4c105b20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.902+0000 7f3c50e15700 1 --2- 192.168.123.102:0/778441106 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c4c101120 0x7f3c4c101500 unknown :-1 s=CLOSED pgs=224 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.902+0000 7f3c50e15700 1 -- 192.168.123.102:0/778441106 >> 192.168.123.102:0/778441106 conn(0x7f3c4c0fc9b0 msgr2=0x7f3c4c0fedd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:45.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.902+0000 7f3c50e15700 1 -- 192.168.123.102:0/778441106 shutdown_connections 2026-03-10T10:16:45.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.902+0000 7f3c50e15700 1 -- 192.168.123.102:0/778441106 wait complete. 
2026-03-10T10:16:45.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.903+0000 7f3c50e15700 1 Processor -- start 2026-03-10T10:16:45.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.903+0000 7f3c50e15700 1 -- start start 2026-03-10T10:16:45.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.903+0000 7f3c50e15700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c4c101120 0x7f3c4c198ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:45.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.903+0000 7f3c50e15700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3c4c101ad0 0x7f3c4c1993e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:45.906 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.903+0000 7f3c50e15700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3c4c199ac0 con 0x7f3c4c101120 2026-03-10T10:16:45.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.903+0000 7f3c50e15700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3c4c19d850 con 0x7f3c4c101ad0 2026-03-10T10:16:45.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.903+0000 7f3c4a59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c4c101120 0x7f3c4c198ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:45.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.903+0000 7f3c4a59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c4c101120 0x7f3c4c198ea0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:45194/0 (socket says 192.168.123.102:45194) 2026-03-10T10:16:45.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.903+0000 7f3c4a59c700 1 -- 192.168.123.102:0/4194857396 learned_addr learned my addr 192.168.123.102:0/4194857396 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:45.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.904+0000 7f3c4a59c700 1 -- 192.168.123.102:0/4194857396 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3c4c101ad0 msgr2=0x7f3c4c1993e0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:16:45.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.904+0000 7f3c4a59c700 1 --2- 192.168.123.102:0/4194857396 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3c4c101ad0 0x7f3c4c1993e0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:45.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.904+0000 7f3c4a59c700 1 -- 192.168.123.102:0/4194857396 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3c340097e0 con 0x7f3c4c101120 2026-03-10T10:16:45.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.904+0000 7f3c4a59c700 1 --2- 192.168.123.102:0/4194857396 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c4c101120 0x7f3c4c198ea0 secure :-1 s=READY pgs=225 cs=0 l=1 rev1=1 crypto rx=0x7f3c3400b5c0 tx=0x7f3c340049b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:45.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.904+0000 7f3c437fe700 1 -- 192.168.123.102:0/4194857396 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3c3401d070 con 0x7f3c4c101120 2026-03-10T10:16:45.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.904+0000 7f3c437fe700 1 -- 
192.168.123.102:0/4194857396 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3c3400bc50 con 0x7f3c4c101120 2026-03-10T10:16:45.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.904+0000 7f3c437fe700 1 -- 192.168.123.102:0/4194857396 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3c3400f760 con 0x7f3c4c101120 2026-03-10T10:16:45.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.904+0000 7f3c50e15700 1 -- 192.168.123.102:0/4194857396 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3c4c19dad0 con 0x7f3c4c101120 2026-03-10T10:16:45.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.904+0000 7f3c50e15700 1 -- 192.168.123.102:0/4194857396 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3c4c19dfc0 con 0x7f3c4c101120 2026-03-10T10:16:45.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.906+0000 7f3c437fe700 1 -- 192.168.123.102:0/4194857396 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f3c3400f8c0 con 0x7f3c4c101120 2026-03-10T10:16:45.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.906+0000 7f3c437fe700 1 --2- 192.168.123.102:0/4194857396 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3c3807d610 0x7f3c3807fad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:45.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.906+0000 7f3c437fe700 1 -- 192.168.123.102:0/4194857396 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f3c3408dba0 con 0x7f3c4c101120 2026-03-10T10:16:45.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.907+0000 7f3c49d9b700 1 --2- 192.168.123.102:0/4194857396 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3c3807d610 0x7f3c3807fad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:45.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.907+0000 7f3c50e15700 1 -- 192.168.123.102:0/4194857396 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3c4c1052a0 con 0x7f3c4c101120 2026-03-10T10:16:45.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.909+0000 7f3c49d9b700 1 --2- 192.168.123.102:0/4194857396 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3c3807d610 0x7f3c3807fad0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f3c4c19a4c0 tx=0x7f3c3c006cb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:45.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:45.910+0000 7f3c437fe700 1 -- 192.168.123.102:0/4194857396 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f3c34027080 con 0x7f3c4c101120 2026-03-10T10:16:46.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.011+0000 7f3c50e15700 1 -- 192.168.123.102:0/4194857396 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) v1 -- 0x7f3c4c066ed0 con 0x7f3c3807d610 2026-03-10T10:16:46.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.011+0000 7f3c437fe700 1 -- 192.168.123.102:0/4194857396 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 dumped all) v1 ==== 18+0+19122 (secure 0 0 0) 0x7f3c4c066ed0 con 0x7f3c3807d610 2026-03-10T10:16:46.013 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:46.016 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.014+0000 7f3c50e15700 1 -- 192.168.123.102:0/4194857396 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3c3807d610 msgr2=0x7f3c3807fad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:46.016 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.014+0000 7f3c50e15700 1 --2- 192.168.123.102:0/4194857396 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3c3807d610 0x7f3c3807fad0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f3c4c19a4c0 tx=0x7f3c3c006cb0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.016 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.014+0000 7f3c50e15700 1 -- 192.168.123.102:0/4194857396 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c4c101120 msgr2=0x7f3c4c198ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:46.016 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.014+0000 7f3c50e15700 1 --2- 192.168.123.102:0/4194857396 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c4c101120 0x7f3c4c198ea0 secure :-1 s=READY pgs=225 cs=0 l=1 rev1=1 crypto rx=0x7f3c3400b5c0 tx=0x7f3c340049b0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.016 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.014+0000 7f3c50e15700 1 -- 192.168.123.102:0/4194857396 shutdown_connections 2026-03-10T10:16:46.016 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.014+0000 7f3c50e15700 1 --2- 192.168.123.102:0/4194857396 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3c4c101120 0x7f3c4c198ea0 unknown :-1 s=CLOSED pgs=225 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.014+0000 7f3c50e15700 1 --2- 192.168.123.102:0/4194857396 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3c3807d610 0x7f3c3807fad0 unknown :-1 s=CLOSED 
pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.014+0000 7f3c50e15700 1 --2- 192.168.123.102:0/4194857396 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3c4c101ad0 0x7f3c4c1993e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.014+0000 7f3c50e15700 1 -- 192.168.123.102:0/4194857396 >> 192.168.123.102:0/4194857396 conn(0x7f3c4c0fc9b0 msgr2=0x7f3c4c0fed70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:46.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.014+0000 7f3c50e15700 1 -- 192.168.123.102:0/4194857396 shutdown_connections 2026-03-10T10:16:46.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.014+0000 7f3c50e15700 1 -- 192.168.123.102:0/4194857396 wait complete. 2026-03-10T10:16:46.017 INFO:teuthology.orchestra.run.vm02.stderr:dumped all 2026-03-10T10:16:46.035 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:45 vm02 ceph-mon[50200]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:46.035 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:45 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3575594793' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T10:16:46.035 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:45 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/1096533199' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T10:16:46.035 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:45 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/3245840666' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T10:16:46.035 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:45 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3485429625' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T10:16:46.061 INFO:teuthology.orchestra.run.vm02.stdout:{"pg_ready":true,"pg_map":{"version":66,"stamp":"2026-03-10T10:16:44.728057+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":96,"num_read_kb":82,"num_write":113,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":76,"ondisk_log_size":76,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":6,"num_per_pool_osds":6,"num_per_pool_omap_osds":3,"kb":125804544,"kb_used":163644,"kb_used_data":3084,"kb_used_omap":0,"kb_used_meta":160512,"kb_avail":125640900,"statfs":{"total":128823853056,"available":1286562
81600,"internally_reserved":0,"allocated":3158016,"data_stored":2043408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":164364288},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"9.400782"},"pg_stats":[{"pgid":"1.0","version":"20'76","reported_seq":138,"reported_epoch":32,"state":"active+clean","last_fresh":"2026-03-10T10:16:35.332537+0000","last_change":"2026-03-10T10:16:24.917816+0000","last_active":"2026-03-10T10:16:35.332537+0000","last_peered":"2026-03-10T10:16:35.332537+0000","last_clean"
:"2026-03-10T10:16:35.332537+0000","last_became_active":"2026-03-10T10:16:24.917678+0000","last_became_peered":"2026-03-10T10:16:24.917678+0000","last_unstale":"2026-03-10T10:16:35.332537+0000","last_undegraded":"2026-03-10T10:16:35.332537+0000","last_fullsized":"2026-03-10T10:16:35.332537+0000","mapping_epoch":27,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":28,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:16:07.143516+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:16:07.143516+0000","last_clean_scrub_stamp":"2026-03-10T10:16:07.143516+0000","objects_scrubbed":0,"log_size":76,"log_dups_size":0,"ondisk_log_size":76,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:22:59.372076+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":96,"num_read_kb":82,"num_write":113,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":96,"num_read_kb":82,"num_write":113,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_obje
cts_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":76,"ondisk_log_size":76,"up":3,"acting":3,"num_store_stats":4}],"osd_stats":[{"osd":5,"up_from":32,"seq":137438953476,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":288,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":294912,"data_stored":110928,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.68799999999999994}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.67900000000000005}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.623}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.64500000000000002}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.60199999999999998}]}]},{"osd":4,"up_from":28,"seq":120259084293,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":288,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":294912,"data_stored":110928,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.48499999999999999}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.498}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.52900000000000003}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.35499999999999998}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47399999999999998}]}]},{"osd":3,"up_from":23,"seq":98784247815,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":740,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":757760,"data_stored":570208,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.59699999999999998}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.69299999999999995}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.624}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.48299999999999998}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.46899999999999997}]}]},{"osd":2,"up_from":17,"seq":73014444041,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":288,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":294912,"data_stored":110928,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.49399999999999999}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.56899999999999995}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.503}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.59699999999999998}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.48199999999999998}]}]},{"osd":0,"up_from":9,"seq":38654705677,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":740,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":757760,"data_stored":570208,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.66600000000000004}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.63900000000000001}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.57499999999999996}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.56699999999999995}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.68400000000000005}]}]},{"osd":1,"up_from":13,"seq":55834574859,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":740,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":757760,"data_stored":570208,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.57699999999999996}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.316}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.28599999999999998}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.39800000000000002}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.45000000000000001}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T10:16:46.061 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph pg dump --format=json 2026-03-10T10:16:46.228 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:45 vm05 ceph-mon[59051]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:45 vm05 ceph-mon[59051]: 
from='client.? 192.168.123.102:0/3575594793' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T10:16:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:45 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/1096533199' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T10:16:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:45 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/3245840666' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T10:16:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:45 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/3485429625' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T10:16:46.485 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.483+0000 7f3052cc9700 1 -- 192.168.123.102:0/3709094565 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f304c103340 msgr2=0x7f304c103720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:46.485 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.483+0000 7f3052cc9700 1 --2- 192.168.123.102:0/3709094565 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f304c103340 0x7f304c103720 secure :-1 s=READY pgs=226 cs=0 l=1 rev1=1 crypto rx=0x7f3038009b50 tx=0x7f3038009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:46.485 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.484+0000 7f3052cc9700 1 -- 192.168.123.102:0/3709094565 shutdown_connections 2026-03-10T10:16:46.485 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.484+0000 7f3052cc9700 1 --2- 192.168.123.102:0/3709094565 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f304c103cf0 0x7f304c107d40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.485 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.484+0000 7f3052cc9700 1 --2- 192.168.123.102:0/3709094565 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f304c103340 0x7f304c103720 unknown :-1 s=CLOSED pgs=226 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.485 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.484+0000 7f3052cc9700 1 -- 192.168.123.102:0/3709094565 >> 192.168.123.102:0/3709094565 conn(0x7f304c0feb90 msgr2=0x7f304c100fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:46.486 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.484+0000 7f3052cc9700 1 -- 192.168.123.102:0/3709094565 shutdown_connections 2026-03-10T10:16:46.486 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.484+0000 7f3052cc9700 1 -- 192.168.123.102:0/3709094565 wait complete. 2026-03-10T10:16:46.486 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.485+0000 7f3052cc9700 1 Processor -- start 2026-03-10T10:16:46.486 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.485+0000 7f3052cc9700 1 -- start start 2026-03-10T10:16:46.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.485+0000 7f3052cc9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f304c103340 0x7f304c198eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:46.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.485+0000 7f3052cc9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f304c103cf0 0x7f304c1993f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:46.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.485+0000 7f3052cc9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f304c199ad0 con 0x7f304c103cf0 2026-03-10T10:16:46.487 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.485+0000 7f3052cc9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f304c19d860 con 0x7f304c103340 2026-03-10T10:16:46.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.486+0000 7f304bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f304c103cf0 0x7f304c1993f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:46.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.486+0000 7f304bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f304c103cf0 0x7f304c1993f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:45214/0 (socket says 192.168.123.102:45214) 2026-03-10T10:16:46.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.486+0000 7f304bfff700 1 -- 192.168.123.102:0/930727196 learned_addr learned my addr 192.168.123.102:0/930727196 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:46.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.486+0000 7f304bfff700 1 -- 192.168.123.102:0/930727196 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f304c103340 msgr2=0x7f304c198eb0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:16:46.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.486+0000 7f304bfff700 1 --2- 192.168.123.102:0/930727196 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f304c103340 0x7f304c198eb0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.486+0000 7f304bfff700 1 -- 192.168.123.102:0/930727196 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f30380097e0 con 0x7f304c103cf0 2026-03-10T10:16:46.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.487+0000 7f304bfff700 1 --2- 192.168.123.102:0/930727196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f304c103cf0 0x7f304c1993f0 secure :-1 s=READY pgs=227 cs=0 l=1 rev1=1 crypto rx=0x7f3040009fd0 tx=0x7f304000c5b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:46.490 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.487+0000 7f3049ffb700 1 -- 192.168.123.102:0/930727196 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3040009980 con 0x7f304c103cf0 2026-03-10T10:16:46.490 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.487+0000 7f3049ffb700 1 -- 192.168.123.102:0/930727196 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f304000cd70 con 0x7f304c103cf0 2026-03-10T10:16:46.490 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.487+0000 7f3049ffb700 1 -- 192.168.123.102:0/930727196 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3040010640 con 0x7f304c103cf0 2026-03-10T10:16:46.490 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.487+0000 7f3052cc9700 1 -- 192.168.123.102:0/930727196 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f304c19db40 con 0x7f304c103cf0 2026-03-10T10:16:46.490 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.487+0000 7f3052cc9700 1 -- 192.168.123.102:0/930727196 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f304c19e090 con 0x7f304c103cf0 2026-03-10T10:16:46.493 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.489+0000 7f3049ffb700 1 -- 
192.168.123.102:0/930727196 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f3040004830 con 0x7f304c103cf0 2026-03-10T10:16:46.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.489+0000 7f3052cc9700 1 -- 192.168.123.102:0/930727196 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f304c10b6e0 con 0x7f304c103cf0 2026-03-10T10:16:46.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.489+0000 7f3049ffb700 1 --2- 192.168.123.102:0/930727196 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f303c06c4e0 0x7f303c06e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:46.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.489+0000 7f3049ffb700 1 -- 192.168.123.102:0/930727196 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f3040014070 con 0x7f304c103cf0 2026-03-10T10:16:46.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.492+0000 7f3050a65700 1 --2- 192.168.123.102:0/930727196 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f303c06c4e0 0x7f303c06e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:46.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.492+0000 7f3050a65700 1 --2- 192.168.123.102:0/930727196 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f303c06c4e0 0x7f303c06e9a0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f303800b5c0 tx=0x7f3038005fb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:46.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.492+0000 7f3049ffb700 1 -- 192.168.123.102:0/930727196 <== mon.0 
v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f3040056400 con 0x7f304c103cf0 2026-03-10T10:16:46.597 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.595+0000 7f3052cc9700 1 -- 192.168.123.102:0/930727196 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) v1 -- 0x7f304c19a210 con 0x7f303c06c4e0 2026-03-10T10:16:46.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.597+0000 7f3049ffb700 1 -- 192.168.123.102:0/930727196 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 dumped all) v1 ==== 18+0+19122 (secure 0 0 0) 0x7f304c19a210 con 0x7f303c06c4e0 2026-03-10T10:16:46.599 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:46.601 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.599+0000 7f3052cc9700 1 -- 192.168.123.102:0/930727196 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f303c06c4e0 msgr2=0x7f303c06e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:46.601 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.599+0000 7f3052cc9700 1 --2- 192.168.123.102:0/930727196 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f303c06c4e0 0x7f303c06e9a0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f303800b5c0 tx=0x7f3038005fb0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.601 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.600+0000 7f3052cc9700 1 -- 192.168.123.102:0/930727196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f304c103cf0 msgr2=0x7f304c1993f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:46.601 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.600+0000 7f3052cc9700 1 --2- 192.168.123.102:0/930727196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7f304c103cf0 0x7f304c1993f0 secure :-1 s=READY pgs=227 cs=0 l=1 rev1=1 crypto rx=0x7f3040009fd0 tx=0x7f304000c5b0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.602 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.600+0000 7f3052cc9700 1 -- 192.168.123.102:0/930727196 shutdown_connections 2026-03-10T10:16:46.602 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.600+0000 7f3052cc9700 1 --2- 192.168.123.102:0/930727196 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f303c06c4e0 0x7f303c06e9a0 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.602 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.600+0000 7f3052cc9700 1 --2- 192.168.123.102:0/930727196 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f304c103340 0x7f304c198eb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.602 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.600+0000 7f3052cc9700 1 --2- 192.168.123.102:0/930727196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f304c103cf0 0x7f304c1993f0 unknown :-1 s=CLOSED pgs=227 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:46.602 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.600+0000 7f3052cc9700 1 -- 192.168.123.102:0/930727196 >> 192.168.123.102:0/930727196 conn(0x7f304c0feb90 msgr2=0x7f304c100fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:46.602 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.601+0000 7f3052cc9700 1 -- 192.168.123.102:0/930727196 shutdown_connections 2026-03-10T10:16:46.602 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:46.601+0000 7f3052cc9700 1 -- 192.168.123.102:0/930727196 wait complete. 
2026-03-10T10:16:46.603 INFO:teuthology.orchestra.run.vm02.stderr:dumped all 2026-03-10T10:16:46.674 INFO:teuthology.orchestra.run.vm02.stdout:{"pg_ready":true,"pg_map":{"version":66,"stamp":"2026-03-10T10:16:44.728057+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":96,"num_read_kb":82,"num_write":113,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":76,"ondisk_log_size":76,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":6,"num_per_pool_osds":6,"num_per_pool_omap_osds":3,"kb":125804544,"kb_used":163644,"kb_used_data":3084,"kb_used_omap":0,"kb_used_meta":160512,"kb_avail":125640900,"statfs":{"total":128823853056,"available":128656281600,"internally_reserved":0,"allocated":3158016,"data_stored":2043408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":164364288},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0
,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"9.400782"},"pg_stats":[{"pgid":"1.0","version":"20'76","reported_seq":138,"reported_epoch":32,"state":"active+clean","last_fresh":"2026-03-10T10:16:35.332537+0000","last_change":"2026-03-10T10:16:24.917816+0000","last_active":"2026-03-10T10:16:35.332537+0000","last_peered":"2026-03-10T10:16:35.332537+0000","last_clean":"2026-03-10T10:16:35.332537+0000","last_became_active":"2026-03-10T10:16:24.917678+0000","last_became_peered":"2026-03-10T10:16:24.917678+0000","last_unstale":"2026-03-10T10:16:35.332537+0000","last_undegraded":"2026-03-10T10:16:35.332537+0000","last_fullsize
d":"2026-03-10T10:16:35.332537+0000","mapping_epoch":27,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":28,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:16:07.143516+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:16:07.143516+0000","last_clean_scrub_stamp":"2026-03-10T10:16:07.143516+0000","objects_scrubbed":0,"log_size":76,"log_dups_size":0,"ondisk_log_size":76,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:22:59.372076+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":96,"num_read_kb":82,"num_write":113,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_obj
ect_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":96,"num_read_kb":82,"num_write":113,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":76,"ondisk_log_size":76,"up":3,"acting":3,"num_store_stats":4}],"osd_stats":[{"osd":5,"up_from":32,"seq":137438953476,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":288,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":294912,"data_stored":110928,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.68799999999999994}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.67900000000000005}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.623}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.64500000000000002}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.60199999999999998}]}]},{"osd":4,"up_from":28,"seq":120259084293,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":288,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":294912,"data_stored":110928,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.48499999999999999}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.498}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.52900000000000003}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.35499999999999998}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47399999999999998}]}]},{"osd":3,"up_from":23,"seq":98784247815,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":740,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":757760,"data_stored":570208,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.59699999999999998}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.69299999999999995}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.624}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.48299999999999998}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.46899999999999997}]}]},{"osd":2,"up_from":17,"seq":73014444041,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":288,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":294912,"data_stored":110928,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.49399999999999999}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.56899999999999995}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.503}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.59699999999999998}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.48199999999999998}]}]},{"osd":0,"up_from":9,"seq":38654705677,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":740,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":757760,"data_stored":570208,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.66600000000000004}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.63900000000000001}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.57499999999999996}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.56699999999999995}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.68400000000000005}]}]},{"osd":1,"up_from":13,"seq":55834574859,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":740,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":757760,"data_stored":570208,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.57699999999999996}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.316}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.28599999999999998}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.39800000000000002}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.45000000000000001}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 
2026-03-10T10:16:46.674 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T10:16:46.674 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-10T10:16:46.674 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T10:16:46.674 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph health --format=json 2026-03-10T10:16:46.827 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:47.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.157+0000 7fc73a9b6700 1 -- 192.168.123.102:0/4105361496 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc734101ad0 msgr2=0x7fc734105b20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:47.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.157+0000 7fc73a9b6700 1 --2- 192.168.123.102:0/4105361496 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc734101ad0 0x7fc734105b20 secure :-1 s=READY pgs=228 cs=0 l=1 rev1=1 crypto rx=0x7fc724009b00 tx=0x7fc724009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:47.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.157+0000 7fc73a9b6700 1 -- 192.168.123.102:0/4105361496 shutdown_connections 2026-03-10T10:16:47.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.157+0000 7fc73a9b6700 1 --2- 192.168.123.102:0/4105361496 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc734101ad0 0x7fc734105b20 unknown :-1 s=CLOSED pgs=228 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:47.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.157+0000 7fc73a9b6700 1 --2- 192.168.123.102:0/4105361496 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc734101120 0x7fc734101500 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:47.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.157+0000 7fc73a9b6700 1 -- 192.168.123.102:0/4105361496 >> 192.168.123.102:0/4105361496 conn(0x7fc7340fc9b0 msgr2=0x7fc7340fedd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:47.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.158+0000 7fc73a9b6700 1 -- 192.168.123.102:0/4105361496 shutdown_connections 2026-03-10T10:16:47.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.158+0000 7fc73a9b6700 1 -- 192.168.123.102:0/4105361496 wait complete. 2026-03-10T10:16:47.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.159+0000 7fc73a9b6700 1 Processor -- start 2026-03-10T10:16:47.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.159+0000 7fc73a9b6700 1 -- start start 2026-03-10T10:16:47.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.159+0000 7fc73a9b6700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc734101120 0x7fc7341aca20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:47.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.159+0000 7fc73a9b6700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc734101ad0 0x7fc7341a7a20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:47.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.159+0000 7fc73a9b6700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc7341a7f60 con 0x7fc734101120 2026-03-10T10:16:47.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.159+0000 7fc73a9b6700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc7341a80d0 con 0x7fc734101ad0 2026-03-10T10:16:47.161 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.159+0000 7fc7337fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc734101ad0 0x7fc7341a7a20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:47.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.159+0000 7fc7337fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc734101ad0 0x7fc7341a7a20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:39942/0 (socket says 192.168.123.102:39942) 2026-03-10T10:16:47.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.159+0000 7fc7337fe700 1 -- 192.168.123.102:0/1814408397 learned_addr learned my addr 192.168.123.102:0/1814408397 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:47.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.159+0000 7fc7337fe700 1 -- 192.168.123.102:0/1814408397 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc734101120 msgr2=0x7fc7341aca20 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:16:47.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.159+0000 7fc733fff700 1 --2- 192.168.123.102:0/1814408397 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc734101120 0x7fc7341aca20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:47.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.160+0000 7fc7337fe700 1 --2- 192.168.123.102:0/1814408397 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc734101120 0x7fc7341aca20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:47.161 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.160+0000 7fc7337fe700 1 -- 192.168.123.102:0/1814408397 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc7240097e0 con 0x7fc734101ad0 2026-03-10T10:16:47.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.160+0000 7fc733fff700 1 --2- 192.168.123.102:0/1814408397 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc734101120 0x7fc7341aca20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:16:47.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.160+0000 7fc7337fe700 1 --2- 192.168.123.102:0/1814408397 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc734101ad0 0x7fc7341a7a20 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7fc72400b5c0 tx=0x7fc724004a00 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:47.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.160+0000 7fc7317fa700 1 -- 192.168.123.102:0/1814408397 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc72401d070 con 0x7fc734101ad0 2026-03-10T10:16:47.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.160+0000 7fc73a9b6700 1 -- 192.168.123.102:0/1814408397 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc7341a8350 con 0x7fc734101ad0 2026-03-10T10:16:47.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.160+0000 7fc73a9b6700 1 -- 192.168.123.102:0/1814408397 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc7341a8840 con 0x7fc734101ad0 2026-03-10T10:16:47.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.161+0000 7fc7317fa700 1 -- 192.168.123.102:0/1814408397 <== mon.1 v2:192.168.123.105:3300/0 2 ==== 
config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc72400bc50 con 0x7fc734101ad0 2026-03-10T10:16:47.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.161+0000 7fc7317fa700 1 -- 192.168.123.102:0/1814408397 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc724017670 con 0x7fc734101ad0 2026-03-10T10:16:47.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.161+0000 7fc73a9b6700 1 -- 192.168.123.102:0/1814408397 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc734109470 con 0x7fc734101ad0 2026-03-10T10:16:47.164 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.162+0000 7fc7317fa700 1 -- 192.168.123.102:0/1814408397 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fc72400f460 con 0x7fc734101ad0 2026-03-10T10:16:47.164 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.162+0000 7fc7317fa700 1 --2- 192.168.123.102:0/1814408397 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc72006c530 0x7fc72006e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:47.164 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.162+0000 7fc7317fa700 1 -- 192.168.123.102:0/1814408397 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fc72408cd90 con 0x7fc734101ad0 2026-03-10T10:16:47.164 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.163+0000 7fc733fff700 1 --2- 192.168.123.102:0/1814408397 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc72006c530 0x7fc72006e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:47.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.163+0000 7fc733fff700 1 
--2- 192.168.123.102:0/1814408397 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc72006c530 0x7fc72006e9f0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fc71c007950 tx=0x7fc71c008040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:47.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.164+0000 7fc7317fa700 1 -- 192.168.123.102:0/1814408397 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fc724057a40 con 0x7fc734101ad0 2026-03-10T10:16:47.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.306+0000 7fc73a9b6700 1 -- 192.168.123.102:0/1814408397 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "format": "json"} v 0) v1 -- 0x7fc73404ea90 con 0x7fc734101ad0 2026-03-10T10:16:47.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.306+0000 7fc7317fa700 1 -- 192.168.123.102:0/1814408397 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "format": "json"}]=0 v0) v1 ==== 72+0+46 (secure 0 0 0) 0x7fc72405b060 con 0x7fc734101ad0 2026-03-10T10:16:47.309 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:47.309 INFO:teuthology.orchestra.run.vm02.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T10:16:47.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.309+0000 7fc73a9b6700 1 -- 192.168.123.102:0/1814408397 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc72006c530 msgr2=0x7fc72006e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:47.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.309+0000 7fc73a9b6700 1 --2- 192.168.123.102:0/1814408397 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc72006c530 0x7fc72006e9f0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto 
rx=0x7fc71c007950 tx=0x7fc71c008040 comp rx=0 tx=0).stop 2026-03-10T10:16:47.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.309+0000 7fc73a9b6700 1 -- 192.168.123.102:0/1814408397 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc734101ad0 msgr2=0x7fc7341a7a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:47.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.309+0000 7fc73a9b6700 1 --2- 192.168.123.102:0/1814408397 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc734101ad0 0x7fc7341a7a20 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7fc72400b5c0 tx=0x7fc724004a00 comp rx=0 tx=0).stop 2026-03-10T10:16:47.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.309+0000 7fc73a9b6700 1 -- 192.168.123.102:0/1814408397 shutdown_connections 2026-03-10T10:16:47.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.309+0000 7fc73a9b6700 1 --2- 192.168.123.102:0/1814408397 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc734101120 0x7fc7341aca20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:47.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.309+0000 7fc73a9b6700 1 --2- 192.168.123.102:0/1814408397 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc72006c530 0x7fc72006e9f0 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:47.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.309+0000 7fc73a9b6700 1 --2- 192.168.123.102:0/1814408397 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc734101ad0 0x7fc7341a7a20 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:47.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.309+0000 7fc73a9b6700 1 -- 192.168.123.102:0/1814408397 >> 192.168.123.102:0/1814408397 
conn(0x7fc7340fc9b0 msgr2=0x7fc7340fdf50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:47.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.310+0000 7fc73a9b6700 1 -- 192.168.123.102:0/1814408397 shutdown_connections 2026-03-10T10:16:47.312 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:47.310+0000 7fc73a9b6700 1 -- 192.168.123.102:0/1814408397 wait complete. 2026-03-10T10:16:47.508 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:47 vm02 ceph-mon[50200]: from='client.14428 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:16:47.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:47 vm05 ceph-mon[59051]: from='client.14428 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:16:47.542 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T10:16:47.543 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T10:16:47.543 INFO:teuthology.run_tasks:Running task print... 2026-03-10T10:16:47.545 INFO:teuthology.task.print:**** done end installing v18.2.1 cephadm ... 2026-03-10T10:16:47.545 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T10:16:47.547 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local 2026-03-10T10:16:47.547 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph config set mgr mgr/cephadm/use_repo_digest true --force' 2026-03-10T10:16:47.720 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:48.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.015+0000 7fab392bd700 1 -- 192.168.123.102:0/3000146393 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fab34074dc0 msgr2=0x7fab34073220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:48.019 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.015+0000 7fab392bd700 1 --2- 192.168.123.102:0/3000146393 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fab34074dc0 0x7fab34073220 secure :-1 s=READY pgs=229 cs=0 l=1 rev1=1 crypto rx=0x7fab1c009b00 tx=0x7fab1c009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:48.019 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.017+0000 7fab392bd700 1 -- 192.168.123.102:0/3000146393 shutdown_connections 2026-03-10T10:16:48.019 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.017+0000 7fab392bd700 1 --2- 192.168.123.102:0/3000146393 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fab340737f0 0x7fab34073c70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.019 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.017+0000 7fab392bd700 1 --2- 192.168.123.102:0/3000146393 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fab34074dc0 0x7fab34073220 unknown :-1 s=CLOSED pgs=229 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T10:16:48.019 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.017+0000 7fab392bd700 1 -- 192.168.123.102:0/3000146393 >> 192.168.123.102:0/3000146393 conn(0x7fab340fc470 msgr2=0x7fab340fe8d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:48.019 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.017+0000 7fab392bd700 1 -- 192.168.123.102:0/3000146393 shutdown_connections 2026-03-10T10:16:48.019 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.017+0000 7fab392bd700 1 -- 192.168.123.102:0/3000146393 wait complete. 2026-03-10T10:16:48.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.018+0000 7fab392bd700 1 Processor -- start 2026-03-10T10:16:48.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.018+0000 7fab392bd700 1 -- start start 2026-03-10T10:16:48.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.018+0000 7fab392bd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fab340737f0 0x7fab3419ce00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:48.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.018+0000 7fab392bd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fab34074dc0 0x7fab3419d340 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:48.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.018+0000 7fab392bd700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fab3419d960 con 0x7fab34074dc0 2026-03-10T10:16:48.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.018+0000 7fab392bd700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fab3419daa0 con 0x7fab340737f0 2026-03-10T10:16:48.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.018+0000 7fab32ffd700 1 
--2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fab340737f0 0x7fab3419ce00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:48.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.018+0000 7fab32ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fab340737f0 0x7fab3419ce00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:39966/0 (socket says 192.168.123.102:39966) 2026-03-10T10:16:48.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.018+0000 7fab32ffd700 1 -- 192.168.123.102:0/819049933 learned_addr learned my addr 192.168.123.102:0/819049933 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:48.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.019+0000 7fab32ffd700 1 -- 192.168.123.102:0/819049933 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fab34074dc0 msgr2=0x7fab3419d340 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:16:48.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.019+0000 7fab327fc700 1 --2- 192.168.123.102:0/819049933 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fab34074dc0 0x7fab3419d340 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:48.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.019+0000 7fab32ffd700 1 --2- 192.168.123.102:0/819049933 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fab34074dc0 0x7fab3419d340 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.020+0000 7fab32ffd700 1 -- 
192.168.123.102:0/819049933 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fab1c0097e0 con 0x7fab340737f0 2026-03-10T10:16:48.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.020+0000 7fab327fc700 1 --2- 192.168.123.102:0/819049933 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fab34074dc0 0x7fab3419d340 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:16:48.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.020+0000 7fab32ffd700 1 --2- 192.168.123.102:0/819049933 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fab340737f0 0x7fab3419ce00 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7fab1c0052d0 tx=0x7fab1c004b10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:48.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.020+0000 7fab2bfff700 1 -- 192.168.123.102:0/819049933 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fab1c01d070 con 0x7fab340737f0 2026-03-10T10:16:48.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.020+0000 7fab2bfff700 1 -- 192.168.123.102:0/819049933 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fab1c00bd10 con 0x7fab340737f0 2026-03-10T10:16:48.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.020+0000 7fab392bd700 1 -- 192.168.123.102:0/819049933 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fab341a24f0 con 0x7fab340737f0 2026-03-10T10:16:48.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.020+0000 7fab392bd700 1 -- 192.168.123.102:0/819049933 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fab341a29e0 con 0x7fab340737f0 
2026-03-10T10:16:48.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.020+0000 7fab2bfff700 1 -- 192.168.123.102:0/819049933 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fab1c00f8d0 con 0x7fab340737f0 2026-03-10T10:16:48.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.022+0000 7fab2bfff700 1 -- 192.168.123.102:0/819049933 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fab1c00fa30 con 0x7fab340737f0 2026-03-10T10:16:48.024 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.022+0000 7fab2bfff700 1 --2- 192.168.123.102:0/819049933 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fab2006c600 0x7fab2006eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:48.024 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.022+0000 7fab327fc700 1 --2- 192.168.123.102:0/819049933 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fab2006c600 0x7fab2006eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:48.024 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.023+0000 7fab2bfff700 1 -- 192.168.123.102:0/819049933 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fab1c08dc30 con 0x7fab340737f0 2026-03-10T10:16:48.025 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.023+0000 7fab392bd700 1 -- 192.168.123.102:0/819049933 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fab14005320 con 0x7fab340737f0 2026-03-10T10:16:48.025 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.023+0000 7fab327fc700 1 --2- 192.168.123.102:0/819049933 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] 
conn(0x7fab2006c600 0x7fab2006eac0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7fab24005fd0 tx=0x7fab24005dc0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:48.027 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.025+0000 7fab2bfff700 1 -- 192.168.123.102:0/819049933 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fab1c027080 con 0x7fab340737f0 2026-03-10T10:16:48.133 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.131+0000 7fab392bd700 1 -- 192.168.123.102:0/819049933 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1 -- 0x7fab14005190 con 0x7fab340737f0 2026-03-10T10:16:48.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.137+0000 7fab2bfff700 1 -- 192.168.123.102:0/819049933 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/use_repo_digest}]=0 v14) v1 ==== 143+0+0 (secure 0 0 0) 0x7fab1c05c3b0 con 0x7fab340737f0 2026-03-10T10:16:48.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.145+0000 7fab392bd700 1 -- 192.168.123.102:0/819049933 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fab2006c600 msgr2=0x7fab2006eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:48.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.146+0000 7fab392bd700 1 --2- 192.168.123.102:0/819049933 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fab2006c600 0x7fab2006eac0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7fab24005fd0 tx=0x7fab24005dc0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.146+0000 7fab392bd700 1 -- 192.168.123.102:0/819049933 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fab340737f0 msgr2=0x7fab3419ce00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:48.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.146+0000 7fab392bd700 1 --2- 192.168.123.102:0/819049933 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fab340737f0 0x7fab3419ce00 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7fab1c0052d0 tx=0x7fab1c004b10 comp rx=0 tx=0).stop 2026-03-10T10:16:48.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.146+0000 7fab392bd700 1 -- 192.168.123.102:0/819049933 shutdown_connections 2026-03-10T10:16:48.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.146+0000 7fab392bd700 1 --2- 192.168.123.102:0/819049933 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fab2006c600 0x7fab2006eac0 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.146+0000 7fab392bd700 1 --2- 192.168.123.102:0/819049933 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fab340737f0 0x7fab3419ce00 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.146+0000 7fab392bd700 1 --2- 192.168.123.102:0/819049933 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fab34074dc0 0x7fab3419d340 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.146+0000 7fab392bd700 1 -- 192.168.123.102:0/819049933 >> 192.168.123.102:0/819049933 conn(0x7fab340fc470 msgr2=0x7fab34102880 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:48.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.147+0000 7fab392bd700 1 -- 192.168.123.102:0/819049933 
shutdown_connections 2026-03-10T10:16:48.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.147+0000 7fab392bd700 1 -- 192.168.123.102:0/819049933 wait complete. 2026-03-10T10:16:48.222 INFO:teuthology.run_tasks:Running task print... 2026-03-10T10:16:48.224 INFO:teuthology.task.print:**** done cephadm.shell ceph config set mgr... 2026-03-10T10:16:48.224 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-10T10:16:48.226 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local 2026-03-10T10:16:48.226 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph orch status' 2026-03-10T10:16:48.282 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:48 vm02 ceph-mon[50200]: from='client.14432 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:16:48.282 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:48 vm02 ceph-mon[50200]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:48.282 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:48 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/1814408397' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T10:16:48.282 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:48 vm02 ceph-mon[50200]: from='client.? 
' entity='client.admin' 2026-03-10T10:16:48.282 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:48 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:48.282 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:48 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:48.282 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:48 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:48.282 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:48 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:48.397 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:48 vm05 ceph-mon[59051]: from='client.14432 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:16:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:48 vm05 ceph-mon[59051]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:48 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/1814408397' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T10:16:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:48 vm05 ceph-mon[59051]: from='client.? 
' entity='client.admin' 2026-03-10T10:16:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:48 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:48 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:48 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:48 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:48.656 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.654+0000 7efda6f8c700 1 -- 192.168.123.102:0/2994335553 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efda00ff860 msgr2=0x7efda00ffc80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:48.656 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.654+0000 7efda6f8c700 1 --2- 192.168.123.102:0/2994335553 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efda00ff860 0x7efda00ffc80 secure :-1 s=READY pgs=230 cs=0 l=1 rev1=1 crypto rx=0x7efd90009b00 tx=0x7efd90009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:48.656 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.655+0000 7efda6f8c700 1 -- 192.168.123.102:0/2994335553 shutdown_connections 2026-03-10T10:16:48.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.655+0000 7efda6f8c700 1 --2- 192.168.123.102:0/2994335553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efda01001c0 0x7efda0100640 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:16:48.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.655+0000 7efda6f8c700 1 --2- 192.168.123.102:0/2994335553 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efda00ff860 0x7efda00ffc80 unknown :-1 s=CLOSED pgs=230 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.655+0000 7efda6f8c700 1 -- 192.168.123.102:0/2994335553 >> 192.168.123.102:0/2994335553 conn(0x7efda00fb3c0 msgr2=0x7efda00fd840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:48.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.655+0000 7efda6f8c700 1 -- 192.168.123.102:0/2994335553 shutdown_connections 2026-03-10T10:16:48.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.655+0000 7efda6f8c700 1 -- 192.168.123.102:0/2994335553 wait complete. 2026-03-10T10:16:48.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.656+0000 7efda6f8c700 1 Processor -- start 2026-03-10T10:16:48.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.657+0000 7efda6f8c700 1 -- start start 2026-03-10T10:16:48.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.657+0000 7efda6f8c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efda00ff860 0x7efda019ce20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:48.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.657+0000 7efda6f8c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efda01001c0 0x7efda019d360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:48.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.657+0000 7efda6f8c700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efda019d980 con 0x7efda01001c0 2026-03-10T10:16:48.659 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.657+0000 7efda6f8c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efda019dac0 con 0x7efda00ff860 2026-03-10T10:16:48.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.658+0000 7efda4d28700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efda00ff860 0x7efda019ce20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:48.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.658+0000 7efda4d28700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efda00ff860 0x7efda019ce20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:39980/0 (socket says 192.168.123.102:39980) 2026-03-10T10:16:48.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.658+0000 7efda4d28700 1 -- 192.168.123.102:0/136672657 learned_addr learned my addr 192.168.123.102:0/136672657 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:48.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.658+0000 7efda4d28700 1 -- 192.168.123.102:0/136672657 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efda01001c0 msgr2=0x7efda019d360 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:16:48.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.658+0000 7efd9ffff700 1 --2- 192.168.123.102:0/136672657 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efda01001c0 0x7efda019d360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:48.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.658+0000 7efda4d28700 1 --2- 
192.168.123.102:0/136672657 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efda01001c0 0x7efda019d360 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.658+0000 7efda4d28700 1 -- 192.168.123.102:0/136672657 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7efd900097e0 con 0x7efda00ff860 2026-03-10T10:16:48.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.659+0000 7efd9ffff700 1 --2- 192.168.123.102:0/136672657 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efda01001c0 0x7efda019d360 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:16:48.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.659+0000 7efda4d28700 1 --2- 192.168.123.102:0/136672657 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efda00ff860 0x7efda019ce20 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7efd90005850 tx=0x7efd90004ab0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:48.660 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.659+0000 7efd9dffb700 1 -- 192.168.123.102:0/136672657 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efd9001d070 con 0x7efda00ff860 2026-03-10T10:16:48.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.659+0000 7efda6f8c700 1 -- 192.168.123.102:0/136672657 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7efda01a2510 con 0x7efda00ff860 2026-03-10T10:16:48.662 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.659+0000 7efda6f8c700 1 -- 192.168.123.102:0/136672657 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({osdmap=0}) v3 -- 0x7efda01a2a00 con 0x7efda00ff860 2026-03-10T10:16:48.662 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.659+0000 7efd9dffb700 1 -- 192.168.123.102:0/136672657 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7efd9000bc50 con 0x7efda00ff860 2026-03-10T10:16:48.662 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.659+0000 7efd9dffb700 1 -- 192.168.123.102:0/136672657 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efd90021710 con 0x7efda00ff860 2026-03-10T10:16:48.662 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.661+0000 7efd9dffb700 1 -- 192.168.123.102:0/136672657 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7efd9002b430 con 0x7efda00ff860 2026-03-10T10:16:48.663 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.661+0000 7efd9dffb700 1 --2- 192.168.123.102:0/136672657 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efd8806c4e0 0x7efd8806e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:48.663 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.661+0000 7efd9dffb700 1 -- 192.168.123.102:0/136672657 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7efd9008c990 con 0x7efda00ff860 2026-03-10T10:16:48.663 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.661+0000 7efda6f8c700 1 -- 192.168.123.102:0/136672657 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7efd8c005320 con 0x7efda00ff860 2026-03-10T10:16:48.663 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.662+0000 7efd9ffff700 1 --2- 192.168.123.102:0/136672657 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efd8806c4e0 0x7efd8806e9a0 unknown 
:-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:48.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.662+0000 7efd9ffff700 1 --2- 192.168.123.102:0/136672657 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efd8806c4e0 0x7efd8806e9a0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7efd94005fd0 tx=0x7efd94005dc0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:48.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.665+0000 7efd9dffb700 1 -- 192.168.123.102:0/136672657 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7efd9005ab30 con 0x7efda00ff860 2026-03-10T10:16:48.796 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.793+0000 7efda6f8c700 1 -- 192.168.123.102:0/136672657 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch status", "target": ["mon-mgr", ""]}) v1 -- 0x7efd8c000bf0 con 0x7efd8806c4e0 2026-03-10T10:16:48.796 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.795+0000 7efd9dffb700 1 -- 192.168.123.102:0/136672657 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+43 (secure 0 0 0) 0x7efd8c000bf0 con 0x7efd8806c4e0 2026-03-10T10:16:48.797 INFO:teuthology.orchestra.run.vm02.stdout:Backend: cephadm 2026-03-10T10:16:48.797 INFO:teuthology.orchestra.run.vm02.stdout:Available: Yes 2026-03-10T10:16:48.797 INFO:teuthology.orchestra.run.vm02.stdout:Paused: No 2026-03-10T10:16:48.799 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.797+0000 7efda6f8c700 1 -- 192.168.123.102:0/136672657 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efd8806c4e0 msgr2=0x7efd8806e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-10T10:16:48.799 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.797+0000 7efda6f8c700 1 --2- 192.168.123.102:0/136672657 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efd8806c4e0 0x7efd8806e9a0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7efd94005fd0 tx=0x7efd94005dc0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.799 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.797+0000 7efda6f8c700 1 -- 192.168.123.102:0/136672657 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efda00ff860 msgr2=0x7efda019ce20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:48.799 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.797+0000 7efda6f8c700 1 --2- 192.168.123.102:0/136672657 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efda00ff860 0x7efda019ce20 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7efd90005850 tx=0x7efd90004ab0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.800 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.798+0000 7efda6f8c700 1 -- 192.168.123.102:0/136672657 shutdown_connections 2026-03-10T10:16:48.800 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.798+0000 7efda6f8c700 1 --2- 192.168.123.102:0/136672657 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7efd8806c4e0 0x7efd8806e9a0 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.800 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.798+0000 7efda6f8c700 1 --2- 192.168.123.102:0/136672657 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efda00ff860 0x7efda019ce20 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.800 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.798+0000 7efda6f8c700 1 --2- 192.168.123.102:0/136672657 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efda01001c0 
0x7efda019d360 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:48.800 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.798+0000 7efda6f8c700 1 -- 192.168.123.102:0/136672657 >> 192.168.123.102:0/136672657 conn(0x7efda00fb3c0 msgr2=0x7efda0103a80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:48.800 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.798+0000 7efda6f8c700 1 -- 192.168.123.102:0/136672657 shutdown_connections 2026-03-10T10:16:48.800 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:48.798+0000 7efda6f8c700 1 -- 192.168.123.102:0/136672657 wait complete. 2026-03-10T10:16:48.868 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph orch ps' 2026-03-10T10:16:49.026 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:49.313 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.310+0000 7f731c828700 1 -- 192.168.123.102:0/3686473082 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73141006c0 msgr2=0x7f7314102ab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:49.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.310+0000 7f731c828700 1 --2- 192.168.123.102:0/3686473082 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73141006c0 0x7f7314102ab0 secure :-1 s=READY pgs=231 cs=0 l=1 rev1=1 crypto rx=0x7f730c009b00 tx=0x7f730c009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:49.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.312+0000 7f731c828700 1 -- 192.168.123.102:0/3686473082 shutdown_connections 2026-03-10T10:16:49.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.312+0000 7f731c828700 1 
--2- 192.168.123.102:0/3686473082 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7314102ff0 0x7f73141053e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:49.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.312+0000 7f731c828700 1 --2- 192.168.123.102:0/3686473082 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73141006c0 0x7f7314102ab0 unknown :-1 s=CLOSED pgs=231 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:49.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.312+0000 7f731c828700 1 -- 192.168.123.102:0/3686473082 >> 192.168.123.102:0/3686473082 conn(0x7f73140fa6d0 msgr2=0x7f73140fcb10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:49.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.312+0000 7f731c828700 1 -- 192.168.123.102:0/3686473082 shutdown_connections 2026-03-10T10:16:49.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.312+0000 7f731c828700 1 -- 192.168.123.102:0/3686473082 wait complete. 
2026-03-10T10:16:49.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.312+0000 7f731c828700 1 Processor -- start 2026-03-10T10:16:49.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.313+0000 7f731c828700 1 -- start start 2026-03-10T10:16:49.315 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.313+0000 7f731c828700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f73141006c0 0x7f73141989f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:49.315 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.313+0000 7f731c828700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7314102ff0 0x7f7314198f30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:49.315 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.313+0000 7f731c828700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7314199550 con 0x7f7314102ff0 2026-03-10T10:16:49.315 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.313+0000 7f731c828700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7314199690 con 0x7f73141006c0 2026-03-10T10:16:49.315 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.313+0000 7f7319dc3700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7314102ff0 0x7f7314198f30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:49.315 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.313+0000 7f7319dc3700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7314102ff0 0x7f7314198f30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:32874/0 (socket says 192.168.123.102:32874) 2026-03-10T10:16:49.315 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.313+0000 7f7319dc3700 1 -- 192.168.123.102:0/3650629027 learned_addr learned my addr 192.168.123.102:0/3650629027 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:49.315 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.313+0000 7f731a5c4700 1 --2- 192.168.123.102:0/3650629027 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f73141006c0 0x7f73141989f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:49.315 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.314+0000 7f7319dc3700 1 -- 192.168.123.102:0/3650629027 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f73141006c0 msgr2=0x7f73141989f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:49.315 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.314+0000 7f7319dc3700 1 --2- 192.168.123.102:0/3650629027 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f73141006c0 0x7f73141989f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:49.315 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.314+0000 7f7319dc3700 1 -- 192.168.123.102:0/3650629027 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f730c0097e0 con 0x7f7314102ff0 2026-03-10T10:16:49.316 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.314+0000 7f7319dc3700 1 --2- 192.168.123.102:0/3650629027 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7314102ff0 0x7f7314198f30 secure :-1 s=READY pgs=232 cs=0 l=1 rev1=1 crypto rx=0x7f730800d8d0 tx=0x7f730800dbe0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:16:49.316 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.314+0000 7f73077fe700 1 -- 192.168.123.102:0/3650629027 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7308009880 con 0x7f7314102ff0 2026-03-10T10:16:49.317 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.314+0000 7f73077fe700 1 -- 192.168.123.102:0/3650629027 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f7308010460 con 0x7f7314102ff0 2026-03-10T10:16:49.317 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.314+0000 7f731c828700 1 -- 192.168.123.102:0/3650629027 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f731419e140 con 0x7f7314102ff0 2026-03-10T10:16:49.317 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.314+0000 7f731c828700 1 -- 192.168.123.102:0/3650629027 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f731419e690 con 0x7f7314102ff0 2026-03-10T10:16:49.318 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.314+0000 7f73077fe700 1 -- 192.168.123.102:0/3650629027 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f730800f5d0 con 0x7f7314102ff0 2026-03-10T10:16:49.318 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.316+0000 7f73077fe700 1 -- 192.168.123.102:0/3650629027 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f730800f730 con 0x7f7314102ff0 2026-03-10T10:16:49.318 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.316+0000 7f73077fe700 1 --2- 192.168.123.102:0/3650629027 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f730006c5b0 0x7f730006ea70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:49.318 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.317+0000 7f731a5c4700 1 --2- 192.168.123.102:0/3650629027 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f730006c5b0 0x7f730006ea70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:49.319 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.317+0000 7f731c828700 1 -- 192.168.123.102:0/3650629027 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f72f8005320 con 0x7f7314102ff0 2026-03-10T10:16:49.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.317+0000 7f73077fe700 1 -- 192.168.123.102:0/3650629027 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f730808c280 con 0x7f7314102ff0 2026-03-10T10:16:49.322 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.320+0000 7f73077fe700 1 -- 192.168.123.102:0/3650629027 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f730805a950 con 0x7f7314102ff0 2026-03-10T10:16:49.322 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.320+0000 7f731a5c4700 1 --2- 192.168.123.102:0/3650629027 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f730006c5b0 0x7f730006ea70 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f730c009fd0 tx=0x7f730c005fb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:49.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.436+0000 7f731c828700 1 -- 192.168.123.102:0/3650629027 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f72f8000bf0 con 0x7f730006c5b0 2026-03-10T10:16:49.447 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.445+0000 7f73077fe700 1 -- 192.168.123.102:0/3650629027 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+2640 (secure 0 0 0) 0x7f72f8000bf0 con 0x7f730006c5b0 2026-03-10T10:16:49.447 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:16:49.447 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (74s) 43s ago 119s 20.3M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:16:49.447 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (2m) 43s ago 2m 7864k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:16:49.447 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (89s) 17s ago 89s 7982k - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:16:49.447 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (2m) 43s ago 2m 7415k - 18.2.1 5be31c24972a 51802fb57170 2026-03-10T10:16:49.447 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (89s) 17s ago 88s 7407k - 18.2.1 5be31c24972a f275982dc269 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (69s) 43s ago 106s 77.7M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:9283,8765,8443 running (2m) 43s ago 2m 490M - 18.2.1 5be31c24972a 8bea583521d3 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (85s) 17s ago 85s 450M - 18.2.1 5be31c24972a ff545ad0664a 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (2m) 43s ago 2m 47.4M 2048M 18.2.1 5be31c24972a ab92d831cc1d 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (83s) 17s ago 83s 41.7M 2048M 18.2.1 5be31c24972a cea7d23f93a6 
2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (2m) 43s ago 2m 15.8M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (86s) 17s ago 86s 13.9M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (66s) 43s ago 66s 37.9M 4096M 18.2.1 5be31c24972a 9d7f135a3f3b 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (57s) 43s ago 57s 37.2M 4096M 18.2.1 5be31c24972a 1b0a42d8ac01 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (47s) 43s ago 47s 33.1M 4096M 18.2.1 5be31c24972a 567f579c058e 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (38s) 17s ago 38s 40.2M 4096M 18.2.1 5be31c24972a 80ac26035893 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (29s) 17s ago 29s 39.3M 4096M 18.2.1 5be31c24972a c8a0a41b6654 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (19s) 17s ago 19s 11.8M 4096M 18.2.1 5be31c24972a e9be055e12ba 2026-03-10T10:16:49.448 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (68s) 43s ago 100s 29.2M - 2.43.0 a07b618ecd1d a607fd039cb6 2026-03-10T10:16:49.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.448+0000 7f731c828700 1 -- 192.168.123.102:0/3650629027 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f730006c5b0 msgr2=0x7f730006ea70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:49.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.448+0000 7f731c828700 1 --2- 192.168.123.102:0/3650629027 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f730006c5b0 0x7f730006ea70 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f730c009fd0 tx=0x7f730c005fb0 comp 
rx=0 tx=0).stop 2026-03-10T10:16:49.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.449+0000 7f731c828700 1 -- 192.168.123.102:0/3650629027 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7314102ff0 msgr2=0x7f7314198f30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:49.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.449+0000 7f731c828700 1 --2- 192.168.123.102:0/3650629027 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7314102ff0 0x7f7314198f30 secure :-1 s=READY pgs=232 cs=0 l=1 rev1=1 crypto rx=0x7f730800d8d0 tx=0x7f730800dbe0 comp rx=0 tx=0).stop 2026-03-10T10:16:49.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.449+0000 7f731c828700 1 -- 192.168.123.102:0/3650629027 shutdown_connections 2026-03-10T10:16:49.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.449+0000 7f731c828700 1 --2- 192.168.123.102:0/3650629027 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f730006c5b0 0x7f730006ea70 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:49.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.449+0000 7f731c828700 1 --2- 192.168.123.102:0/3650629027 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f73141006c0 0x7f73141989f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:49.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.449+0000 7f731c828700 1 --2- 192.168.123.102:0/3650629027 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7314102ff0 0x7f7314198f30 unknown :-1 s=CLOSED pgs=232 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:49.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.449+0000 7f731c828700 1 -- 192.168.123.102:0/3650629027 >> 192.168.123.102:0/3650629027 conn(0x7f73140fa6d0 msgr2=0x7f73140fcb10 unknown :-1 
s=STATE_NONE l=0).mark_down 2026-03-10T10:16:49.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.449+0000 7f731c828700 1 -- 192.168.123.102:0/3650629027 shutdown_connections 2026-03-10T10:16:49.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.449+0000 7f731c828700 1 -- 192.168.123.102:0/3650629027 wait complete. 2026-03-10T10:16:49.519 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph orch ls' 2026-03-10T10:16:49.679 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:49.957 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.955+0000 7f4091abd700 1 -- 192.168.123.102:0/2748325634 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f408c074dc0 msgr2=0x7f408c073220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:49.957 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.955+0000 7f4091abd700 1 --2- 192.168.123.102:0/2748325634 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f408c074dc0 0x7f408c073220 secure :-1 s=READY pgs=233 cs=0 l=1 rev1=1 crypto rx=0x7f4074009b00 tx=0x7f4074009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:49.957 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.955+0000 7f4091abd700 1 -- 192.168.123.102:0/2748325634 shutdown_connections 2026-03-10T10:16:49.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.955+0000 7f4091abd700 1 --2- 192.168.123.102:0/2748325634 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f408c0737f0 0x7f408c073c70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:49.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.956+0000 7f4091abd700 1 
--2- 192.168.123.102:0/2748325634 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f408c074dc0 0x7f408c073220 unknown :-1 s=CLOSED pgs=233 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:49.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.956+0000 7f4091abd700 1 -- 192.168.123.102:0/2748325634 >> 192.168.123.102:0/2748325634 conn(0x7f408c0fc460 msgr2=0x7f408c0fe8e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:49.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.956+0000 7f4091abd700 1 -- 192.168.123.102:0/2748325634 shutdown_connections 2026-03-10T10:16:49.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.956+0000 7f4091abd700 1 -- 192.168.123.102:0/2748325634 wait complete. 2026-03-10T10:16:49.960 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.958+0000 7f4091abd700 1 Processor -- start 2026-03-10T10:16:49.960 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.958+0000 7f4091abd700 1 -- start start 2026-03-10T10:16:49.960 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.958+0000 7f4091abd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f408c0737f0 0x7f408c104230 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:49.960 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.958+0000 7f4091abd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f408c074dc0 0x7f408c104770 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:49.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.958+0000 7f4091abd700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f408c100c90 con 0x7f408c0737f0 2026-03-10T10:16:49.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.958+0000 7f4091abd700 1 -- --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f408c100e00 con 0x7f408c074dc0 2026-03-10T10:16:49.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.958+0000 7f408b7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f408c0737f0 0x7f408c104230 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:49.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.958+0000 7f408b7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f408c0737f0 0x7f408c104230 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:32890/0 (socket says 192.168.123.102:32890) 2026-03-10T10:16:49.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.958+0000 7f408b7fe700 1 -- 192.168.123.102:0/1854954332 learned_addr learned my addr 192.168.123.102:0/1854954332 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:49.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.959+0000 7f408b7fe700 1 -- 192.168.123.102:0/1854954332 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f408c074dc0 msgr2=0x7f408c104770 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:16:49.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.959+0000 7f408b7fe700 1 --2- 192.168.123.102:0/1854954332 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f408c074dc0 0x7f408c104770 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:49.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.959+0000 7f408b7fe700 1 -- 192.168.123.102:0/1854954332 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f40740097e0 con 
0x7f408c0737f0 2026-03-10T10:16:49.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.959+0000 7f408b7fe700 1 --2- 192.168.123.102:0/1854954332 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f408c0737f0 0x7f408c104230 secure :-1 s=READY pgs=234 cs=0 l=1 rev1=1 crypto rx=0x7f4074009fd0 tx=0x7f4074004b10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:49.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.959+0000 7f4088ff9700 1 -- 192.168.123.102:0/1854954332 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f407401d070 con 0x7f408c0737f0 2026-03-10T10:16:49.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.959+0000 7f4091abd700 1 -- 192.168.123.102:0/1854954332 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f408c101080 con 0x7f408c0737f0 2026-03-10T10:16:49.962 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.959+0000 7f4091abd700 1 -- 192.168.123.102:0/1854954332 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f408c101570 con 0x7f408c0737f0 2026-03-10T10:16:49.962 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.960+0000 7f4088ff9700 1 -- 192.168.123.102:0/1854954332 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f407400bcd0 con 0x7f408c0737f0 2026-03-10T10:16:49.962 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.960+0000 7f4088ff9700 1 -- 192.168.123.102:0/1854954332 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f407400f940 con 0x7f408c0737f0 2026-03-10T10:16:49.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.961+0000 7f4088ff9700 1 -- 192.168.123.102:0/1854954332 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 
0x7f407400faa0 con 0x7f408c0737f0 2026-03-10T10:16:49.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.961+0000 7f4088ff9700 1 --2- 192.168.123.102:0/1854954332 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f407806c4e0 0x7f407806e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:49.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.961+0000 7f4088ff9700 1 -- 192.168.123.102:0/1854954332 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f407408dd50 con 0x7f408c0737f0 2026-03-10T10:16:49.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.962+0000 7f408affd700 1 --2- 192.168.123.102:0/1854954332 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f407806c4e0 0x7f407806e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:49.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.962+0000 7f4091abd700 1 -- 192.168.123.102:0/1854954332 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f406c005320 con 0x7f408c0737f0 2026-03-10T10:16:49.966 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.963+0000 7f408affd700 1 --2- 192.168.123.102:0/1854954332 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f407806c4e0 0x7f407806e9a0 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f408c074af0 tx=0x7f407c005cb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:49.966 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:49.965+0000 7f4088ff9700 1 -- 192.168.123.102:0/1854954332 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 
0x7f407405c450 con 0x7f408c0737f0 2026-03-10T10:16:50.074 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.073+0000 7f4091abd700 1 -- 192.168.123.102:0/1854954332 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch ls", "target": ["mon-mgr", ""]}) v1 -- 0x7f406c000bf0 con 0x7f407806c4e0 2026-03-10T10:16:50.077 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.076+0000 7f4088ff9700 1 -- 192.168.123.102:0/1854954332 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+1150 (secure 0 0 0) 0x7f406c000bf0 con 0x7f407806c4e0 2026-03-10T10:16:50.077 INFO:teuthology.orchestra.run.vm02.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT 2026-03-10T10:16:50.077 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager ?:9093,9094 1/1 44s ago 2m count:1 2026-03-10T10:16:50.077 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter 2/2 44s ago 2m * 2026-03-10T10:16:50.077 INFO:teuthology.orchestra.run.vm02.stdout:crash 2/2 44s ago 2m * 2026-03-10T10:16:50.077 INFO:teuthology.orchestra.run.vm02.stdout:grafana ?:3000 1/1 44s ago 2m count:1 2026-03-10T10:16:50.077 INFO:teuthology.orchestra.run.vm02.stdout:mgr 2/2 44s ago 2m count:2 2026-03-10T10:16:50.077 INFO:teuthology.orchestra.run.vm02.stdout:mon 2/2 44s ago 2m vm02:192.168.123.102=vm02;vm05:192.168.123.105=vm05;count:2 2026-03-10T10:16:50.078 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter ?:9100 2/2 44s ago 2m * 2026-03-10T10:16:50.078 INFO:teuthology.orchestra.run.vm02.stdout:osd 6 44s ago - 2026-03-10T10:16:50.078 INFO:teuthology.orchestra.run.vm02.stdout:prometheus ?:9095 1/1 44s ago 2m count:1 2026-03-10T10:16:50.080 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.078+0000 7f4091abd700 1 -- 192.168.123.102:0/1854954332 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f407806c4e0 msgr2=0x7f407806e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:50.080 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.078+0000 7f4091abd700 1 --2- 192.168.123.102:0/1854954332 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f407806c4e0 0x7f407806e9a0 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f408c074af0 tx=0x7f407c005cb0 comp rx=0 tx=0).stop 2026-03-10T10:16:50.080 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.078+0000 7f4091abd700 1 -- 192.168.123.102:0/1854954332 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f408c0737f0 msgr2=0x7f408c104230 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:50.080 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.078+0000 7f4091abd700 1 --2- 192.168.123.102:0/1854954332 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f408c0737f0 0x7f408c104230 secure :-1 s=READY pgs=234 cs=0 l=1 rev1=1 crypto rx=0x7f4074009fd0 tx=0x7f4074004b10 comp rx=0 tx=0).stop 2026-03-10T10:16:50.080 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.079+0000 7f4091abd700 1 -- 192.168.123.102:0/1854954332 shutdown_connections 2026-03-10T10:16:50.080 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.079+0000 7f4091abd700 1 --2- 192.168.123.102:0/1854954332 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f408c0737f0 0x7f408c104230 unknown :-1 s=CLOSED pgs=234 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:50.080 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.079+0000 7f4091abd700 1 --2- 192.168.123.102:0/1854954332 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f407806c4e0 0x7f407806e9a0 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:50.080 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.079+0000 7f4091abd700 1 --2- 192.168.123.102:0/1854954332 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f408c074dc0 0x7f408c104770 
unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:50.080 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.079+0000 7f4091abd700 1 -- 192.168.123.102:0/1854954332 >> 192.168.123.102:0/1854954332 conn(0x7f408c0fc460 msgr2=0x7f408c106ce0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:50.081 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.079+0000 7f4091abd700 1 -- 192.168.123.102:0/1854954332 shutdown_connections 2026-03-10T10:16:50.081 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.079+0000 7f4091abd700 1 -- 192.168.123.102:0/1854954332 wait complete. 2026-03-10T10:16:50.146 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph orch host ls' 2026-03-10T10:16:50.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:50 vm02 ceph-mon[50200]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:50.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:50 vm02 ceph-mon[50200]: from='client.24257 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:16:50.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:50 vm02 ceph-mon[50200]: from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:16:50.298 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:50.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:50 vm05 ceph-mon[59051]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:50.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:50 vm05 ceph-mon[59051]: 
from='client.24257 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:16:50.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:50 vm05 ceph-mon[59051]: from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:16:50.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.545+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1355456364 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9e20102ff0 msgr2=0x7f9e201053e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:50.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.545+0000 7f9e26b0c700 1 --2- 192.168.123.102:0/1355456364 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9e20102ff0 0x7f9e201053e0 secure :-1 s=READY pgs=235 cs=0 l=1 rev1=1 crypto rx=0x7f9e10009b50 tx=0x7f9e10009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:50.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.545+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1355456364 shutdown_connections 2026-03-10T10:16:50.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.545+0000 7f9e26b0c700 1 --2- 192.168.123.102:0/1355456364 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9e20102ff0 0x7f9e201053e0 unknown :-1 s=CLOSED pgs=235 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:50.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.545+0000 7f9e26b0c700 1 --2- 192.168.123.102:0/1355456364 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e201006c0 0x7f9e20102ab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:50.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.546+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1355456364 >> 192.168.123.102:0/1355456364 conn(0x7f9e200fa6d0 msgr2=0x7f9e200fcb10 unknown :-1 
s=STATE_NONE l=0).mark_down 2026-03-10T10:16:50.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.546+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1355456364 shutdown_connections 2026-03-10T10:16:50.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.546+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1355456364 wait complete. 2026-03-10T10:16:50.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.546+0000 7f9e26b0c700 1 Processor -- start 2026-03-10T10:16:50.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.547+0000 7f9e26b0c700 1 -- start start 2026-03-10T10:16:50.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.547+0000 7f9e26b0c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e201006c0 0x7f9e20198a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:50.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.547+0000 7f9e26b0c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9e20102ff0 0x7f9e20198f40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:50.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.547+0000 7f9e26b0c700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9e20199560 con 0x7f9e20102ff0 2026-03-10T10:16:50.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.547+0000 7f9e26b0c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9e201996a0 con 0x7f9e201006c0 2026-03-10T10:16:50.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.547+0000 7f9e17fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9e20102ff0 0x7f9e20198f40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:16:50.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.547+0000 7f9e17fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9e20102ff0 0x7f9e20198f40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:32908/0 (socket says 192.168.123.102:32908) 2026-03-10T10:16:50.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.547+0000 7f9e17fff700 1 -- 192.168.123.102:0/1806206273 learned_addr learned my addr 192.168.123.102:0/1806206273 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:50.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.548+0000 7f9e17fff700 1 -- 192.168.123.102:0/1806206273 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e201006c0 msgr2=0x7f9e20198a00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:50.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.548+0000 7f9e17fff700 1 --2- 192.168.123.102:0/1806206273 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e201006c0 0x7f9e20198a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:50.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.548+0000 7f9e17fff700 1 -- 192.168.123.102:0/1806206273 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9e100097e0 con 0x7f9e20102ff0 2026-03-10T10:16:50.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.548+0000 7f9e17fff700 1 --2- 192.168.123.102:0/1806206273 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9e20102ff0 0x7f9e20198f40 secure :-1 s=READY pgs=236 cs=0 l=1 rev1=1 crypto rx=0x7f9e10004ce0 tx=0x7f9e10005ee0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:50.550 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.548+0000 7f9e15ffb700 1 -- 192.168.123.102:0/1806206273 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9e1001d070 con 0x7f9e20102ff0 2026-03-10T10:16:50.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.548+0000 7f9e15ffb700 1 -- 192.168.123.102:0/1806206273 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9e1000bc30 con 0x7f9e20102ff0 2026-03-10T10:16:50.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.548+0000 7f9e15ffb700 1 -- 192.168.123.102:0/1806206273 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9e1000f720 con 0x7f9e20102ff0 2026-03-10T10:16:50.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.548+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1806206273 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9e2019e0f0 con 0x7f9e20102ff0 2026-03-10T10:16:50.551 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.549+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1806206273 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9e2019e5e0 con 0x7f9e20102ff0 2026-03-10T10:16:50.555 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.550+0000 7f9e15ffb700 1 -- 192.168.123.102:0/1806206273 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f9e10022ae0 con 0x7f9e20102ff0 2026-03-10T10:16:50.555 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.550+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1806206273 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9e20192b70 con 0x7f9e20102ff0 2026-03-10T10:16:50.555 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.551+0000 7f9e15ffb700 1 --2- 
192.168.123.102:0/1806206273 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9e0806c4e0 0x7f9e0806e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:50.555 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.551+0000 7f9e15ffb700 1 -- 192.168.123.102:0/1806206273 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f9e1008cbc0 con 0x7f9e20102ff0 2026-03-10T10:16:50.555 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.553+0000 7f9e248a8700 1 --2- 192.168.123.102:0/1806206273 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9e0806c4e0 0x7f9e0806e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:50.555 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.554+0000 7f9e248a8700 1 --2- 192.168.123.102:0/1806206273 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9e0806c4e0 0x7f9e0806e9a0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f9e1c005950 tx=0x7f9e1c0058e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:50.556 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.554+0000 7f9e15ffb700 1 -- 192.168.123.102:0/1806206273 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f9e1005b210 con 0x7f9e20102ff0 2026-03-10T10:16:50.672 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.668+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1806206273 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch host ls", "target": ["mon-mgr", ""]}) v1 -- 0x7f9e200611d0 con 0x7f9e0806c4e0 2026-03-10T10:16:50.673 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.672+0000 
7f9e15ffb700 1 -- 192.168.123.102:0/1806206273 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+139 (secure 0 0 0) 0x7f9e200611d0 con 0x7f9e0806c4e0 2026-03-10T10:16:50.673 INFO:teuthology.orchestra.run.vm02.stdout:HOST ADDR LABELS STATUS 2026-03-10T10:16:50.673 INFO:teuthology.orchestra.run.vm02.stdout:vm02 192.168.123.102 2026-03-10T10:16:50.673 INFO:teuthology.orchestra.run.vm02.stdout:vm05 192.168.123.105 2026-03-10T10:16:50.673 INFO:teuthology.orchestra.run.vm02.stdout:2 hosts in cluster 2026-03-10T10:16:50.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.674+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1806206273 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9e0806c4e0 msgr2=0x7f9e0806e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:50.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.674+0000 7f9e26b0c700 1 --2- 192.168.123.102:0/1806206273 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9e0806c4e0 0x7f9e0806e9a0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f9e1c005950 tx=0x7f9e1c0058e0 comp rx=0 tx=0).stop 2026-03-10T10:16:50.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.675+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1806206273 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9e20102ff0 msgr2=0x7f9e20198f40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:50.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.675+0000 7f9e26b0c700 1 --2- 192.168.123.102:0/1806206273 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9e20102ff0 0x7f9e20198f40 secure :-1 s=READY pgs=236 cs=0 l=1 rev1=1 crypto rx=0x7f9e10004ce0 tx=0x7f9e10005ee0 comp rx=0 tx=0).stop 2026-03-10T10:16:50.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.675+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1806206273 shutdown_connections 
2026-03-10T10:16:50.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.675+0000 7f9e26b0c700 1 --2- 192.168.123.102:0/1806206273 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9e0806c4e0 0x7f9e0806e9a0 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:50.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.675+0000 7f9e26b0c700 1 --2- 192.168.123.102:0/1806206273 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e201006c0 0x7f9e20198a00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:50.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.675+0000 7f9e26b0c700 1 --2- 192.168.123.102:0/1806206273 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9e20102ff0 0x7f9e20198f40 unknown :-1 s=CLOSED pgs=236 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:50.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.675+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1806206273 >> 192.168.123.102:0/1806206273 conn(0x7f9e200fa6d0 msgr2=0x7f9e200fcb10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:50.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.675+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1806206273 shutdown_connections 2026-03-10T10:16:50.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:50.675+0000 7f9e26b0c700 1 -- 192.168.123.102:0/1806206273 wait complete. 
2026-03-10T10:16:50.727 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph orch device ls' 2026-03-10T10:16:50.882 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:51.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.126+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/272374178 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0abc076990 msgr2=0x7f0abc076e10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:51.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.126+0000 7f0ac0ab5700 1 --2- 192.168.123.102:0/272374178 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0abc076990 0x7f0abc076e10 secure :-1 s=READY pgs=237 cs=0 l=1 rev1=1 crypto rx=0x7f0aa4009b50 tx=0x7f0aa4009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:51.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.127+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/272374178 shutdown_connections 2026-03-10T10:16:51.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.127+0000 7f0ac0ab5700 1 --2- 192.168.123.102:0/272374178 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0abc076990 0x7f0abc076e10 unknown :-1 s=CLOSED pgs=237 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:51.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.127+0000 7f0ac0ab5700 1 --2- 192.168.123.102:0/272374178 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0abc075740 0x7f0abc075b60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:51.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.127+0000 7f0ac0ab5700 1 -- 
192.168.123.102:0/272374178 >> 192.168.123.102:0/272374178 conn(0x7f0abc0fe6c0 msgr2=0x7f0abc100b00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:51.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.127+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/272374178 shutdown_connections 2026-03-10T10:16:51.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.127+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/272374178 wait complete. 2026-03-10T10:16:51.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.128+0000 7f0ac0ab5700 1 Processor -- start 2026-03-10T10:16:51.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.128+0000 7f0ac0ab5700 1 -- start start 2026-03-10T10:16:51.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.129+0000 7f0ac0ab5700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0abc075740 0x7f0abc19ce10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:51.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.129+0000 7f0ac0ab5700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0abc076990 0x7f0abc19d350 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:51.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.129+0000 7f0ac0ab5700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0abc19d970 con 0x7f0abc076990 2026-03-10T10:16:51.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.129+0000 7f0ac0ab5700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0abc19dab0 con 0x7f0abc075740 2026-03-10T10:16:51.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.129+0000 7f0ab3fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0abc076990 0x7f0abc19d350 unknown :-1 s=BANNER_CONNECTING 
pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:51.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.129+0000 7f0ab3fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0abc076990 0x7f0abc19d350 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:32934/0 (socket says 192.168.123.102:32934) 2026-03-10T10:16:51.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.129+0000 7f0ab3fff700 1 -- 192.168.123.102:0/3632756900 learned_addr learned my addr 192.168.123.102:0/3632756900 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:51.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.129+0000 7f0ab3fff700 1 -- 192.168.123.102:0/3632756900 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0abc075740 msgr2=0x7f0abc19ce10 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:16:51.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.129+0000 7f0aba59c700 1 --2- 192.168.123.102:0/3632756900 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0abc075740 0x7f0abc19ce10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:51.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.129+0000 7f0ab3fff700 1 --2- 192.168.123.102:0/3632756900 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0abc075740 0x7f0abc19ce10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:51.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.129+0000 7f0ab3fff700 1 -- 192.168.123.102:0/3632756900 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 
0x7f0aa40097e0 con 0x7f0abc076990 2026-03-10T10:16:51.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.129+0000 7f0ab3fff700 1 --2- 192.168.123.102:0/3632756900 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0abc076990 0x7f0abc19d350 secure :-1 s=READY pgs=238 cs=0 l=1 rev1=1 crypto rx=0x7f0aa4006010 tx=0x7f0aa400b920 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:51.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.130+0000 7f0ab37fe700 1 -- 192.168.123.102:0/3632756900 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0aa401d070 con 0x7f0abc076990 2026-03-10T10:16:51.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.130+0000 7f0ab37fe700 1 -- 192.168.123.102:0/3632756900 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f0aa400bd30 con 0x7f0abc076990 2026-03-10T10:16:51.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.130+0000 7f0ab37fe700 1 -- 192.168.123.102:0/3632756900 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0aa400f920 con 0x7f0abc076990 2026-03-10T10:16:51.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.130+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/3632756900 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0abc1a2500 con 0x7f0abc076990 2026-03-10T10:16:51.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.130+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/3632756900 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0abc1a29f0 con 0x7f0abc076990 2026-03-10T10:16:51.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.130+0000 7f0aba59c700 1 --2- 192.168.123.102:0/3632756900 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7f0abc075740 0x7f0abc19ce10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:16:51.133 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.131+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/3632756900 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0abc066e80 con 0x7f0abc076990 2026-03-10T10:16:51.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.133+0000 7f0ab37fe700 1 -- 192.168.123.102:0/3632756900 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f0aa4022c50 con 0x7f0abc076990 2026-03-10T10:16:51.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.133+0000 7f0ab37fe700 1 --2- 192.168.123.102:0/3632756900 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0aa806c600 0x7f0aa806eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:51.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.133+0000 7f0ab37fe700 1 -- 192.168.123.102:0/3632756900 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f0aa408d8c0 con 0x7f0abc076990 2026-03-10T10:16:51.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.134+0000 7f0aba59c700 1 --2- 192.168.123.102:0/3632756900 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0aa806c600 0x7f0aa806eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:51.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.134+0000 7f0aba59c700 1 --2- 192.168.123.102:0/3632756900 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0aa806c600 0x7f0aa806eac0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f0aac007900 
tx=0x7f0aac008040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:51.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.135+0000 7f0ab37fe700 1 -- 192.168.123.102:0/3632756900 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f0aa405c0b0 con 0x7f0abc076990 2026-03-10T10:16:51.258 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.256+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/3632756900 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch device ls", "target": ["mon-mgr", ""]}) v1 -- 0x7f0abc10eaa0 con 0x7f0aa806c600 2026-03-10T10:16:51.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.258+0000 7f0ab37fe700 1 -- 192.168.123.102:0/3632756900 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+1278 (secure 0 0 0) 0x7f0abc10eaa0 con 0x7f0aa806c600 2026-03-10T10:16:51.260 INFO:teuthology.orchestra.run.vm02.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS 2026-03-10T10:16:51.260 INFO:teuthology.orchestra.run.vm02.stdout:vm02 /dev/vdb hdd DWNBRSTVMM02001 20.0G Yes 46s ago 2026-03-10T10:16:51.260 INFO:teuthology.orchestra.run.vm02.stdout:vm02 /dev/vdc hdd DWNBRSTVMM02002 20.0G No 46s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T10:16:51.260 INFO:teuthology.orchestra.run.vm02.stdout:vm02 /dev/vdd hdd DWNBRSTVMM02003 20.0G No 46s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T10:16:51.260 INFO:teuthology.orchestra.run.vm02.stdout:vm02 /dev/vde hdd DWNBRSTVMM02004 20.0G No 46s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T10:16:51.260 INFO:teuthology.orchestra.run.vm02.stdout:vm05 /dev/vdb hdd DWNBRSTVMM05001 20.0G Yes 18s ago 2026-03-10T10:16:51.260 
INFO:teuthology.orchestra.run.vm02.stdout:vm05 /dev/vdc hdd DWNBRSTVMM05002 20.0G No 18s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T10:16:51.260 INFO:teuthology.orchestra.run.vm02.stdout:vm05 /dev/vdd hdd DWNBRSTVMM05003 20.0G No 18s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T10:16:51.260 INFO:teuthology.orchestra.run.vm02.stdout:vm05 /dev/vde hdd DWNBRSTVMM05004 20.0G No 18s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T10:16:51.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.260+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/3632756900 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0aa806c600 msgr2=0x7f0aa806eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:51.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.260+0000 7f0ac0ab5700 1 --2- 192.168.123.102:0/3632756900 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0aa806c600 0x7f0aa806eac0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f0aac007900 tx=0x7f0aac008040 comp rx=0 tx=0).stop 2026-03-10T10:16:51.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.260+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/3632756900 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0abc076990 msgr2=0x7f0abc19d350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:51.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.260+0000 7f0ac0ab5700 1 --2- 192.168.123.102:0/3632756900 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0abc076990 0x7f0abc19d350 secure :-1 s=READY pgs=238 cs=0 l=1 rev1=1 crypto rx=0x7f0aa4006010 tx=0x7f0aa400b920 comp rx=0 tx=0).stop 2026-03-10T10:16:51.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.261+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/3632756900 shutdown_connections 
2026-03-10T10:16:51.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.261+0000 7f0ac0ab5700 1 --2- 192.168.123.102:0/3632756900 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f0aa806c600 0x7f0aa806eac0 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:51.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.261+0000 7f0ac0ab5700 1 --2- 192.168.123.102:0/3632756900 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0abc075740 0x7f0abc19ce10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:51.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.261+0000 7f0ac0ab5700 1 --2- 192.168.123.102:0/3632756900 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0abc076990 0x7f0abc19d350 unknown :-1 s=CLOSED pgs=238 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:51.263 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.261+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/3632756900 >> 192.168.123.102:0/3632756900 conn(0x7f0abc0fe6c0 msgr2=0x7f0abc10d380 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:51.263 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.261+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/3632756900 shutdown_connections 2026-03-10T10:16:51.263 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.261+0000 7f0ac0ab5700 1 -- 192.168.123.102:0/3632756900 wait complete. 2026-03-10T10:16:51.328 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T10:16:51.330 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local
2026-03-10T10:16:51.330 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph fs volume create cephfs --placement=4'
2026-03-10T10:16:51.486 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config
2026-03-10T10:16:51.508 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:51 vm02 ceph-mon[50200]: from='client.14446 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:16:51.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:51 vm05 ceph-mon[59051]: from='client.14446 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:16:51.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.869+0000 7fad403c9700 1 -- 192.168.123.102:0/1362691713 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fad38103140 msgr2=0x7fad38103560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:16:51.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.869+0000 7fad403c9700 1 --2- 192.168.123.102:0/1362691713 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fad38103140 0x7fad38103560 secure :-1 s=READY pgs=239 cs=0 l=1 rev1=1 crypto rx=0x7fad28009b00 tx=0x7fad28009e10 comp rx=0 tx=0).stop
2026-03-10T10:16:51.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.870+0000 7fad403c9700 1 -- 192.168.123.102:0/1362691713 shutdown_connections
2026-03-10T10:16:51.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.870+0000 7fad403c9700 1 --2- 192.168.123.102:0/1362691713 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fad38104340 0x7fad381047a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:51.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.870+0000 7fad403c9700 1 --2- 192.168.123.102:0/1362691713 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fad38103140 0x7fad38103560 unknown :-1 s=CLOSED pgs=239 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:51.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.870+0000 7fad403c9700 1 -- 192.168.123.102:0/1362691713 >> 192.168.123.102:0/1362691713 conn(0x7fad380fe6c0 msgr2=0x7fad38100b20 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:16:51.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.870+0000 7fad403c9700 1 -- 192.168.123.102:0/1362691713 shutdown_connections
2026-03-10T10:16:51.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.871+0000 7fad403c9700 1 -- 192.168.123.102:0/1362691713 wait complete.
2026-03-10T10:16:51.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.871+0000 7fad403c9700 1 Processor -- start
2026-03-10T10:16:51.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.871+0000 7fad403c9700 1 -- start start
2026-03-10T10:16:51.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.871+0000 7fad403c9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fad38103140 0x7fad381946b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:16:51.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.871+0000 7fad403c9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fad38104340 0x7fad38194bf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:16:51.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.871+0000 7fad403c9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fad381951c0 con 0x7fad38104340
2026-03-10T10:16:51.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.871+0000 7fad403c9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fad38195300 con 0x7fad38103140
2026-03-10T10:16:51.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.872+0000 7fad3e165700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fad38103140 0x7fad381946b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:16:51.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.872+0000 7fad3e165700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fad38103140 0x7fad381946b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:40066/0 (socket says 192.168.123.102:40066)
2026-03-10T10:16:51.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.872+0000 7fad3e165700 1 -- 192.168.123.102:0/2244476173 learned_addr learned my addr 192.168.123.102:0/2244476173 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:16:51.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.872+0000 7fad3e165700 1 -- 192.168.123.102:0/2244476173 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fad38104340 msgr2=0x7fad38194bf0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:16:51.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.872+0000 7fad3d964700 1 --2- 192.168.123.102:0/2244476173 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fad38104340 0x7fad38194bf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:16:51.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.872+0000 7fad3e165700 1 --2- 192.168.123.102:0/2244476173 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fad38104340 0x7fad38194bf0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:51.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.872+0000 7fad3e165700 1 -- 192.168.123.102:0/2244476173 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fad280097e0 con 0x7fad38103140
2026-03-10T10:16:51.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.872+0000 7fad3d964700 1 --2- 192.168.123.102:0/2244476173 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fad38104340 0x7fad38194bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:16:51.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.872+0000 7fad3e165700 1 --2- 192.168.123.102:0/2244476173 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fad38103140 0x7fad381946b0 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7fad28004990 tx=0x7fad28004a70 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:16:51.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.873+0000 7fad2f7fe700 1 -- 192.168.123.102:0/2244476173 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fad2801d070 con 0x7fad38103140
2026-03-10T10:16:51.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.873+0000 7fad403c9700 1 -- 192.168.123.102:0/2244476173 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fad3806a870 con 0x7fad38103140
2026-03-10T10:16:51.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.873+0000 7fad403c9700 1 -- 192.168.123.102:0/2244476173 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fad3806ad00 con 0x7fad38103140
2026-03-10T10:16:51.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.873+0000 7fad2f7fe700 1 -- 192.168.123.102:0/2244476173 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fad2800bc50 con 0x7fad38103140
2026-03-10T10:16:51.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.873+0000 7fad2f7fe700 1 -- 192.168.123.102:0/2244476173 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fad2800f810 con 0x7fad38103140
2026-03-10T10:16:51.876 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.874+0000 7fad2f7fe700 1 -- 192.168.123.102:0/2244476173 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fad2800fa10 con 0x7fad38103140
2026-03-10T10:16:51.877 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.875+0000 7fad2f7fe700 1 --2- 192.168.123.102:0/2244476173 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fad2406c4e0 0x7fad2406e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:16:51.877 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.875+0000 7fad2f7fe700 1 -- 192.168.123.102:0/2244476173 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fad2808cc10 con 0x7fad38103140
2026-03-10T10:16:51.877 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.875+0000 7fad403c9700 1 -- 192.168.123.102:0/2244476173 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fad1c005320 con 0x7fad38103140
2026-03-10T10:16:51.877 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.875+0000 7fad3d964700 1 --2- 192.168.123.102:0/2244476173 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fad2406c4e0 0x7fad2406e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:16:51.877 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.875+0000 7fad3d964700 1 --2- 192.168.123.102:0/2244476173 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fad2406c4e0 0x7fad2406e9a0 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7fad34007900 tx=0x7fad34008040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:16:51.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.878+0000 7fad2f7fe700 1 -- 192.168.123.102:0/2244476173 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fad2805b630 con 0x7fad38103140
2026-03-10T10:16:51.998 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:51.996+0000 7fad403c9700 1 -- 192.168.123.102:0/2244476173 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "fs volume create", "name": "cephfs", "placement": "4", "target": ["mon-mgr", ""]}) v1 -- 0x7fad1c000bf0 con 0x7fad2406c4e0
2026-03-10T10:16:52.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:52 vm02 ceph-mon[50200]: from='client.14450 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:16:52.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:52 vm02 ceph-mon[50200]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-10T10:16:52.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:52 vm02 ceph-mon[50200]: from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:16:52.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:52 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
2026-03-10T10:16:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:52 vm05 ceph-mon[59051]: from='client.14450 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:16:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:52 vm05 ceph-mon[59051]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-10T10:16:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:52 vm05 ceph-mon[59051]: from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:16:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:52 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
2026-03-10T10:16:53.266 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.264+0000 7fad2f7fe700 1 -- 192.168.123.102:0/2244476173 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+0 (secure 0 0 0) 0x7fad1c000bf0 con 0x7fad2406c4e0
2026-03-10T10:16:53.268 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.266+0000 7fad403c9700 1 -- 192.168.123.102:0/2244476173 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fad2406c4e0 msgr2=0x7fad2406e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:16:53.268 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.266+0000 7fad403c9700 1 --2- 192.168.123.102:0/2244476173 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fad2406c4e0 0x7fad2406e9a0 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7fad34007900 tx=0x7fad34008040 comp rx=0 tx=0).stop
2026-03-10T10:16:53.268 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.266+0000 7fad403c9700 1 -- 192.168.123.102:0/2244476173 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fad38103140 msgr2=0x7fad381946b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:16:53.268 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.266+0000 7fad403c9700 1 --2- 192.168.123.102:0/2244476173 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fad38103140 0x7fad381946b0 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7fad28004990 tx=0x7fad28004a70 comp rx=0 tx=0).stop
2026-03-10T10:16:53.268 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.266+0000 7fad403c9700 1 -- 192.168.123.102:0/2244476173 shutdown_connections
2026-03-10T10:16:53.268 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.266+0000 7fad403c9700 1 --2- 192.168.123.102:0/2244476173 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fad2406c4e0 0x7fad2406e9a0 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:53.268 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.267+0000 7fad403c9700 1 --2- 192.168.123.102:0/2244476173 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fad38103140 0x7fad381946b0 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:53.268 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.267+0000 7fad403c9700 1 --2- 192.168.123.102:0/2244476173 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fad38104340 0x7fad38194bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:53.268 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.267+0000 7fad403c9700 1 -- 192.168.123.102:0/2244476173 >> 192.168.123.102:0/2244476173 conn(0x7fad380fe6c0 msgr2=0x7fad38107570 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:16:53.268 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.267+0000 7fad403c9700 1 -- 192.168.123.102:0/2244476173 shutdown_connections
2026-03-10T10:16:53.268 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.267+0000 7fad403c9700 1 -- 192.168.123.102:0/2244476173 wait complete.
2026-03-10T10:16:53.334 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph fs dump'
2026-03-10T10:16:53.512 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config
2026-03-10T10:16:53.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:53 vm02 ceph-mon[50200]: from='client.24265 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "4", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:16:53.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:53 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]': finished
2026-03-10T10:16:53.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:53 vm02 ceph-mon[50200]: osdmap e34: 6 total, 6 up, 6 in
2026-03-10T10:16:53.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:53 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
2026-03-10T10:16:53.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:53 vm02 ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02[50196]: 2026-03-10T10:16:53.247+0000 7f930ac5d700 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-10T10:16:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:53 vm05 ceph-mon[59051]: from='client.24265 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "4", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:16:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:53 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]': finished
2026-03-10T10:16:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:53 vm05 ceph-mon[59051]: osdmap e34: 6 total, 6 up, 6 in
2026-03-10T10:16:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:53 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
2026-03-10T10:16:53.871 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.867+0000 7fb9735b1700 1 -- 192.168.123.102:0/2119601561 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb96c10a700 msgr2=0x7fb96c10cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:16:53.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.867+0000 7fb9735b1700 1 --2- 192.168.123.102:0/2119601561 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb96c10a700 0x7fb96c10cb90 secure :-1 s=READY pgs=240 cs=0 l=1 rev1=1 crypto rx=0x7fb960009b00 tx=0x7fb960009e10 comp rx=0 tx=0).stop
2026-03-10T10:16:53.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.868+0000 7fb9735b1700 1 -- 192.168.123.102:0/2119601561 shutdown_connections
2026-03-10T10:16:53.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.868+0000 7fb9735b1700 1 --2- 192.168.123.102:0/2119601561 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb96c10a700 0x7fb96c10cb90 unknown :-1 s=CLOSED pgs=240 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:53.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.868+0000 7fb9735b1700 1 --2- 192.168.123.102:0/2119601561 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb96c107d90 0x7fb96c10a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:53.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.868+0000 7fb9735b1700 1 -- 192.168.123.102:0/2119601561 >> 192.168.123.102:0/2119601561 conn(0x7fb96c06dda0 msgr2=0x7fb96c070220 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:16:53.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.871+0000 7fb9735b1700 1 -- 192.168.123.102:0/2119601561 shutdown_connections
2026-03-10T10:16:53.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.871+0000 7fb9735b1700 1 -- 192.168.123.102:0/2119601561 wait complete.
2026-03-10T10:16:53.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.871+0000 7fb9735b1700 1 Processor -- start
2026-03-10T10:16:53.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb9735b1700 1 -- start start
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb9735b1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb96c107d90 0x7fb96c1ae020 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb9735b1700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb96c10a700 0x7fb96c1ae560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb9735b1700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb96c1aeb10 con 0x7fb96c10a700
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb9735b1700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb96c1aec80 con 0x7fb96c107d90
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb970b4c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb96c10a700 0x7fb96c1ae560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb970b4c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb96c10a700 0x7fb96c1ae560 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:32976/0 (socket says 192.168.123.102:32976)
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb970b4c700 1 -- 192.168.123.102:0/1913125420 learned_addr learned my addr 192.168.123.102:0/1913125420 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb970b4c700 1 -- 192.168.123.102:0/1913125420 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb96c107d90 msgr2=0x7fb96c1ae020 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb970b4c700 1 --2- 192.168.123.102:0/1913125420 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb96c107d90 0x7fb96c1ae020 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb970b4c700 1 -- 192.168.123.102:0/1913125420 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb9600097e0 con 0x7fb96c10a700
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb970b4c700 1 --2- 192.168.123.102:0/1913125420 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb96c10a700 0x7fb96c1ae560 secure :-1 s=READY pgs=241 cs=0 l=1 rev1=1 crypto rx=0x7fb960004990 tx=0x7fb960004a70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb95e7fc700 1 -- 192.168.123.102:0/1913125420 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb96001d070 con 0x7fb96c10a700
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.872+0000 7fb95e7fc700 1 -- 192.168.123.102:0/1913125420 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb96000bd90 con 0x7fb96c10a700
2026-03-10T10:16:53.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.873+0000 7fb95e7fc700 1 -- 192.168.123.102:0/1913125420 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb96000f950 con 0x7fb96c10a700
2026-03-10T10:16:53.876 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.873+0000 7fb9735b1700 1 -- 192.168.123.102:0/1913125420 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb96c1b33c0 con 0x7fb96c10a700
2026-03-10T10:16:53.876 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.873+0000 7fb9735b1700 1 -- 192.168.123.102:0/1913125420 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb96c1b3830 con 0x7fb96c10a700
2026-03-10T10:16:53.876 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.875+0000 7fb9735b1700 1 -- 192.168.123.102:0/1913125420 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb96c111e80 con 0x7fb96c10a700
2026-03-10T10:16:53.877 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.875+0000 7fb95e7fc700 1 -- 192.168.123.102:0/1913125420 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fb96000fab0 con 0x7fb96c10a700
2026-03-10T10:16:53.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.876+0000 7fb95e7fc700 1 --2- 192.168.123.102:0/1913125420 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb95806c290 0x7fb95806e750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:16:53.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.876+0000 7fb95e7fc700 1 -- 192.168.123.102:0/1913125420 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(36..36 src has 1..36) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fb96008eb80 con 0x7fb96c10a700
2026-03-10T10:16:53.879 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.878+0000 7fb95e7fc700 1 -- 192.168.123.102:0/1913125420 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fb960092050 con 0x7fb96c10a700
2026-03-10T10:16:53.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.879+0000 7fb97134d700 1 --2- 192.168.123.102:0/1913125420 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb95806c290 0x7fb95806e750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:16:53.883 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:53.881+0000 7fb97134d700 1 --2- 192.168.123.102:0/1913125420 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb95806c290 0x7fb95806e750 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7fb96800a9b0 tx=0x7fb968005c90 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:16:54.041 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.039+0000 7fb9735b1700 1 -- 192.168.123.102:0/1913125420 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fb96c066e80 con 0x7fb96c10a700
2026-03-10T10:16:54.042 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.040+0000 7fb95e7fc700 1 -- 192.168.123.102:0/1913125420 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 2 v2) v1 ==== 75+0+1093 (secure 0 0 0) 0x7fb960027460 con 0x7fb96c10a700
2026-03-10T10:16:54.044 INFO:teuthology.orchestra.run.vm02.stdout:e2
2026-03-10T10:16:54.044 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1
2026-03-10T10:16:54.044 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-10T10:16:54.044 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1
2026-03-10T10:16:54.044 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:16:54.044 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1)
2026-03-10T10:16:54.044 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:epoch 2
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:flags 12 joinable allow_snaps allow_multimds_snaps
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:16:53.248729+0000
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:root 0
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {}
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 0
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:in
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:up {}
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:failed
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:damaged
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:stopped
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3]
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:balancer
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 0
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:16:54.045 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:16:54.048 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.046+0000 7fb94ffff700 1 -- 192.168.123.102:0/1913125420 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb95806c290 msgr2=0x7fb95806e750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:16:54.048 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.046+0000 7fb94ffff700 1 --2- 192.168.123.102:0/1913125420 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb95806c290 0x7fb95806e750 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7fb96800a9b0 tx=0x7fb968005c90 comp rx=0 tx=0).stop
2026-03-10T10:16:54.048 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.046+0000 7fb94ffff700 1 -- 192.168.123.102:0/1913125420 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb96c10a700 msgr2=0x7fb96c1ae560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:16:54.048 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.046+0000 7fb94ffff700 1 --2- 192.168.123.102:0/1913125420 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb96c10a700 0x7fb96c1ae560 secure :-1 s=READY pgs=241 cs=0 l=1 rev1=1 crypto rx=0x7fb960004990 tx=0x7fb960004a70 comp rx=0 tx=0).stop
2026-03-10T10:16:54.050 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.048+0000 7fb94ffff700 1 -- 192.168.123.102:0/1913125420 shutdown_connections
2026-03-10T10:16:54.050 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.048+0000 7fb94ffff700 1 --2- 192.168.123.102:0/1913125420 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb95806c290 0x7fb95806e750 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:54.050 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.048+0000 7fb94ffff700 1 --2- 192.168.123.102:0/1913125420 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb96c107d90 0x7fb96c1ae020 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:54.050 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.048+0000 7fb94ffff700 1 --2- 192.168.123.102:0/1913125420 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb96c10a700 0x7fb96c1ae560 unknown :-1 s=CLOSED pgs=241 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:54.050 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.048+0000 7fb94ffff700 1 -- 192.168.123.102:0/1913125420 >> 192.168.123.102:0/1913125420 conn(0x7fb96c06dda0 msgr2=0x7fb96c10c120 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:16:54.051 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.050+0000 7fb94ffff700 1 -- 192.168.123.102:0/1913125420 shutdown_connections
2026-03-10T10:16:54.053 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.052+0000 7fb94ffff700 1 -- 192.168.123.102:0/1913125420 wait complete.
2026-03-10T10:16:54.060 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 2
2026-03-10T10:16:54.218 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T10:16:54.221 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local
2026-03-10T10:16:54.221 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph fs set cephfs max_mds 1'
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: pgmap v71: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]': finished
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: osdmap e35: 6 total, 6 up, 6 in
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: osdmap e36: 6 total, 6 up, 6 in
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: fsmap cephfs:0
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: Saving service mds.cephfs spec with placement count:4
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.zymcrs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.zymcrs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: Deploying daemon mds.cephfs.vm02.zymcrs on vm02
2026-03-10T10:16:54.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:54 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/1913125420' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-10T10:16:54.467 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: pgmap v71: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]': finished
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: osdmap e35: 6 total, 6 up, 6 in
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: osdmap e36: 6 total, 6 up, 6 in
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: fsmap cephfs:0
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: Saving service mds.cephfs spec with placement count:4
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.zymcrs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.zymcrs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: Deploying daemon mds.cephfs.vm02.zymcrs on vm02
2026-03-10T10:16:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:54 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/1913125420' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-10T10:16:54.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.761+0000 7f127b5c9700 1 -- 192.168.123.102:0/3619404938 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12741019f0 msgr2=0x7f1274103de0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:16:54.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.761+0000 7f127b5c9700 1 --2- 192.168.123.102:0/3619404938 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12741019f0 0x7f1274103de0 secure :-1 s=READY pgs=244 cs=0 l=1 rev1=1 crypto rx=0x7f1270009b00 tx=0x7f1270009e10 comp rx=0 tx=0).stop
2026-03-10T10:16:54.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.762+0000 7f127b5c9700 1 -- 192.168.123.102:0/3619404938 shutdown_connections
2026-03-10T10:16:54.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.762+0000 7f127b5c9700 1 --2- 192.168.123.102:0/3619404938 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1274104320 0x7f1274106710 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:54.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.762+0000 7f127b5c9700 1 --2- 192.168.123.102:0/3619404938 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12741019f0 0x7f1274103de0 unknown :-1 s=CLOSED pgs=244 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:54.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.762+0000 7f127b5c9700 1 -- 192.168.123.102:0/3619404938 >> 192.168.123.102:0/3619404938 conn(0x7f12740fb3c0 msgr2=0x7f12740fd840 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:16:54.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.762+0000 7f127b5c9700 1 -- 192.168.123.102:0/3619404938 shutdown_connections
2026-03-10T10:16:54.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.762+0000 7f127b5c9700 1 -- 192.168.123.102:0/3619404938 wait complete.
2026-03-10T10:16:54.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.763+0000 7f127b5c9700 1 Processor -- start
2026-03-10T10:16:54.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.763+0000 7f127b5c9700 1 -- start start
2026-03-10T10:16:54.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.763+0000 7f127b5c9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12741019f0 0x7f12741989e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:16:54.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.763+0000 7f127b5c9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1274104320 0x7f1274198f20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:16:54.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.763+0000 7f127b5c9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1274199540 con 0x7f12741019f0
2026-03-10T10:16:54.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.763+0000 7f127b5c9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1274199680 con 0x7f1274104320
2026-03-10T10:16:54.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.764+0000 7f1279365700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12741019f0 0x7f12741989e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:16:54.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.764+0000 7f1278b64700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1274104320 0x7f1274198f20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:16:54.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.764+0000 7f1278b64700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1274104320 0x7f1274198f20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:40142/0 (socket says 192.168.123.102:40142)
2026-03-10T10:16:54.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.764+0000 7f1278b64700 1 -- 192.168.123.102:0/2287002012 learned_addr learned my addr 192.168.123.102:0/2287002012 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:16:54.766 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.764+0000 7f1278b64700 1 -- 192.168.123.102:0/2287002012 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12741019f0 msgr2=0x7f12741989e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:16:54.766 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.764+0000 7f1278b64700 1 --2- 192.168.123.102:0/2287002012 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12741019f0 0x7f12741989e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:54.766 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.764+0000 7f1278b64700 1 -- 192.168.123.102:0/2287002012 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f12700097e0 con 0x7f1274104320
2026-03-10T10:16:54.766 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.764+0000 7f1278b64700 1 --2- 192.168.123.102:0/2287002012 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1274104320 0x7f1274198f20 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f126400d8d0 tx=0x7f126400dbe0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:16:54.766 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.765+0000 7f126a7fc700 1 -- 192.168.123.102:0/2287002012 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1264009880 con 0x7f1274104320
2026-03-10T10:16:54.766 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.765+0000 7f127b5c9700 1 -- 192.168.123.102:0/2287002012 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f127419e130 con 0x7f1274104320
2026-03-10T10:16:54.767 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.765+0000 7f126a7fc700 1 -- 192.168.123.102:0/2287002012 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1264010460 con 0x7f1274104320
2026-03-10T10:16:54.767 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.765+0000 7f126a7fc700 1 -- 192.168.123.102:0/2287002012 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f126400f5d0 con 0x7f1274104320
2026-03-10T10:16:54.767 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.765+0000 7f127b5c9700 1 -- 192.168.123.102:0/2287002012 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f127419e680 con 0x7f1274104320
2026-03-10T10:16:54.768 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.766+0000 7f127b5c9700 1 -- 192.168.123.102:0/2287002012 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f12740fcfd0 con 0x7f1274104320
2026-03-10T10:16:54.768 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.767+0000 7f126a7fc700 1 -- 192.168.123.102:0/2287002012 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f1264010ab0 con 0x7f1274104320
2026-03-10T10:16:54.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.767+0000 7f126a7fc700 1 --2- 192.168.123.102:0/2287002012 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f126006c530 0x7f126006e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:16:54.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.767+0000 7f126a7fc700 1 -- 192.168.123.102:0/2287002012 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f126408b100 con 0x7f1274104320
2026-03-10T10:16:54.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.767+0000 7f1279365700 1 --2- 192.168.123.102:0/2287002012 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f126006c530 0x7f126006e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:16:54.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.768+0000 7f1279365700 1 --2- 192.168.123.102:0/2287002012 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f126006c530 0x7f126006e9f0 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7f127000b5c0 tx=0x7f1270005fb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:16:54.771 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.769+0000 7f126a7fc700 1 -- 192.168.123.102:0/2287002012 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f1264059410 con 0x7f1274104320
2026-03-10T10:16:54.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:54.901+0000 7f127b5c9700 1 -- 192.168.123.102:0/2287002012 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"} v 0) v1 -- 0x7f12740619a0 con 0x7f1274104320
2026-03-10T10:16:55.373 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:55 vm05 ceph-mon[59051]: osdmap e37: 6 total, 6 up, 6 in
2026-03-10T10:16:55.374 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:55 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:16:55.374 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:55 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:16:55.374 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:55 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:16:55.374 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:55 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.liatdh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-10T10:16:55.374 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:55 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.liatdh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
2026-03-10T10:16:55.374 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:55 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:16:55.374 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:55 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/2287002012' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]: dispatch
2026-03-10T10:16:55.374 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:55 vm05 ceph-mon[59051]: from='client.? ' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]: dispatch
2026-03-10T10:16:55.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:55.478+0000 7f126a7fc700 1 -- 192.168.123.102:0/2287002012 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]=0 v3) v1 ==== 105+0+0 (secure 0 0 0) 0x7f1264058fa0 con 0x7f1274104320
2026-03-10T10:16:55.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:55.481+0000 7f127b5c9700 1 -- 192.168.123.102:0/2287002012 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f126006c530 msgr2=0x7f126006e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:16:55.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:55.481+0000 7f127b5c9700 1 --2- 192.168.123.102:0/2287002012 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f126006c530 0x7f126006e9f0 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7f127000b5c0 tx=0x7f1270005fb0 comp rx=0 tx=0).stop
2026-03-10T10:16:55.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:55.481+0000 7f127b5c9700 1 -- 192.168.123.102:0/2287002012 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1274104320 msgr2=0x7f1274198f20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:16:55.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:55.481+0000 7f127b5c9700 1 --2- 192.168.123.102:0/2287002012 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1274104320 0x7f1274198f20 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f126400d8d0 tx=0x7f126400dbe0 comp rx=0 tx=0).stop
2026-03-10T10:16:55.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:55.481+0000 7f127b5c9700 1 -- 192.168.123.102:0/2287002012 shutdown_connections
2026-03-10T10:16:55.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:55.481+0000 7f127b5c9700 1 --2- 192.168.123.102:0/2287002012 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12741019f0 0x7f12741989e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:55.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:55.481+0000 7f127b5c9700 1 --2- 192.168.123.102:0/2287002012 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f126006c530 0x7f126006e9f0 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:55.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:55.481+0000 7f127b5c9700 1 --2- 192.168.123.102:0/2287002012 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1274104320 0x7f1274198f20 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:55.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:55.481+0000 7f127b5c9700 1 -- 192.168.123.102:0/2287002012 >> 192.168.123.102:0/2287002012 conn(0x7f12740fb3c0 msgr2=0x7f1274104f90 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:16:55.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:55.481+0000 7f127b5c9700 1 -- 192.168.123.102:0/2287002012 shutdown_connections
2026-03-10T10:16:55.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:55.481+0000 7f127b5c9700 1 -- 192.168.123.102:0/2287002012 wait complete.
2026-03-10T10:16:55.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:55 vm02 ceph-mon[50200]: osdmap e37: 6 total, 6 up, 6 in
2026-03-10T10:16:55.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:55 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:16:55.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:55 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:16:55.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:55 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:16:55.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:55 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.liatdh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-10T10:16:55.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:55 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.liatdh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
2026-03-10T10:16:55.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:55 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:16:55.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:55 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/2287002012' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]: dispatch
2026-03-10T10:16:55.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:55 vm02 ceph-mon[50200]: from='client.? ' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]: dispatch
2026-03-10T10:16:55.556 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T10:16:55.558 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local
2026-03-10T10:16:55.558 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph fs set cephfs allow_standby_replay true'
2026-03-10T10:16:55.744 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config
2026-03-10T10:16:56.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.094+0000 7f8532c1f700 1 -- 192.168.123.102:0/122731103 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f852c075c80 msgr2=0x7f852c078110 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:16:56.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.094+0000 7f8532c1f700 1 --2- 192.168.123.102:0/122731103 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f852c075c80 0x7f852c078110 secure :-1 s=READY pgs=245 cs=0 l=1 rev1=1 crypto rx=0x7f8524009230 tx=0x7f8524009260 comp rx=0 tx=0).stop
2026-03-10T10:16:56.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.094+0000 7f8532c1f700 1 -- 192.168.123.102:0/122731103 shutdown_connections
2026-03-10T10:16:56.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.094+0000 7f8532c1f700 1 --2- 192.168.123.102:0/122731103 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f852c075c80 0x7f852c078110 unknown :-1 s=CLOSED pgs=245 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:56.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.094+0000 7f8532c1f700 1 --2- 192.168.123.102:0/122731103 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f852c072d90 0x7f852c0731b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:56.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.094+0000 7f8532c1f700 1 -- 192.168.123.102:0/122731103 >> 192.168.123.102:0/122731103 conn(0x7f852c06dda0 msgr2=0x7f852c070220 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.094+0000 7f8532c1f700 1 -- 192.168.123.102:0/122731103 shutdown_connections
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.094+0000 7f8532c1f700 1 -- 192.168.123.102:0/122731103 wait complete.
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.095+0000 7f8532c1f700 1 Processor -- start
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.095+0000 7f8532c1f700 1 -- start start
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.096+0000 7f8532c1f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f852c072d90 0x7f852c12bdb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.096+0000 7f8532c1f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f852c083ac0 0x7f852c12e300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.096+0000 7f8532c1f700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f852c12e840 con 0x7f852c072d90
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.096+0000 7f8532c1f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f852c12e9b0 con 0x7f852c083ac0
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.096+0000 7f85309bb700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f852c072d90 0x7f852c12bdb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.096+0000 7f85309bb700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f852c072d90 0x7f852c12bdb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33028/0 (socket says 192.168.123.102:33028)
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.096+0000 7f85309bb700 1 -- 192.168.123.102:0/1772144127 learned_addr learned my addr 192.168.123.102:0/1772144127 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.096+0000 7f85309bb700 1 -- 192.168.123.102:0/1772144127 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f852c083ac0 msgr2=0x7f852c12e300 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:16:56.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.096+0000 7f85309bb700 1 --2- 192.168.123.102:0/1772144127 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f852c083ac0 0x7f852c12e300 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:16:56.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.096+0000 7f85309bb700 1 -- 192.168.123.102:0/1772144127 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8524008ee0 con 0x7f852c072d90
2026-03-10T10:16:56.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.096+0000 7f85309bb700 1 --2- 192.168.123.102:0/1772144127 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f852c072d90 0x7f852c12bdb0 secure :-1 s=READY pgs=246 cs=0 l=1 rev1=1 crypto rx=0x7f851c007c00 tx=0x7f851c00f130 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:16:56.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.097+0000 7f8529ffb700 1 -- 192.168.123.102:0/1772144127 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f851c010040 con 0x7f852c072d90
2026-03-10T10:16:56.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.097+0000 7f8532c1f700 1 -- 192.168.123.102:0/1772144127 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f852c12ec30 con 0x7f852c072d90
2026-03-10T10:16:56.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.097+0000 7f8532c1f700 1 -- 192.168.123.102:0/1772144127 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f852c12f180 con 0x7f852c072d90
2026-03-10T10:16:56.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.098+0000 7f8529ffb700 1 -- 192.168.123.102:0/1772144127 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f851c015470 con 0x7f852c072d90
2026-03-10T10:16:56.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.098+0000 7f8529ffb700 1 -- 192.168.123.102:0/1772144127 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f851c014670 con 0x7f852c072d90
2026-03-10T10:16:56.101 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.099+0000 7f8532c1f700 1 -- 192.168.123.102:0/1772144127 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8518005320 con 0x7f852c072d90
2026-03-10T10:16:56.101 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.099+0000 7f8529ffb700 1 -- 192.168.123.102:0/1772144127 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f851c014870 con 0x7f852c072d90
2026-03-10T10:16:56.103 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.099+0000 7f8529ffb700 1 --2- 192.168.123.102:0/1772144127 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f851406c600 0x7f851406eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:16:56.103 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.099+0000 7f8529ffb700 1 -- 192.168.123.102:0/1772144127 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(38..38 src has 1..38) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f851c015650 con 0x7f852c072d90
2026-03-10T10:16:56.103 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.100+0000 7f852bfff700 1 --2- 192.168.123.102:0/1772144127 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f851406c600 0x7f851406eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:16:56.103 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.100+0000 7f852bfff700 1 --2- 192.168.123.102:0/1772144127 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f851406c600 0x7f851406eac0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f852400b930 tx=0x7f8524019f90 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:16:56.104 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.102+0000 7f8529ffb700 1 -- 192.168.123.102:0/1772144127 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0
0 0) 0x7f851c05b220 con 0x7f852c072d90 2026-03-10T10:16:56.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:56.259+0000 7f8532c1f700 1 -- 192.168.123.102:0/1772144127 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"} v 0) v1 -- 0x7f8518005f70 con 0x7f852c072d90 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: Deploying daemon mds.cephfs.vm05.liatdh on vm05 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: pgmap v75: 65 pgs: 6 creating+peering, 4 active+clean, 55 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: osdmap e38: 6 total, 6 up, 6 in 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.stcvsz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.stcvsz", "caps": ["mon", "profile 
mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: Deploying daemon mds.cephfs.vm02.stcvsz on vm02 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: mds.? [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] up:boot 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: from='client.? ' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]': finished 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: mds.? [v2:192.168.123.105:6824/2054341310,v1:192.168.123.105:6825/2054341310] up:boot 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: daemon mds.cephfs.vm05.liatdh assigned to filesystem cephfs as rank 0 (now has 1 ranks) 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds) 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: fsmap cephfs:0 2 up:standby 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.zymcrs"}]: dispatch 
2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:creating} 1 up:standby 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: Cluster is now healthy 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: daemon mds.cephfs.vm05.liatdh is now active in filesystem cephfs as rank 0 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: mds.? [v2:192.168.123.105:6824/2054341310,v1:192.168.123.105:6825/2054341310] up:active 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 1 up:standby 2026-03-10T10:16:56.413 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:56 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/1772144127' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]: dispatch 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: Deploying daemon mds.cephfs.vm05.liatdh on vm05 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: pgmap v75: 65 pgs: 6 creating+peering, 4 active+clean, 55 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: osdmap e38: 6 total, 6 up, 6 in 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.stcvsz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.stcvsz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: Deploying daemon mds.cephfs.vm02.stcvsz on vm02 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: mds.? [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] up:boot 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: from='client.? ' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]': finished 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: mds.? [v2:192.168.123.105:6824/2054341310,v1:192.168.123.105:6825/2054341310] up:boot 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: daemon mds.cephfs.vm05.liatdh assigned to filesystem cephfs as rank 0 (now has 1 ranks) 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds) 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: fsmap cephfs:0 2 up:standby 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.zymcrs"}]: dispatch 2026-03-10T10:16:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch 
2026-03-10T10:16:56.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:creating} 1 up:standby 2026-03-10T10:16:56.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T10:16:56.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: Cluster is now healthy 2026-03-10T10:16:56.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: daemon mds.cephfs.vm05.liatdh is now active in filesystem cephfs as rank 0 2026-03-10T10:16:56.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: mds.? [v2:192.168.123.105:6824/2054341310,v1:192.168.123.105:6825/2054341310] up:active 2026-03-10T10:16:56.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 1 up:standby 2026-03-10T10:16:56.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:56 vm05 ceph-mon[59051]: from='client.? 
192.168.123.102:0/1772144127' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]: dispatch 2026-03-10T10:16:57.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.179+0000 7f8529ffb700 1 -- 192.168.123.102:0/1772144127 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]=0 v6) v1 ==== 121+0+0 (secure 0 0 0) 0x7f851c05adb0 con 0x7f852c072d90 2026-03-10T10:16:57.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.182+0000 7f8532c1f700 1 -- 192.168.123.102:0/1772144127 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f851406c600 msgr2=0x7f851406eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:57.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.182+0000 7f8532c1f700 1 --2- 192.168.123.102:0/1772144127 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f851406c600 0x7f851406eac0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f852400b930 tx=0x7f8524019f90 comp rx=0 tx=0).stop 2026-03-10T10:16:57.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.182+0000 7f8532c1f700 1 -- 192.168.123.102:0/1772144127 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f852c072d90 msgr2=0x7f852c12bdb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:57.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.182+0000 7f8532c1f700 1 --2- 192.168.123.102:0/1772144127 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f852c072d90 0x7f852c12bdb0 secure :-1 s=READY pgs=246 cs=0 l=1 rev1=1 crypto rx=0x7f851c007c00 tx=0x7f851c00f130 comp rx=0 tx=0).stop 2026-03-10T10:16:57.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.183+0000 7f8532c1f700 1 -- 192.168.123.102:0/1772144127 shutdown_connections 2026-03-10T10:16:57.184 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.183+0000 7f8532c1f700 1 --2- 192.168.123.102:0/1772144127 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f852c072d90 0x7f852c12bdb0 unknown :-1 s=CLOSED pgs=246 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:57.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.183+0000 7f8532c1f700 1 --2- 192.168.123.102:0/1772144127 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f851406c600 0x7f851406eac0 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:57.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.183+0000 7f8532c1f700 1 --2- 192.168.123.102:0/1772144127 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f852c083ac0 0x7f852c12e300 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:57.185 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.183+0000 7f8532c1f700 1 -- 192.168.123.102:0/1772144127 >> 192.168.123.102:0/1772144127 conn(0x7f852c06dda0 msgr2=0x7f852c077540 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:57.185 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.183+0000 7f8532c1f700 1 -- 192.168.123.102:0/1772144127 shutdown_connections 2026-03-10T10:16:57.185 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.183+0000 7f8532c1f700 1 -- 192.168.123.102:0/1772144127 wait complete. 2026-03-10T10:16:57.245 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T10:16:57.248 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local 2026-03-10T10:16:57.248 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph fs set cephfs inline_data false' 2026-03-10T10:16:57.409 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:57.693 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:57.693 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:57.693 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:57.693 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.sudjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:16:57.693 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.sudjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-10T10:16:57.693 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:57.693 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: Deploying daemon mds.cephfs.vm05.sudjys on vm05 2026-03-10T10:16:57.693 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: pgmap v77: 65 pgs: 6 creating+peering, 47 active+clean, 12 unknown; 450 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 767 B/s wr, 6 op/s 2026-03-10T10:16:57.693 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/1772144127' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]': finished 2026-03-10T10:16:57.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: mds.? [v2:192.168.123.102:6828/3981676048,v1:192.168.123.102:6829/3981676048] up:boot 2026-03-10T10:16:57.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 2 up:standby 2026-03-10T10:16:57.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.stcvsz"}]: dispatch 2026-03-10T10:16:57.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 1 up:standby-replay 1 up:standby 2026-03-10T10:16:57.694 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:57 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:57.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.691+0000 7faf92b9c700 1 -- 192.168.123.102:0/3439005615 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faf8c103140 msgr2=0x7faf8c103560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:57.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.691+0000 7faf92b9c700 1 
--2- 192.168.123.102:0/3439005615 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faf8c103140 0x7faf8c103560 secure :-1 s=READY pgs=251 cs=0 l=1 rev1=1 crypto rx=0x7faf7c009b50 tx=0x7faf7c009e60 comp rx=0 tx=0).stop 2026-03-10T10:16:57.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.692+0000 7faf92b9c700 1 -- 192.168.123.102:0/3439005615 shutdown_connections 2026-03-10T10:16:57.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.692+0000 7faf92b9c700 1 --2- 192.168.123.102:0/3439005615 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faf8c104340 0x7faf8c1047a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:57.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.692+0000 7faf92b9c700 1 --2- 192.168.123.102:0/3439005615 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faf8c103140 0x7faf8c103560 unknown :-1 s=CLOSED pgs=251 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:57.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.692+0000 7faf92b9c700 1 -- 192.168.123.102:0/3439005615 >> 192.168.123.102:0/3439005615 conn(0x7faf8c0fe6c0 msgr2=0x7faf8c100b20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:57.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.692+0000 7faf92b9c700 1 -- 192.168.123.102:0/3439005615 shutdown_connections 2026-03-10T10:16:57.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.692+0000 7faf92b9c700 1 -- 192.168.123.102:0/3439005615 wait complete. 
2026-03-10T10:16:57.695 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.693+0000 7faf92b9c700 1 Processor -- start 2026-03-10T10:16:57.695 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.693+0000 7faf92b9c700 1 -- start start 2026-03-10T10:16:57.695 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.693+0000 7faf92b9c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faf8c103140 0x7faf8c198a60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:57.695 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.694+0000 7faf92b9c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faf8c104340 0x7faf8c198fa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:57.695 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.694+0000 7faf92b9c700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7faf8c1995c0 con 0x7faf8c103140 2026-03-10T10:16:57.695 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.694+0000 7faf92b9c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7faf8c199700 con 0x7faf8c104340 2026-03-10T10:16:57.696 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.694+0000 7faf8bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faf8c104340 0x7faf8c198fa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:57.696 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.694+0000 7faf8bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faf8c104340 0x7faf8c198fa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:57316/0 (socket says 192.168.123.102:57316) 2026-03-10T10:16:57.696 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.694+0000 7faf8bfff700 1 -- 192.168.123.102:0/898772508 learned_addr learned my addr 192.168.123.102:0/898772508 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:57.696 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.694+0000 7faf90938700 1 --2- 192.168.123.102:0/898772508 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faf8c103140 0x7faf8c198a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:57.696 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.694+0000 7faf8bfff700 1 -- 192.168.123.102:0/898772508 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faf8c103140 msgr2=0x7faf8c198a60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:57.696 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.694+0000 7faf8bfff700 1 --2- 192.168.123.102:0/898772508 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faf8c103140 0x7faf8c198a60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:57.696 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.694+0000 7faf8bfff700 1 -- 192.168.123.102:0/898772508 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7faf7c0097e0 con 0x7faf8c104340 2026-03-10T10:16:57.696 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.694+0000 7faf90938700 1 --2- 192.168.123.102:0/898772508 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faf8c103140 0x7faf8c198a60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:16:57.696 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.695+0000 7faf8bfff700 1 --2- 192.168.123.102:0/898772508 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faf8c104340 0x7faf8c198fa0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7faf8000eb10 tx=0x7faf8000eed0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:57.696 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.695+0000 7faf89ffb700 1 -- 192.168.123.102:0/898772508 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7faf8000cca0 con 0x7faf8c104340 2026-03-10T10:16:57.697 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.695+0000 7faf89ffb700 1 -- 192.168.123.102:0/898772508 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7faf8000ce00 con 0x7faf8c104340 2026-03-10T10:16:57.697 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.695+0000 7faf92b9c700 1 -- 192.168.123.102:0/898772508 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7faf8c19e1b0 con 0x7faf8c104340 2026-03-10T10:16:57.697 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.695+0000 7faf92b9c700 1 -- 192.168.123.102:0/898772508 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7faf8c19e700 con 0x7faf8c104340 2026-03-10T10:16:57.697 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.696+0000 7faf89ffb700 1 -- 192.168.123.102:0/898772508 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7faf80018990 con 0x7faf8c104340 2026-03-10T10:16:57.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.696+0000 7faf92b9c700 1 -- 192.168.123.102:0/898772508 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) 
v1 -- 0x7faf8c066e80 con 0x7faf8c104340 2026-03-10T10:16:57.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.698+0000 7faf89ffb700 1 -- 192.168.123.102:0/898772508 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7faf80018b80 con 0x7faf8c104340 2026-03-10T10:16:57.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.699+0000 7faf89ffb700 1 --2- 192.168.123.102:0/898772508 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7faf7406c490 0x7faf7406e950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:57.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.699+0000 7faf89ffb700 1 -- 192.168.123.102:0/898772508 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(38..38 src has 1..38) v4 ==== 5276+0+0 (secure 0 0 0) 0x7faf80027080 con 0x7faf8c104340 2026-03-10T10:16:57.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.699+0000 7faf90938700 1 --2- 192.168.123.102:0/898772508 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7faf7406c490 0x7faf7406e950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:57.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.700+0000 7faf90938700 1 --2- 192.168.123.102:0/898772508 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7faf7406c490 0x7faf7406e950 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7faf7c006010 tx=0x7faf7c0058e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:57.704 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.702+0000 7faf89ffb700 1 -- 192.168.123.102:0/898772508 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7faf8005a6d0 con 0x7faf8c104340 
2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.sudjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.sudjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: Deploying daemon mds.cephfs.vm05.sudjys on vm05 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: pgmap v77: 65 pgs: 6 creating+peering, 47 active+clean, 12 unknown; 450 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 767 B/s wr, 6 op/s 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: from='client.? 
192.168.123.102:0/1772144127' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]': finished 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: mds.? [v2:192.168.123.102:6828/3981676048,v1:192.168.123.102:6829/3981676048] up:boot 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 2 up:standby 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.stcvsz"}]: dispatch 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 1 up:standby-replay 1 up:standby 2026-03-10T10:16:57.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:57 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:57.843 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:57.841+0000 7faf92b9c700 1 -- 192.168.123.102:0/898772508 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"} v 0) v1 -- 0x7faf8c19e9e0 con 0x7faf8c104340 2026-03-10T10:16:58.195 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.193+0000 7faf89ffb700 1 -- 192.168.123.102:0/898772508 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"}]=0 inline data disabled v8) v1 ==== 133+0+0 (secure 0 0 0) 0x7faf8005a260 con 0x7faf8c104340 2026-03-10T10:16:58.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.196+0000 7faf92b9c700 1 -- 192.168.123.102:0/898772508 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7faf7406c490 msgr2=0x7faf7406e950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:58.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.196+0000 7faf92b9c700 1 --2- 192.168.123.102:0/898772508 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7faf7406c490 0x7faf7406e950 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7faf7c006010 tx=0x7faf7c0058e0 comp rx=0 tx=0).stop 2026-03-10T10:16:58.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.196+0000 7faf92b9c700 1 -- 192.168.123.102:0/898772508 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faf8c104340 msgr2=0x7faf8c198fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:58.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.196+0000 7faf92b9c700 1 --2- 192.168.123.102:0/898772508 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faf8c104340 0x7faf8c198fa0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7faf8000eb10 tx=0x7faf8000eed0 comp rx=0 tx=0).stop 2026-03-10T10:16:58.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.196+0000 7faf92b9c700 1 -- 192.168.123.102:0/898772508 shutdown_connections 2026-03-10T10:16:58.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.196+0000 7faf92b9c700 1 --2- 192.168.123.102:0/898772508 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faf8c103140 0x7faf8c198a60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:58.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.196+0000 7faf92b9c700 1 --2- 192.168.123.102:0/898772508 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7faf7406c490 0x7faf7406e950 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:58.198 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.196+0000 7faf92b9c700 1 --2- 192.168.123.102:0/898772508 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faf8c104340 0x7faf8c198fa0 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:58.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.196+0000 7faf92b9c700 1 -- 192.168.123.102:0/898772508 >> 192.168.123.102:0/898772508 conn(0x7faf8c0fe6c0 msgr2=0x7faf8c107570 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:58.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.197+0000 7faf92b9c700 1 -- 192.168.123.102:0/898772508 shutdown_connections 2026-03-10T10:16:58.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.197+0000 7faf92b9c700 1 -- 192.168.123.102:0/898772508 wait complete. 2026-03-10T10:16:58.203 INFO:teuthology.orchestra.run.vm02.stderr:inline data disabled 2026-03-10T10:16:58.276 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T10:16:58.279 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local 2026-03-10T10:16:58.279 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph fs dump' 2026-03-10T10:16:58.470 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:58.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:58 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:58.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:58 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:58.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:58 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:58.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:58 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:58.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:58 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:58.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:58 vm02 ceph-mon[50200]: from='client.? ' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"}]: dispatch 2026-03-10T10:16:58.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:58 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/898772508' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"}]: dispatch 2026-03-10T10:16:58.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:58 vm02 ceph-mon[50200]: mds.? [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] up:boot 2026-03-10T10:16:58.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:58 vm02 ceph-mon[50200]: from='client.? ' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"}]': finished 2026-03-10T10:16:58.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:58 vm02 ceph-mon[50200]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:16:58.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:58 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch 2026-03-10T10:16:58.948 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.946+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3063191109 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7b8107d90 msgr2=0x7fa7b810a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:58.948 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.946+0000 7fa7beb2a700 1 --2- 192.168.123.102:0/3063191109 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7b8107d90 0x7fa7b810a1c0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fa7b4009a60 tx=0x7fa7b4009d70 comp rx=0 tx=0).stop 2026-03-10T10:16:58.948 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.947+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3063191109 shutdown_connections 2026-03-10T10:16:58.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.947+0000 7fa7beb2a700 1 --2- 192.168.123.102:0/3063191109 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7b810a700 0x7fa7b810cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:58.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.947+0000 7fa7beb2a700 1 --2- 192.168.123.102:0/3063191109 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7b8107d90 0x7fa7b810a1c0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:58.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.947+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3063191109 >> 192.168.123.102:0/3063191109 conn(0x7fa7b806dda0 msgr2=0x7fa7b8070220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:58.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.947+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3063191109 shutdown_connections 2026-03-10T10:16:58.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.947+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3063191109 wait complete. 
2026-03-10T10:16:58.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.947+0000 7fa7beb2a700 1 Processor -- start 2026-03-10T10:16:58.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.948+0000 7fa7beb2a700 1 -- start start 2026-03-10T10:16:58.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.948+0000 7fa7beb2a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7b8107d90 0x7fa7b81169a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:58.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.948+0000 7fa7beb2a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7b810a700 0x7fa7b8116ee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:58.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.948+0000 7fa7beb2a700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa7b8117500 con 0x7fa7b8107d90 2026-03-10T10:16:58.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.948+0000 7fa7beb2a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa7b8117640 con 0x7fa7b810a700 2026-03-10T10:16:58.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.948+0000 7fa7affff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7b810a700 0x7fa7b8116ee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:58.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.948+0000 7fa7bc8c6700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7b8107d90 0x7fa7b81169a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:16:58.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.949+0000 7fa7bc8c6700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7b8107d90 0x7fa7b81169a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:51558/0 (socket says 192.168.123.102:51558) 2026-03-10T10:16:58.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.949+0000 7fa7bc8c6700 1 -- 192.168.123.102:0/3851817467 learned_addr learned my addr 192.168.123.102:0/3851817467 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:58.952 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.950+0000 7fa7bc8c6700 1 -- 192.168.123.102:0/3851817467 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7b810a700 msgr2=0x7fa7b8116ee0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:58.952 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.950+0000 7fa7bc8c6700 1 --2- 192.168.123.102:0/3851817467 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7b810a700 0x7fa7b8116ee0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:58.952 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.950+0000 7fa7bc8c6700 1 -- 192.168.123.102:0/3851817467 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa7a80097e0 con 0x7fa7b8107d90 2026-03-10T10:16:58.952 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.950+0000 7fa7bc8c6700 1 --2- 192.168.123.102:0/3851817467 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7b8107d90 0x7fa7b81169a0 secure :-1 s=READY pgs=252 cs=0 l=1 rev1=1 crypto rx=0x7fa7b40038c0 tx=0x7fa7b40044b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:58.953 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.951+0000 7fa7adffb700 1 -- 192.168.123.102:0/3851817467 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa7b401d070 con 0x7fa7b8107d90 2026-03-10T10:16:58.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.951+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3851817467 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa7b4009710 con 0x7fa7b8107d90 2026-03-10T10:16:58.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.951+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3851817467 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa7b81b3750 con 0x7fa7b8107d90 2026-03-10T10:16:58.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.953+0000 7fa7adffb700 1 -- 192.168.123.102:0/3851817467 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa7b4022950 con 0x7fa7b8107d90 2026-03-10T10:16:58.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.953+0000 7fa7adffb700 1 -- 192.168.123.102:0/3851817467 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa7b400fa50 con 0x7fa7b8107d90 2026-03-10T10:16:58.956 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.954+0000 7fa7adffb700 1 -- 192.168.123.102:0/3851817467 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fa7b400fc30 con 0x7fa7b8107d90 2026-03-10T10:16:58.956 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.955+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3851817467 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa7a4005320 con 0x7fa7b8107d90 2026-03-10T10:16:58.960 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.958+0000 7fa7adffb700 1 --2- 
192.168.123.102:0/3851817467 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa7a006c6d0 0x7fa7a006eb90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:58.960 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.958+0000 7fa7affff700 1 --2- 192.168.123.102:0/3851817467 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa7a006c6d0 0x7fa7a006eb90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:58.960 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.959+0000 7fa7affff700 1 --2- 192.168.123.102:0/3851817467 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa7a006c6d0 0x7fa7a006eb90 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7fa7a8005fd0 tx=0x7fa7a8009500 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:58.960 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.959+0000 7fa7adffb700 1 -- 192.168.123.102:0/3851817467 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(38..38 src has 1..38) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fa7b408df20 con 0x7fa7b8107d90 2026-03-10T10:16:58.961 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:58.959+0000 7fa7adffb700 1 -- 192.168.123.102:0/3851817467 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fa7b400f460 con 0x7fa7b8107d90 2026-03-10T10:16:59.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:58 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:59.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:58 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:59.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:16:58 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:59.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:58 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:16:59.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:58 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:59.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:58 vm05 ceph-mon[59051]: from='client.? ' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"}]: dispatch 2026-03-10T10:16:59.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:58 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/898772508' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"}]: dispatch 2026-03-10T10:16:59.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:58 vm05 ceph-mon[59051]: mds.? [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] up:boot 2026-03-10T10:16:59.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:58 vm05 ceph-mon[59051]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "false"}]': finished 2026-03-10T10:16:59.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:58 vm05 ceph-mon[59051]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:16:59.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:58 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch 2026-03-10T10:16:59.091 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.087+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3851817467 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fa7a4006200 con 0x7fa7b8107d90 2026-03-10T10:16:59.091 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.087+0000 7fa7adffb700 1 -- 192.168.123.102:0/3851817467 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 8 v8) v1 ==== 75+0+1799 (secure 0 0 0) 0x7fa7b405c360 con 0x7fa7b8107d90 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:e8 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 
2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:epoch 8 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:16:58.187428+0000 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:16:59.092 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 0 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:up {0=24275} 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 
2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{0:24275} state up:active seq 2 addr [v2:192.168.123.105:6824/2054341310,v1:192.168.123.105:6825/2054341310] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{0:14480} state up:standby-replay seq 1 addr [v2:192.168.123.102:6828/3981676048,v1:192.168.123.102:6829/3981676048] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{-1:14464} state up:standby seq 1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:16:59.093 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{-1:14484} state up:standby seq 1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:16:59.094 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.092+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3851817467 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa7a006c6d0 msgr2=0x7fa7a006eb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:59.094 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.092+0000 7fa7beb2a700 1 --2- 192.168.123.102:0/3851817467 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa7a006c6d0 0x7fa7a006eb90 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7fa7a8005fd0 tx=0x7fa7a8009500 comp rx=0 tx=0).stop 2026-03-10T10:16:59.094 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.092+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3851817467 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7b8107d90 msgr2=0x7fa7b81169a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:59.094 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.092+0000 7fa7beb2a700 1 --2- 192.168.123.102:0/3851817467 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7b8107d90 0x7fa7b81169a0 secure :-1 s=READY pgs=252 cs=0 l=1 rev1=1 crypto rx=0x7fa7b40038c0 tx=0x7fa7b40044b0 comp rx=0 tx=0).stop 2026-03-10T10:16:59.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.093+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3851817467 shutdown_connections 2026-03-10T10:16:59.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.093+0000 7fa7beb2a700 1 --2- 192.168.123.102:0/3851817467 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7b8107d90 0x7fa7b81169a0 unknown :-1 s=CLOSED pgs=252 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:59.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.093+0000 7fa7beb2a700 1 --2- 192.168.123.102:0/3851817467 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa7a006c6d0 0x7fa7a006eb90 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:59.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.093+0000 7fa7beb2a700 1 --2- 192.168.123.102:0/3851817467 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7b810a700 0x7fa7b8116ee0 
unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:59.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.093+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3851817467 >> 192.168.123.102:0/3851817467 conn(0x7fa7b806dda0 msgr2=0x7fa7b810c070 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:59.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.093+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3851817467 shutdown_connections 2026-03-10T10:16:59.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.093+0000 7fa7beb2a700 1 -- 192.168.123.102:0/3851817467 wait complete. 2026-03-10T10:16:59.098 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 8 2026-03-10T10:16:59.175 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph --format=json fs dump | jq -e ".filesystems | length == 1"' 2026-03-10T10:16:59.419 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:16:59.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.739+0000 7f1b6a4cc700 1 -- 192.168.123.102:0/3744300878 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b64102070 msgr2=0x7f1b641024f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:59.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.739+0000 7f1b6a4cc700 1 --2- 192.168.123.102:0/3744300878 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b64102070 0x7f1b641024f0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f1b54009b00 tx=0x7f1b54009e10 comp rx=0 tx=0).stop 2026-03-10T10:16:59.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.740+0000 7f1b6a4cc700 1 -- 192.168.123.102:0/3744300878 
shutdown_connections 2026-03-10T10:16:59.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.740+0000 7f1b6a4cc700 1 --2- 192.168.123.102:0/3744300878 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b64102070 0x7f1b641024f0 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:59.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.740+0000 7f1b6a4cc700 1 --2- 192.168.123.102:0/3744300878 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b64100f10 0x7f1b64101330 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:59.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.740+0000 7f1b6a4cc700 1 -- 192.168.123.102:0/3744300878 >> 192.168.123.102:0/3744300878 conn(0x7f1b640fc4b0 msgr2=0x7f1b640fe8f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:59.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.741+0000 7f1b6a4cc700 1 -- 192.168.123.102:0/3744300878 shutdown_connections 2026-03-10T10:16:59.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.741+0000 7f1b6a4cc700 1 -- 192.168.123.102:0/3744300878 wait complete. 
2026-03-10T10:16:59.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.741+0000 7f1b6a4cc700 1 Processor -- start 2026-03-10T10:16:59.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.741+0000 7f1b6a4cc700 1 -- start start 2026-03-10T10:16:59.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.742+0000 7f1b6a4cc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b64100f10 0x7f1b64074b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:59.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.742+0000 7f1b6a4cc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b64102070 0x7f1b64073180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:59.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.742+0000 7f1b6a4cc700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1b6419a400 con 0x7f1b64102070 2026-03-10T10:16:59.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.742+0000 7f1b6a4cc700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1b6419a540 con 0x7f1b64100f10 2026-03-10T10:16:59.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.742+0000 7f1b637fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b64102070 0x7f1b64073180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:59.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.742+0000 7f1b63fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b64100f10 0x7f1b64074b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:16:59.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.742+0000 7f1b63fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b64100f10 0x7f1b64074b30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:57366/0 (socket says 192.168.123.102:57366) 2026-03-10T10:16:59.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.742+0000 7f1b63fff700 1 -- 192.168.123.102:0/3328345739 learned_addr learned my addr 192.168.123.102:0/3328345739 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:16:59.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.742+0000 7f1b63fff700 1 -- 192.168.123.102:0/3328345739 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b64102070 msgr2=0x7f1b64073180 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:59.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.742+0000 7f1b63fff700 1 --2- 192.168.123.102:0/3328345739 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b64102070 0x7f1b64073180 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:59.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.742+0000 7f1b63fff700 1 -- 192.168.123.102:0/3328345739 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1b4c009710 con 0x7f1b64100f10 2026-03-10T10:16:59.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.742+0000 7f1b637fe700 1 --2- 192.168.123.102:0/3328345739 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b64102070 0x7f1b64073180 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-10T10:16:59.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.743+0000 7f1b63fff700 1 --2- 192.168.123.102:0/3328345739 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b64100f10 0x7f1b64074b30 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f1b4c00ec80 tx=0x7f1b4c00ef90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:16:59.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.743+0000 7f1b617fa700 1 -- 192.168.123.102:0/3328345739 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1b4c00ccd0 con 0x7f1b64100f10 2026-03-10T10:16:59.747 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.743+0000 7f1b617fa700 1 -- 192.168.123.102:0/3328345739 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1b4c004500 con 0x7f1b64100f10 2026-03-10T10:16:59.747 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.743+0000 7f1b617fa700 1 -- 192.168.123.102:0/3328345739 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1b4c0052c0 con 0x7f1b64100f10 2026-03-10T10:16:59.747 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.743+0000 7f1b6a4cc700 1 -- 192.168.123.102:0/3328345739 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1b540097e0 con 0x7f1b64100f10 2026-03-10T10:16:59.747 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.743+0000 7f1b6a4cc700 1 -- 192.168.123.102:0/3328345739 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1b64073a50 con 0x7f1b64100f10 2026-03-10T10:16:59.747 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.744+0000 7f1b617fa700 1 -- 192.168.123.102:0/3328345739 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f1b4c004a50 con 
0x7f1b64100f10 2026-03-10T10:16:59.747 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.744+0000 7f1b6a4cc700 1 -- 192.168.123.102:0/3328345739 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1b64066e80 con 0x7f1b64100f10 2026-03-10T10:16:59.748 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.746+0000 7f1b617fa700 1 --2- 192.168.123.102:0/3328345739 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1b5006c4e0 0x7f1b5006e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:16:59.748 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.746+0000 7f1b637fe700 1 --2- 192.168.123.102:0/3328345739 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1b5006c4e0 0x7f1b5006e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:16:59.748 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.746+0000 7f1b617fa700 1 -- 192.168.123.102:0/3328345739 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f1b4c014070 con 0x7f1b64100f10 2026-03-10T10:16:59.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.753+0000 7f1b617fa700 1 -- 192.168.123.102:0/3328345739 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f1b4c059ec0 con 0x7f1b64100f10 2026-03-10T10:16:59.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.753+0000 7f1b637fe700 1 --2- 192.168.123.102:0/3328345739 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1b5006c4e0 0x7f1b5006e9a0 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f1b54005230 tx=0x7f1b5401a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 
in_seq=0 out_seq=0 2026-03-10T10:16:59.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.907+0000 7f1b6a4cc700 1 -- 192.168.123.102:0/3328345739 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "format": "json"} v 0) v1 -- 0x7f1b64109b80 con 0x7f1b64100f10 2026-03-10T10:16:59.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.907+0000 7f1b617fa700 1 -- 192.168.123.102:0/3328345739 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "format": "json"}]=0 dumped fsmap epoch 11 v11) v1 ==== 94+0+3187 (secure 0 0 0) 0x7f1b4c01cc90 con 0x7f1b64100f10 2026-03-10T10:16:59.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.910+0000 7f1b5affd700 1 -- 192.168.123.102:0/3328345739 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1b5006c4e0 msgr2=0x7f1b5006e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:59.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.910+0000 7f1b5affd700 1 --2- 192.168.123.102:0/3328345739 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1b5006c4e0 0x7f1b5006e9a0 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f1b54005230 tx=0x7f1b5401a040 comp rx=0 tx=0).stop 2026-03-10T10:16:59.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.910+0000 7f1b5affd700 1 -- 192.168.123.102:0/3328345739 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b64100f10 msgr2=0x7f1b64074b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:16:59.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.910+0000 7f1b5affd700 1 --2- 192.168.123.102:0/3328345739 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b64100f10 0x7f1b64074b30 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f1b4c00ec80 tx=0x7f1b4c00ef90 comp rx=0 tx=0).stop 2026-03-10T10:16:59.913 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.910+0000 7f1b5affd700 1 -- 192.168.123.102:0/3328345739 shutdown_connections 2026-03-10T10:16:59.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.910+0000 7f1b5affd700 1 --2- 192.168.123.102:0/3328345739 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f1b5006c4e0 0x7f1b5006e9a0 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:59.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.910+0000 7f1b5affd700 1 --2- 192.168.123.102:0/3328345739 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b64100f10 0x7f1b64074b30 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:59.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.910+0000 7f1b5affd700 1 --2- 192.168.123.102:0/3328345739 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1b64102070 0x7f1b64073180 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:16:59.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.910+0000 7f1b5affd700 1 -- 192.168.123.102:0/3328345739 >> 192.168.123.102:0/3328345739 conn(0x7f1b640fc4b0 msgr2=0x7f1b64105330 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:16:59.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.911+0000 7f1b5affd700 1 -- 192.168.123.102:0/3328345739 shutdown_connections 2026-03-10T10:16:59.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:16:59.911+0000 7f1b5affd700 1 -- 192.168.123.102:0/3328345739 wait complete. 
2026-03-10T10:16:59.915 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 11 2026-03-10T10:16:59.924 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:16:59.982 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'while ! ceph --format=json mds versions | jq -e ". | add == 4"; do sleep 1; done' 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: pgmap v78: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 371 B/s rd, 2.4 KiB/s wr, 7 op/s 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/3851817467' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: mds.? [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] up:standby 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: Dropping low affinity standby-replay daemon mds.cephfs.vm02.stcvsz in favor of higher affinity standby. 
2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: Dropping low affinity active daemon mds.cephfs.vm05.liatdh in favor of higher affinity standby. 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: Replacing daemon mds.cephfs.vm05.liatdh as rank 0 with standby daemon mds.cephfs.vm02.zymcrs 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 1 up:standby-replay 1 up:standby 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: osdmap e39: 6 total, 6 up, 6 in 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: fsmap cephfs:1/1 {0=cephfs.vm02.zymcrs=up:replay} 1 up:standby 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:17:00.023 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:16:59 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:00.175 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:17:00.287 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: pgmap v78: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 371 B/s rd, 2.4 KiB/s wr, 7 op/s 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/3851817467' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: mds.? [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] up:standby 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: Dropping low affinity standby-replay daemon mds.cephfs.vm02.stcvsz in favor of higher affinity standby. 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: Dropping low affinity active daemon mds.cephfs.vm05.liatdh in favor of higher affinity standby. 
2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: Replacing daemon mds.cephfs.vm05.liatdh as rank 0 with standby daemon mds.cephfs.vm02.zymcrs 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 1 up:standby-replay 1 up:standby 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: osdmap e39: 6 total, 6 up, 6 in 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: fsmap cephfs:1/1 {0=cephfs.vm02.zymcrs=up:replay} 1 up:standby 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:17:00.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:16:59 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:00.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.501+0000 7f2a53da2700 1 -- 192.168.123.102:0/1603360361 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a4c10a700 msgr2=0x7f2a4c10cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:00.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.501+0000 7f2a53da2700 1 --2- 192.168.123.102:0/1603360361 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a4c10a700 0x7f2a4c10cb90 secure :-1 s=READY pgs=256 cs=0 l=1 rev1=1 crypto 
rx=0x7f2a3c009b00 tx=0x7f2a3c009e10 comp rx=0 tx=0).stop 2026-03-10T10:17:00.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.503+0000 7f2a53da2700 1 -- 192.168.123.102:0/1603360361 shutdown_connections 2026-03-10T10:17:00.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.503+0000 7f2a53da2700 1 --2- 192.168.123.102:0/1603360361 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a4c10a700 0x7f2a4c10cb90 unknown :-1 s=CLOSED pgs=256 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:00.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.503+0000 7f2a53da2700 1 --2- 192.168.123.102:0/1603360361 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a4c107d90 0x7f2a4c10a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:00.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.503+0000 7f2a53da2700 1 -- 192.168.123.102:0/1603360361 >> 192.168.123.102:0/1603360361 conn(0x7f2a4c06dda0 msgr2=0x7f2a4c070220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:00.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.503+0000 7f2a53da2700 1 -- 192.168.123.102:0/1603360361 shutdown_connections 2026-03-10T10:17:00.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.503+0000 7f2a53da2700 1 -- 192.168.123.102:0/1603360361 wait complete. 
2026-03-10T10:17:00.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.503+0000 7f2a53da2700 1 Processor -- start 2026-03-10T10:17:00.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.503+0000 7f2a53da2700 1 -- start start 2026-03-10T10:17:00.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.503+0000 7f2a53da2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a4c107d90 0x7f2a4c116940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:00.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.503+0000 7f2a53da2700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a4c10a700 0x7f2a4c116e80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:00.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.503+0000 7f2a53da2700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2a4c1174c0 con 0x7f2a4c10a700 2026-03-10T10:17:00.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.504+0000 7f2a53da2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2a4c117630 con 0x7f2a4c107d90 2026-03-10T10:17:00.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.504+0000 7f2a5133d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a4c10a700 0x7f2a4c116e80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:00.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.504+0000 7f2a5133d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a4c10a700 0x7f2a4c116e80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:51622/0 (socket says 192.168.123.102:51622) 2026-03-10T10:17:00.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.504+0000 7f2a5133d700 1 -- 192.168.123.102:0/1362616719 learned_addr learned my addr 192.168.123.102:0/1362616719 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:00.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.504+0000 7f2a51b3e700 1 --2- 192.168.123.102:0/1362616719 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a4c107d90 0x7f2a4c116940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:00.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.505+0000 7f2a5133d700 1 -- 192.168.123.102:0/1362616719 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a4c107d90 msgr2=0x7f2a4c116940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:00.507 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.505+0000 7f2a5133d700 1 --2- 192.168.123.102:0/1362616719 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a4c107d90 0x7f2a4c116940 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:00.507 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.505+0000 7f2a5133d700 1 -- 192.168.123.102:0/1362616719 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2a3c0097e0 con 0x7f2a4c10a700 2026-03-10T10:17:00.507 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.505+0000 7f2a5133d700 1 --2- 192.168.123.102:0/1362616719 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a4c10a700 0x7f2a4c116e80 secure :-1 s=READY pgs=257 cs=0 l=1 rev1=1 crypto rx=0x7f2a3c005230 tx=0x7f2a3c00bac0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:17:00.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.506+0000 7f2a42ffd700 1 -- 192.168.123.102:0/1362616719 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2a3c01d070 con 0x7f2a4c10a700 2026-03-10T10:17:00.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.506+0000 7f2a42ffd700 1 -- 192.168.123.102:0/1362616719 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f2a3c00fb70 con 0x7f2a4c10a700 2026-03-10T10:17:00.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.506+0000 7f2a42ffd700 1 -- 192.168.123.102:0/1362616719 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2a3c021b20 con 0x7f2a4c10a700 2026-03-10T10:17:00.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.506+0000 7f2a53da2700 1 -- 192.168.123.102:0/1362616719 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2a4c077120 con 0x7f2a4c10a700 2026-03-10T10:17:00.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.506+0000 7f2a53da2700 1 -- 192.168.123.102:0/1362616719 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2a4c077610 con 0x7f2a4c10a700 2026-03-10T10:17:00.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.507+0000 7f2a53da2700 1 -- 192.168.123.102:0/1362616719 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2a4c110c60 con 0x7f2a4c10a700 2026-03-10T10:17:00.513 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.511+0000 7f2a42ffd700 1 -- 192.168.123.102:0/1362616719 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f2a3c00fce0 con 0x7f2a4c10a700 2026-03-10T10:17:00.513 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.511+0000 
7f2a42ffd700 1 --2- 192.168.123.102:0/1362616719 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2a3806c600 0x7f2a3806eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:00.513 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.511+0000 7f2a42ffd700 1 -- 192.168.123.102:0/1362616719 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f2a3c08d8e0 con 0x7f2a4c10a700 2026-03-10T10:17:00.513 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.511+0000 7f2a42ffd700 1 -- 192.168.123.102:0/1362616719 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f2a3c0ba180 con 0x7f2a4c10a700 2026-03-10T10:17:00.516 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.514+0000 7f2a51b3e700 1 --2- 192.168.123.102:0/1362616719 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2a3806c600 0x7f2a3806eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:00.518 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.517+0000 7f2a51b3e700 1 --2- 192.168.123.102:0/1362616719 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2a3806c600 0x7f2a3806eac0 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7f2a4c1af280 tx=0x7f2a4800b410 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:00.682 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.678+0000 7f2a53da2700 1 -- 192.168.123.102:0/1362616719 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "mds versions", "format": "json"} v 0) v1 -- 0x7f2a4c02cc70 con 0x7f2a4c10a700 2026-03-10T10:17:00.683 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.681+0000 7f2a42ffd700 1 -- 192.168.123.102:0/1362616719 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "mds versions", "format": "json"}]=0 v12) v1 ==== 78+0+83 (secure 0 0 0) 0x7f2a3c026090 con 0x7f2a4c10a700 2026-03-10T10:17:00.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.689+0000 7f2a53da2700 1 -- 192.168.123.102:0/1362616719 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2a3806c600 msgr2=0x7f2a3806eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:00.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.689+0000 7f2a53da2700 1 --2- 192.168.123.102:0/1362616719 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2a3806c600 0x7f2a3806eac0 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7f2a4c1af280 tx=0x7f2a4800b410 comp rx=0 tx=0).stop 2026-03-10T10:17:00.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.690+0000 7f2a53da2700 1 -- 192.168.123.102:0/1362616719 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a4c10a700 msgr2=0x7f2a4c116e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:00.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.690+0000 7f2a53da2700 1 --2- 192.168.123.102:0/1362616719 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a4c10a700 0x7f2a4c116e80 secure :-1 s=READY pgs=257 cs=0 l=1 rev1=1 crypto rx=0x7f2a3c005230 tx=0x7f2a3c00bac0 comp rx=0 tx=0).stop 2026-03-10T10:17:00.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.690+0000 7f2a53da2700 1 -- 192.168.123.102:0/1362616719 shutdown_connections 2026-03-10T10:17:00.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.690+0000 7f2a53da2700 1 --2- 192.168.123.102:0/1362616719 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f2a3806c600 0x7f2a3806eac0 unknown :-1 s=CLOSED 
pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:00.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.690+0000 7f2a53da2700 1 --2- 192.168.123.102:0/1362616719 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a4c107d90 0x7f2a4c116940 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:00.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.690+0000 7f2a53da2700 1 --2- 192.168.123.102:0/1362616719 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2a4c10a700 0x7f2a4c116e80 unknown :-1 s=CLOSED pgs=257 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:00.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.690+0000 7f2a53da2700 1 -- 192.168.123.102:0/1362616719 >> 192.168.123.102:0/1362616719 conn(0x7f2a4c06dda0 msgr2=0x7f2a4c1096d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:00.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.690+0000 7f2a53da2700 1 -- 192.168.123.102:0/1362616719 shutdown_connections 2026-03-10T10:17:00.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:00.690+0000 7f2a53da2700 1 -- 192.168.123.102:0/1362616719 wait complete. 2026-03-10T10:17:00.701 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:17:00.746 INFO:teuthology.run_tasks:Running task fs.pre_upgrade_save... 2026-03-10T10:17:00.750 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: from='client.? 
192.168.123.102:0/3328345739' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: mds.? [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] up:boot 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: mds.? [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] up:boot 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: mds.? 
[v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] up:reconnect 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: fsmap cephfs:1/1 {0=cephfs.vm02.zymcrs=up:reconnect} 3 up:standby 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.stcvsz"}]: dispatch 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch 2026-03-10T10:17:00.831 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:00 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/1362616719' entity='client.admin' cmd=[{"prefix": "mds versions", "format": "json"}]: dispatch 2026-03-10T10:17:01.011 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:17:01.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/3328345739' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-10T10:17:01.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:01.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:01.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:17:01.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:17:01.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:01.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:17:01.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: mds.? [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] up:boot 2026-03-10T10:17:01.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: mds.? [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] up:boot 2026-03-10T10:17:01.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: mds.? 
[v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] up:reconnect 2026-03-10T10:17:01.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: fsmap cephfs:1/1 {0=cephfs.vm02.zymcrs=up:reconnect} 3 up:standby 2026-03-10T10:17:01.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.stcvsz"}]: dispatch 2026-03-10T10:17:01.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch 2026-03-10T10:17:01.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:00 vm02 ceph-mon[50200]: from='client.? 192.168.123.102:0/1362616719' entity='client.admin' cmd=[{"prefix": "mds versions", "format": "json"}]: dispatch 2026-03-10T10:17:01.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.626+0000 7f3a2bfff700 1 -- 192.168.123.102:0/1187291996 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3a2c10d0f0 msgr2=0x7f3a2c10d570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:01.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.626+0000 7f3a2bfff700 1 --2- 192.168.123.102:0/1187291996 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3a2c10d0f0 0x7f3a2c10d570 secure :-1 s=READY pgs=258 cs=0 l=1 rev1=1 crypto rx=0x7f3a14009b00 tx=0x7f3a14009e10 comp rx=0 tx=0).stop 2026-03-10T10:17:01.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.627+0000 7f3a2bfff700 1 -- 192.168.123.102:0/1187291996 shutdown_connections 2026-03-10T10:17:01.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.627+0000 7f3a2bfff700 1 --2- 192.168.123.102:0/1187291996 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3a2c10d0f0 0x7f3a2c10d570 
unknown :-1 s=CLOSED pgs=258 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:01.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.627+0000 7f3a2bfff700 1 --2- 192.168.123.102:0/1187291996 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a2c10f340 0x7f3a2c10f720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:01.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.627+0000 7f3a2bfff700 1 -- 192.168.123.102:0/1187291996 >> 192.168.123.102:0/1187291996 conn(0x7f3a2c06ce20 msgr2=0x7f3a2c06d230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:01.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.627+0000 7f3a2bfff700 1 -- 192.168.123.102:0/1187291996 shutdown_connections 2026-03-10T10:17:01.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.627+0000 7f3a2bfff700 1 -- 192.168.123.102:0/1187291996 wait complete. 2026-03-10T10:17:01.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.628+0000 7f3a2bfff700 1 Processor -- start 2026-03-10T10:17:01.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.628+0000 7f3a2bfff700 1 -- start start 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.628+0000 7f3a2bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3a2c10d0f0 0x7f3a2c1a56d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.628+0000 7f3a2bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a2c10f340 0x7f3a2c1a5c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.628+0000 7f3a2bfff700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 
0 v1 -- 0x7f3a2c1a62a0 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.628+0000 7f3a2bfff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3a2c1a8fd0 con 0x7f3a2c10f340 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.628+0000 7f3a23fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a2c10f340 0x7f3a2c1a5c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.628+0000 7f3a2affd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3a2c10d0f0 0x7f3a2c1a56d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.629+0000 7f3a23fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a2c10f340 0x7f3a2c1a5c10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:57402/0 (socket says 192.168.123.102:57402) 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.629+0000 7f3a23fff700 1 -- 192.168.123.102:0/3004624887 learned_addr learned my addr 192.168.123.102:0/3004624887 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.629+0000 7f3a2affd700 1 -- 192.168.123.102:0/3004624887 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a2c10f340 msgr2=0x7f3a2c1a5c10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:01.632 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.629+0000 7f3a2affd700 1 --2- 192.168.123.102:0/3004624887 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a2c10f340 0x7f3a2c1a5c10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.629+0000 7f3a2affd700 1 -- 192.168.123.102:0/3004624887 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3a140097e0 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.629+0000 7f3a2affd700 1 --2- 192.168.123.102:0/3004624887 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3a2c10d0f0 0x7f3a2c1a56d0 secure :-1 s=READY pgs=259 cs=0 l=1 rev1=1 crypto rx=0x7f3a1c00eab0 tx=0x7f3a1c00edc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.629+0000 7f3a28ff9700 1 -- 192.168.123.102:0/3004624887 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3a1c00cb20 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.629+0000 7f3a28ff9700 1 -- 192.168.123.102:0/3004624887 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3a1c00cc80 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.630+0000 7f3a28ff9700 1 -- 192.168.123.102:0/3004624887 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3a1c018860 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.630+0000 7f3a2bfff700 1 -- 192.168.123.102:0/3004624887 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({mgrmap=0+}) v3 -- 0x7f3a2c1a92b0 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.630+0000 7f3a2bfff700 1 -- 192.168.123.102:0/3004624887 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3a2c1a9780 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.631+0000 7f3a2bfff700 1 -- 192.168.123.102:0/3004624887 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3a2c04f2e0 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.632+0000 7f3a28ff9700 1 -- 192.168.123.102:0/3004624887 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f3a1c0189c0 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.632+0000 7f3a28ff9700 1 --2- 192.168.123.102:0/3004624887 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3a1806c330 0x7f3a1806e7f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:01.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.632+0000 7f3a28ff9700 1 -- 192.168.123.102:0/3004624887 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f3a1c014070 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.635+0000 7f3a23fff700 1 --2- 192.168.123.102:0/3004624887 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3a1806c330 0x7f3a1806e7f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:01.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.635+0000 7f3a28ff9700 1 -- 
192.168.123.102:0/3004624887 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f3a1c056420 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.635+0000 7f3a23fff700 1 --2- 192.168.123.102:0/3004624887 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3a1806c330 0x7f3a1806e7f0 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f3a2c1a6cf0 tx=0x7f3a140051f0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:01.768 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.766+0000 7f3a2bfff700 1 -- 192.168.123.102:0/3004624887 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "format": "json"} v 0) v1 -- 0x7f3a2c04ea90 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.769 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:17:01.769 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":13,"default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":14484,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/4269439469","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":4269439469},{"type":"v1","addr":"192.168.123.105:6827","nonce":4269439469}]},"join_fscid":-1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client 
writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":8},{"gid":14494,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/2194475647","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":2194475647},{"type":"v1","addr":"192.168.123.102:6829","nonce":2194475647}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12},{"gid":24299,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":12}],"filesystems":[{"mdsmap":{"epoch":13,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:17:01.269128+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"last_failure":0,"last_failure_osd_epoch":39,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14464},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14464":{"gid":14464,"name":"cephfs.vm02.zymcrs","rank":0,"incarnation":11,"state":"up:rejoin","state_seq":4,"addr":"192.168.123.102:6827/658252295","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":658252295},{"type":"v1","addr":"192.168.123.102:6827","nonce":658252295}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1},"id":1}]} 2026-03-10T10:17:01.769 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.767+0000 7f3a28ff9700 1 -- 192.168.123.102:0/3004624887 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "format": "json"}]=0 dumped fsmap epoch 13 v13) v1 ==== 94+0+4753 (secure 0 0 0) 0x7f3a1c059a40 con 0x7f3a2c10d0f0 2026-03-10T10:17:01.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.773+0000 7f3a21ffb700 1 -- 192.168.123.102:0/3004624887 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3a1806c330 msgr2=0x7f3a1806e7f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:01.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.773+0000 7f3a21ffb700 1 --2- 192.168.123.102:0/3004624887 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3a1806c330 0x7f3a1806e7f0 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f3a2c1a6cf0 tx=0x7f3a140051f0 comp rx=0 tx=0).stop 2026-03-10T10:17:01.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.773+0000 7f3a21ffb700 1 -- 192.168.123.102:0/3004624887 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3a2c10d0f0 msgr2=0x7f3a2c1a56d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:01.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.773+0000 7f3a21ffb700 1 --2- 192.168.123.102:0/3004624887 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3a2c10d0f0 0x7f3a2c1a56d0 secure :-1 s=READY pgs=259 cs=0 l=1 rev1=1 crypto rx=0x7f3a1c00eab0 tx=0x7f3a1c00edc0 comp rx=0 tx=0).stop 2026-03-10T10:17:01.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.774+0000 7f3a21ffb700 1 -- 192.168.123.102:0/3004624887 shutdown_connections 2026-03-10T10:17:01.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.774+0000 7f3a21ffb700 1 --2- 192.168.123.102:0/3004624887 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3a2c10d0f0 0x7f3a2c1a56d0 
unknown :-1 s=CLOSED pgs=259 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:01.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.774+0000 7f3a21ffb700 1 --2- 192.168.123.102:0/3004624887 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f3a1806c330 0x7f3a1806e7f0 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:01.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.774+0000 7f3a21ffb700 1 --2- 192.168.123.102:0/3004624887 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a2c10f340 0x7f3a2c1a5c10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:01.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.774+0000 7f3a21ffb700 1 -- 192.168.123.102:0/3004624887 >> 192.168.123.102:0/3004624887 conn(0x7f3a2c06ce20 msgr2=0x7f3a2c0702f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:01.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.774+0000 7f3a21ffb700 1 -- 192.168.123.102:0/3004624887 shutdown_connections 2026-03-10T10:17:01.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:01.774+0000 7f3a21ffb700 1 -- 192.168.123.102:0/3004624887 wait complete. 2026-03-10T10:17:01.784 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 13 2026-03-10T10:17:01.844 DEBUG:tasks.fs:fs fscid=1,name=cephfs state = {'epoch': 13, 'max_mds': 1, 'flags': 50} 2026-03-10T10:17:01.844 INFO:teuthology.run_tasks:Running task ceph-fuse... 2026-03-10T10:17:01.854 INFO:tasks.ceph_fuse:Running ceph_fuse task... 
2026-03-10T10:17:01.854 INFO:tasks.ceph_fuse:config is {'client.0': {}, 'client.1': {}} 2026-03-10T10:17:01.854 INFO:tasks.ceph_fuse:client.0 config is {} 2026-03-10T10:17:01.854 INFO:tasks.cephfs.mount:cephfs_mntpt = None 2026-03-10T10:17:01.854 INFO:tasks.cephfs.mount:hostfs_mntpt = /home/ubuntu/cephtest/mnt.0 2026-03-10T10:17:01.854 INFO:tasks.ceph_fuse:client.1 config is {} 2026-03-10T10:17:01.854 INFO:tasks.cephfs.mount:cephfs_mntpt = None 2026-03-10T10:17:01.854 INFO:tasks.cephfs.mount:hostfs_mntpt = /home/ubuntu/cephtest/mnt.1 2026-03-10T10:17:01.854 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:01.854 DEBUG:teuthology.orchestra.run.vm05:> ip netns list 2026-03-10T10:17:01.875 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:01.875 DEBUG:teuthology.orchestra.run.vm05:> sudo ip link delete ceph-brx 2026-03-10T10:17:01.948 INFO:teuthology.orchestra.run.vm05.stderr:Cannot find device "ceph-brx" 2026-03-10T10:17:01.949 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T10:17:01.949 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:01.949 DEBUG:teuthology.orchestra.run.vm02:> ip netns list 2026-03-10T10:17:01.969 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:01.969 DEBUG:teuthology.orchestra.run.vm02:> sudo ip link delete ceph-brx 2026-03-10T10:17:02.051 INFO:teuthology.orchestra.run.vm02.stderr:Cannot find device "ceph-brx" 2026-03-10T10:17:02.052 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T10:17:02.052 INFO:tasks.ceph_fuse:Mounting ceph-fuse clients... 
2026-03-10T10:17:02.052 DEBUG:tasks.ceph_fuse:passing mntargs=[] 2026-03-10T10:17:02.052 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs ls 2026-03-10T10:17:02.303 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:17:02.629 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:02 vm02 ceph-mon[50200]: pgmap v80: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 632 B/s rd, 2.2 KiB/s wr, 7 op/s 2026-03-10T10:17:02.629 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:02 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:02.629 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:02 vm02 ceph-mon[50200]: mds.? [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] up:rejoin 2026-03-10T10:17:02.629 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:02 vm02 ceph-mon[50200]: fsmap cephfs:1/1 {0=cephfs.vm02.zymcrs=up:rejoin} 3 up:standby 2026-03-10T10:17:02.630 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:02 vm02 ceph-mon[50200]: daemon mds.cephfs.vm02.zymcrs is now active in filesystem cephfs as rank 0 2026-03-10T10:17:02.630 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:02 vm02 ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:02.630 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:02 vm02 ceph-mon[50200]: from='client.? 
192.168.123.102:0/3004624887' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-10T10:17:02.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.626+0000 7f631a806700 1 -- 192.168.123.102:0/907649420 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6314103cf0 msgr2=0x7f6314107d40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:02.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.626+0000 7f631a806700 1 --2- 192.168.123.102:0/907649420 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6314103cf0 0x7f6314107d40 secure :-1 s=READY pgs=260 cs=0 l=1 rev1=1 crypto rx=0x7f6304009b00 tx=0x7f6304009e10 comp rx=0 tx=0).stop 2026-03-10T10:17:02.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.628+0000 7f631a806700 1 -- 192.168.123.102:0/907649420 shutdown_connections 2026-03-10T10:17:02.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.628+0000 7f631a806700 1 --2- 192.168.123.102:0/907649420 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6314103cf0 0x7f6314107d40 unknown :-1 s=CLOSED pgs=260 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:02.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.628+0000 7f631a806700 1 --2- 192.168.123.102:0/907649420 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6314103340 0x7f6314103720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:02.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.628+0000 7f631a806700 1 -- 192.168.123.102:0/907649420 >> 192.168.123.102:0/907649420 conn(0x7f63140feb90 msgr2=0x7f6314100fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:02.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.628+0000 7f631a806700 1 -- 192.168.123.102:0/907649420 shutdown_connections 2026-03-10T10:17:02.630 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.629+0000 7f631a806700 1 -- 192.168.123.102:0/907649420 wait complete. 2026-03-10T10:17:02.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.629+0000 7f631a806700 1 Processor -- start 2026-03-10T10:17:02.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.629+0000 7f631a806700 1 -- start start 2026-03-10T10:17:02.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.629+0000 7f631a806700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6314103340 0x7f6314198de0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:02.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.630+0000 7f631a806700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6314103cf0 0x7f6314199320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:02.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.630+0000 7f631a806700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6314199970 con 0x7f6314103340 2026-03-10T10:17:02.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.630+0000 7f631a806700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6314199ab0 con 0x7f6314103cf0 2026-03-10T10:17:02.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.630+0000 7f63137fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6314103cf0 0x7f6314199320 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:02.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.630+0000 7f63137fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6314103cf0 0x7f6314199320 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:57418/0 (socket says 192.168.123.102:57418) 2026-03-10T10:17:02.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.630+0000 7f63137fe700 1 -- 192.168.123.102:0/2656979636 learned_addr learned my addr 192.168.123.102:0/2656979636 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:02.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.630+0000 7f63137fe700 1 -- 192.168.123.102:0/2656979636 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6314103340 msgr2=0x7f6314198de0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:17:02.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.630+0000 7f6313fff700 1 --2- 192.168.123.102:0/2656979636 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6314103340 0x7f6314198de0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:02.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.630+0000 7f63137fe700 1 --2- 192.168.123.102:0/2656979636 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6314103340 0x7f6314198de0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:02.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.630+0000 7f63137fe700 1 -- 192.168.123.102:0/2656979636 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f63040097e0 con 0x7f6314103cf0 2026-03-10T10:17:02.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.630+0000 7f6313fff700 1 --2- 192.168.123.102:0/2656979636 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6314103340 0x7f6314198de0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:17:02.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.631+0000 7f63137fe700 1 --2- 192.168.123.102:0/2656979636 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6314103cf0 0x7f6314199320 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f6304009fd0 tx=0x7f63040049e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:02.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.631+0000 7f63117fa700 1 -- 192.168.123.102:0/2656979636 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f630401d070 con 0x7f6314103cf0 2026-03-10T10:17:02.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.631+0000 7f63117fa700 1 -- 192.168.123.102:0/2656979636 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f630400bc50 con 0x7f6314103cf0 2026-03-10T10:17:02.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.631+0000 7f63117fa700 1 -- 192.168.123.102:0/2656979636 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f630400f740 con 0x7f6314103cf0 2026-03-10T10:17:02.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.631+0000 7f631a806700 1 -- 192.168.123.102:0/2656979636 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f631419d8a0 con 0x7f6314103cf0 2026-03-10T10:17:02.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.631+0000 7f631a806700 1 -- 192.168.123.102:0/2656979636 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f631419dd90 con 0x7f6314103cf0 2026-03-10T10:17:02.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.632+0000 7f631a806700 1 -- 192.168.123.102:0/2656979636 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f631410b740 con 0x7f6314103cf0 2026-03-10T10:17:02.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.633+0000 7f63117fa700 1 -- 192.168.123.102:0/2656979636 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f630400f8a0 con 0x7f6314103cf0 2026-03-10T10:17:02.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.633+0000 7f63117fa700 1 --2- 192.168.123.102:0/2656979636 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f630006c530 0x7f630006e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:02.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.633+0000 7f6313fff700 1 --2- 192.168.123.102:0/2656979636 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f630006c530 0x7f630006e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:02.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.633+0000 7f63117fa700 1 -- 192.168.123.102:0/2656979636 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f630408da50 con 0x7f6314103cf0 2026-03-10T10:17:02.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.633+0000 7f6313fff700 1 --2- 192.168.123.102:0/2656979636 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f630006c530 0x7f630006e9f0 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f62fc005fd0 tx=0x7f62fc005e20 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:02.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.635+0000 7f63117fa700 1 -- 192.168.123.102:0/2656979636 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 
==== 72+0+180422 (secure 0 0 0) 0x7f63040582f0 con 0x7f6314103cf0 2026-03-10T10:17:02.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.760+0000 7f631a806700 1 -- 192.168.123.102:0/2656979636 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs ls"} v 0) v1 -- 0x7f631419a1e0 con 0x7f6314103cf0 2026-03-10T10:17:02.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.761+0000 7f63117fa700 1 -- 192.168.123.102:0/2656979636 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs ls"}]=0 v15) v1 ==== 53+0+83 (secure 0 0 0) 0x7f6304027070 con 0x7f6314103cf0 2026-03-10T10:17:02.763 INFO:teuthology.orchestra.run.vm02.stdout:name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ] 2026-03-10T10:17:02.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.763+0000 7f631a806700 1 -- 192.168.123.102:0/2656979636 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f630006c530 msgr2=0x7f630006e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:02.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.763+0000 7f631a806700 1 --2- 192.168.123.102:0/2656979636 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f630006c530 0x7f630006e9f0 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f62fc005fd0 tx=0x7f62fc005e20 comp rx=0 tx=0).stop 2026-03-10T10:17:02.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.763+0000 7f631a806700 1 -- 192.168.123.102:0/2656979636 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6314103cf0 msgr2=0x7f6314199320 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:02.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.763+0000 7f631a806700 1 --2- 192.168.123.102:0/2656979636 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6314103cf0 0x7f6314199320 secure :-1 s=READY pgs=62 cs=0 l=1 
rev1=1 crypto rx=0x7f6304009fd0 tx=0x7f63040049e0 comp rx=0 tx=0).stop 2026-03-10T10:17:02.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.764+0000 7f631a806700 1 -- 192.168.123.102:0/2656979636 shutdown_connections 2026-03-10T10:17:02.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.764+0000 7f631a806700 1 --2- 192.168.123.102:0/2656979636 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6314103340 0x7f6314198de0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:02.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.764+0000 7f631a806700 1 --2- 192.168.123.102:0/2656979636 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f630006c530 0x7f630006e9f0 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:02.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.764+0000 7f631a806700 1 --2- 192.168.123.102:0/2656979636 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6314103cf0 0x7f6314199320 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:02.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.764+0000 7f631a806700 1 -- 192.168.123.102:0/2656979636 >> 192.168.123.102:0/2656979636 conn(0x7f63140feb90 msgr2=0x7f6314100f40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:02.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.764+0000 7f631a806700 1 -- 192.168.123.102:0/2656979636 shutdown_connections 2026-03-10T10:17:02.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02.764+0000 7f631a806700 1 -- 192.168.123.102:0/2656979636 wait complete. 
2026-03-10T10:17:02.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:02 vm05 ceph-mon[59051]: pgmap v80: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 632 B/s rd, 2.2 KiB/s wr, 7 op/s 2026-03-10T10:17:02.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:02 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:02.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:02 vm05 ceph-mon[59051]: mds.? [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] up:rejoin 2026-03-10T10:17:02.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:02 vm05 ceph-mon[59051]: fsmap cephfs:1/1 {0=cephfs.vm02.zymcrs=up:rejoin} 3 up:standby 2026-03-10T10:17:02.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:02 vm05 ceph-mon[59051]: daemon mds.cephfs.vm02.zymcrs is now active in filesystem cephfs as rank 0 2026-03-10T10:17:02.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:02 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:02.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:02 vm05 ceph-mon[59051]: from='client.? 192.168.123.102:0/3004624887' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-10T10:17:02.827 INFO:tasks.cephfs.mount:Mounting default Ceph FS; just confirmed its presence on cluster 2026-03-10T10:17:02.827 INFO:tasks.cephfs.mount:Mounting Ceph FS. 
Following are details of mount; remember "None" represents Python type None - 2026-03-10T10:17:02.827 INFO:tasks.cephfs.mount:self.client_remote.hostname = vm02.local 2026-03-10T10:17:02.827 INFO:tasks.cephfs.mount:self.client.name = client.0 2026-03-10T10:17:02.828 INFO:tasks.cephfs.mount:self.hostfs_mntpt = /home/ubuntu/cephtest/mnt.0 2026-03-10T10:17:02.828 INFO:tasks.cephfs.mount:self.cephfs_name = None 2026-03-10T10:17:02.828 INFO:tasks.cephfs.mount:self.cephfs_mntpt = None 2026-03-10T10:17:02.828 INFO:tasks.cephfs.mount:self.client_keyring_path = None 2026-03-10T10:17:02.828 INFO:tasks.cephfs.mount:Setting the 'None' netns for '/home/ubuntu/cephtest/mnt.0' 2026-03-10T10:17:02.828 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:02.828 DEBUG:teuthology.orchestra.run.vm02:> ip addr 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout:1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout: inet 127.0.0.1/8 scope host lo 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout: valid_lft forever preferred_lft forever 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout: inet6 ::1/128 scope host 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout: valid_lft forever preferred_lft forever 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout:2: eth0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout: link/ether 52:55:00:00:00:02 brd ff:ff:ff:ff:ff:ff 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout: altname enp0s3 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout: altname ens3 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout: inet 192.168.123.102/24 brd 
192.168.123.255 scope global dynamic noprefixroute eth0 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout: valid_lft 3146sec preferred_lft 3146sec 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout: inet6 fe80::5055:ff:fe00:2/64 scope link noprefixroute 2026-03-10T10:17:02.843 INFO:teuthology.orchestra.run.vm02.stdout: valid_lft forever preferred_lft forever 2026-03-10T10:17:02.843 INFO:tasks.cephfs.mount:Setuping the 'ceph-brx' with 192.168.159.254/20 2026-03-10T10:17:02.843 DEBUG:teuthology.orchestra.run.vm02:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-10T10:17:02.843 DEBUG:teuthology.orchestra.run.vm02:> set -e 2026-03-10T10:17:02.843 DEBUG:teuthology.orchestra.run.vm02:> sudo ip link add name ceph-brx type bridge 2026-03-10T10:17:02.843 DEBUG:teuthology.orchestra.run.vm02:> sudo ip addr flush dev ceph-brx 2026-03-10T10:17:02.843 DEBUG:teuthology.orchestra.run.vm02:> sudo ip link set ceph-brx up 2026-03-10T10:17:02.843 DEBUG:teuthology.orchestra.run.vm02:> sudo ip addr add 192.168.159.254/20 brd 192.168.159.255 dev ceph-brx 2026-03-10T10:17:02.843 DEBUG:teuthology.orchestra.run.vm02:> ') 2026-03-10T10:17:02.918 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-10T10:17:02.999 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:02 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-10T10:17:03.005 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:03.005 DEBUG:teuthology.orchestra.run.vm02:> echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward 2026-03-10T10:17:03.070 INFO:teuthology.orchestra.run.vm02.stdout:1 2026-03-10T10:17:03.072 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:03.072 DEBUG:teuthology.orchestra.run.vm02:> ip r 2026-03-10T10:17:03.131 INFO:teuthology.orchestra.run.vm02.stdout:default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.102 metric 100 2026-03-10T10:17:03.131 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.102 metric 100 2026-03-10T10:17:03.131 INFO:teuthology.orchestra.run.vm02.stdout:192.168.144.0/20 dev ceph-brx proto kernel scope link src 192.168.159.254 2026-03-10T10:17:03.131 DEBUG:teuthology.orchestra.run.vm02:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-10T10:17:03.131 DEBUG:teuthology.orchestra.run.vm02:> set -e 2026-03-10T10:17:03.131 DEBUG:teuthology.orchestra.run.vm02:> sudo iptables -A FORWARD -o eth0 -i ceph-brx -j ACCEPT 2026-03-10T10:17:03.131 DEBUG:teuthology.orchestra.run.vm02:> sudo iptables -A FORWARD -i eth0 -o ceph-brx -j ACCEPT 2026-03-10T10:17:03.131 DEBUG:teuthology.orchestra.run.vm02:> sudo iptables -t nat -A POSTROUTING -s 192.168.159.254/20 -o eth0 -j MASQUERADE 2026-03-10T10:17:03.131 DEBUG:teuthology.orchestra.run.vm02:> ') 2026-03-10T10:17:03.208 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:03 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-10T10:17:03.273 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:03 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-10T10:17:03.278 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:03.278 DEBUG:teuthology.orchestra.run.vm02:> ip netns list 2026-03-10T10:17:03.333 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:03.333 DEBUG:teuthology.orchestra.run.vm02:> ip netns list-id 2026-03-10T10:17:03.390 DEBUG:teuthology.orchestra.run.vm02:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-10T10:17:03.390 DEBUG:teuthology.orchestra.run.vm02:> set -e 2026-03-10T10:17:03.390 DEBUG:teuthology.orchestra.run.vm02:> sudo ip netns add ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-10T10:17:03.390 DEBUG:teuthology.orchestra.run.vm02:> sudo ip netns set ceph-ns--home-ubuntu-cephtest-mnt.0 0 2026-03-10T10:17:03.390 DEBUG:teuthology.orchestra.run.vm02:> ') 2026-03-10T10:17:03.468 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:03 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-10T10:17:03.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:03 vm02.local ceph-mon[50200]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-10T10:17:03.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:03 vm02.local ceph-mon[50200]: Cluster is now healthy 2026-03-10T10:17:03.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:03 vm02.local ceph-mon[50200]: mds.? [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] up:active 2026-03-10T10:17:03.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:03 vm02.local ceph-mon[50200]: mds.? 
[v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] up:standby 2026-03-10T10:17:03.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:03 vm02.local ceph-mon[50200]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 3 up:standby 2026-03-10T10:17:03.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:03 vm02.local ceph-mon[50200]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:17:03.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:03 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:03.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:03 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:03.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:03 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:17:03.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:03 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:17:03.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:03 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:03.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:03 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/2656979636' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch 2026-03-10T10:17:03.491 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:03 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-10T10:17:03.496 INFO:tasks.cephfs.mount:Setuping the netns 'ceph-ns--home-ubuntu-cephtest-mnt.0' with 192.168.144.1/20 2026-03-10T10:17:03.496 DEBUG:teuthology.orchestra.run.vm02:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-10T10:17:03.496 DEBUG:teuthology.orchestra.run.vm02:> set -e 2026-03-10T10:17:03.496 DEBUG:teuthology.orchestra.run.vm02:> sudo ip link add veth0 netns ceph-ns--home-ubuntu-cephtest-mnt.0 type veth peer name brx.0 2026-03-10T10:17:03.496 DEBUG:teuthology.orchestra.run.vm02:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.0 ip addr add 192.168.144.1/20 brd 192.168.159.255 dev veth0 2026-03-10T10:17:03.496 DEBUG:teuthology.orchestra.run.vm02:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.0 ip link set veth0 up 2026-03-10T10:17:03.496 DEBUG:teuthology.orchestra.run.vm02:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.0 ip link set lo up 2026-03-10T10:17:03.496 DEBUG:teuthology.orchestra.run.vm02:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.0 ip route add default via 192.168.159.254 2026-03-10T10:17:03.496 DEBUG:teuthology.orchestra.run.vm02:> ') 2026-03-10T10:17:03.573 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:03 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-10T10:17:03.643 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:03 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-10T10:17:03.646 DEBUG:teuthology.orchestra.run.vm02:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-10T10:17:03.646 DEBUG:teuthology.orchestra.run.vm02:> set -e 2026-03-10T10:17:03.646 DEBUG:teuthology.orchestra.run.vm02:> sudo ip link set brx.0 up 2026-03-10T10:17:03.646 DEBUG:teuthology.orchestra.run.vm02:> sudo ip link set dev brx.0 master ceph-brx 2026-03-10T10:17:03.646 DEBUG:teuthology.orchestra.run.vm02:> ') 2026-03-10T10:17:03.723 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:03 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-10T10:17:03.753 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:03 stdin-killer INFO: command exited with status 0: exiting normally with same code! 2026-03-10T10:17:03.766 INFO:tasks.cephfs.fuse_mount:Client client.0 config is {} 2026-03-10T10:17:03.766 INFO:teuthology.orchestra.run:Running command with timeout 60 2026-03-10T10:17:03.766 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -v /home/ubuntu/cephtest/mnt.0 2026-03-10T10:17:03.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:03 vm05 ceph-mon[59051]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-10T10:17:03.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:03 vm05 ceph-mon[59051]: Cluster is now healthy 2026-03-10T10:17:03.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:03 vm05 ceph-mon[59051]: mds.? [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] up:active 2026-03-10T10:17:03.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:03 vm05 ceph-mon[59051]: mds.? 
[v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] up:standby 2026-03-10T10:17:03.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:03 vm05 ceph-mon[59051]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 3 up:standby 2026-03-10T10:17:03.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:03 vm05 ceph-mon[59051]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:17:03.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:03 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:03.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:03 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:03.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:03 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:17:03.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:03 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:17:03.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:03 vm05 ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:03.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:03 vm05 ceph-mon[59051]: from='client.? 
192.168.123.102:0/2656979636' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch 2026-03-10T10:17:03.823 INFO:teuthology.orchestra.run.vm02.stdout:mkdir: created directory '/home/ubuntu/cephtest/mnt.0' 2026-03-10T10:17:03.823 INFO:teuthology.orchestra.run:Running command with timeout 60 2026-03-10T10:17:03.823 DEBUG:teuthology.orchestra.run.vm02:> chmod 0000 /home/ubuntu/cephtest/mnt.0 2026-03-10T10:17:03.877 DEBUG:teuthology.orchestra.run.vm02:> sudo modprobe fuse 2026-03-10T10:17:03.942 DEBUG:teuthology.orchestra.run.vm02:> cat /proc/self/mounts | awk '{print $2}' 2026-03-10T10:17:03.998 INFO:teuthology.orchestra.run.vm02.stdout:/proc 2026-03-10T10:17:03.998 INFO:teuthology.orchestra.run.vm02.stdout:/sys 2026-03-10T10:17:03.998 INFO:teuthology.orchestra.run.vm02.stdout:/dev 2026-03-10T10:17:03.998 INFO:teuthology.orchestra.run.vm02.stdout:/sys/kernel/security 2026-03-10T10:17:03.998 INFO:teuthology.orchestra.run.vm02.stdout:/dev/shm 2026-03-10T10:17:03.998 INFO:teuthology.orchestra.run.vm02.stdout:/dev/pts 2026-03-10T10:17:03.998 INFO:teuthology.orchestra.run.vm02.stdout:/run 2026-03-10T10:17:03.998 INFO:teuthology.orchestra.run.vm02.stdout:/sys/fs/cgroup 2026-03-10T10:17:03.998 INFO:teuthology.orchestra.run.vm02.stdout:/sys/fs/pstore 2026-03-10T10:17:03.998 INFO:teuthology.orchestra.run.vm02.stdout:/sys/fs/bpf 2026-03-10T10:17:03.998 INFO:teuthology.orchestra.run.vm02.stdout:/sys/kernel/config 2026-03-10T10:17:03.998 INFO:teuthology.orchestra.run.vm02.stdout:/ 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/nfs/rpc_pipefs 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/sys/fs/selinux 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/proc/sys/fs/binfmt_misc 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/dev/hugepages 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/dev/mqueue 2026-03-10T10:17:03.999 
INFO:teuthology.orchestra.run.vm02.stdout:/sys/kernel/debug 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/sys/kernel/tracing 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/run/credentials/systemd-sysctl.service 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/sys/fs/fuse/connections 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/run/credentials/systemd-sysusers.service 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/run/user/1000 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/f3139661f521b9bf2f3d91fbcc730ab24e00778a6d06a53f53da7edccca0d2d7/merged 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/7c910d1dc447be440522ae013716d61ce577eda5f51e25ba942ee8b9ca6df47b/merged 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/run/user/0 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/proc/sys/fs/binfmt_misc 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/b3bb29adc61237c3abed5c30017ca3f5bc885750ad42ecb9874c7c88af4f3c83/merged 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/b11b0086dac37afbaceb0faf0883b1cbc26a27eae8b1b9a04e0bb13b2a3a75c9/merged 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/1c61c7d8c24e68892bdd750162a865bda007ae0cd0676888605d3465ee66d569/merged 2026-03-10T10:17:03.999 
INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/e5d08798fbb20401196c2be44c5debddc14db316457c6e81eb153db6ef8f8648/merged 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/bc62dffa112028594a91ebef9ebca04913a76f376e42b90c3cadf56e74fa6136/merged 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/8ac7ace66b7c0e6183f6ecc61e10a024d1ca60f917cdd1047e1380fcc3f1059c/merged 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/39a4fcfe099a238bcc71ff293d7495cf12d4d8f0788a9b2e004f08112211a566/merged 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/fca7321976fd0356589e9add6058143e2e8361984fe0482f1746713aac4486ad/merged 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/8911408c647c0923dc77004ce3f45f436ecb45d4a3c724d75053c8d565fc3b12/merged 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/8b26ff499c1162a9c4fd276fba0204e6d9087d4fc1ecd442ad63aa46975eb6e3/merged 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/54325196f6e7669dc2d7216647ec683c015e36589a248fb196db01e7e5dc1ef7/merged 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/run/netns 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-10T10:17:03.999 INFO:teuthology.orchestra.run.vm02.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-10T10:17:04.000 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:04.000 DEBUG:teuthology.orchestra.run.vm02:> ls /sys/fs/fuse/connections 2026-03-10T10:17:04.055 INFO:tasks.cephfs.fuse_mount:Pre-mount connections: [] 2026-03-10T10:17:04.055 DEBUG:teuthology.orchestra.run.vm02:> (cd 
/home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.0 --id 0) 2026-03-10T10:17:04.097 DEBUG:teuthology.orchestra.run.vm02:> sudo modprobe fuse 2026-03-10T10:17:04.122 DEBUG:teuthology.orchestra.run.vm02:> cat /proc/self/mounts | awk '{print $2}' 2026-03-10T10:17:04.169 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vm02.stderr:ceph-fuse[94923]: starting ceph client 2026-03-10T10:17:04.170 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vm02.stderr:2026-03-10T10:17:04.168+0000 7fe2983df480 -1 init, newargv = 0x55b45e57d580 newargc=15 2026-03-10T10:17:04.180 INFO:teuthology.orchestra.run.vm02.stdout:/proc 2026-03-10T10:17:04.180 INFO:teuthology.orchestra.run.vm02.stdout:/sys 2026-03-10T10:17:04.180 INFO:teuthology.orchestra.run.vm02.stdout:/dev 2026-03-10T10:17:04.180 INFO:teuthology.orchestra.run.vm02.stdout:/sys/kernel/security 2026-03-10T10:17:04.180 INFO:teuthology.orchestra.run.vm02.stdout:/dev/shm 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/dev/pts 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/run 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/sys/fs/cgroup 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/sys/fs/pstore 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/sys/fs/bpf 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/sys/kernel/config 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/ 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/nfs/rpc_pipefs 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/sys/fs/selinux 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/proc/sys/fs/binfmt_misc 2026-03-10T10:17:04.181 
INFO:teuthology.orchestra.run.vm02.stdout:/dev/hugepages 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/dev/mqueue 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/sys/kernel/debug 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/sys/kernel/tracing 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/run/credentials/systemd-sysctl.service 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/sys/fs/fuse/connections 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/run/credentials/systemd-sysusers.service 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/run/user/1000 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/f3139661f521b9bf2f3d91fbcc730ab24e00778a6d06a53f53da7edccca0d2d7/merged 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/7c910d1dc447be440522ae013716d61ce577eda5f51e25ba942ee8b9ca6df47b/merged 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/run/user/0 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/proc/sys/fs/binfmt_misc 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/b3bb29adc61237c3abed5c30017ca3f5bc885750ad42ecb9874c7c88af4f3c83/merged 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/b11b0086dac37afbaceb0faf0883b1cbc26a27eae8b1b9a04e0bb13b2a3a75c9/merged 2026-03-10T10:17:04.181 
INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/1c61c7d8c24e68892bdd750162a865bda007ae0cd0676888605d3465ee66d569/merged 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/e5d08798fbb20401196c2be44c5debddc14db316457c6e81eb153db6ef8f8648/merged 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/bc62dffa112028594a91ebef9ebca04913a76f376e42b90c3cadf56e74fa6136/merged 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/8ac7ace66b7c0e6183f6ecc61e10a024d1ca60f917cdd1047e1380fcc3f1059c/merged 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/39a4fcfe099a238bcc71ff293d7495cf12d4d8f0788a9b2e004f08112211a566/merged 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/fca7321976fd0356589e9add6058143e2e8361984fe0482f1746713aac4486ad/merged 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/8911408c647c0923dc77004ce3f45f436ecb45d4a3c724d75053c8d565fc3b12/merged 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/8b26ff499c1162a9c4fd276fba0204e6d9087d4fc1ecd442ad63aa46975eb6e3/merged 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/var/lib/containers/storage/overlay/54325196f6e7669dc2d7216647ec683c015e36589a248fb196db01e7e5dc1ef7/merged 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/run/netns 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-10T10:17:04.181 INFO:teuthology.orchestra.run.vm02.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-10T10:17:04.182 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:04.182 DEBUG:teuthology.orchestra.run.vm02:> ls 
/sys/fs/fuse/connections 2026-03-10T10:17:04.189 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vm02.stderr:ceph-fuse[94923]: starting fuse 2026-03-10T10:17:04.206 INFO:teuthology.orchestra.run.vm02.stdout:51 2026-03-10T10:17:04.206 INFO:tasks.cephfs.fuse_mount:Post-mount connections: [51] 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> sudo stdin-killer -- python3 -c ' 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> import glob 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> import re 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> import os 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> import subprocess 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> def _find_admin_socket(client_name): 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> asok_path = "/var/run/ceph/ceph-client.0.*.asok" 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> files = glob.glob(asok_path) 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> mountpoint = "/home/ubuntu/cephtest/mnt.0" 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> # Given a non-glob path, it better be there 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> if "*" not in asok_path: 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> assert(len(files) == 1) 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> return files[0] 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> for f in files: 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> pid = re.match(".*\.(\d+)\.asok$", f).group(1) 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> if os.path.exists("/proc/{0}".format(pid)): 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> with 
open("/proc/{0}/cmdline".format(pid), '"'"'r'"'"') as proc_f: 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> contents = proc_f.read() 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> if mountpoint in contents: 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> return f 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> raise RuntimeError("Client socket {0} not found".format(client_name)) 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> print(_find_admin_socket("client.0")) 2026-03-10T10:17:04.207 DEBUG:teuthology.orchestra.run.vm02:> ' 2026-03-10T10:17:04.312 INFO:teuthology.orchestra.run.vm02.stdout:/var/run/ceph/ceph-client.0.94923.asok 2026-03-10T10:17:04.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04 stdin-killer INFO: command exited with status 0: exiting normally with same code! 2026-03-10T10:17:04.321 INFO:tasks.cephfs.fuse_mount:Found client admin socket at /var/run/ceph/ceph-client.0.94923.asok 2026-03-10T10:17:04.321 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:04.321 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph --admin-daemon /var/run/ceph/ceph-client.0.94923.asok status 2026-03-10T10:17:04.384 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:04 vm02.local ceph-mon[50200]: pgmap v81: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 2.5 KiB/s rd, 1.7 KiB/s wr, 8 op/s 2026-03-10T10:17:04.430 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:17:04.430 INFO:teuthology.orchestra.run.vm02.stdout: "metadata": { 2026-03-10T10:17:04.430 INFO:teuthology.orchestra.run.vm02.stdout: "ceph_sha1": "7fe91d5d5842e04be3b4f514d6dd990c54b29c76", 2026-03-10T10:17:04.430 INFO:teuthology.orchestra.run.vm02.stdout: "ceph_version": "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)", 2026-03-10T10:17:04.430 
INFO:teuthology.orchestra.run.vm02.stdout: "entity_id": "0", 2026-03-10T10:17:04.430 INFO:teuthology.orchestra.run.vm02.stdout: "hostname": "vm02.local", 2026-03-10T10:17:04.430 INFO:teuthology.orchestra.run.vm02.stdout: "mount_point": "/home/ubuntu/cephtest/mnt.0", 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "pid": "94923", 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "root": "/" 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "dentry_count": 0, 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "dentry_pinned_count": 0, 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "id": 14516, 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "inst": { 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "name": { 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "type": "client", 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "num": 14516 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "addr": { 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "type": "v1", 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "addr": "192.168.144.1:0", 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "nonce": 2782098490 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "addr": { 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "type": "v1", 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "addr": "192.168.144.1:0", 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "nonce": 2782098490 2026-03-10T10:17:04.431 
INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "inst_str": "client.14516 192.168.144.1:0/2782098490", 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "addr_str": "192.168.144.1:0/2782098490", 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "inode_count": 1, 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "mds_epoch": 15, 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "osd_epoch": 39, 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "osd_epoch_barrier": 0, 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "blocklisted": false, 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout: "fs_name": "cephfs" 2026-03-10T10:17:04.431 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:17:04.437 DEBUG:tasks.ceph_fuse:passing mntargs=[] 2026-03-10T10:17:04.437 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs ls 2026-03-10T10:17:04.645 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:17:04.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:04 vm05 ceph-mon[59051]: pgmap v81: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 2.5 KiB/s rd, 1.7 KiB/s wr, 8 op/s 2026-03-10T10:17:04.916 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.913+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/1456906971 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc1b01033c0 msgr2=0x7fc1b01037a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:04.916 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.913+0000 7fc1b7ef9700 1 --2- 192.168.123.102:0/1456906971 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc1b01033c0 0x7fc1b01037a0 secure :-1 s=READY pgs=262 cs=0 l=1 rev1=1 crypto rx=0x7fc1a0009b00 tx=0x7fc1a0009e10 comp rx=0 tx=0).stop 2026-03-10T10:17:04.916 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.914+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/1456906971 shutdown_connections 2026-03-10T10:17:04.916 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.914+0000 7fc1b7ef9700 1 --2- 192.168.123.102:0/1456906971 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc1b0103d70 0x7fc1b0107dc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:04.916 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.914+0000 7fc1b7ef9700 1 --2- 192.168.123.102:0/1456906971 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc1b01033c0 0x7fc1b01037a0 unknown :-1 s=CLOSED pgs=262 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:04.916 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.914+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/1456906971 >> 192.168.123.102:0/1456906971 conn(0x7fc1b00fec30 msgr2=0x7fc1b0101050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:04.916 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.915+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/1456906971 shutdown_connections 2026-03-10T10:17:04.916 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.915+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/1456906971 wait complete. 
2026-03-10T10:17:04.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.915+0000 7fc1b7ef9700 1 Processor -- start 2026-03-10T10:17:04.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.915+0000 7fc1b7ef9700 1 -- start start 2026-03-10T10:17:04.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b7ef9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc1b01033c0 0x7fc1b0198ee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:04.918 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b7ef9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc1b0103d70 0x7fc1b0199420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b7ef9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc1b0199b00 con 0x7fc1b01033c0 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b7ef9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc1b019d890 con 0x7fc1b0103d70 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b5c95700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc1b01033c0 0x7fc1b0198ee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b5c95700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc1b01033c0 0x7fc1b0198ee0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:51670/0 (socket says 192.168.123.102:51670) 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b5c95700 1 -- 192.168.123.102:0/3548008974 learned_addr learned my addr 192.168.123.102:0/3548008974 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b5c95700 1 -- 192.168.123.102:0/3548008974 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc1b0103d70 msgr2=0x7fc1b0199420 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b5c95700 1 --2- 192.168.123.102:0/3548008974 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc1b0103d70 0x7fc1b0199420 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b5c95700 1 -- 192.168.123.102:0/3548008974 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc1a00097e0 con 0x7fc1b01033c0 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b5c95700 1 --2- 192.168.123.102:0/3548008974 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc1b01033c0 0x7fc1b0198ee0 secure :-1 s=READY pgs=263 cs=0 l=1 rev1=1 crypto rx=0x7fc1a0004930 tx=0x7fc1a0004a10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1a6ffd700 1 -- 192.168.123.102:0/3548008974 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc1a001d070 con 0x7fc1b01033c0 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1a6ffd700 1 -- 
192.168.123.102:0/3548008974 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc1a000bc50 con 0x7fc1b01033c0 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/3548008974 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc1b019db10 con 0x7fc1b01033c0 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.916+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/3548008974 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc1b019e000 con 0x7fc1b01033c0 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.917+0000 7fc1a6ffd700 1 -- 192.168.123.102:0/3548008974 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc1a000f830 con 0x7fc1b01033c0 2026-03-10T10:17:04.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.918+0000 7fc1a6ffd700 1 -- 192.168.123.102:0/3548008974 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fc1a000fa30 con 0x7fc1b01033c0 2026-03-10T10:17:04.920 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.919+0000 7fc1a6ffd700 1 --2- 192.168.123.102:0/3548008974 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc19c074cf0 0x7fc19c0771b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:04.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.919+0000 7fc1a6ffd700 1 -- 192.168.123.102:0/3548008974 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7fc1a008df10 con 0x7fc1b01033c0 2026-03-10T10:17:04.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.919+0000 7fc1b5494700 1 --2- 192.168.123.102:0/3548008974 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc19c074cf0 0x7fc19c0771b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:04.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.919+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/3548008974 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc194005320 con 0x7fc1b01033c0 2026-03-10T10:17:04.924 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.922+0000 7fc1a6ffd700 1 -- 192.168.123.102:0/3548008974 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fc1a00587e0 con 0x7fc1b01033c0 2026-03-10T10:17:04.924 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:04.922+0000 7fc1b5494700 1 --2- 192.168.123.102:0/3548008974 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc19c074cf0 0x7fc19c0771b0 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7fc1b019a500 tx=0x7fc1ac005c10 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:05.055 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.053+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/3548008974 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs ls"} v 0) v1 -- 0x7fc194006200 con 0x7fc1b01033c0 2026-03-10T10:17:05.055 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.053+0000 7fc1a6ffd700 1 -- 192.168.123.102:0/3548008974 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs ls"}]=0 v15) v1 ==== 53+0+83 (secure 0 0 0) 0x7fc1a0027030 con 0x7fc1b01033c0 2026-03-10T10:17:05.055 INFO:teuthology.orchestra.run.vm02.stdout:name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ] 
2026-03-10T10:17:05.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.056+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/3548008974 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc19c074cf0 msgr2=0x7fc19c0771b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:05.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.056+0000 7fc1b7ef9700 1 --2- 192.168.123.102:0/3548008974 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc19c074cf0 0x7fc19c0771b0 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7fc1b019a500 tx=0x7fc1ac005c10 comp rx=0 tx=0).stop 2026-03-10T10:17:05.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.056+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/3548008974 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc1b01033c0 msgr2=0x7fc1b0198ee0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:05.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.056+0000 7fc1b7ef9700 1 --2- 192.168.123.102:0/3548008974 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc1b01033c0 0x7fc1b0198ee0 secure :-1 s=READY pgs=263 cs=0 l=1 rev1=1 crypto rx=0x7fc1a0004930 tx=0x7fc1a0004a10 comp rx=0 tx=0).stop 2026-03-10T10:17:05.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.057+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/3548008974 shutdown_connections 2026-03-10T10:17:05.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.057+0000 7fc1b7ef9700 1 --2- 192.168.123.102:0/3548008974 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc1b01033c0 0x7fc1b0198ee0 unknown :-1 s=CLOSED pgs=263 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:05.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.057+0000 7fc1b7ef9700 1 --2- 192.168.123.102:0/3548008974 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc19c074cf0 0x7fc19c0771b0 
unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:05.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.057+0000 7fc1b7ef9700 1 --2- 192.168.123.102:0/3548008974 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc1b0103d70 0x7fc1b0199420 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:05.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.057+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/3548008974 >> 192.168.123.102:0/3548008974 conn(0x7fc1b00fec30 msgr2=0x7fc1b0100200 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:05.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.057+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/3548008974 shutdown_connections 2026-03-10T10:17:05.059 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:05.057+0000 7fc1b7ef9700 1 -- 192.168.123.102:0/3548008974 wait complete. 2026-03-10T10:17:05.122 INFO:tasks.cephfs.mount:Mounting default Ceph FS; just confirmed its presence on cluster 2026-03-10T10:17:05.122 INFO:tasks.cephfs.mount:Mounting Ceph FS. 
Following are details of mount; remember "None" represents Python type None - 2026-03-10T10:17:05.122 INFO:tasks.cephfs.mount:self.client_remote.hostname = vm05.local 2026-03-10T10:17:05.122 INFO:tasks.cephfs.mount:self.client.name = client.1 2026-03-10T10:17:05.122 INFO:tasks.cephfs.mount:self.hostfs_mntpt = /home/ubuntu/cephtest/mnt.1 2026-03-10T10:17:05.122 INFO:tasks.cephfs.mount:self.cephfs_name = None 2026-03-10T10:17:05.122 INFO:tasks.cephfs.mount:self.cephfs_mntpt = None 2026-03-10T10:17:05.122 INFO:tasks.cephfs.mount:self.client_keyring_path = None 2026-03-10T10:17:05.122 INFO:tasks.cephfs.mount:Setting the 'None' netns for '/home/ubuntu/cephtest/mnt.1' 2026-03-10T10:17:05.123 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:05.123 DEBUG:teuthology.orchestra.run.vm05:> ip addr 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout:1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout: inet 127.0.0.1/8 scope host lo 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout: valid_lft forever preferred_lft forever 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout: inet6 ::1/128 scope host 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout: valid_lft forever preferred_lft forever 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout:2: eth0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout: link/ether 52:55:00:00:00:05 brd ff:ff:ff:ff:ff:ff 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout: altname enp0s3 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout: altname ens3 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout: inet 192.168.123.105/24 brd 
192.168.123.255 scope global dynamic noprefixroute eth0 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout: valid_lft 3176sec preferred_lft 3176sec 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout: inet6 fe80::5055:ff:fe00:5/64 scope link noprefixroute 2026-03-10T10:17:05.138 INFO:teuthology.orchestra.run.vm05.stdout: valid_lft forever preferred_lft forever 2026-03-10T10:17:05.138 INFO:tasks.cephfs.mount:Setuping the 'ceph-brx' with 192.168.159.254/20 2026-03-10T10:17:05.139 DEBUG:teuthology.orchestra.run.vm05:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-10T10:17:05.139 DEBUG:teuthology.orchestra.run.vm05:> set -e 2026-03-10T10:17:05.139 DEBUG:teuthology.orchestra.run.vm05:> sudo ip link add name ceph-brx type bridge 2026-03-10T10:17:05.139 DEBUG:teuthology.orchestra.run.vm05:> sudo ip addr flush dev ceph-brx 2026-03-10T10:17:05.139 DEBUG:teuthology.orchestra.run.vm05:> sudo ip link set ceph-brx up 2026-03-10T10:17:05.139 DEBUG:teuthology.orchestra.run.vm05:> sudo ip addr add 192.168.159.254/20 brd 192.168.159.255 dev ceph-brx 2026-03-10T10:17:05.139 DEBUG:teuthology.orchestra.run.vm05:> ') 2026-03-10T10:17:05.212 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:17:05 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-10T10:17:05.285 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:17:05 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-10T10:17:05.288 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:05.288 DEBUG:teuthology.orchestra.run.vm05:> echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward 2026-03-10T10:17:05.360 INFO:teuthology.orchestra.run.vm05.stdout:1 2026-03-10T10:17:05.361 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:05.361 DEBUG:teuthology.orchestra.run.vm05:> ip r 2026-03-10T10:17:05.417 INFO:teuthology.orchestra.run.vm05.stdout:default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.105 metric 100 2026-03-10T10:17:05.417 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.105 metric 100 2026-03-10T10:17:05.417 INFO:teuthology.orchestra.run.vm05.stdout:192.168.144.0/20 dev ceph-brx proto kernel scope link src 192.168.159.254 2026-03-10T10:17:05.418 DEBUG:teuthology.orchestra.run.vm05:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-10T10:17:05.418 DEBUG:teuthology.orchestra.run.vm05:> set -e 2026-03-10T10:17:05.418 DEBUG:teuthology.orchestra.run.vm05:> sudo iptables -A FORWARD -o eth0 -i ceph-brx -j ACCEPT 2026-03-10T10:17:05.418 DEBUG:teuthology.orchestra.run.vm05:> sudo iptables -A FORWARD -i eth0 -o ceph-brx -j ACCEPT 2026-03-10T10:17:05.418 DEBUG:teuthology.orchestra.run.vm05:> sudo iptables -t nat -A POSTROUTING -s 192.168.159.254/20 -o eth0 -j MASQUERADE 2026-03-10T10:17:05.418 DEBUG:teuthology.orchestra.run.vm05:> ') 2026-03-10T10:17:05.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:17:05 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-10T10:17:05.561 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:17:05 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-10T10:17:05.563 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:05.563 DEBUG:teuthology.orchestra.run.vm05:> ip netns list 2026-03-10T10:17:05.622 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:05.622 DEBUG:teuthology.orchestra.run.vm05:> ip netns list-id 2026-03-10T10:17:05.681 DEBUG:teuthology.orchestra.run.vm05:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-10T10:17:05.681 DEBUG:teuthology.orchestra.run.vm05:> set -e 2026-03-10T10:17:05.681 DEBUG:teuthology.orchestra.run.vm05:> sudo ip netns add ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-10T10:17:05.681 DEBUG:teuthology.orchestra.run.vm05:> sudo ip netns set ceph-ns--home-ubuntu-cephtest-mnt.1 0 2026-03-10T10:17:05.681 DEBUG:teuthology.orchestra.run.vm05:> ') 2026-03-10T10:17:05.760 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:17:05 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-10T10:17:05.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:17:05 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-10T10:17:05.797 INFO:tasks.cephfs.mount:Setuping the netns 'ceph-ns--home-ubuntu-cephtest-mnt.1' with 192.168.144.1/20 2026-03-10T10:17:05.798 DEBUG:teuthology.orchestra.run.vm05:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-10T10:17:05.798 DEBUG:teuthology.orchestra.run.vm05:> set -e 2026-03-10T10:17:05.798 DEBUG:teuthology.orchestra.run.vm05:> sudo ip link add veth0 netns ceph-ns--home-ubuntu-cephtest-mnt.1 type veth peer name brx.0 2026-03-10T10:17:05.798 DEBUG:teuthology.orchestra.run.vm05:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.1 ip addr add 192.168.144.1/20 brd 192.168.159.255 dev veth0 2026-03-10T10:17:05.798 DEBUG:teuthology.orchestra.run.vm05:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.1 ip link set veth0 up 2026-03-10T10:17:05.798 DEBUG:teuthology.orchestra.run.vm05:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.1 ip link set lo up 2026-03-10T10:17:05.798 DEBUG:teuthology.orchestra.run.vm05:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.1 ip route add default via 192.168.159.254 2026-03-10T10:17:05.798 DEBUG:teuthology.orchestra.run.vm05:> ') 2026-03-10T10:17:05.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:17:05 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-10T10:17:05.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:17:05 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-10T10:17:05.946 DEBUG:teuthology.orchestra.run.vm05:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-10T10:17:05.946 DEBUG:teuthology.orchestra.run.vm05:> set -e 2026-03-10T10:17:05.946 DEBUG:teuthology.orchestra.run.vm05:> sudo ip link set brx.0 up 2026-03-10T10:17:05.946 DEBUG:teuthology.orchestra.run.vm05:> sudo ip link set dev brx.0 master ceph-brx 2026-03-10T10:17:05.946 DEBUG:teuthology.orchestra.run.vm05:> ') 2026-03-10T10:17:06.023 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:17:06 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-10T10:17:06.030 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:05 vm05.local ceph-mon[59051]: pgmap v82: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 3.6 KiB/s rd, 1.5 KiB/s wr, 8 op/s 2026-03-10T10:17:06.031 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:05 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:06.031 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:05 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/3548008974' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch 2026-03-10T10:17:06.047 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:17:06 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-10T10:17:06.052 INFO:tasks.cephfs.fuse_mount:Client client.1 config is {} 2026-03-10T10:17:06.052 INFO:teuthology.orchestra.run:Running command with timeout 60 2026-03-10T10:17:06.052 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -v /home/ubuntu/cephtest/mnt.1 2026-03-10T10:17:06.108 INFO:teuthology.orchestra.run.vm05.stdout:mkdir: created directory '/home/ubuntu/cephtest/mnt.1' 2026-03-10T10:17:06.108 INFO:teuthology.orchestra.run:Running command with timeout 60 2026-03-10T10:17:06.108 DEBUG:teuthology.orchestra.run.vm05:> chmod 0000 /home/ubuntu/cephtest/mnt.1 2026-03-10T10:17:06.164 DEBUG:teuthology.orchestra.run.vm05:> sudo modprobe fuse 2026-03-10T10:17:06.235 DEBUG:teuthology.orchestra.run.vm05:> cat /proc/self/mounts | awk '{print $2}' 2026-03-10T10:17:06.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:05 vm02.local ceph-mon[50200]: pgmap v82: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 3.6 KiB/s rd, 1.5 KiB/s wr, 8 op/s 2026-03-10T10:17:06.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:05 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:06.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:05 vm02.local ceph-mon[50200]: from='client.? 
192.168.123.102:0/3548008974' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch 2026-03-10T10:17:06.292 INFO:teuthology.orchestra.run.vm05.stdout:/proc 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/sys 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/dev 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/security 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/dev/shm 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/dev/pts 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/run 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/cgroup 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/pstore 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/bpf 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/config 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/ 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/nfs/rpc_pipefs 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/selinux 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/proc/sys/fs/binfmt_misc 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/dev/hugepages 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/dev/mqueue 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/debug 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/tracing 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-sysctl.service 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/fuse/connections 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-sysusers.service 2026-03-10T10:17:06.293 
INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/run/user/1000 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/run/user/0 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/proc/sys/fs/binfmt_misc 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/762b75d66da9c39755344b0f49651c57f1b843ff8d552611b6f3b4fc4ce3561a/merged 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/5d7548c54cf4436991649851c6c220fd227ad7de89b9528e0a1d6a96a7a60dac/merged 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/b26cb09bfd5f1b3c8e50dedd6db95377c774d27314b5115119009c59bd310140/merged 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/7ea954fd21655dc4db09995925299ed8c7934d0437d47219242bac039ef238e2/merged 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/90f9642f5385e38455f3f8aeb7cf8cf3853deeb29beffda4ad624c94977b186e/merged 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/53f2b2594ea52afd3dddd647447fd3a8484bdfe735603851f69b969a2fa249cc/merged 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/bfa6f8b6a0a342a955cf7a04d9bde152de993118d0406aaaa8081e83975bb137/merged 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/57b917404c90f56e9a6790b71c3857a7317d409626fe0c62d2a17f0fd151f452/merged 2026-03-10T10:17:06.293 
INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/ba31c36dbfbed55b8bf125cbfa42ff9ed5530e846e99cd421129fbcb1b12dbd9/merged 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/c905955f7f31ca1c233dc4ed0f075e91b94498b1dc7cc8b19675656853963acf/merged 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/run/netns 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run.vm05.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-10T10:17:06.293 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:06.293 DEBUG:teuthology.orchestra.run.vm05:> ls /sys/fs/fuse/connections 2026-03-10T10:17:06.348 INFO:tasks.cephfs.fuse_mount:Pre-mount connections: [] 2026-03-10T10:17:06.348 DEBUG:teuthology.orchestra.run.vm05:> (cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.1 --id 1) 2026-03-10T10:17:06.391 DEBUG:teuthology.orchestra.run.vm05:> sudo modprobe fuse 2026-03-10T10:17:06.418 DEBUG:teuthology.orchestra.run.vm05:> cat /proc/self/mounts | awk '{print $2}' 2026-03-10T10:17:06.465 INFO:tasks.cephfs.fuse_mount.ceph-fuse.1.vm05.stderr:2026-03-10T10:17:06.463+0000 7f874c20d480 -1 init, newargv = 0x561e5b2af150 newargc=15 2026-03-10T10:17:06.465 INFO:tasks.cephfs.fuse_mount.ceph-fuse.1.vm05.stderr:ceph-fuse[84420]: starting ceph client 2026-03-10T10:17:06.473 INFO:tasks.cephfs.fuse_mount.ceph-fuse.1.vm05.stderr:ceph-fuse[84420]: starting fuse 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/proc 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/sys 2026-03-10T10:17:06.486 
INFO:teuthology.orchestra.run.vm05.stdout:/dev 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/security 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/dev/shm 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/dev/pts 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/run 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/cgroup 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/pstore 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/bpf 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/config 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/ 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/nfs/rpc_pipefs 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/selinux 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/proc/sys/fs/binfmt_misc 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/dev/hugepages 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/dev/mqueue 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/debug 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/tracing 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-sysctl.service 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/fuse/connections 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-sysusers.service 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/run/user/1000 2026-03-10T10:17:06.486 
INFO:teuthology.orchestra.run.vm05.stdout:/run/user/0 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/proc/sys/fs/binfmt_misc 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/762b75d66da9c39755344b0f49651c57f1b843ff8d552611b6f3b4fc4ce3561a/merged 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/5d7548c54cf4436991649851c6c220fd227ad7de89b9528e0a1d6a96a7a60dac/merged 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/b26cb09bfd5f1b3c8e50dedd6db95377c774d27314b5115119009c59bd310140/merged 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/7ea954fd21655dc4db09995925299ed8c7934d0437d47219242bac039ef238e2/merged 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/90f9642f5385e38455f3f8aeb7cf8cf3853deeb29beffda4ad624c94977b186e/merged 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/53f2b2594ea52afd3dddd647447fd3a8484bdfe735603851f69b969a2fa249cc/merged 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/bfa6f8b6a0a342a955cf7a04d9bde152de993118d0406aaaa8081e83975bb137/merged 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/57b917404c90f56e9a6790b71c3857a7317d409626fe0c62d2a17f0fd151f452/merged 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/ba31c36dbfbed55b8bf125cbfa42ff9ed5530e846e99cd421129fbcb1b12dbd9/merged 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/c905955f7f31ca1c233dc4ed0f075e91b94498b1dc7cc8b19675656853963acf/merged 
2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/run/netns 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-10T10:17:06.486 INFO:teuthology.orchestra.run.vm05.stdout:/home/ubuntu/cephtest/mnt.1 2026-03-10T10:17:06.487 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:06.487 DEBUG:teuthology.orchestra.run.vm05:> ls /sys/fs/fuse/connections 2026-03-10T10:17:06.543 INFO:teuthology.orchestra.run.vm05.stdout:90 2026-03-10T10:17:06.543 INFO:tasks.cephfs.fuse_mount:Post-mount connections: [90] 2026-03-10T10:17:06.543 DEBUG:teuthology.orchestra.run.vm05:> sudo stdin-killer -- python3 -c ' 2026-03-10T10:17:06.543 DEBUG:teuthology.orchestra.run.vm05:> import glob 2026-03-10T10:17:06.543 DEBUG:teuthology.orchestra.run.vm05:> import re 2026-03-10T10:17:06.543 DEBUG:teuthology.orchestra.run.vm05:> import os 2026-03-10T10:17:06.543 DEBUG:teuthology.orchestra.run.vm05:> import subprocess 2026-03-10T10:17:06.543 DEBUG:teuthology.orchestra.run.vm05:> 2026-03-10T10:17:06.543 DEBUG:teuthology.orchestra.run.vm05:> def _find_admin_socket(client_name): 2026-03-10T10:17:06.543 DEBUG:teuthology.orchestra.run.vm05:> asok_path = "/var/run/ceph/ceph-client.1.*.asok" 2026-03-10T10:17:06.543 DEBUG:teuthology.orchestra.run.vm05:> files = glob.glob(asok_path) 2026-03-10T10:17:06.543 DEBUG:teuthology.orchestra.run.vm05:> mountpoint = "/home/ubuntu/cephtest/mnt.1" 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> # Given a non-glob path, it better be there 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> if "*" not in asok_path: 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> assert(len(files) == 1) 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> return 
files[0] 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> for f in files: 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> pid = re.match(".*\.(\d+)\.asok$", f).group(1) 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> if os.path.exists("/proc/{0}".format(pid)): 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> with open("/proc/{0}/cmdline".format(pid), '"'"'r'"'"') as proc_f: 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> contents = proc_f.read() 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> if mountpoint in contents: 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> return f 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> raise RuntimeError("Client socket {0} not found".format(client_name)) 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> print(_find_admin_socket("client.1")) 2026-03-10T10:17:06.544 DEBUG:teuthology.orchestra.run.vm05:> ' 2026-03-10T10:17:06.642 INFO:teuthology.orchestra.run.vm05.stdout:/var/run/ceph/ceph-client.1.84420.asok 2026-03-10T10:17:06.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T10:17:06 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-10T10:17:06.649 INFO:tasks.cephfs.fuse_mount:Found client admin socket at /var/run/ceph/ceph-client.1.84420.asok 2026-03-10T10:17:06.649 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:06.650 DEBUG:teuthology.orchestra.run.vm05:> sudo ceph --admin-daemon /var/run/ceph/ceph-client.1.84420.asok status 2026-03-10T10:17:06.758 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T10:17:06.758 INFO:teuthology.orchestra.run.vm05.stdout: "metadata": { 2026-03-10T10:17:06.758 INFO:teuthology.orchestra.run.vm05.stdout: "ceph_sha1": "7fe91d5d5842e04be3b4f514d6dd990c54b29c76", 2026-03-10T10:17:06.758 INFO:teuthology.orchestra.run.vm05.stdout: "ceph_version": "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)", 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "entity_id": "1", 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "hostname": "vm05.local", 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "mount_point": "/home/ubuntu/cephtest/mnt.1", 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "pid": "84420", 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "root": "/" 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "dentry_count": 0, 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "dentry_pinned_count": 0, 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "id": 24319, 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "inst": { 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "name": { 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "type": "client", 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "num": 24319 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T10:17:06.759 
INFO:teuthology.orchestra.run.vm05.stdout: "addr": { 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "type": "v1", 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "addr": "192.168.144.1:0", 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "nonce": 2118398450 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "addr": { 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "type": "v1", 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "addr": "192.168.144.1:0", 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "nonce": 2118398450 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "inst_str": "client.24319 192.168.144.1:0/2118398450", 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "addr_str": "192.168.144.1:0/2118398450", 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "inode_count": 1, 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "mds_epoch": 15, 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "osd_epoch": 39, 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "osd_epoch_barrier": 0, 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "blocklisted": false, 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout: "fs_name": "cephfs" 2026-03-10T10:17:06.759 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T10:17:06.766 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:06.766 DEBUG:teuthology.orchestra.run.vm02:> stat --file-system '--printf=%T 2026-03-10T10:17:06.766 DEBUG:teuthology.orchestra.run.vm02:> ' -- /home/ubuntu/cephtest/mnt.0 2026-03-10T10:17:06.782 
INFO:teuthology.orchestra.run.vm02.stdout:fuseblk 2026-03-10T10:17:06.782 INFO:tasks.cephfs.fuse_mount:ceph-fuse is mounted on /home/ubuntu/cephtest/mnt.0 2026-03-10T10:17:06.782 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:06.782 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 1777 /home/ubuntu/cephtest/mnt.0 2026-03-10T10:17:06.852 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:06.852 DEBUG:teuthology.orchestra.run.vm05:> stat --file-system '--printf=%T 2026-03-10T10:17:06.853 DEBUG:teuthology.orchestra.run.vm05:> ' -- /home/ubuntu/cephtest/mnt.1 2026-03-10T10:17:06.868 INFO:teuthology.orchestra.run.vm05.stdout:fuseblk 2026-03-10T10:17:06.868 INFO:tasks.cephfs.fuse_mount:ceph-fuse is mounted on /home/ubuntu/cephtest/mnt.1 2026-03-10T10:17:06.868 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:17:06.868 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 1777 /home/ubuntu/cephtest/mnt.1 2026-03-10T10:17:06.937 INFO:teuthology.run_tasks:Running task print... 2026-03-10T10:17:06.940 INFO:teuthology.task.print:**** done client 2026-03-10T10:17:06.940 INFO:teuthology.run_tasks:Running task parallel... 2026-03-10T10:17:06.943 INFO:teuthology.task.parallel:starting parallel... 2026-03-10T10:17:06.943 INFO:teuthology.task.parallel:In parallel, running task sequential... 2026-03-10T10:17:06.943 INFO:teuthology.task.sequential:In sequential, running task cephadm.shell... 
2026-03-10T10:17:06.944 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local 2026-03-10T10:17:06.944 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mgr mgr/orchestrator/fail_fs false || true' 2026-03-10T10:17:06.944 INFO:teuthology.task.parallel:In parallel, running task sequential... 2026-03-10T10:17:06.944 INFO:teuthology.task.sequential:In sequential, running task workunit... 2026-03-10T10:17:06.945 INFO:tasks.workunit:Pulling workunits from ref 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T10:17:06.946 INFO:tasks.workunit:Making a separate scratch dir for every client... 2026-03-10T10:17:06.946 INFO:tasks.workunit:timeout=3h 2026-03-10T10:17:06.946 INFO:tasks.workunit:cleanup=True 2026-03-10T10:17:06.946 DEBUG:teuthology.orchestra.run.vm02:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-10T10:17:06.966 INFO:teuthology.orchestra.run.vm02.stdout: File: /home/ubuntu/cephtest/mnt.0 2026-03-10T10:17:06.966 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 1 IO Block: 4194304 directory 2026-03-10T10:17:06.966 INFO:teuthology.orchestra.run.vm02.stdout:Device: 33h/51d Inode: 1 Links: 2 2026-03-10T10:17:06.966 INFO:teuthology.orchestra.run.vm02.stdout:Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-10T10:17:06.966 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:fusefs_t:s0 2026-03-10T10:17:06.966 INFO:teuthology.orchestra.run.vm02.stdout:Access: 1970-01-01 00:00:00.000000000 +0000 2026-03-10T10:17:06.966 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 10:16:55.485037769 +0000 2026-03-10T10:17:06.966 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 10:17:06.850341360 +0000 2026-03-10T10:17:06.966 
INFO:teuthology.orchestra.run.vm02.stdout: Birth: - 2026-03-10T10:17:06.967 INFO:tasks.workunit:Did not need to create dir /home/ubuntu/cephtest/mnt.0 2026-03-10T10:17:06.967 DEBUG:teuthology.orchestra.run.vm02:> cd -- /home/ubuntu/cephtest/mnt.0 && sudo install -d -m 0755 --owner=ubuntu -- client.0 2026-03-10T10:17:07.047 DEBUG:teuthology.orchestra.run.vm05:> stat -- /home/ubuntu/cephtest/mnt.1 2026-03-10T10:17:07.065 INFO:teuthology.orchestra.run.vm05.stdout: File: /home/ubuntu/cephtest/mnt.1 2026-03-10T10:17:07.065 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 1 IO Block: 4194304 directory 2026-03-10T10:17:07.065 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5ah/90d Inode: 1 Links: 3 2026-03-10T10:17:07.065 INFO:teuthology.orchestra.run.vm05.stdout:Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-10T10:17:07.065 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fusefs_t:s0 2026-03-10T10:17:07.065 INFO:teuthology.orchestra.run.vm05.stdout:Access: 1970-01-01 00:00:00.000000000 +0000 2026-03-10T10:17:07.065 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 10:17:07.036772092 +0000 2026-03-10T10:17:07.065 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 10:17:07.036772092 +0000 2026-03-10T10:17:07.065 INFO:teuthology.orchestra.run.vm05.stdout: Birth: - 2026-03-10T10:17:07.065 INFO:tasks.workunit:Did not need to create dir /home/ubuntu/cephtest/mnt.1 2026-03-10T10:17:07.066 DEBUG:teuthology.orchestra.run.vm05:> cd -- /home/ubuntu/cephtest/mnt.1 && sudo install -d -m 0755 --owner=ubuntu -- client.1 2026-03-10T10:17:07.132 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:17:07.141 DEBUG:teuthology.orchestra.run.vm02:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 
75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T10:17:07.141 DEBUG:teuthology.orchestra.run.vm05:> rm -rf /home/ubuntu/cephtest/clone.client.1 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.1 && cd /home/ubuntu/cephtest/clone.client.1 && git checkout 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T10:17:07.175 INFO:tasks.workunit.client.0.vm02.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-10T10:17:07.199 INFO:tasks.workunit.client.1.vm05.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.1'... 2026-03-10T10:17:07.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.405+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2222587889 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c70100f00 msgr2=0x7f8c70101340 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:07.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.405+0000 7f8c76b7d700 1 --2- 192.168.123.102:0/2222587889 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c70100f00 0x7f8c70101340 secure :-1 s=READY pgs=264 cs=0 l=1 rev1=1 crypto rx=0x7f8c60009b00 tx=0x7f8c60009e10 comp rx=0 tx=0).stop 2026-03-10T10:17:07.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.407+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2222587889 shutdown_connections 2026-03-10T10:17:07.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.407+0000 7f8c76b7d700 1 --2- 192.168.123.102:0/2222587889 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c701020e0 0x7f8c70102560 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:07.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.407+0000 7f8c76b7d700 1 --2- 192.168.123.102:0/2222587889 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c70100f00 0x7f8c70101340 unknown :-1 s=CLOSED pgs=264 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T10:17:07.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.407+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2222587889 >> 192.168.123.102:0/2222587889 conn(0x7f8c700fc4d0 msgr2=0x7f8c700fe930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:07.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.407+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2222587889 shutdown_connections 2026-03-10T10:17:07.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.407+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2222587889 wait complete. 2026-03-10T10:17:07.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.408+0000 7f8c76b7d700 1 Processor -- start 2026-03-10T10:17:07.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.408+0000 7f8c76b7d700 1 -- start start 2026-03-10T10:17:07.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.408+0000 7f8c76b7d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c70100f00 0x7f8c70192b70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:07.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.408+0000 7f8c76b7d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c701020e0 0x7f8c701930b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:07.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.408+0000 7f8c76b7d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8c7018f660 con 0x7f8c70100f00 2026-03-10T10:17:07.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.408+0000 7f8c76b7d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8c7018f7d0 con 0x7f8c701020e0 2026-03-10T10:17:07.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.408+0000 7f8c74919700 1 
--2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c70100f00 0x7f8c70192b70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:07.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.408+0000 7f8c74919700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c70100f00 0x7f8c70192b70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:42988/0 (socket says 192.168.123.102:42988) 2026-03-10T10:17:07.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.408+0000 7f8c74919700 1 -- 192.168.123.102:0/2998207655 learned_addr learned my addr 192.168.123.102:0/2998207655 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:07.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.409+0000 7f8c74919700 1 -- 192.168.123.102:0/2998207655 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c701020e0 msgr2=0x7f8c701930b0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:17:07.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.409+0000 7f8c74919700 1 --2- 192.168.123.102:0/2998207655 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c701020e0 0x7f8c701930b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:07.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.409+0000 7f8c74919700 1 -- 192.168.123.102:0/2998207655 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8c600097e0 con 0x7f8c70100f00 2026-03-10T10:17:07.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.409+0000 7f8c74919700 1 --2- 192.168.123.102:0/2998207655 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7f8c70100f00 0x7f8c70192b70 secure :-1 s=READY pgs=265 cs=0 l=1 rev1=1 crypto rx=0x7f8c60006010 tx=0x7f8c600049e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:07.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.409+0000 7f8c6dffb700 1 -- 192.168.123.102:0/2998207655 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8c6001d070 con 0x7f8c70100f00 2026-03-10T10:17:07.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.409+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2998207655 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8c7018fa50 con 0x7f8c70100f00 2026-03-10T10:17:07.412 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.410+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2998207655 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8c7018ff40 con 0x7f8c70100f00 2026-03-10T10:17:07.412 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.411+0000 7f8c6dffb700 1 -- 192.168.123.102:0/2998207655 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8c6000bc50 con 0x7f8c70100f00 2026-03-10T10:17:07.412 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.411+0000 7f8c6dffb700 1 -- 192.168.123.102:0/2998207655 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8c6000f8b0 con 0x7f8c70100f00 2026-03-10T10:17:07.413 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.411+0000 7f8c6dffb700 1 -- 192.168.123.102:0/2998207655 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f8c6000fad0 con 0x7f8c70100f00 2026-03-10T10:17:07.413 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.411+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2998207655 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8c5c005320 con 0x7f8c70100f00 2026-03-10T10:17:07.416 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.412+0000 7f8c6dffb700 1 --2- 192.168.123.102:0/2998207655 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8c5806c490 0x7f8c5806e950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:07.416 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.412+0000 7f8c6dffb700 1 -- 192.168.123.102:0/2998207655 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f8c6008d440 con 0x7f8c70100f00 2026-03-10T10:17:07.416 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.415+0000 7f8c6ffff700 1 --2- 192.168.123.102:0/2998207655 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8c5806c490 0x7f8c5806e950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:07.416 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.415+0000 7f8c6ffff700 1 --2- 192.168.123.102:0/2998207655 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8c5806c490 0x7f8c5806e950 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7f8c6400ba10 tx=0x7f8c6400b3f0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:07.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.415+0000 7f8c6dffb700 1 -- 192.168.123.102:0/2998207655 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f8c6005c3e0 con 0x7f8c70100f00 2026-03-10T10:17:07.534 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.532+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2998207655 
--> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command([{prefix=config set, name=mgr/orchestrator/fail_fs}] v 0) v1 -- 0x7f8c5c005f70 con 0x7f8c70100f00 2026-03-10T10:17:07.541 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.540+0000 7f8c6dffb700 1 -- 192.168.123.102:0/2998207655 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/orchestrator/fail_fs}]=0 v16) v1 ==== 126+0+0 (secure 0 0 0) 0x7f8c6005bf70 con 0x7f8c70100f00 2026-03-10T10:17:07.545 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.544+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2998207655 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8c5806c490 msgr2=0x7f8c5806e950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:07.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.544+0000 7f8c76b7d700 1 --2- 192.168.123.102:0/2998207655 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8c5806c490 0x7f8c5806e950 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7f8c6400ba10 tx=0x7f8c6400b3f0 comp rx=0 tx=0).stop 2026-03-10T10:17:07.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.544+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2998207655 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c70100f00 msgr2=0x7f8c70192b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:07.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.544+0000 7f8c76b7d700 1 --2- 192.168.123.102:0/2998207655 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c70100f00 0x7f8c70192b70 secure :-1 s=READY pgs=265 cs=0 l=1 rev1=1 crypto rx=0x7f8c60006010 tx=0x7f8c600049e0 comp rx=0 tx=0).stop 2026-03-10T10:17:07.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.544+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2998207655 shutdown_connections 2026-03-10T10:17:07.546
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.544+0000 7f8c76b7d700 1 --2- 192.168.123.102:0/2998207655 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c70100f00 0x7f8c70192b70 unknown :-1 s=CLOSED pgs=265 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:07.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.544+0000 7f8c76b7d700 1 --2- 192.168.123.102:0/2998207655 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8c5806c490 0x7f8c5806e950 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:07.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.544+0000 7f8c76b7d700 1 --2- 192.168.123.102:0/2998207655 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c701020e0 0x7f8c701930b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:07.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.544+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2998207655 >> 192.168.123.102:0/2998207655 conn(0x7f8c700fc4d0 msgr2=0x7f8c70103390 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:07.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.544+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2998207655 shutdown_connections 2026-03-10T10:17:07.546 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:07.544+0000 7f8c76b7d700 1 -- 192.168.123.102:0/2998207655 wait complete. 2026-03-10T10:17:07.621 INFO:teuthology.task.sequential:In sequential, running task cephadm.shell... 
2026-03-10T10:17:07.621 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local 2026-03-10T10:17:07.621 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force' 2026-03-10T10:17:07.832 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:17:08.108 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.106+0000 7fc2eac60700 1 -- 192.168.123.102:0/2702853109 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2e4108750 msgr2=0x7fc2e4108bb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:08.108 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.106+0000 7fc2eac60700 1 --2- 192.168.123.102:0/2702853109 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2e4108750 0x7fc2e4108bb0 secure :-1 s=READY pgs=266 cs=0 l=1 rev1=1 crypto rx=0x7fc2d8009b00 tx=0x7fc2d8009e10 comp rx=0 tx=0).stop 2026-03-10T10:17:08.108 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.107+0000 7fc2eac60700 1 -- 192.168.123.102:0/2702853109 shutdown_connections 2026-03-10T10:17:08.108 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.107+0000 7fc2eac60700 1 --2- 192.168.123.102:0/2702853109 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2e4108750 0x7fc2e4108bb0 unknown :-1 s=CLOSED pgs=266 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.108 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.107+0000 7fc2eac60700 1 --2- 192.168.123.102:0/2702853109 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2e4107550 0x7fc2e4107970 unknown :-1 
s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.109 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.107+0000 7fc2eac60700 1 -- 192.168.123.102:0/2702853109 >> 192.168.123.102:0/2702853109 conn(0x7fc2e4076500 msgr2=0x7fc2e4078960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:08.109 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.107+0000 7fc2eac60700 1 -- 192.168.123.102:0/2702853109 shutdown_connections 2026-03-10T10:17:08.109 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.107+0000 7fc2eac60700 1 -- 192.168.123.102:0/2702853109 wait complete. 2026-03-10T10:17:08.109 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.108+0000 7fc2eac60700 1 Processor -- start 2026-03-10T10:17:08.109 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.108+0000 7fc2eac60700 1 -- start start 2026-03-10T10:17:08.109 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.108+0000 7fc2eac60700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2e4107550 0x7fc2e419ce70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:08.110 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.108+0000 7fc2eac60700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2e4108750 0x7fc2e419d3b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:08.110 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.108+0000 7fc2eac60700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc2e419d9d0 con 0x7fc2e4108750 2026-03-10T10:17:08.110 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.108+0000 7fc2eac60700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc2e419db10 con 0x7fc2e4107550 2026-03-10T10:17:08.110 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.108+0000 7fc2e89fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2e4107550 0x7fc2e419ce70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:08.110 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.108+0000 7fc2e89fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2e4107550 0x7fc2e419ce70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:40834/0 (socket says 192.168.123.102:40834) 2026-03-10T10:17:08.110 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.108+0000 7fc2e89fc700 1 -- 192.168.123.102:0/2013722114 learned_addr learned my addr 192.168.123.102:0/2013722114 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:08.110 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.108+0000 7fc2e3fff700 1 --2- 192.168.123.102:0/2013722114 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2e4108750 0x7fc2e419d3b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:08.110 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.109+0000 7fc2e89fc700 1 -- 192.168.123.102:0/2013722114 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2e4108750 msgr2=0x7fc2e419d3b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:08.110 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.109+0000 7fc2e89fc700 1 --2- 192.168.123.102:0/2013722114 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2e4108750 0x7fc2e419d3b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.110 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.109+0000 7fc2e89fc700 1 -- 192.168.123.102:0/2013722114 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc2d80097e0 con 0x7fc2e4107550 2026-03-10T10:17:08.110 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.109+0000 7fc2e3fff700 1 --2- 192.168.123.102:0/2013722114 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2e4108750 0x7fc2e419d3b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-10T10:17:08.110 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.109+0000 7fc2e89fc700 1 --2- 192.168.123.102:0/2013722114 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2e4107550 0x7fc2e419ce70 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fc2d400b700 tx=0x7fc2d400bac0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:08.111 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.109+0000 7fc2e1ffb700 1 -- 192.168.123.102:0/2013722114 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc2d4010820 con 0x7fc2e4107550 2026-03-10T10:17:08.111 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.109+0000 7fc2eac60700 1 -- 192.168.123.102:0/2013722114 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc2e41a25c0 con 0x7fc2e4107550 2026-03-10T10:17:08.111 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.110+0000 7fc2eac60700 1 -- 192.168.123.102:0/2013722114 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc2e41a2b10 con 0x7fc2e4107550 2026-03-10T10:17:08.112 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.110+0000 7fc2e1ffb700 1 -- 192.168.123.102:0/2013722114 <== mon.1 v2:192.168.123.105:3300/0 2 
==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc2d4010e60 con 0x7fc2e4107550 2026-03-10T10:17:08.112 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.110+0000 7fc2e1ffb700 1 -- 192.168.123.102:0/2013722114 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc2d4017570 con 0x7fc2e4107550 2026-03-10T10:17:08.113 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.111+0000 7fc2e1ffb700 1 -- 192.168.123.102:0/2013722114 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fc2d400f3c0 con 0x7fc2e4107550 2026-03-10T10:17:08.113 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.112+0000 7fc2eac60700 1 -- 192.168.123.102:0/2013722114 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc2e4066e80 con 0x7fc2e4107550 2026-03-10T10:17:08.113 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.112+0000 7fc2e1ffb700 1 --2- 192.168.123.102:0/2013722114 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc2cc06c4e0 0x7fc2cc06e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:08.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.112+0000 7fc2e3fff700 1 --2- 192.168.123.102:0/2013722114 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc2cc06c4e0 0x7fc2cc06e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:08.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.113+0000 7fc2e1ffb700 1 -- 192.168.123.102:0/2013722114 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7fc2d408b2f0 con 0x7fc2e4107550 2026-03-10T10:17:08.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.113+0000 7fc2e3fff700 
1 --2- 192.168.123.102:0/2013722114 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc2cc06c4e0 0x7fc2cc06e9a0 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fc2d800b5c0 tx=0x7fc2d8005fb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:08.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.115+0000 7fc2e1ffb700 1 -- 192.168.123.102:0/2013722114 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fc2d4059520 con 0x7fc2e4107550 2026-03-10T10:17:08.231 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.229+0000 7fc2eac60700 1 -- 192.168.123.102:0/2013722114 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim}] v 0) v1 -- 0x7fc2e41a2df0 con 0x7fc2e4107550 2026-03-10T10:17:08.236 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.234+0000 7fc2e1ffb700 1 -- 192.168.123.102:0/2013722114 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim}]=0 v16)=0 v16) v1 ==== 155+0+0 (secure 0 0 0) 0x7fc2d40590b0 con 0x7fc2e4107550 2026-03-10T10:17:08.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.237+0000 7fc2eac60700 1 -- 192.168.123.102:0/2013722114 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc2cc06c4e0 msgr2=0x7fc2cc06e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:08.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.237+0000 7fc2eac60700 1 --2- 192.168.123.102:0/2013722114 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc2cc06c4e0 0x7fc2cc06e9a0 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fc2d800b5c0 tx=0x7fc2d8005fb0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.239 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.237+0000 7fc2eac60700 1 -- 192.168.123.102:0/2013722114 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2e4107550 msgr2=0x7fc2e419ce70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:08.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.237+0000 7fc2eac60700 1 --2- 192.168.123.102:0/2013722114 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2e4107550 0x7fc2e419ce70 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fc2d400b700 tx=0x7fc2d400bac0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.238+0000 7fc2eac60700 1 -- 192.168.123.102:0/2013722114 shutdown_connections 2026-03-10T10:17:08.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.238+0000 7fc2eac60700 1 --2- 192.168.123.102:0/2013722114 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc2cc06c4e0 0x7fc2cc06e9a0 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.238+0000 7fc2eac60700 1 --2- 192.168.123.102:0/2013722114 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2e4107550 0x7fc2e419ce70 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.238+0000 7fc2eac60700 1 --2- 192.168.123.102:0/2013722114 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2e4108750 0x7fc2e419d3b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.238+0000 7fc2eac60700 1 -- 192.168.123.102:0/2013722114 >> 192.168.123.102:0/2013722114 conn(0x7fc2e4076500 msgr2=0x7fc2e410b980 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-10T10:17:08.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.238+0000 7fc2eac60700 1 -- 192.168.123.102:0/2013722114 shutdown_connections 2026-03-10T10:17:08.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.238+0000 7fc2eac60700 1 -- 192.168.123.102:0/2013722114 wait complete. 2026-03-10T10:17:08.310 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force' 2026-03-10T10:17:08.472 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:17:08.499 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:08 vm02.local ceph-mon[50200]: pgmap v83: 65 pgs: 65 active+clean; 452 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 9.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s 2026-03-10T10:17:08.499 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:08 vm02.local ceph-mon[50200]: from='client.? 
192.168.123.102:0/2998207655' entity='client.admin' 2026-03-10T10:17:08.499 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:08 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:17:08.499 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:08 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:17:08.499 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:08 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:17:08.499 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:08 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:08 vm05.local ceph-mon[59051]: pgmap v83: 65 pgs: 65 active+clean; 452 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 9.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s 2026-03-10T10:17:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:08 vm05.local ceph-mon[59051]: from='client.? 
192.168.123.102:0/2998207655' entity='client.admin' 2026-03-10T10:17:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:08 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:17:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:08 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:17:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:08 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:17:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:08 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:08.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.736+0000 7f154006f700 1 -- 192.168.123.102:0/2478942361 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1538103100 msgr2=0x7f1538103520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:08.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.736+0000 7f154006f700 1 --2- 192.168.123.102:0/2478942361 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1538103100 0x7f1538103520 secure :-1 s=READY pgs=267 cs=0 l=1 rev1=1 crypto rx=0x7f1534009b00 tx=0x7f1534009e10 comp rx=0 tx=0).stop 2026-03-10T10:17:08.739 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.737+0000 7f154006f700 1 -- 192.168.123.102:0/2478942361 shutdown_connections 2026-03-10T10:17:08.739 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.737+0000 7f154006f700 1 --2- 192.168.123.102:0/2478942361 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1538104300 0x7f1538104760 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 
rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.739 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.737+0000 7f154006f700 1 --2- 192.168.123.102:0/2478942361 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1538103100 0x7f1538103520 unknown :-1 s=CLOSED pgs=267 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.739 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.737+0000 7f154006f700 1 -- 192.168.123.102:0/2478942361 >> 192.168.123.102:0/2478942361 conn(0x7f15380fe6c0 msgr2=0x7f1538100ae0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:08.739 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.738+0000 7f154006f700 1 -- 192.168.123.102:0/2478942361 shutdown_connections 2026-03-10T10:17:08.740 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.738+0000 7f154006f700 1 -- 192.168.123.102:0/2478942361 wait complete. 2026-03-10T10:17:08.740 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.739+0000 7f154006f700 1 Processor -- start 2026-03-10T10:17:08.740 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.739+0000 7f154006f700 1 -- start start 2026-03-10T10:17:08.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.739+0000 7f154006f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1538103100 0x7f1538198ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:08.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.739+0000 7f154006f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1538104300 0x7f1538199000 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:08.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.739+0000 7f153d60a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1538104300 0x7f1538199000 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:08.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.740+0000 7f153d60a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1538104300 0x7f1538199000 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:43022/0 (socket says 192.168.123.102:43022) 2026-03-10T10:17:08.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.740+0000 7f153d60a700 1 -- 192.168.123.102:0/2307820852 learned_addr learned my addr 192.168.123.102:0/2307820852 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:08.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.739+0000 7f154006f700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1538199620 con 0x7f1538104300 2026-03-10T10:17:08.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.740+0000 7f154006f700 1 -- 192.168.123.102:0/2307820852 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1538199760 con 0x7f1538103100 2026-03-10T10:17:08.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.740+0000 7f153d60a700 1 -- 192.168.123.102:0/2307820852 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1538103100 msgr2=0x7f1538198ac0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:17:08.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.740+0000 7f153d60a700 1 --2- 192.168.123.102:0/2307820852 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1538103100 0x7f1538198ac0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.740+0000 7f153d60a700 1 
-- 192.168.123.102:0/2307820852 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f15340097e0 con 0x7f1538104300 2026-03-10T10:17:08.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.741+0000 7f153d60a700 1 --2- 192.168.123.102:0/2307820852 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1538104300 0x7f1538199000 secure :-1 s=READY pgs=268 cs=0 l=1 rev1=1 crypto rx=0x7f152800b700 tx=0x7f152800bac0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:08.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.741+0000 7f152effd700 1 -- 192.168.123.102:0/2307820852 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1528010820 con 0x7f1538104300 2026-03-10T10:17:08.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.741+0000 7f152effd700 1 -- 192.168.123.102:0/2307820852 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1528010e60 con 0x7f1538104300 2026-03-10T10:17:08.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.741+0000 7f154006f700 1 -- 192.168.123.102:0/2307820852 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f153819e210 con 0x7f1538104300 2026-03-10T10:17:08.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.741+0000 7f152effd700 1 -- 192.168.123.102:0/2307820852 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1528017570 con 0x7f1538104300 2026-03-10T10:17:08.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.742+0000 7f154006f700 1 -- 192.168.123.102:0/2307820852 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f153819e760 con 0x7f1538104300 2026-03-10T10:17:08.745 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.743+0000 7f152effd700 1 -- 192.168.123.102:0/2307820852 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f15280176d0 con 0x7f1538104300 2026-03-10T10:17:08.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.743+0000 7f152effd700 1 --2- 192.168.123.102:0/2307820852 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f152406c5b0 0x7f152406ea70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:08.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.743+0000 7f153de0b700 1 --2- 192.168.123.102:0/2307820852 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f152406c5b0 0x7f152406ea70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:08.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.744+0000 7f152effd700 1 -- 192.168.123.102:0/2307820852 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f152808c2b0 con 0x7f1538104300 2026-03-10T10:17:08.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.744+0000 7f154006f700 1 -- 192.168.123.102:0/2307820852 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f153819e3a0 con 0x7f1538104300 2026-03-10T10:17:08.748 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.745+0000 7f153de0b700 1 --2- 192.168.123.102:0/2307820852 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f152406c5b0 0x7f152406ea70 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f15340052d0 tx=0x7f1534005be0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:08.749 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.747+0000 7f152effd700 1 -- 192.168.123.102:0/2307820852 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f153819e3a0 con 0x7f1538104300 2026-03-10T10:17:08.871 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.870+0000 7f154006f700 1 -- 192.168.123.102:0/2307820852 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim_allowed}] v 0) v1 -- 0x7f1538066e80 con 0x7f1538104300 2026-03-10T10:17:08.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.870+0000 7f152effd700 1 -- 192.168.123.102:0/2307820852 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim_allowed}]=0 v16)=0 v16) v1 ==== 163+0+0 (secure 0 0 0) 0x7f152801e0e0 con 0x7f1538104300 2026-03-10T10:17:08.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.873+0000 7f154006f700 1 -- 192.168.123.102:0/2307820852 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f152406c5b0 msgr2=0x7f152406ea70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:08.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.873+0000 7f154006f700 1 --2- 192.168.123.102:0/2307820852 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f152406c5b0 0x7f152406ea70 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f15340052d0 tx=0x7f1534005be0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.873+0000 7f154006f700 1 -- 192.168.123.102:0/2307820852 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1538104300 msgr2=0x7f1538199000 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:08.875 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.873+0000 7f154006f700 1 --2- 192.168.123.102:0/2307820852 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1538104300 0x7f1538199000 secure :-1 s=READY pgs=268 cs=0 l=1 rev1=1 crypto rx=0x7f152800b700 tx=0x7f152800bac0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.873+0000 7f154006f700 1 -- 192.168.123.102:0/2307820852 shutdown_connections 2026-03-10T10:17:08.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.873+0000 7f154006f700 1 --2- 192.168.123.102:0/2307820852 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f152406c5b0 0x7f152406ea70 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.873+0000 7f154006f700 1 --2- 192.168.123.102:0/2307820852 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1538103100 0x7f1538198ac0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.873+0000 7f154006f700 1 --2- 192.168.123.102:0/2307820852 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1538104300 0x7f1538199000 unknown :-1 s=CLOSED pgs=268 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:08.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.873+0000 7f154006f700 1 -- 192.168.123.102:0/2307820852 >> 192.168.123.102:0/2307820852 conn(0x7f15380fe6c0 msgr2=0x7f1538107530 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:08.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.874+0000 7f154006f700 1 -- 192.168.123.102:0/2307820852 shutdown_connections 2026-03-10T10:17:08.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:08.874+0000 7f154006f700 1 -- 192.168.123.102:0/2307820852 
wait complete. 2026-03-10T10:17:08.948 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set global log_to_journald false --force' 2026-03-10T10:17:09.110 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:17:09.437 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.430+0000 7ff5c986b700 1 -- 192.168.123.102:0/3418954880 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5c4104060 msgr2=0x7ff5c41044e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.430+0000 7ff5c986b700 1 --2- 192.168.123.102:0/3418954880 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5c4104060 0x7ff5c41044e0 secure :-1 s=READY pgs=269 cs=0 l=1 rev1=1 crypto rx=0x7ff5b0009b50 tx=0x7ff5b0009e60 comp rx=0 tx=0).stop 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.431+0000 7ff5c986b700 1 -- 192.168.123.102:0/3418954880 shutdown_connections 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.431+0000 7ff5c986b700 1 --2- 192.168.123.102:0/3418954880 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5c4104060 0x7ff5c41044e0 unknown :-1 s=CLOSED pgs=269 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.431+0000 7ff5c986b700 1 --2- 192.168.123.102:0/3418954880 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5c4102e70 0x7ff5c4103290 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:09.438 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.431+0000 7ff5c986b700 1 -- 192.168.123.102:0/3418954880 >> 192.168.123.102:0/3418954880 conn(0x7ff5c40fe440 msgr2=0x7ff5c41008a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.431+0000 7ff5c986b700 1 -- 192.168.123.102:0/3418954880 shutdown_connections 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.431+0000 7ff5c986b700 1 -- 192.168.123.102:0/3418954880 wait complete. 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.432+0000 7ff5c986b700 1 Processor -- start 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.432+0000 7ff5c986b700 1 -- start start 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.432+0000 7ff5c986b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5c4102e70 0x7ff5c410f6c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.432+0000 7ff5c986b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5c4104060 0x7ff5c410fc00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.432+0000 7ff5c8869700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5c4102e70 0x7ff5c410f6c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.432+0000 7ff5c8869700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5c4102e70 0x7ff5c410f6c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 
tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:40876/0 (socket says 192.168.123.102:40876) 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.433+0000 7ff5c3fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5c4104060 0x7ff5c410fc00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.433+0000 7ff5c3fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5c4104060 0x7ff5c410fc00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:43050/0 (socket says 192.168.123.102:43050) 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.433+0000 7ff5c8869700 1 -- 192.168.123.102:0/2970923181 learned_addr learned my addr 192.168.123.102:0/2970923181 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.433+0000 7ff5c986b700 1 -- 192.168.123.102:0/2970923181 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff5c4110220 con 0x7ff5c4104060 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.433+0000 7ff5c986b700 1 -- 192.168.123.102:0/2970923181 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff5c4110360 con 0x7ff5c4102e70 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.433+0000 7ff5c8869700 1 -- 192.168.123.102:0/2970923181 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5c4104060 msgr2=0x7ff5c410fc00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:09.438 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.433+0000 7ff5c8869700 1 --2- 192.168.123.102:0/2970923181 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5c4104060 0x7ff5c410fc00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.433+0000 7ff5c8869700 1 -- 192.168.123.102:0/2970923181 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff5b00097e0 con 0x7ff5c4102e70 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.433+0000 7ff5c3fff700 1 --2- 192.168.123.102:0/2970923181 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5c4104060 0x7ff5c410fc00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.434+0000 7ff5c8869700 1 --2- 192.168.123.102:0/2970923181 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5c4102e70 0x7ff5c410f6c0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7ff5b8009fd0 tx=0x7ff5b800edf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.435+0000 7ff5c1ffb700 1 -- 192.168.123.102:0/2970923181 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff5b8009980 con 0x7ff5c4102e70 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.435+0000 7ff5c1ffb700 1 -- 192.168.123.102:0/2970923181 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff5b8004500 con 0x7ff5c4102e70 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.435+0000 7ff5c986b700 1 -- 
192.168.123.102:0/2970923181 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff5c4112e00 con 0x7ff5c4102e70 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.436+0000 7ff5c986b700 1 -- 192.168.123.102:0/2970923181 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff5c41ab080 con 0x7ff5c4102e70 2026-03-10T10:17:09.438 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.436+0000 7ff5c1ffb700 1 -- 192.168.123.102:0/2970923181 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff5b8010430 con 0x7ff5c4102e70 2026-03-10T10:17:09.439 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.438+0000 7ff5c1ffb700 1 -- 192.168.123.102:0/2970923181 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7ff5b8010630 con 0x7ff5c4102e70 2026-03-10T10:17:09.440 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.438+0000 7ff5c1ffb700 1 --2- 192.168.123.102:0/2970923181 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff5b406c5b0 0x7ff5b406ea70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:09.440 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.439+0000 7ff5c3fff700 1 --2- 192.168.123.102:0/2970923181 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff5b406c5b0 0x7ff5b406ea70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:09.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.440+0000 7ff5c3fff700 1 --2- 192.168.123.102:0/2970923181 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff5b406c5b0 0x7ff5b406ea70 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7ff5b00053b0 tx=0x7ff5b0005a90 comp rx=0 tx=0).ready 
entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:09.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.440+0000 7ff5c1ffb700 1 -- 192.168.123.102:0/2970923181 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7ff5b8014070 con 0x7ff5c4102e70 2026-03-10T10:17:09.443 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.441+0000 7ff5c986b700 1 -- 192.168.123.102:0/2970923181 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff5a8005320 con 0x7ff5c4102e70 2026-03-10T10:17:09.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.444+0000 7ff5c1ffb700 1 -- 192.168.123.102:0/2970923181 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7ff5b805af30 con 0x7ff5c4102e70 2026-03-10T10:17:09.616 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.613+0000 7ff5c986b700 1 -- 192.168.123.102:0/2970923181 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command([{prefix=config set, name=log_to_journald}] v 0) v1 -- 0x7ff5a8005f70 con 0x7ff5c4102e70 2026-03-10T10:17:09.617 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.615+0000 7ff5c1ffb700 1 -- 192.168.123.102:0/2970923181 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{prefix=config set, name=log_to_journald}]=0 v16) v1 ==== 135+0+0 (secure 0 0 0) 0x7ff5b805d0c0 con 0x7ff5c4102e70 2026-03-10T10:17:09.620 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.618+0000 7ff5c986b700 1 -- 192.168.123.102:0/2970923181 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff5b406c5b0 msgr2=0x7ff5b406ea70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:09.620 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.619+0000 7ff5c986b700 1 
--2- 192.168.123.102:0/2970923181 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff5b406c5b0 0x7ff5b406ea70 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7ff5b00053b0 tx=0x7ff5b0005a90 comp rx=0 tx=0).stop 2026-03-10T10:17:09.621 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.619+0000 7ff5c986b700 1 -- 192.168.123.102:0/2970923181 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5c4102e70 msgr2=0x7ff5c410f6c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:09.621 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.619+0000 7ff5c986b700 1 --2- 192.168.123.102:0/2970923181 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5c4102e70 0x7ff5c410f6c0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7ff5b8009fd0 tx=0x7ff5b800edf0 comp rx=0 tx=0).stop 2026-03-10T10:17:09.621 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.620+0000 7ff5c986b700 1 -- 192.168.123.102:0/2970923181 shutdown_connections 2026-03-10T10:17:09.621 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.620+0000 7ff5c986b700 1 --2- 192.168.123.102:0/2970923181 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff5b406c5b0 0x7ff5b406ea70 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:09.622 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.620+0000 7ff5c986b700 1 --2- 192.168.123.102:0/2970923181 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5c4102e70 0x7ff5c410f6c0 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:09.622 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.620+0000 7ff5c986b700 1 --2- 192.168.123.102:0/2970923181 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5c4104060 0x7ff5c410fc00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:17:09.622 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.620+0000 7ff5c986b700 1 -- 192.168.123.102:0/2970923181 >> 192.168.123.102:0/2970923181 conn(0x7ff5c40fe440 msgr2=0x7ff5c4078df0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:09.623 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.621+0000 7ff5c986b700 1 -- 192.168.123.102:0/2970923181 shutdown_connections 2026-03-10T10:17:09.623 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:09.622+0000 7ff5c986b700 1 -- 192.168.123.102:0/2970923181 wait complete. 2026-03-10T10:17:09.692 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1' 2026-03-10T10:17:09.881 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:17:10.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.212+0000 7f442e6ab700 1 -- 192.168.123.102:0/365051915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f442810a700 msgr2=0x7f44281114d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:10.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.212+0000 7f442e6ab700 1 --2- 192.168.123.102:0/365051915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f442810a700 0x7f44281114d0 secure :-1 s=READY pgs=270 cs=0 l=1 rev1=1 crypto rx=0x7f4418009b00 tx=0x7f4418009e10 comp rx=0 tx=0).stop 2026-03-10T10:17:10.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.214+0000 7f442e6ab700 1 -- 192.168.123.102:0/365051915 shutdown_connections 2026-03-10T10:17:10.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.214+0000 
7f442e6ab700 1 --2- 192.168.123.102:0/365051915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f442810a700 0x7f44281114d0 unknown :-1 s=CLOSED pgs=270 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:10.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.214+0000 7f442e6ab700 1 --2- 192.168.123.102:0/365051915 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4428107d90 0x7f442810a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:10.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.214+0000 7f442e6ab700 1 -- 192.168.123.102:0/365051915 >> 192.168.123.102:0/365051915 conn(0x7f442806daa0 msgr2=0x7f442806ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:10.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.214+0000 7f442e6ab700 1 -- 192.168.123.102:0/365051915 shutdown_connections 2026-03-10T10:17:10.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.214+0000 7f442e6ab700 1 -- 192.168.123.102:0/365051915 wait complete. 
2026-03-10T10:17:10.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.215+0000 7f442e6ab700 1 Processor -- start 2026-03-10T10:17:10.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.215+0000 7f442e6ab700 1 -- start start 2026-03-10T10:17:10.225 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.215+0000 7f442e6ab700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4428107d90 0x7f44281a0f60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:10.225 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.215+0000 7f442e6ab700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f442810a700 0x7f44281a14a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:10.225 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.215+0000 7f442e6ab700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f44281a1ac0 con 0x7f4428107d90 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.215+0000 7f442e6ab700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f44281a1c00 con 0x7f442810a700 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.215+0000 7f442d6a9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4428107d90 0x7f44281a0f60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.215+0000 7f442d6a9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4428107d90 0x7f44281a0f60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:43054/0 (socket says 192.168.123.102:43054) 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.215+0000 7f442d6a9700 1 -- 192.168.123.102:0/157759029 learned_addr learned my addr 192.168.123.102:0/157759029 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.215+0000 7f442cea8700 1 --2- 192.168.123.102:0/157759029 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f442810a700 0x7f44281a14a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.216+0000 7f442d6a9700 1 -- 192.168.123.102:0/157759029 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f442810a700 msgr2=0x7f44281a14a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.216+0000 7f442d6a9700 1 --2- 192.168.123.102:0/157759029 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f442810a700 0x7f44281a14a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.216+0000 7f442d6a9700 1 -- 192.168.123.102:0/157759029 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f44180097e0 con 0x7f4428107d90 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.216+0000 7f442d6a9700 1 --2- 192.168.123.102:0/157759029 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4428107d90 0x7f44281a0f60 secure :-1 s=READY pgs=271 cs=0 l=1 rev1=1 crypto rx=0x7f442000c390 tx=0x7f442000c6a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.216+0000 7f441e7fc700 1 -- 192.168.123.102:0/157759029 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f442000e050 con 0x7f4428107d90 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.216+0000 7f441e7fc700 1 -- 192.168.123.102:0/157759029 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f442000f040 con 0x7f4428107d90 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.216+0000 7f441e7fc700 1 -- 192.168.123.102:0/157759029 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4420013400 con 0x7f4428107d90 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.216+0000 7f442e6ab700 1 -- 192.168.123.102:0/157759029 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f44281a6650 con 0x7f4428107d90 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.217+0000 7f442e6ab700 1 -- 192.168.123.102:0/157759029 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f442810b280 con 0x7f4428107d90 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.219+0000 7f441e7fc700 1 -- 192.168.123.102:0/157759029 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f4420004ad0 con 0x7f4428107d90 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.220+0000 7f441e7fc700 1 --2- 192.168.123.102:0/157759029 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f441406c600 0x7f441406eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:10.226 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.220+0000 7f441e7fc700 1 -- 192.168.123.102:0/157759029 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f442008af80 con 0x7f4428107d90 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.220+0000 7f442e6ab700 1 -- 192.168.123.102:0/157759029 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f442819b150 con 0x7f4428107d90 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.223+0000 7f442cea8700 1 --2- 192.168.123.102:0/157759029 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f441406c600 0x7f441406eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:10.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.223+0000 7f441e7fc700 1 -- 192.168.123.102:0/157759029 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f44200592b0 con 0x7f4428107d90 2026-03-10T10:17:10.230 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.228+0000 7f442cea8700 1 --2- 192.168.123.102:0/157759029 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f441406c600 0x7f441406eac0 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f441800b5c0 tx=0x7f441801a040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:10.237 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:10 vm05.local ceph-mon[59051]: pgmap v84: 65 pgs: 65 active+clean; 452 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 10 KiB/s rd, 409 B/s wr, 11 op/s 2026-03-10T10:17:10.371 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.369+0000 7f442e6ab700 1 -- 
192.168.123.102:0/157759029 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}) v1 -- 0x7f44280611d0 con 0x7f441406c600 2026-03-10T10:17:10.378 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.376+0000 7f441e7fc700 1 -- 192.168.123.102:0/157759029 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+89 (secure 0 0 0) 0x7f44280611d0 con 0x7f441406c600 2026-03-10T10:17:10.378 INFO:teuthology.orchestra.run.vm02.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:17:10.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.379+0000 7f442e6ab700 1 -- 192.168.123.102:0/157759029 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f441406c600 msgr2=0x7f441406eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:10.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.379+0000 7f442e6ab700 1 --2- 192.168.123.102:0/157759029 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f441406c600 0x7f441406eac0 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f441800b5c0 tx=0x7f441801a040 comp rx=0 tx=0).stop 2026-03-10T10:17:10.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.379+0000 7f442e6ab700 1 -- 192.168.123.102:0/157759029 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4428107d90 msgr2=0x7f44281a0f60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:10.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.379+0000 7f442e6ab700 1 --2- 192.168.123.102:0/157759029 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4428107d90 0x7f44281a0f60 secure :-1 s=READY pgs=271 cs=0 l=1 rev1=1 crypto rx=0x7f442000c390 tx=0x7f442000c6a0 comp rx=0 tx=0).stop 
2026-03-10T10:17:10.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.379+0000 7f442e6ab700 1 -- 192.168.123.102:0/157759029 shutdown_connections 2026-03-10T10:17:10.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.379+0000 7f442e6ab700 1 --2- 192.168.123.102:0/157759029 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4428107d90 0x7f44281a0f60 unknown :-1 s=CLOSED pgs=271 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:10.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.379+0000 7f442e6ab700 1 --2- 192.168.123.102:0/157759029 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f441406c600 0x7f441406eac0 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:10.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.379+0000 7f442e6ab700 1 --2- 192.168.123.102:0/157759029 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f442810a700 0x7f44281a14a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:10.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.379+0000 7f442e6ab700 1 -- 192.168.123.102:0/157759029 >> 192.168.123.102:0/157759029 conn(0x7f442806daa0 msgr2=0x7f442806ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:10.382 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.380+0000 7f442e6ab700 1 -- 192.168.123.102:0/157759029 shutdown_connections 2026-03-10T10:17:10.382 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:10.381+0000 7f442e6ab700 1 -- 192.168.123.102:0/157759029 wait complete. 2026-03-10T10:17:10.464 INFO:teuthology.task.sequential:In sequential, running task cephadm.shell... 
2026-03-10T10:17:10.465 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local 2026-03-10T10:17:10.465 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph fs dump; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done' 2026-03-10T10:17:10.517 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:10 vm02.local ceph-mon[50200]: pgmap v84: 65 pgs: 65 active+clean; 452 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 10 KiB/s rd, 409 B/s wr, 11 op/s 2026-03-10T10:17:10.717 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:17:11.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.096+0000 7facbdd79700 1 -- 192.168.123.102:0/1609878063 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facb8072b20 msgr2=0x7facb8072f40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.096+0000 7facbdd79700 1 --2- 192.168.123.102:0/1609878063 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facb8072b20 0x7facb8072f40 secure :-1 s=READY pgs=272 cs=0 l=1 rev1=1 crypto rx=0x7facb000b210 tx=0x7facb000b520 comp rx=0 tx=0).stop 2026-03-10T10:17:11.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.097+0000 7facbdd79700 1 -- 192.168.123.102:0/1609878063 shutdown_connections 2026-03-10T10:17:11.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.097+0000 7facbdd79700 1 --2- 192.168.123.102:0/1609878063 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7facb8075a10 0x7facb8077ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.097+0000 7facbdd79700 1 --2- 192.168.123.102:0/1609878063 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facb8072b20 0x7facb8072f40 unknown :-1 s=CLOSED pgs=272 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.097+0000 7facbdd79700 1 -- 192.168.123.102:0/1609878063 >> 192.168.123.102:0/1609878063 conn(0x7facb806daa0 msgr2=0x7facb806ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.097+0000 7facbdd79700 1 -- 192.168.123.102:0/1609878063 shutdown_connections 2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.097+0000 7facbdd79700 1 -- 192.168.123.102:0/1609878063 wait complete. 
2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.098+0000 7facbdd79700 1 Processor -- start 2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.098+0000 7facbdd79700 1 -- start start 2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.098+0000 7facbdd79700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facb8075a10 0x7facb8083180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.098+0000 7facbdd79700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7facb80836c0 0x7facb81b3240 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.098+0000 7facbdd79700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7facb8083b40 con 0x7facb8075a10 2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.098+0000 7facbdd79700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7facb8083cb0 con 0x7facb80836c0 2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.100+0000 7facb7fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7facb80836c0 0x7facb81b3240 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.100+0000 7facbcd77700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facb8075a10 0x7facb8083180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.100+0000 7facbcd77700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facb8075a10 0x7facb8083180 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:43068/0 (socket says 192.168.123.102:43068) 2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.100+0000 7facbcd77700 1 -- 192.168.123.102:0/4291446925 learned_addr learned my addr 192.168.123.102:0/4291446925 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:11.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.100+0000 7facbcd77700 1 -- 192.168.123.102:0/4291446925 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7facb80836c0 msgr2=0x7facb81b3240 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.100+0000 7facbcd77700 1 --2- 192.168.123.102:0/4291446925 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7facb80836c0 0x7facb81b3240 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.101+0000 7facbcd77700 1 -- 192.168.123.102:0/4291446925 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7facb0009e30 con 0x7facb8075a10 2026-03-10T10:17:11.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.101+0000 7facbcd77700 1 --2- 192.168.123.102:0/4291446925 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facb8075a10 0x7facb8083180 secure :-1 s=READY pgs=273 cs=0 l=1 rev1=1 crypto rx=0x7facb0000f80 tx=0x7facb0009510 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:11.106 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.101+0000 7facb5ffb700 1 -- 192.168.123.102:0/4291446925 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7facb000e050 con 0x7facb8075a10 2026-03-10T10:17:11.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.102+0000 7facbdd79700 1 -- 192.168.123.102:0/4291446925 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7facb81b3780 con 0x7facb8075a10 2026-03-10T10:17:11.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.102+0000 7facbdd79700 1 -- 192.168.123.102:0/4291446925 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7facb81b3ca0 con 0x7facb8075a10 2026-03-10T10:17:11.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.103+0000 7facb5ffb700 1 -- 192.168.123.102:0/4291446925 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7facb0009cb0 con 0x7facb8075a10 2026-03-10T10:17:11.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.103+0000 7facb5ffb700 1 -- 192.168.123.102:0/4291446925 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7facb001b6d0 con 0x7facb8075a10 2026-03-10T10:17:11.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.104+0000 7facb5ffb700 1 -- 192.168.123.102:0/4291446925 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7facb0019040 con 0x7facb8075a10 2026-03-10T10:17:11.111 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.106+0000 7facb5ffb700 1 --2- 192.168.123.102:0/4291446925 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7faca006c6d0 0x7faca006eb90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:11.112 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.106+0000 
7facb7fff700 1 --2- 192.168.123.102:0/4291446925 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7faca006c6d0 0x7faca006eb90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:11.112 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.106+0000 7facb7fff700 1 --2- 192.168.123.102:0/4291446925 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7faca006c6d0 0x7faca006eb90 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7faca8005b40 tx=0x7faca8009500 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:11.112 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.106+0000 7facb5ffb700 1 -- 192.168.123.102:0/4291446925 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7facb008d3e0 con 0x7facb8075a10 2026-03-10T10:17:11.112 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.106+0000 7facbdd79700 1 -- 192.168.123.102:0/4291446925 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7faca4005320 con 0x7facb8075a10 2026-03-10T10:17:11.112 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.109+0000 7facb5ffb700 1 -- 192.168.123.102:0/4291446925 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7facb0057c80 con 0x7facb8075a10 2026-03-10T10:17:11.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.301+0000 7facbdd79700 1 -- 192.168.123.102:0/4291446925 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7faca4000bf0 con 0x7faca006c6d0 2026-03-10T10:17:11.307 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.303+0000 7facb5ffb700 1 -- 192.168.123.102:0/4291446925 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7faca4000bf0 con 0x7faca006c6d0 2026-03-10T10:17:11.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.307+0000 7facbdd79700 1 -- 192.168.123.102:0/4291446925 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7faca006c6d0 msgr2=0x7faca006eb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.307+0000 7facbdd79700 1 --2- 192.168.123.102:0/4291446925 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7faca006c6d0 0x7faca006eb90 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7faca8005b40 tx=0x7faca8009500 comp rx=0 tx=0).stop 2026-03-10T10:17:11.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.308+0000 7facbdd79700 1 -- 192.168.123.102:0/4291446925 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facb8075a10 msgr2=0x7facb8083180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.308+0000 7facbdd79700 1 --2- 192.168.123.102:0/4291446925 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facb8075a10 0x7facb8083180 secure :-1 s=READY pgs=273 cs=0 l=1 rev1=1 crypto rx=0x7facb0000f80 tx=0x7facb0009510 comp rx=0 tx=0).stop 2026-03-10T10:17:11.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.308+0000 7facbdd79700 1 -- 192.168.123.102:0/4291446925 shutdown_connections 2026-03-10T10:17:11.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.308+0000 7facbdd79700 1 --2- 192.168.123.102:0/4291446925 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7facb8075a10 0x7facb8083180 unknown :-1 s=CLOSED pgs=273 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-10T10:17:11.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.308+0000 7facbdd79700 1 --2- 192.168.123.102:0/4291446925 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7faca006c6d0 0x7faca006eb90 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.308+0000 7facbdd79700 1 --2- 192.168.123.102:0/4291446925 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7facb80836c0 0x7facb81b3240 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.308+0000 7facbdd79700 1 -- 192.168.123.102:0/4291446925 >> 192.168.123.102:0/4291446925 conn(0x7facb806daa0 msgr2=0x7facb806ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:11.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.308+0000 7facbdd79700 1 -- 192.168.123.102:0/4291446925 shutdown_connections 2026-03-10T10:17:11.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.308+0000 7facbdd79700 1 -- 192.168.123.102:0/4291446925 wait complete. 
2026-03-10T10:17:11.319 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:17:11.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.438+0000 7fa92ad46700 1 -- 192.168.123.102:0/3378735786 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa924107d90 msgr2=0x7fa92410a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.438+0000 7fa92ad46700 1 --2- 192.168.123.102:0/3378735786 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa924107d90 0x7fa92410a1c0 secure :-1 s=READY pgs=274 cs=0 l=1 rev1=1 crypto rx=0x7fa91801aa70 tx=0x7fa91801ad80 comp rx=0 tx=0).stop 2026-03-10T10:17:11.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.438+0000 7fa92ad46700 1 -- 192.168.123.102:0/3378735786 shutdown_connections 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.438+0000 7fa92ad46700 1 --2- 192.168.123.102:0/3378735786 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa92410a700 0x7fa92410cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.438+0000 7fa92ad46700 1 --2- 192.168.123.102:0/3378735786 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa924107d90 0x7fa92410a1c0 unknown :-1 s=CLOSED pgs=274 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.438+0000 7fa92ad46700 1 -- 192.168.123.102:0/3378735786 >> 192.168.123.102:0/3378735786 conn(0x7fa92406daa0 msgr2=0x7fa92406ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.439+0000 7fa92ad46700 1 -- 192.168.123.102:0/3378735786 shutdown_connections 2026-03-10T10:17:11.442 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.439+0000 7fa92ad46700 1 -- 192.168.123.102:0/3378735786 wait complete. 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.439+0000 7fa92ad46700 1 Processor -- start 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.439+0000 7fa92ad46700 1 -- start start 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.439+0000 7fa92ad46700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa924107d90 0x7fa92419cb60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.439+0000 7fa92ad46700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa92410a700 0x7fa92419d0a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.439+0000 7fa92ad46700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa92419d6c0 con 0x7fa924107d90 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.439+0000 7fa92ad46700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa92419d800 con 0x7fa92410a700 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.439+0000 7fa929543700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa92410a700 0x7fa92419d0a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.439+0000 7fa929543700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa92410a700 0x7fa92419d0a0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:40924/0 (socket says 192.168.123.102:40924) 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.439+0000 7fa929543700 1 -- 192.168.123.102:0/1891418334 learned_addr learned my addr 192.168.123.102:0/1891418334 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.439+0000 7fa929d44700 1 --2- 192.168.123.102:0/1891418334 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa924107d90 0x7fa92419cb60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.440+0000 7fa929543700 1 -- 192.168.123.102:0/1891418334 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa924107d90 msgr2=0x7fa92419cb60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.440+0000 7fa929543700 1 --2- 192.168.123.102:0/1891418334 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa924107d90 0x7fa92419cb60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.440+0000 7fa929543700 1 -- 192.168.123.102:0/1891418334 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa91801a720 con 0x7fa92410a700 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.440+0000 7fa929d44700 1 --2- 192.168.123.102:0/1891418334 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa924107d90 0x7fa92419cb60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:17:11.442 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.440+0000 7fa929543700 1 --2- 192.168.123.102:0/1891418334 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa92410a700 0x7fa92419d0a0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fa92000ea30 tx=0x7fa92000edf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:11.443 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.441+0000 7fa916ffd700 1 -- 192.168.123.102:0/1891418334 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa92000cc40 con 0x7fa92410a700 2026-03-10T10:17:11.443 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.441+0000 7fa92ad46700 1 -- 192.168.123.102:0/1891418334 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa9241a2260 con 0x7fa92410a700 2026-03-10T10:17:11.443 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.441+0000 7fa92ad46700 1 -- 192.168.123.102:0/1891418334 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa9241a27b0 con 0x7fa92410a700 2026-03-10T10:17:11.443 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.442+0000 7fa916ffd700 1 -- 192.168.123.102:0/1891418334 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa92000cda0 con 0x7fa92410a700 2026-03-10T10:17:11.444 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.442+0000 7fa916ffd700 1 -- 192.168.123.102:0/1891418334 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa920010430 con 0x7fa92410a700 2026-03-10T10:17:11.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.443+0000 7fa916ffd700 1 -- 192.168.123.102:0/1891418334 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 
19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fa920010590 con 0x7fa92410a700 2026-03-10T10:17:11.445 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.443+0000 7fa916ffd700 1 --2- 192.168.123.102:0/1891418334 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa91006c4e0 0x7fa91006e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:11.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.444+0000 7fa929d44700 1 --2- 192.168.123.102:0/1891418334 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa91006c4e0 0x7fa91006e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:11.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.445+0000 7fa929d44700 1 --2- 192.168.123.102:0/1891418334 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa91006c4e0 0x7fa91006e9a0 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fa918004800 tx=0x7fa91801c550 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:11.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.445+0000 7fa916ffd700 1 -- 192.168.123.102:0/1891418334 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7fa920014070 con 0x7fa92410a700 2026-03-10T10:17:11.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.445+0000 7fa92ad46700 1 -- 192.168.123.102:0/1891418334 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa908005320 con 0x7fa92410a700 2026-03-10T10:17:11.453 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.451+0000 7fa916ffd700 1 -- 192.168.123.102:0/1891418334 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 
v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fa92005aff0 con 0x7fa92410a700 2026-03-10T10:17:11.581 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:11 vm02.local ceph-mon[50200]: from='client.14544 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:17:11.582 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:11 vm02.local ceph-mon[50200]: Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:17:11.582 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:11 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:11.582 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:11 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:17:11.582 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:11 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:17:11.582 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:11 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:17:11.582 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:11 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:11.584 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.579+0000 7fa92ad46700 1 -- 192.168.123.102:0/1891418334 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fa908000bf0 con 0x7fa91006c4e0 2026-03-10T10:17:11.586 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.583+0000 7fa916ffd700 1 -- 192.168.123.102:0/1891418334 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7fa908000bf0 con 0x7fa91006c4e0 2026-03-10T10:17:11.587 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.585+0000 7fa914ff9700 1 -- 192.168.123.102:0/1891418334 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa91006c4e0 msgr2=0x7fa91006e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.587 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.585+0000 7fa914ff9700 1 --2- 192.168.123.102:0/1891418334 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa91006c4e0 0x7fa91006e9a0 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fa918004800 tx=0x7fa91801c550 comp rx=0 tx=0).stop 2026-03-10T10:17:11.587 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.586+0000 7fa914ff9700 1 -- 192.168.123.102:0/1891418334 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa92410a700 msgr2=0x7fa92419d0a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.587 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.586+0000 7fa914ff9700 1 --2- 192.168.123.102:0/1891418334 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa92410a700 0x7fa92419d0a0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fa92000ea30 tx=0x7fa92000edf0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.587+0000 7fa914ff9700 1 -- 192.168.123.102:0/1891418334 shutdown_connections 2026-03-10T10:17:11.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.587+0000 7fa914ff9700 1 --2- 192.168.123.102:0/1891418334 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa924107d90 0x7fa92419cb60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-10T10:17:11.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.587+0000 7fa914ff9700 1 --2- 192.168.123.102:0/1891418334 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa91006c4e0 0x7fa91006e9a0 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.587+0000 7fa914ff9700 1 --2- 192.168.123.102:0/1891418334 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa92410a700 0x7fa92419d0a0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.587+0000 7fa914ff9700 1 -- 192.168.123.102:0/1891418334 >> 192.168.123.102:0/1891418334 conn(0x7fa92406daa0 msgr2=0x7fa92406ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:11.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.587+0000 7fa914ff9700 1 -- 192.168.123.102:0/1891418334 shutdown_connections 2026-03-10T10:17:11.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.587+0000 7fa914ff9700 1 -- 192.168.123.102:0/1891418334 wait complete. 
2026-03-10T10:17:11.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.687+0000 7feff60a2700 1 -- 192.168.123.102:0/756306903 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feff0072b50 msgr2=0x7feff0072f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.687+0000 7feff60a2700 1 --2- 192.168.123.102:0/756306903 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feff0072b50 0x7feff0072f70 secure :-1 s=READY pgs=275 cs=0 l=1 rev1=1 crypto rx=0x7fefe0008790 tx=0x7fefe0008aa0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.687+0000 7feff60a2700 1 -- 192.168.123.102:0/756306903 shutdown_connections 2026-03-10T10:17:11.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.687+0000 7feff60a2700 1 --2- 192.168.123.102:0/756306903 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7feff0075a40 0x7feff0077ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.687+0000 7feff60a2700 1 --2- 192.168.123.102:0/756306903 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feff0072b50 0x7feff0072f70 unknown :-1 s=CLOSED pgs=275 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.687+0000 7feff60a2700 1 -- 192.168.123.102:0/756306903 >> 192.168.123.102:0/756306903 conn(0x7feff006dae0 msgr2=0x7feff006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.688+0000 7feff60a2700 1 -- 192.168.123.102:0/756306903 shutdown_connections 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.688+0000 7feff60a2700 1 -- 192.168.123.102:0/756306903 wait 
complete. 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.688+0000 7feff60a2700 1 Processor -- start 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.688+0000 7feff60a2700 1 -- start start 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.688+0000 7feff60a2700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feff0075a40 0x7feff00830d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.688+0000 7feff60a2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7feff0083610 0x7feff012ddc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.688+0000 7feff60a2700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7feff0083b50 con 0x7feff0075a40 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.688+0000 7feff60a2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7feff0083cc0 con 0x7feff0083610 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.688+0000 7fefef7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feff0075a40 0x7feff00830d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.688+0000 7fefef7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feff0075a40 0x7feff00830d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I 
am v2:192.168.123.102:43110/0 (socket says 192.168.123.102:43110) 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.688+0000 7fefef7fe700 1 -- 192.168.123.102:0/4055140864 learned_addr learned my addr 192.168.123.102:0/4055140864 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.689+0000 7fefeeffd700 1 --2- 192.168.123.102:0/4055140864 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7feff0083610 0x7feff012ddc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.689+0000 7fefef7fe700 1 -- 192.168.123.102:0/4055140864 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7feff0083610 msgr2=0x7feff012ddc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.689+0000 7fefef7fe700 1 --2- 192.168.123.102:0/4055140864 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7feff0083610 0x7feff012ddc0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.689+0000 7fefef7fe700 1 -- 192.168.123.102:0/4055140864 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fefe0008440 con 0x7feff0075a40 2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.689+0000 7fefef7fe700 1 --2- 192.168.123.102:0/4055140864 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feff0075a40 0x7feff00830d0 secure :-1 s=READY pgs=276 cs=0 l=1 rev1=1 crypto rx=0x7fefe0003c40 tx=0x7fefe00047c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:17:11.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.690+0000 7fefecff9700 1 -- 192.168.123.102:0/4055140864 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fefe000b7c0 con 0x7feff0075a40 2026-03-10T10:17:11.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.690+0000 7feff60a2700 1 -- 192.168.123.102:0/4055140864 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7feff012e3c0 con 0x7feff0075a40 2026-03-10T10:17:11.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.690+0000 7feff60a2700 1 -- 192.168.123.102:0/4055140864 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7feff012e8c0 con 0x7feff0075a40 2026-03-10T10:17:11.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.692+0000 7fefecff9700 1 -- 192.168.123.102:0/4055140864 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fefe000be00 con 0x7feff0075a40 2026-03-10T10:17:11.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.692+0000 7fefecff9700 1 -- 192.168.123.102:0/4055140864 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fefe0015410 con 0x7feff0075a40 2026-03-10T10:17:11.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.692+0000 7fefecff9700 1 -- 192.168.123.102:0/4055140864 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fefe0012070 con 0x7feff0075a40 2026-03-10T10:17:11.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.693+0000 7fefecff9700 1 --2- 192.168.123.102:0/4055140864 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fefd806c6d0 0x7fefd806eb90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:11.695 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.693+0000 7fefeeffd700 1 --2- 192.168.123.102:0/4055140864 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fefd806c6d0 0x7fefd806eb90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:11.695 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.693+0000 7fefeeffd700 1 --2- 192.168.123.102:0/4055140864 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fefd806c6d0 0x7fefd806eb90 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fefe80079a0 tx=0x7fefe800d040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:11.695 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.694+0000 7fefecff9700 1 -- 192.168.123.102:0/4055140864 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7fefe008ca40 con 0x7feff0075a40 2026-03-10T10:17:11.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.694+0000 7feff60a2700 1 -- 192.168.123.102:0/4055140864 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fefdc005320 con 0x7feff0075a40 2026-03-10T10:17:11.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.697+0000 7fefecff9700 1 -- 192.168.123.102:0/4055140864 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fefe00572e0 con 0x7feff0075a40 2026-03-10T10:17:11.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:11 vm05.local ceph-mon[59051]: from='client.14544 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:17:11.787 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:11 vm05.local ceph-mon[59051]: Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:17:11.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:11 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:11.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:11 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:17:11.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:11 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:17:11.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:11 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:17:11.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:11 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:17:11.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.813+0000 7feff60a2700 1 -- 192.168.123.102:0/4055140864 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fefdc000bf0 con 0x7fefd806c6d0 2026-03-10T10:17:11.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.818+0000 7fefecff9700 1 -- 192.168.123.102:0/4055140864 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3288 (secure 0 0 0) 0x7fefdc000bf0 con 0x7fefd806c6d0 2026-03-10T10:17:11.820 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:17:11.820 
INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (96s) 9s ago 2m 22.5M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:17:11.820 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (2m) 9s ago 2m 8154k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:17:11.820 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (112s) 10s ago 112s 8166k - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:17:11.820 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (2m) 9s ago 2m 7415k - 18.2.1 5be31c24972a 51802fb57170 2026-03-10T10:17:11.820 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (111s) 10s ago 111s 7407k - 18.2.1 5be31c24972a f275982dc269 2026-03-10T10:17:11.820 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (91s) 9s ago 2m 78.5M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:17:11.820 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (15s) 9s ago 15s 12.0M - 18.2.1 5be31c24972a e97c369450c8 2026-03-10T10:17:11.820 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (17s) 9s ago 17s 14.3M - 18.2.1 5be31c24972a 56b76ae59bcb 2026-03-10T10:17:11.820 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (16s) 10s ago 16s 12.2M - 18.2.1 5be31c24972a 02b882918ab0 2026-03-10T10:17:11.820 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (14s) 10s ago 14s 16.6M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:9283,8765,8443 running (3m) 9s ago 3m 502M - 18.2.1 5be31c24972a 8bea583521d3 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (107s) 10s ago 107s 450M - 18.2.1 5be31c24972a ff545ad0664a 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (3m) 9s ago 3m 52.7M 
2048M 18.2.1 5be31c24972a ab92d831cc1d 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (106s) 10s ago 106s 45.0M 2048M 18.2.1 5be31c24972a cea7d23f93a6 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (2m) 9s ago 2m 16.0M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (108s) 10s ago 108s 14.6M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (89s) 9s ago 89s 45.5M 4096M 18.2.1 5be31c24972a 9d7f135a3f3b 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (79s) 9s ago 79s 46.2M 4096M 18.2.1 5be31c24972a 1b0a42d8ac01 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (69s) 9s ago 69s 45.2M 4096M 18.2.1 5be31c24972a 567f579c058e 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (60s) 10s ago 60s 43.6M 4096M 18.2.1 5be31c24972a 80ac26035893 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (51s) 10s ago 51s 43.8M 4096M 18.2.1 5be31c24972a c8a0a41b6654 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (41s) 10s ago 41s 43.6M 4096M 18.2.1 5be31c24972a e9be055e12ba 2026-03-10T10:17:11.821 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (90s) 9s ago 2m 34.3M - 2.43.0 a07b618ecd1d a607fd039cb6 2026-03-10T10:17:11.823 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.821+0000 7fefd67fc700 1 -- 192.168.123.102:0/4055140864 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fefd806c6d0 msgr2=0x7fefd806eb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.823 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.821+0000 7fefd67fc700 1 --2- 
192.168.123.102:0/4055140864 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fefd806c6d0 0x7fefd806eb90 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fefe80079a0 tx=0x7fefe800d040 comp rx=0 tx=0).stop 2026-03-10T10:17:11.823 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.821+0000 7fefd67fc700 1 -- 192.168.123.102:0/4055140864 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feff0075a40 msgr2=0x7feff00830d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.823 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.821+0000 7fefd67fc700 1 --2- 192.168.123.102:0/4055140864 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feff0075a40 0x7feff00830d0 secure :-1 s=READY pgs=276 cs=0 l=1 rev1=1 crypto rx=0x7fefe0003c40 tx=0x7fefe00047c0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.823 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.822+0000 7fefd67fc700 1 -- 192.168.123.102:0/4055140864 shutdown_connections 2026-03-10T10:17:11.824 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.822+0000 7fefd67fc700 1 --2- 192.168.123.102:0/4055140864 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feff0075a40 0x7feff00830d0 unknown :-1 s=CLOSED pgs=276 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.824 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.822+0000 7fefd67fc700 1 --2- 192.168.123.102:0/4055140864 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fefd806c6d0 0x7fefd806eb90 unknown :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.824 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.822+0000 7fefd67fc700 1 --2- 192.168.123.102:0/4055140864 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7feff0083610 0x7feff012ddc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:17:11.824 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.822+0000 7fefd67fc700 1 -- 192.168.123.102:0/4055140864 >> 192.168.123.102:0/4055140864 conn(0x7feff006dae0 msgr2=0x7feff006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:11.824 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.822+0000 7fefd67fc700 1 -- 192.168.123.102:0/4055140864 shutdown_connections 2026-03-10T10:17:11.824 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.822+0000 7fefd67fc700 1 -- 192.168.123.102:0/4055140864 wait complete. 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.905+0000 7f51c4c8f700 1 -- 192.168.123.102:0/805837792 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f51c0072b20 msgr2=0x7f51c0072f40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.905+0000 7f51c4c8f700 1 --2- 192.168.123.102:0/805837792 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f51c0072b20 0x7f51c0072f40 secure :-1 s=READY pgs=277 cs=0 l=1 rev1=1 crypto rx=0x7f51b0008790 tx=0x7f51b0008aa0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.905+0000 7f51c4c8f700 1 -- 192.168.123.102:0/805837792 shutdown_connections 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.905+0000 7f51c4c8f700 1 --2- 192.168.123.102:0/805837792 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51c0075a10 0x7f51c0077ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.905+0000 7f51c4c8f700 1 --2- 192.168.123.102:0/805837792 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f51c0072b20 0x7f51c0072f40 unknown :-1 s=CLOSED pgs=277 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.905+0000 7f51c4c8f700 1 -- 192.168.123.102:0/805837792 >> 192.168.123.102:0/805837792 conn(0x7f51c006daa0 msgr2=0x7f51c006ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.905+0000 7f51c4c8f700 1 -- 192.168.123.102:0/805837792 shutdown_connections 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.905+0000 7f51c4c8f700 1 -- 192.168.123.102:0/805837792 wait complete. 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.905+0000 7f51c4c8f700 1 Processor -- start 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.906+0000 7f51c4c8f700 1 -- start start 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.906+0000 7f51c4c8f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f51c0075a10 0x7f51c00830a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.906+0000 7f51c4c8f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51c00835e0 0x7f51c012ddc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.906+0000 7f51c4c8f700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f51c0083af0 con 0x7f51c0075a10 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.906+0000 7f51c4c8f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f51c0083c60 con 0x7f51c00835e0 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.906+0000 7f51beffd700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51c00835e0 0x7f51c012ddc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.906+0000 7f51beffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51c00835e0 0x7f51c012ddc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:40956/0 (socket says 192.168.123.102:40956) 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.906+0000 7f51beffd700 1 -- 192.168.123.102:0/1972441971 learned_addr learned my addr 192.168.123.102:0/1972441971 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.906+0000 7f51beffd700 1 -- 192.168.123.102:0/1972441971 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f51c0075a10 msgr2=0x7f51c00830a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.906+0000 7f51beffd700 1 --2- 192.168.123.102:0/1972441971 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f51c0075a10 0x7f51c00830a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.906+0000 7f51beffd700 1 -- 192.168.123.102:0/1972441971 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f51b0008440 con 0x7f51c00835e0 2026-03-10T10:17:11.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.906+0000 7f51beffd700 1 --2- 192.168.123.102:0/1972441971 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7f51c00835e0 0x7f51c012ddc0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f51b800f4d0 tx=0x7f51b800f890 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:11.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.908+0000 7f51bcff9700 1 -- 192.168.123.102:0/1972441971 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f51b8010040 con 0x7f51c00835e0 2026-03-10T10:17:11.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.908+0000 7f51bcff9700 1 -- 192.168.123.102:0/1972441971 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f51b8009bf0 con 0x7f51c00835e0 2026-03-10T10:17:11.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.908+0000 7f51bcff9700 1 -- 192.168.123.102:0/1972441971 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f51b8015980 con 0x7f51c00835e0 2026-03-10T10:17:11.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.908+0000 7f51c4c8f700 1 -- 192.168.123.102:0/1972441971 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f51c012e300 con 0x7f51c00835e0 2026-03-10T10:17:11.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.908+0000 7f51c4c8f700 1 -- 192.168.123.102:0/1972441971 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f51c012e850 con 0x7f51c00835e0 2026-03-10T10:17:11.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.909+0000 7f51a67fc700 1 -- 192.168.123.102:0/1972441971 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f51c004ea90 con 0x7f51c00835e0 2026-03-10T10:17:11.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.912+0000 7f51bcff9700 1 -- 192.168.123.102:0/1972441971 <== mon.1 
v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f51b8009710 con 0x7f51c00835e0 2026-03-10T10:17:11.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.912+0000 7f51bcff9700 1 --2- 192.168.123.102:0/1972441971 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f51a806c530 0x7f51a806e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:11.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.912+0000 7f51bcff9700 1 -- 192.168.123.102:0/1972441971 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f51b808c6c0 con 0x7f51c00835e0 2026-03-10T10:17:11.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.914+0000 7f51bf7fe700 1 --2- 192.168.123.102:0/1972441971 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f51a806c530 0x7f51a806e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:11.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.914+0000 7f51bf7fe700 1 --2- 192.168.123.102:0/1972441971 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f51a806c530 0x7f51a806e9f0 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f51b0003860 tx=0x7f51b0005d60 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:11.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:11.914+0000 7f51bcff9700 1 -- 192.168.123.102:0/1972441971 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f51b8056f60 con 0x7f51c00835e0 2026-03-10T10:17:12.091 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.089+0000 7f51a67fc700 1 -- 192.168.123.102:0/1972441971 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f51c0061c10 con 0x7f51c00835e0 2026-03-10T10:17:12.092 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.090+0000 7f51bcff9700 1 -- 192.168.123.102:0/1972441971 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+558 (secure 0 0 0) 0x7f51b805a580 con 0x7f51c00835e0 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 2 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 2 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 6 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 14 2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout: } 
2026-03-10T10:17:12.093 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:17:12.095 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.093+0000 7f51a67fc700 1 -- 192.168.123.102:0/1972441971 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f51a806c530 msgr2=0x7f51a806e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:12.095 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.093+0000 7f51a67fc700 1 --2- 192.168.123.102:0/1972441971 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f51a806c530 0x7f51a806e9f0 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f51b0003860 tx=0x7f51b0005d60 comp rx=0 tx=0).stop 2026-03-10T10:17:12.095 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.093+0000 7f51a67fc700 1 -- 192.168.123.102:0/1972441971 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51c00835e0 msgr2=0x7f51c012ddc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:12.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.093+0000 7f51a67fc700 1 --2- 192.168.123.102:0/1972441971 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51c00835e0 0x7f51c012ddc0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f51b800f4d0 tx=0x7f51b800f890 comp rx=0 tx=0).stop 2026-03-10T10:17:12.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.094+0000 7f51a67fc700 1 -- 192.168.123.102:0/1972441971 shutdown_connections 2026-03-10T10:17:12.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.094+0000 7f51a67fc700 1 --2- 192.168.123.102:0/1972441971 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f51c0075a10 0x7f51c00830a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.094+0000 7f51a67fc700 1 --2- 192.168.123.102:0/1972441971 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f51a806c530 0x7f51a806e9f0 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.094+0000 7f51a67fc700 1 --2- 192.168.123.102:0/1972441971 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51c00835e0 0x7f51c012ddc0 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.094+0000 7f51a67fc700 1 -- 192.168.123.102:0/1972441971 >> 192.168.123.102:0/1972441971 conn(0x7f51c006daa0 msgr2=0x7f51c006ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:12.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.094+0000 7f51a67fc700 1 -- 192.168.123.102:0/1972441971 shutdown_connections 2026-03-10T10:17:12.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.094+0000 7f51a67fc700 1 -- 192.168.123.102:0/1972441971 wait complete. 
2026-03-10T10:17:12.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.175+0000 7fb1f759e700 1 -- 192.168.123.102:0/4116703970 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb1f8075a10 msgr2=0x7fb1f8077ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:12.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.175+0000 7fb1f759e700 1 --2- 192.168.123.102:0/4116703970 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb1f8075a10 0x7fb1f8077ea0 secure :-1 s=READY pgs=278 cs=0 l=1 rev1=1 crypto rx=0x7fb1f000b600 tx=0x7fb1f000b910 comp rx=0 tx=0).stop 2026-03-10T10:17:12.178 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.176+0000 7fb1f759e700 1 -- 192.168.123.102:0/4116703970 shutdown_connections 2026-03-10T10:17:12.178 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.176+0000 7fb1f759e700 1 --2- 192.168.123.102:0/4116703970 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb1f8075a10 0x7fb1f8077ea0 unknown :-1 s=CLOSED pgs=278 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.178 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.176+0000 7fb1f759e700 1 --2- 192.168.123.102:0/4116703970 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1f8072b20 0x7fb1f8072f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.178 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.176+0000 7fb1f759e700 1 -- 192.168.123.102:0/4116703970 >> 192.168.123.102:0/4116703970 conn(0x7fb1f806daa0 msgr2=0x7fb1f806ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:12.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.176+0000 7fb1f759e700 1 -- 192.168.123.102:0/4116703970 shutdown_connections 2026-03-10T10:17:12.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.176+0000 7fb1f759e700 1 -- 192.168.123.102:0/4116703970 
wait complete. 2026-03-10T10:17:12.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.177+0000 7fb1f759e700 1 Processor -- start 2026-03-10T10:17:12.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.177+0000 7fb1f759e700 1 -- start start 2026-03-10T10:17:12.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.177+0000 7fb1f759e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb1f8072b20 0x7fb1f81aedb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:12.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.177+0000 7fb1f759e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1f81af2f0 0x7fb1f81b4320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:12.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.177+0000 7fb1f659c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb1f8072b20 0x7fb1f81aedb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:12.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.177+0000 7fb1f659c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb1f8072b20 0x7fb1f81aedb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:43146/0 (socket says 192.168.123.102:43146) 2026-03-10T10:17:12.181 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.177+0000 7fb1f759e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb1f81af800 con 0x7fb1f8072b20 2026-03-10T10:17:12.181 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.177+0000 7fb1f759e700 1 -- --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb1f81af970 con 0x7fb1f81af2f0 2026-03-10T10:17:12.181 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.177+0000 7fb1f659c700 1 -- 192.168.123.102:0/787281336 learned_addr learned my addr 192.168.123.102:0/787281336 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:12.181 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.178+0000 7fb1f5d9b700 1 --2- 192.168.123.102:0/787281336 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1f81af2f0 0x7fb1f81b4320 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:12.181 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.178+0000 7fb1f659c700 1 -- 192.168.123.102:0/787281336 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1f81af2f0 msgr2=0x7fb1f81b4320 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:12.181 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.178+0000 7fb1f659c700 1 --2- 192.168.123.102:0/787281336 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1f81af2f0 0x7fb1f81b4320 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.181 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.178+0000 7fb1f659c700 1 -- 192.168.123.102:0/787281336 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb1f000b050 con 0x7fb1f8072b20 2026-03-10T10:17:12.181 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.178+0000 7fb1f659c700 1 --2- 192.168.123.102:0/787281336 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb1f8072b20 0x7fb1f81aedb0 secure :-1 s=READY pgs=279 cs=0 l=1 rev1=1 crypto rx=0x7fb1e800ba70 tx=0x7fb1e800be30 comp rx=0 tx=0).ready entity=mon.0 
client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:12.181 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.179+0000 7fb1e77fe700 1 -- 192.168.123.102:0/787281336 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb1e800c760 con 0x7fb1f8072b20 2026-03-10T10:17:12.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.179+0000 7fb1e77fe700 1 -- 192.168.123.102:0/787281336 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb1e800cda0 con 0x7fb1f8072b20 2026-03-10T10:17:12.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.179+0000 7fb1e77fe700 1 -- 192.168.123.102:0/787281336 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb1e8012550 con 0x7fb1f8072b20 2026-03-10T10:17:12.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.179+0000 7fb1f759e700 1 -- 192.168.123.102:0/787281336 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb1f81b48c0 con 0x7fb1f8072b20 2026-03-10T10:17:12.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.179+0000 7fb1f759e700 1 -- 192.168.123.102:0/787281336 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb1f81b4e10 con 0x7fb1f8072b20 2026-03-10T10:17:12.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.181+0000 7fb1e77fe700 1 -- 192.168.123.102:0/787281336 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fb1e8014440 con 0x7fb1f8072b20 2026-03-10T10:17:12.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.182+0000 7fb1e77fe700 1 --2- 192.168.123.102:0/787281336 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb1e006c530 0x7fb1e006e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:12.184 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.182+0000 7fb1e77fe700 1 -- 192.168.123.102:0/787281336 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7fb1e808aac0 con 0x7fb1f8072b20 2026-03-10T10:17:12.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.182+0000 7fb1f5d9b700 1 --2- 192.168.123.102:0/787281336 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb1e006c530 0x7fb1e006e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:12.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.182+0000 7fb1f759e700 1 -- 192.168.123.102:0/787281336 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb1f804ea90 con 0x7fb1f8072b20 2026-03-10T10:17:12.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.186+0000 7fb1e77fe700 1 -- 192.168.123.102:0/787281336 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fb1e80552e0 con 0x7fb1f8072b20 2026-03-10T10:17:12.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.187+0000 7fb1f5d9b700 1 --2- 192.168.123.102:0/787281336 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb1e006c530 0x7fb1e006e9f0 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7fb1f0015040 tx=0x7fb1f000bf00 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:12.339 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.337+0000 7fb1f759e700 1 -- 192.168.123.102:0/787281336 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fb1f81b5090 con 0x7fb1f8072b20 2026-03-10T10:17:12.340 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.338+0000 7fb1e77fe700 1 -- 192.168.123.102:0/787281336 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1854 (secure 0 0 0) 0x7fb1e8019020 con 0x7fb1f8072b20 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:e15 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:17:12.340 
INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464} 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:17:12.340 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:17:12.341 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:17:12.341 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat 
{c=[1],r=[1],i=[7ff]}] 2026-03-10T10:17:12.341 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:17:12.341 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:17:12.341 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:17:12.341 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:17:12.341 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:17:12.341 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:17:12.343 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.341+0000 7fb1f759e700 1 -- 192.168.123.102:0/787281336 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb1e006c530 msgr2=0x7fb1e006e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:12.343 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.342+0000 7fb1f759e700 1 --2- 192.168.123.102:0/787281336 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb1e006c530 0x7fb1e006e9f0 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7fb1f0015040 tx=0x7fb1f000bf00 comp rx=0 tx=0).stop 2026-03-10T10:17:12.343 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.342+0000 7fb1f759e700 1 -- 192.168.123.102:0/787281336 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb1f8072b20 msgr2=0x7fb1f81aedb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:12.343 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.342+0000 7fb1f759e700 1 --2- 192.168.123.102:0/787281336 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb1f8072b20 0x7fb1f81aedb0 secure :-1 s=READY pgs=279 cs=0 l=1 
rev1=1 crypto rx=0x7fb1e800ba70 tx=0x7fb1e800be30 comp rx=0 tx=0).stop 2026-03-10T10:17:12.344 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.342+0000 7fb1f759e700 1 -- 192.168.123.102:0/787281336 shutdown_connections 2026-03-10T10:17:12.344 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.342+0000 7fb1f759e700 1 --2- 192.168.123.102:0/787281336 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb1f8072b20 0x7fb1f81aedb0 unknown :-1 s=CLOSED pgs=279 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.344 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.342+0000 7fb1f759e700 1 --2- 192.168.123.102:0/787281336 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fb1e006c530 0x7fb1e006e9f0 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.344 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.343+0000 7fb1f759e700 1 --2- 192.168.123.102:0/787281336 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1f81af2f0 0x7fb1f81b4320 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.344 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.343+0000 7fb1f759e700 1 -- 192.168.123.102:0/787281336 >> 192.168.123.102:0/787281336 conn(0x7fb1f806daa0 msgr2=0x7fb1f806def0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:12.345 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.343+0000 7fb1f759e700 1 -- 192.168.123.102:0/787281336 shutdown_connections 2026-03-10T10:17:12.345 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.343+0000 7fb1f759e700 1 -- 192.168.123.102:0/787281336 wait complete. 
2026-03-10T10:17:12.348 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15 2026-03-10T10:17:12.446 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:12 vm02.local ceph-mon[50200]: Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:17:12.446 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:12 vm02.local ceph-mon[50200]: pgmap v85: 65 pgs: 65 active+clean; 452 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 9.0 KiB/s rd, 356 B/s wr, 10 op/s 2026-03-10T10:17:12.446 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:12 vm02.local ceph-mon[50200]: from='client.14548 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:17:12.446 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:12 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/1972441971' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:17:12.446 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:12 vm02.local ceph-mon[50200]: from='client.? 
192.168.123.102:0/787281336' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:17:12.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.444+0000 7f59a35b2700 1 -- 192.168.123.102:0/4156505744 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5994096f90 msgr2=0x7f5994097410 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:12.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.444+0000 7f59a35b2700 1 --2- 192.168.123.102:0/4156505744 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5994096f90 0x7f5994097410 secure :-1 s=READY pgs=280 cs=0 l=1 rev1=1 crypto rx=0x7f598c009b50 tx=0x7f598c009e60 comp rx=0 tx=0).stop 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.447+0000 7f59a35b2700 1 -- 192.168.123.102:0/4156505744 shutdown_connections 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.447+0000 7f59a35b2700 1 --2- 192.168.123.102:0/4156505744 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5994096f90 0x7f5994097410 unknown :-1 s=CLOSED pgs=280 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.447+0000 7f59a35b2700 1 --2- 192.168.123.102:0/4156505744 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5994095e30 0x7f5994096250 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.447+0000 7f59a35b2700 1 -- 192.168.123.102:0/4156505744 >> 192.168.123.102:0/4156505744 conn(0x7f5994091390 msgr2=0x7f5994093810 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.447+0000 7f59a35b2700 1 -- 192.168.123.102:0/4156505744 shutdown_connections 2026-03-10T10:17:12.450 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.447+0000 7f59a35b2700 1 -- 192.168.123.102:0/4156505744 wait complete. 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.448+0000 7f59a35b2700 1 Processor -- start 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.448+0000 7f59a35b2700 1 -- start start 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.448+0000 7f59a35b2700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5994095e30 0x7f59940a46c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.448+0000 7f59a35b2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5994096f90 0x7f59940a4c00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.448+0000 7f59a35b2700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f59940a5220 con 0x7f5994095e30 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.448+0000 7f59a35b2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f59940a5360 con 0x7f5994096f90 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.448+0000 7f59a134e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5994095e30 0x7f59940a46c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.448+0000 7f59a134e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5994095e30 0x7f59940a46c0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:43166/0 (socket says 192.168.123.102:43166) 2026-03-10T10:17:12.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.448+0000 7f59a134e700 1 -- 192.168.123.102:0/1885930897 learned_addr learned my addr 192.168.123.102:0/1885930897 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:12.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.449+0000 7f59a0b4d700 1 --2- 192.168.123.102:0/1885930897 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5994096f90 0x7f59940a4c00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:12.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.449+0000 7f59a134e700 1 -- 192.168.123.102:0/1885930897 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5994096f90 msgr2=0x7f59940a4c00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:12.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.449+0000 7f59a134e700 1 --2- 192.168.123.102:0/1885930897 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5994096f90 0x7f59940a4c00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.449+0000 7f59a134e700 1 -- 192.168.123.102:0/1885930897 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f598c0097e0 con 0x7f5994095e30 2026-03-10T10:17:12.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.449+0000 7f59a134e700 1 --2- 192.168.123.102:0/1885930897 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5994095e30 0x7f59940a46c0 secure :-1 s=READY pgs=281 cs=0 l=1 rev1=1 crypto 
rx=0x7f599800eb10 tx=0x7f599800eed0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:12.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.449+0000 7f59927fc700 1 -- 192.168.123.102:0/1885930897 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f599800cca0 con 0x7f5994095e30 2026-03-10T10:17:12.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.450+0000 7f59a35b2700 1 -- 192.168.123.102:0/1885930897 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f59940a9e10 con 0x7f5994095e30 2026-03-10T10:17:12.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.450+0000 7f59a35b2700 1 -- 192.168.123.102:0/1885930897 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f59940aa360 con 0x7f5994095e30 2026-03-10T10:17:12.452 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.450+0000 7f59927fc700 1 -- 192.168.123.102:0/1885930897 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f599800ce00 con 0x7f5994095e30 2026-03-10T10:17:12.452 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.450+0000 7f59927fc700 1 -- 192.168.123.102:0/1885930897 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5998018910 con 0x7f5994095e30 2026-03-10T10:17:12.453 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.451+0000 7f59927fc700 1 -- 192.168.123.102:0/1885930897 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f5998018b50 con 0x7f5994095e30 2026-03-10T10:17:12.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.452+0000 7f59927fc700 1 --2- 192.168.123.102:0/1885930897 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f598806c5f0 0x7f598806eab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 
rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:12.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.452+0000 7f59927fc700 1 -- 192.168.123.102:0/1885930897 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f5998014070 con 0x7f5994095e30 2026-03-10T10:17:12.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.452+0000 7f59a0b4d700 1 --2- 192.168.123.102:0/1885930897 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f598806c5f0 0x7f598806eab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:12.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.452+0000 7f59a0b4d700 1 --2- 192.168.123.102:0/1885930897 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f598806c5f0 0x7f598806eab0 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f598c006010 tx=0x7f598c0058e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:12.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.454+0000 7f59a35b2700 1 -- 192.168.123.102:0/1885930897 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5980005320 con 0x7f5994095e30 2026-03-10T10:17:12.461 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.459+0000 7f59927fc700 1 -- 192.168.123.102:0/1885930897 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f599805b570 con 0x7f5994095e30 2026-03-10T10:17:12.593 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.591+0000 7f59a35b2700 1 -- 192.168.123.102:0/1885930897 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": 
["mon-mgr", ""]}) v1 -- 0x7f5980000bf0 con 0x7f598806c5f0 2026-03-10T10:17:12.595 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.593+0000 7f59927fc700 1 -- 192.168.123.102:0/1885930897 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7f5980000bf0 con 0x7f598806c5f0 2026-03-10T10:17:12.595 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:17:12.595 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-10T10:17:12.595 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:17:12.595 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:17:12.595 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [], 2026-03-10T10:17:12.595 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "", 2026-03-10T10:17:12.595 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image", 2026-03-10T10:17:12.595 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:17:12.595 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:17:12.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.596+0000 7f59a35b2700 1 -- 192.168.123.102:0/1885930897 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f598806c5f0 msgr2=0x7f598806eab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:12.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.596+0000 7f59a35b2700 1 --2- 192.168.123.102:0/1885930897 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f598806c5f0 0x7f598806eab0 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f598c006010 tx=0x7f598c0058e0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.598 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.596+0000 7f59a35b2700 1 -- 192.168.123.102:0/1885930897 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5994095e30 msgr2=0x7f59940a46c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:12.598 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.596+0000 7f59a35b2700 1 --2- 192.168.123.102:0/1885930897 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5994095e30 0x7f59940a46c0 secure :-1 s=READY pgs=281 cs=0 l=1 rev1=1 crypto rx=0x7f599800eb10 tx=0x7f599800eed0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.599 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.597+0000 7f59a35b2700 1 -- 192.168.123.102:0/1885930897 shutdown_connections 2026-03-10T10:17:12.599 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.598+0000 7f59a35b2700 1 --2- 192.168.123.102:0/1885930897 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5994095e30 0x7f59940a46c0 unknown :-1 s=CLOSED pgs=281 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.599 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.598+0000 7f59a35b2700 1 --2- 192.168.123.102:0/1885930897 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f598806c5f0 0x7f598806eab0 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.599 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.598+0000 7f59a35b2700 1 --2- 192.168.123.102:0/1885930897 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5994096f90 0x7f59940a4c00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.599 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.598+0000 7f59a35b2700 1 -- 192.168.123.102:0/1885930897 >> 192.168.123.102:0/1885930897 conn(0x7f5994091390 msgr2=0x7f59940937c0 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-10T10:17:12.600 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.598+0000 7f59a35b2700 1 -- 192.168.123.102:0/1885930897 shutdown_connections 2026-03-10T10:17:12.600 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.598+0000 7f59a35b2700 1 -- 192.168.123.102:0/1885930897 wait complete. 2026-03-10T10:17:12.684 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.681+0000 7f63c093a700 1 -- 192.168.123.102:0/3243436287 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b4096fe0 msgr2=0x7f63b4097440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:12.684 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.681+0000 7f63c093a700 1 --2- 192.168.123.102:0/3243436287 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b4096fe0 0x7f63b4097440 secure :-1 s=READY pgs=282 cs=0 l=1 rev1=1 crypto rx=0x7f63a8009b00 tx=0x7f63a8009e10 comp rx=0 tx=0).stop 2026-03-10T10:17:12.684 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.682+0000 7f63c093a700 1 -- 192.168.123.102:0/3243436287 shutdown_connections 2026-03-10T10:17:12.684 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.682+0000 7f63c093a700 1 --2- 192.168.123.102:0/3243436287 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b4096fe0 0x7f63b4097440 unknown :-1 s=CLOSED pgs=282 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.684 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.682+0000 7f63c093a700 1 --2- 192.168.123.102:0/3243436287 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b4095de0 0x7f63b4096200 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.684 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.682+0000 7f63c093a700 1 -- 192.168.123.102:0/3243436287 >> 192.168.123.102:0/3243436287 conn(0x7f63b4091360 msgr2=0x7f63b40937c0 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:17:12.684 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.682+0000 7f63c093a700 1 -- 192.168.123.102:0/3243436287 shutdown_connections 2026-03-10T10:17:12.684 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.682+0000 7f63c093a700 1 -- 192.168.123.102:0/3243436287 wait complete. 2026-03-10T10:17:12.684 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.683+0000 7f63c093a700 1 Processor -- start 2026-03-10T10:17:12.684 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.683+0000 7f63c093a700 1 -- start start 2026-03-10T10:17:12.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.683+0000 7f63c093a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b4095de0 0x7f63b412b640 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:12.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.683+0000 7f63c093a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b4096fe0 0x7f63b412bb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:12.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.683+0000 7f63c093a700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f63b412c1a0 con 0x7f63b4096fe0 2026-03-10T10:17:12.685 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.683+0000 7f63c093a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f63b412c2e0 con 0x7f63b4095de0 2026-03-10T10:17:12.686 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.683+0000 7f63baffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b4096fe0 0x7f63b412bb80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:17:12.686 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.683+0000 7f63baffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b4096fe0 0x7f63b412bb80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:43174/0 (socket says 192.168.123.102:43174) 2026-03-10T10:17:12.686 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.683+0000 7f63baffd700 1 -- 192.168.123.102:0/791261844 learned_addr learned my addr 192.168.123.102:0/791261844 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:12.686 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.683+0000 7f63bb7fe700 1 --2- 192.168.123.102:0/791261844 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b4095de0 0x7f63b412b640 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:12.686 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.684+0000 7f63baffd700 1 -- 192.168.123.102:0/791261844 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b4095de0 msgr2=0x7f63b412b640 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:12.686 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.684+0000 7f63baffd700 1 --2- 192.168.123.102:0/791261844 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b4095de0 0x7f63b412b640 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:12.686 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.684+0000 7f63baffd700 1 -- 192.168.123.102:0/791261844 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f63a80097e0 con 0x7f63b4096fe0 2026-03-10T10:17:12.686 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.684+0000 7f63baffd700 1 --2- 192.168.123.102:0/791261844 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b4096fe0 0x7f63b412bb80 secure :-1 s=READY pgs=283 cs=0 l=1 rev1=1 crypto rx=0x7f63a8006010 tx=0x7f63a80048c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:12.687 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.684+0000 7f63b8ff9700 1 -- 192.168.123.102:0/791261844 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f63a801d070 con 0x7f63b4096fe0 2026-03-10T10:17:12.687 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.684+0000 7f63c093a700 1 -- 192.168.123.102:0/791261844 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f63b4130d30 con 0x7f63b4096fe0 2026-03-10T10:17:12.687 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.685+0000 7f63c093a700 1 -- 192.168.123.102:0/791261844 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f63b4131220 con 0x7f63b4096fe0 2026-03-10T10:17:12.687 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.685+0000 7f63b8ff9700 1 -- 192.168.123.102:0/791261844 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f63a800bb40 con 0x7f63b4096fe0 2026-03-10T10:17:12.687 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.685+0000 7f63b8ff9700 1 -- 192.168.123.102:0/791261844 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f63a800f670 con 0x7f63b4096fe0 2026-03-10T10:17:12.688 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.686+0000 7f63b8ff9700 1 -- 192.168.123.102:0/791261844 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f63a800f890 con 0x7f63b4096fe0 
2026-03-10T10:17:12.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.688+0000 7f63b8ff9700 1 --2- 192.168.123.102:0/791261844 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f63ac06c4e0 0x7f63ac06e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:12.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.688+0000 7f63b8ff9700 1 -- 192.168.123.102:0/791261844 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f63a8004d20 con 0x7f63b4096fe0 2026-03-10T10:17:12.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.688+0000 7f63bb7fe700 1 --2- 192.168.123.102:0/791261844 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f63ac06c4e0 0x7f63ac06e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:12.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.688+0000 7f63c093a700 1 -- 192.168.123.102:0/791261844 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f63a0005320 con 0x7f63b4096fe0 2026-03-10T10:17:12.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.689+0000 7f63bb7fe700 1 --2- 192.168.123.102:0/791261844 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f63ac06c4e0 0x7f63ac06e9a0 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f63b4096e40 tx=0x7f63b00093d0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:12.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.691+0000 7f63b8ff9700 1 -- 192.168.123.102:0/791261844 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f63a8093050 con 0x7f63b4096fe0 
2026-03-10T10:17:12.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:12 vm05.local ceph-mon[59051]: Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T10:17:12.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:12 vm05.local ceph-mon[59051]: pgmap v85: 65 pgs: 65 active+clean; 452 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 9.0 KiB/s rd, 356 B/s wr, 10 op/s
2026-03-10T10:17:12.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:12 vm05.local ceph-mon[59051]: from='client.14548 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:17:12.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:12 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/1972441971' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:17:12.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:12 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/787281336' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-10T10:17:12.845 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.843+0000 7f63c093a700 1 -- 192.168.123.102:0/791261844 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f63a0005190 con 0x7f63b4096fe0
2026-03-10T10:17:12.845 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.843+0000 7f63b8ff9700 1 -- 192.168.123.102:0/791261844 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f63a8027070 con 0x7f63b4096fe0
2026-03-10T10:17:12.846 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK
2026-03-10T10:17:12.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.847+0000 7f63c093a700 1 -- 192.168.123.102:0/791261844 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f63ac06c4e0 msgr2=0x7f63ac06e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:17:12.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.847+0000 7f63c093a700 1 --2- 192.168.123.102:0/791261844 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f63ac06c4e0 0x7f63ac06e9a0 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f63b4096e40 tx=0x7f63b00093d0 comp rx=0 tx=0).stop
2026-03-10T10:17:12.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.847+0000 7f63c093a700 1 -- 192.168.123.102:0/791261844 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b4096fe0 msgr2=0x7f63b412bb80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:17:12.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.847+0000 7f63c093a700 1 --2- 192.168.123.102:0/791261844 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b4096fe0 0x7f63b412bb80 secure :-1 s=READY pgs=283 cs=0 l=1 rev1=1 crypto rx=0x7f63a8006010 tx=0x7f63a80048c0 comp rx=0 tx=0).stop
2026-03-10T10:17:12.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.847+0000 7f63c093a700 1 -- 192.168.123.102:0/791261844 shutdown_connections
2026-03-10T10:17:12.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.847+0000 7f63c093a700 1 --2- 192.168.123.102:0/791261844 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f63ac06c4e0 0x7f63ac06e9a0 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:17:12.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.847+0000 7f63c093a700 1 --2- 192.168.123.102:0/791261844 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b4095de0 0x7f63b412b640 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:17:12.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.847+0000 7f63c093a700 1 --2- 192.168.123.102:0/791261844 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b4096fe0 0x7f63b412bb80 unknown :-1 s=CLOSED pgs=283 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:17:12.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.847+0000 7f63c093a700 1 -- 192.168.123.102:0/791261844 >> 192.168.123.102:0/791261844 conn(0x7f63b4091360 msgr2=0x7f63b409a210 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:17:12.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.847+0000 7f63c093a700 1 -- 192.168.123.102:0/791261844 shutdown_connections
2026-03-10T10:17:12.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:12.847+0000 7f63c093a700 1 -- 192.168.123.102:0/791261844 wait complete.
2026-03-10T10:17:13.418 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:13 vm02.local ceph-mon[50200]: from='client.24339 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:17:13.419 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:13 vm02.local ceph-mon[50200]: from='client.14554 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:17:13.419 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:13 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/791261844' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T10:17:13.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:13 vm05.local ceph-mon[59051]: from='client.24339 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:17:13.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:13 vm05.local ceph-mon[59051]: from='client.14554 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:17:13.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:13 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/791261844' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T10:17:14.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:14 vm02.local ceph-mon[50200]: from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:17:14.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:14 vm02.local ceph-mon[50200]: pgmap v86: 65 pgs: 65 active+clean; 457 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 9.4 KiB/s rd, 767 B/s wr, 10 op/s
2026-03-10T10:17:14.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:14 vm05.local ceph-mon[59051]: from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:17:14.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:14 vm05.local ceph-mon[59051]: pgmap v86: 65 pgs: 65 active+clean; 457 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 9.4 KiB/s rd, 767 B/s wr, 10 op/s
2026-03-10T10:17:15.761 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:15 vm05.local ceph-mon[59051]: pgmap v87: 65 pgs: 65 active+clean; 457 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 8.1 KiB/s rd, 767 B/s wr, 8 op/s
2026-03-10T10:17:15.762 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:15 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:17:15.762 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:15 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:17:16.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:15 vm02.local ceph-mon[50200]: pgmap v87: 65 pgs: 65 active+clean; 457 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 8.1 KiB/s rd, 767 B/s wr, 8 op/s
2026-03-10T10:17:16.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:15 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:17:16.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:15 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:17:18.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:18 vm02.local ceph-mon[50200]: pgmap v88: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 7.7 KiB/s rd, 1.1 KiB/s wr, 9 op/s
2026-03-10T10:17:18.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:18 vm05.local ceph-mon[59051]: pgmap v88: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 7.7 KiB/s rd, 1.1 KiB/s wr, 9 op/s
2026-03-10T10:17:20.245 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:20 vm05.local ceph-mon[59051]: pgmap v89: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
2026-03-10T10:17:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:20 vm02.local ceph-mon[50200]: pgmap v89: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
2026-03-10T10:17:22.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:22 vm02.local ceph-mon[50200]: pgmap v90: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
2026-03-10T10:17:22.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:22 vm05.local ceph-mon[59051]: pgmap v90: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
2026-03-10T10:17:24.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:24 vm02.local ceph-mon[50200]: pgmap v91: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
2026-03-10T10:17:24.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:24 vm05.local ceph-mon[59051]: pgmap v91: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
2026-03-10T10:17:26.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:26 vm02.local ceph-mon[50200]: pgmap v92: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
2026-03-10T10:17:26.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:26 vm05.local ceph-mon[59051]: pgmap v92: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
2026-03-10T10:17:28.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:27 vm02.local ceph-mon[50200]: pgmap v93: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
2026-03-10T10:17:28.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:27 vm05.local ceph-mon[59051]: pgmap v93: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
2026-03-10T10:17:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:30 vm02.local ceph-mon[50200]: pgmap v94: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 767 B/s rd, 85 B/s wr, 1 op/s
2026-03-10T10:17:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:30 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:17:30.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:30 vm05.local ceph-mon[59051]: pgmap v94: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 767 B/s rd, 85 B/s wr, 1 op/s
2026-03-10T10:17:30.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:30 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:17:32.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:32 vm02.local ceph-mon[50200]: pgmap v95: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 767 B/s rd, 1 op/s
2026-03-10T10:17:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:32 vm05.local ceph-mon[59051]: pgmap v95: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 767 B/s rd, 1 op/s
2026-03-10T10:17:34.238 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:34 vm02.local ceph-mon[50200]: pgmap v96: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:17:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:34 vm05.local ceph-mon[59051]: pgmap v96: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:17:36.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:36 vm02.local ceph-mon[50200]: pgmap v97: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 767 B/s rd, 1 op/s
2026-03-10T10:17:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:36 vm05.local ceph-mon[59051]: pgmap v97: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 767 B/s rd, 1 op/s
2026-03-10T10:17:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:37 vm05.local ceph-mon[59051]: pgmap v98: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:17:38.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:37 vm02.local ceph-mon[50200]: pgmap v98: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:17:40.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:39 vm02.local ceph-mon[50200]: pgmap v99: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:17:40.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:39 vm05.local ceph-mon[59051]: pgmap v99: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:17:42.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:41 vm02.local ceph-mon[50200]: pgmap v100: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:17:42.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:41 vm05.local ceph-mon[59051]: pgmap v100: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:17:42.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.946+0000 7f9fa0563700 1 -- 192.168.123.102:0/2956760553 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f98075a40 msgr2=0x7f9f98077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:17:42.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.946+0000 7f9fa0563700 1 --2- 192.168.123.102:0/2956760553 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f98075a40 0x7f9f98077ed0 secure :-1 s=READY pgs=284 cs=0 l=1 rev1=1 crypto rx=0x7f9f9000b3a0 tx=0x7f9f9000b6b0 comp rx=0 tx=0).stop
2026-03-10T10:17:42.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.947+0000 7f9fa0563700 1 -- 192.168.123.102:0/2956760553 shutdown_connections
2026-03-10T10:17:42.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.947+0000 7f9fa0563700 1 --2- 192.168.123.102:0/2956760553 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f98075a40 0x7f9f98077ed0 unknown :-1 s=CLOSED pgs=284 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:17:42.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.947+0000 7f9fa0563700 1 --2- 192.168.123.102:0/2956760553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f98072b50 0x7f9f98072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:17:42.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.947+0000 7f9fa0563700 1 -- 192.168.123.102:0/2956760553 >> 192.168.123.102:0/2956760553 conn(0x7f9f9806dae0 msgr2=0x7f9f9806ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:17:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.947+0000 7f9fa0563700 1 -- 192.168.123.102:0/2956760553 shutdown_connections
2026-03-10T10:17:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.947+0000 7f9fa0563700 1 -- 192.168.123.102:0/2956760553 wait complete.
2026-03-10T10:17:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.947+0000 7f9fa0563700 1 Processor -- start
2026-03-10T10:17:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.947+0000 7f9fa0563700 1 -- start start
2026-03-10T10:17:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.948+0000 7f9fa0563700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f98072b50 0x7f9f98083980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:17:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.948+0000 7f9fa0563700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f981b0ac0 0x7f9f981b2eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:17:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.948+0000 7f9fa0563700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9f981b33f0 con 0x7f9f98072b50
2026-03-10T10:17:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.948+0000 7f9fa0563700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9f981b3530 con 0x7f9f981b0ac0
2026-03-10T10:17:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.948+0000 7f9f9dafe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f981b0ac0 0x7f9f981b2eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:17:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.948+0000 7f9f9dafe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f981b0ac0 0x7f9f981b2eb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:42598/0 (socket says 192.168.123.102:42598)
2026-03-10T10:17:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.948+0000 7f9f9dafe700 1 -- 192.168.123.102:0/166773226 learned_addr learned my addr 192.168.123.102:0/166773226 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:17:42.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.948+0000 7f9f9e2ff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f98072b50 0x7f9f98083980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:17:42.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.948+0000 7f9f9dafe700 1 -- 192.168.123.102:0/166773226 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f98072b50 msgr2=0x7f9f98083980 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:17:42.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.948+0000 7f9f9dafe700 1 --2- 192.168.123.102:0/166773226 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f98072b50 0x7f9f98083980 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:17:42.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.948+0000 7f9f9dafe700 1 -- 192.168.123.102:0/166773226 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9f9000b050 con 0x7f9f981b0ac0
2026-03-10T10:17:42.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.948+0000 7f9f9dafe700 1 --2- 192.168.123.102:0/166773226 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f981b0ac0 0x7f9f981b2eb0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f9f90007b60 tx=0x7f9f900095a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:17:42.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.949+0000 7f9f8f7fe700 1 -- 192.168.123.102:0/166773226 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9f9000e050 con 0x7f9f981b0ac0
2026-03-10T10:17:42.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.949+0000 7f9fa0563700 1 -- 192.168.123.102:0/166773226 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9f981b3780 con 0x7f9f981b0ac0
2026-03-10T10:17:42.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.949+0000 7f9fa0563700 1 -- 192.168.123.102:0/166773226 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9f981b3c70 con 0x7f9f981b0ac0
2026-03-10T10:17:42.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.949+0000 7f9f8f7fe700 1 -- 192.168.123.102:0/166773226 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9f90003c10 con 0x7f9f981b0ac0
2026-03-10T10:17:42.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.949+0000 7f9f8f7fe700 1 -- 192.168.123.102:0/166773226 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9f9001cbc0 con 0x7f9f981b0ac0
2026-03-10T10:17:42.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.951+0000 7f9f8f7fe700 1 -- 192.168.123.102:0/166773226 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f9f90022070 con 0x7f9f981b0ac0
2026-03-10T10:17:42.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.952+0000 7f9f8f7fe700 1 --2- 192.168.123.102:0/166773226 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9f8406c530 0x7f9f8406e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:17:42.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.952+0000 7f9f9e2ff700 1 --2- 192.168.123.102:0/166773226 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9f8406c530 0x7f9f8406e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:17:42.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.952+0000 7f9f8f7fe700 1 -- 192.168.123.102:0/166773226 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f9f90091660 con 0x7f9f981b0ac0
2026-03-10T10:17:42.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.952+0000 7f9f9e2ff700 1 --2- 192.168.123.102:0/166773226 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9f8406c530 0x7f9f8406e9f0 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f9f94006fd0 tx=0x7f9f94009380 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:17:42.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.953+0000 7f9fa0563700 1 -- 192.168.123.102:0/166773226 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9f7c005320 con 0x7f9f981b0ac0
2026-03-10T10:17:42.959 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:42.956+0000 7f9f8f7fe700 1 -- 192.168.123.102:0/166773226 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f9f9005f910 con 0x7f9f981b0ac0
2026-03-10T10:17:43.080 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.079+0000 7f9fa0563700 1 -- 192.168.123.102:0/166773226 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f9f7c000bf0 con 0x7f9f8406c530
2026-03-10T10:17:43.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.080+0000 7f9f8f7fe700 1 -- 192.168.123.102:0/166773226 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7f9f7c000bf0 con 0x7f9f8406c530
2026-03-10T10:17:43.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.091+0000 7f9f8d7fa700 1 -- 192.168.123.102:0/166773226 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9f8406c530 msgr2=0x7f9f8406e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:17:43.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.091+0000 7f9f8d7fa700 1 --2- 192.168.123.102:0/166773226 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9f8406c530 0x7f9f8406e9f0 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f9f94006fd0 tx=0x7f9f94009380 comp rx=0 tx=0).stop
2026-03-10T10:17:43.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.091+0000 7f9f8d7fa700 1 -- 192.168.123.102:0/166773226 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f981b0ac0 msgr2=0x7f9f981b2eb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:17:43.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.091+0000 7f9f8d7fa700 1 --2- 192.168.123.102:0/166773226 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f981b0ac0 0x7f9f981b2eb0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f9f90007b60 tx=0x7f9f900095a0 comp rx=0 tx=0).stop
2026-03-10T10:17:43.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.091+0000 7f9f8d7fa700 1 -- 192.168.123.102:0/166773226 shutdown_connections
2026-03-10T10:17:43.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.091+0000 7f9f8d7fa700 1 --2- 192.168.123.102:0/166773226 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9f98072b50 0x7f9f98083980 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:17:43.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.091+0000 7f9f8d7fa700 1 --2- 192.168.123.102:0/166773226 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9f8406c530 0x7f9f8406e9f0 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:17:43.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.091+0000 7f9f8d7fa700 1 --2- 192.168.123.102:0/166773226 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f981b0ac0 0x7f9f981b2eb0 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:17:43.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.091+0000 7f9f8d7fa700 1 -- 192.168.123.102:0/166773226 >> 192.168.123.102:0/166773226 conn(0x7f9f9806dae0 msgr2=0x7f9f9806ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:17:43.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.091+0000 7f9f8d7fa700 1 -- 192.168.123.102:0/166773226 shutdown_connections
2026-03-10T10:17:43.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.091+0000 7f9f8d7fa700 1 -- 192.168.123.102:0/166773226 wait complete.
2026-03-10T10:17:43.103 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T10:17:43.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.213+0000 7f37affff700 1 -- 192.168.123.102:0/443141939 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f37b0075a10 msgr2=0x7f37b0077ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:17:43.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.213+0000 7f37affff700 1 --2- 192.168.123.102:0/443141939 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f37b0075a10 0x7f37b0077ea0 secure :-1 s=READY pgs=285 cs=0 l=1 rev1=1 crypto rx=0x7f37a800b780 tx=0x7f37a800ba90 comp rx=0 tx=0).stop
2026-03-10T10:17:43.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.214+0000 7f37affff700 1 -- 192.168.123.102:0/443141939 shutdown_connections
2026-03-10T10:17:43.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.214+0000 7f37affff700 1 --2- 192.168.123.102:0/443141939 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f37b0075a10 0x7f37b0077ea0 unknown :-1 s=CLOSED pgs=285 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:17:43.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.214+0000 7f37affff700 1 --2- 192.168.123.102:0/443141939 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f37b0072b20 0x7f37b0072f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:17:43.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.214+0000 7f37affff700 1 -- 192.168.123.102:0/443141939 >> 192.168.123.102:0/443141939 conn(0x7f37b006daa0 msgr2=0x7f37b006ff00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:17:43.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.214+0000 7f37affff700 1 -- 192.168.123.102:0/443141939 shutdown_connections
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.214+0000 7f37affff700 1 -- 192.168.123.102:0/443141939 wait complete.
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.214+0000 7f37affff700 1 Processor -- start
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.214+0000 7f37affff700 1 -- start start
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.214+0000 7f37affff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f37b0072b20 0x7f37b0082eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.214+0000 7f37affff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f37b00833f0 0x7f37b0083870 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.214+0000 7f37affff700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f37b012e650 con 0x7f37b00833f0
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.214+0000 7f37affff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f37b012e7c0 con 0x7f37b0072b20
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.215+0000 7f37ae7fc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f37b00833f0 0x7f37b0083870 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.215+0000 7f37ae7fc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f37b00833f0 0x7f37b0083870 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:57182/0 (socket says 192.168.123.102:57182)
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.215+0000 7f37ae7fc700 1 -- 192.168.123.102:0/2281905029 learned_addr learned my addr 192.168.123.102:0/2281905029 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.215+0000 7f37ae7fc700 1 -- 192.168.123.102:0/2281905029 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f37b0072b20 msgr2=0x7f37b0082eb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.215+0000 7f37ae7fc700 1 --2- 192.168.123.102:0/2281905029 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f37b0072b20 0x7f37b0082eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.215+0000 7f37ae7fc700 1 -- 192.168.123.102:0/2281905029 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f37a800b050 con 0x7f37b00833f0
2026-03-10T10:17:43.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.215+0000 7f37ae7fc700 1 --2- 192.168.123.102:0/2281905029 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f37b00833f0 0x7f37b0083870 secure :-1 s=READY pgs=286 cs=0 l=1 rev1=1 crypto rx=0x7f37a800b750 tx=0x7f37a80093b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:17:43.218 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.215+0000 7f37b48a6700 1 -- 192.168.123.102:0/2281905029 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f37a8003bb0 con 0x7f37b00833f0
2026-03-10T10:17:43.219 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.216+0000 7f37affff700 1 -- 192.168.123.102:0/2281905029 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f37b012ea40 con 0x7f37b00833f0
2026-03-10T10:17:43.219 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.216+0000 7f37affff700 1 -- 192.168.123.102:0/2281905029 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f37b012ef90 con 0x7f37b00833f0
2026-03-10T10:17:43.219 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.217+0000 7f37b48a6700 1 -- 192.168.123.102:0/2281905029 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f37a80117d0 con 0x7f37b00833f0
2026-03-10T10:17:43.219 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.217+0000 7f37b48a6700 1 -- 192.168.123.102:0/2281905029 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f37a8004510 con 0x7f37b00833f0
2026-03-10T10:17:43.219 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.218+0000 7f37b48a6700 1 -- 192.168.123.102:0/2281905029 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f37a802b030 con 0x7f37b00833f0
2026-03-10T10:17:43.220 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.218+0000 7f37b48a6700 1 --2- 192.168.123.102:0/2281905029 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f379806c530 0x7f379806e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:17:43.220 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.218+0000 7f37aeffd700 1 --2- 192.168.123.102:0/2281905029 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f379806c530 0x7f379806e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:17:43.220 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.219+0000 7f37b48a6700 1 -- 192.168.123.102:0/2281905029 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f37a808e0c0 con 0x7f37b00833f0
2026-03-10T10:17:43.221 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.219+0000 7f37aeffd700 1 --2- 192.168.123.102:0/2281905029 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f379806c530 0x7f379806e9f0 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f37a00098a0 tx=0x7f37a0006d90 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:17:43.221 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.219+0000 7f37affff700 1 -- 192.168.123.102:0/2281905029 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f379c005320 con 0x7f37b00833f0
2026-03-10T10:17:43.225 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.224+0000 7f37b48a6700 1 -- 192.168.123.102:0/2281905029 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f37a805c2f0 con 0x7f37b00833f0
2026-03-10T10:17:43.366 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.364+0000 7f37affff700 1 -- 192.168.123.102:0/2281905029 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f379c000bf0 con 0x7f379806c530
2026-03-10T10:17:43.368 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.366+0000 7f37b48a6700 1 -- 192.168.123.102:0/2281905029 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7f379c000bf0 con 0x7f379806c530
2026-03-10T10:17:43.375
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.373+0000 7f37affff700 1 -- 192.168.123.102:0/2281905029 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f379806c530 msgr2=0x7f379806e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:43.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.373+0000 7f37affff700 1 --2- 192.168.123.102:0/2281905029 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f379806c530 0x7f379806e9f0 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f37a00098a0 tx=0x7f37a0006d90 comp rx=0 tx=0).stop 2026-03-10T10:17:43.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.373+0000 7f37affff700 1 -- 192.168.123.102:0/2281905029 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f37b00833f0 msgr2=0x7f37b0083870 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:43.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.373+0000 7f37affff700 1 --2- 192.168.123.102:0/2281905029 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f37b00833f0 0x7f37b0083870 secure :-1 s=READY pgs=286 cs=0 l=1 rev1=1 crypto rx=0x7f37a800b750 tx=0x7f37a80093b0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.373+0000 7f37affff700 1 -- 192.168.123.102:0/2281905029 shutdown_connections 2026-03-10T10:17:43.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.373+0000 7f37affff700 1 --2- 192.168.123.102:0/2281905029 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f379806c530 0x7f379806e9f0 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.373+0000 7f37affff700 1 --2- 192.168.123.102:0/2281905029 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f37b0072b20 0x7f37b0082eb0 unknown :-1 s=CLOSED 
pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.373+0000 7f37affff700 1 --2- 192.168.123.102:0/2281905029 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f37b00833f0 0x7f37b0083870 unknown :-1 s=CLOSED pgs=286 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.373+0000 7f37affff700 1 -- 192.168.123.102:0/2281905029 >> 192.168.123.102:0/2281905029 conn(0x7f37b006daa0 msgr2=0x7f37b006ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:43.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.373+0000 7f37affff700 1 -- 192.168.123.102:0/2281905029 shutdown_connections 2026-03-10T10:17:43.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.373+0000 7f37affff700 1 -- 192.168.123.102:0/2281905029 wait complete. 2026-03-10T10:17:43.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.488+0000 7fe62b485700 1 -- 192.168.123.102:0/3708845394 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe624104340 msgr2=0x7fe6241047a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:43.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.488+0000 7fe62b485700 1 --2- 192.168.123.102:0/3708845394 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe624104340 0x7fe6241047a0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fe620009a60 tx=0x7fe620009d70 comp rx=0 tx=0).stop 2026-03-10T10:17:43.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.490+0000 7fe62b485700 1 -- 192.168.123.102:0/3708845394 shutdown_connections 2026-03-10T10:17:43.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.490+0000 7fe62b485700 1 --2- 192.168.123.102:0/3708845394 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe624104340 0x7fe6241047a0 unknown 
:-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.490+0000 7fe62b485700 1 --2- 192.168.123.102:0/3708845394 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe624103140 0x7fe624103560 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.490+0000 7fe62b485700 1 -- 192.168.123.102:0/3708845394 >> 192.168.123.102:0/3708845394 conn(0x7fe6240fe6c0 msgr2=0x7fe624100b20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:43.493 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.491+0000 7fe62b485700 1 -- 192.168.123.102:0/3708845394 shutdown_connections 2026-03-10T10:17:43.493 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.491+0000 7fe62b485700 1 -- 192.168.123.102:0/3708845394 wait complete. 2026-03-10T10:17:43.493 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.491+0000 7fe62b485700 1 Processor -- start 2026-03-10T10:17:43.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.491+0000 7fe62b485700 1 -- start start 2026-03-10T10:17:43.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.491+0000 7fe62b485700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe624103140 0x7fe624198a10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:43.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.491+0000 7fe62b485700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe624104340 0x7fe624198f50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:43.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.491+0000 7fe62b485700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 
0x7fe624199570 con 0x7fe624104340 2026-03-10T10:17:43.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.491+0000 7fe62b485700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe6241996b0 con 0x7fe624103140 2026-03-10T10:17:43.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.492+0000 7fe629221700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe624103140 0x7fe624198a10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:43.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.492+0000 7fe629221700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe624103140 0x7fe624198a10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:42622/0 (socket says 192.168.123.102:42622) 2026-03-10T10:17:43.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.492+0000 7fe629221700 1 -- 192.168.123.102:0/4042667315 learned_addr learned my addr 192.168.123.102:0/4042667315 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:43.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.492+0000 7fe628a20700 1 --2- 192.168.123.102:0/4042667315 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe624104340 0x7fe624198f50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:43.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.492+0000 7fe629221700 1 -- 192.168.123.102:0/4042667315 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe624104340 msgr2=0x7fe624198f50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:43.494 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.492+0000 7fe629221700 1 --2- 192.168.123.102:0/4042667315 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe624104340 0x7fe624198f50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.492+0000 7fe629221700 1 -- 192.168.123.102:0/4042667315 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe620009710 con 0x7fe624103140 2026-03-10T10:17:43.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.492+0000 7fe629221700 1 --2- 192.168.123.102:0/4042667315 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe624103140 0x7fe624198a10 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fe61800ea00 tx=0x7fe61800ed10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:43.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.493+0000 7fe6167fc700 1 -- 192.168.123.102:0/4042667315 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe61800cb80 con 0x7fe624103140 2026-03-10T10:17:43.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.493+0000 7fe62b485700 1 -- 192.168.123.102:0/4042667315 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe62419e160 con 0x7fe624103140 2026-03-10T10:17:43.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.493+0000 7fe62b485700 1 -- 192.168.123.102:0/4042667315 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe62419e6b0 con 0x7fe624103140 2026-03-10T10:17:43.497 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.493+0000 7fe6167fc700 1 -- 192.168.123.102:0/4042667315 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 
==== 1139+0+0 (secure 0 0 0) 0x7fe618004500 con 0x7fe624103140 2026-03-10T10:17:43.497 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.493+0000 7fe6167fc700 1 -- 192.168.123.102:0/4042667315 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe618010430 con 0x7fe624103140 2026-03-10T10:17:43.497 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.494+0000 7fe6167fc700 1 -- 192.168.123.102:0/4042667315 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fe618003680 con 0x7fe624103140 2026-03-10T10:17:43.497 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.495+0000 7fe6167fc700 1 --2- 192.168.123.102:0/4042667315 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe61006c5b0 0x7fe61006ea70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:43.497 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.495+0000 7fe6167fc700 1 -- 192.168.123.102:0/4042667315 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7fe618014070 con 0x7fe624103140 2026-03-10T10:17:43.497 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.494+0000 7fe62b485700 1 -- 192.168.123.102:0/4042667315 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe608005320 con 0x7fe624103140 2026-03-10T10:17:43.497 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.495+0000 7fe628a20700 1 --2- 192.168.123.102:0/4042667315 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe61006c5b0 0x7fe61006ea70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:43.497 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.495+0000 7fe628a20700 1 --2- 
192.168.123.102:0/4042667315 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe61006c5b0 0x7fe61006ea70 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7fe620003930 tx=0x7fe62000b540 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:43.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.499+0000 7fe6167fc700 1 -- 192.168.123.102:0/4042667315 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fe618059ce0 con 0x7fe624103140 2026-03-10T10:17:43.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.632+0000 7fe62b485700 1 -- 192.168.123.102:0/4042667315 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fe608000bf0 con 0x7fe61006c5b0 2026-03-10T10:17:43.660 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:17:43.660 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (2m) 41s ago 2m 22.5M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:17:43.660 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (3m) 41s ago 3m 8154k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:17:43.660 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (2m) 42s ago 2m 8166k - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:17:43.660 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (2m) 41s ago 2m 7415k - 18.2.1 5be31c24972a 51802fb57170 2026-03-10T10:17:43.660 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (2m) 42s ago 2m 7407k - 18.2.1 5be31c24972a f275982dc269 2026-03-10T10:17:43.660 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (2m) 41s ago 2m 78.5M - 9.4.7 954c08fa6188 f310d22468b8 
2026-03-10T10:17:43.660 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (47s) 41s ago 47s 12.0M - 18.2.1 5be31c24972a e97c369450c8 2026-03-10T10:17:43.660 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (49s) 41s ago 49s 14.3M - 18.2.1 5be31c24972a 56b76ae59bcb 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (48s) 42s ago 48s 12.2M - 18.2.1 5be31c24972a 02b882918ab0 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (46s) 42s ago 46s 16.6M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:9283,8765,8443 running (3m) 41s ago 3m 502M - 18.2.1 5be31c24972a 8bea583521d3 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (2m) 42s ago 2m 450M - 18.2.1 5be31c24972a ff545ad0664a 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (3m) 41s ago 3m 52.7M 2048M 18.2.1 5be31c24972a ab92d831cc1d 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (2m) 42s ago 2m 45.0M 2048M 18.2.1 5be31c24972a cea7d23f93a6 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (2m) 41s ago 2m 16.0M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (2m) 42s ago 2m 14.6M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (2m) 41s ago 2m 45.5M 4096M 18.2.1 5be31c24972a 9d7f135a3f3b 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (111s) 41s ago 111s 46.2M 4096M 18.2.1 5be31c24972a 1b0a42d8ac01 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (101s) 41s ago 
101s 45.2M 4096M 18.2.1 5be31c24972a 567f579c058e 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (92s) 42s ago 92s 43.6M 4096M 18.2.1 5be31c24972a 80ac26035893 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (83s) 42s ago 83s 43.8M 4096M 18.2.1 5be31c24972a c8a0a41b6654 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (73s) 42s ago 73s 43.6M 4096M 18.2.1 5be31c24972a e9be055e12ba 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (2m) 41s ago 2m 34.3M - 2.43.0 a07b618ecd1d a607fd039cb6 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.641+0000 7fe6167fc700 1 -- 192.168.123.102:0/4042667315 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3288 (secure 0 0 0) 0x7fe608000bf0 con 0x7fe61006c5b0 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.648+0000 7fe62b485700 1 -- 192.168.123.102:0/4042667315 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe61006c5b0 msgr2=0x7fe61006ea70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.648+0000 7fe62b485700 1 --2- 192.168.123.102:0/4042667315 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe61006c5b0 0x7fe61006ea70 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7fe620003930 tx=0x7fe62000b540 comp rx=0 tx=0).stop 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.648+0000 7fe62b485700 1 -- 192.168.123.102:0/4042667315 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe624103140 msgr2=0x7fe624198a10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.648+0000 7fe62b485700 1 --2- 
192.168.123.102:0/4042667315 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe624103140 0x7fe624198a10 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fe61800ea00 tx=0x7fe61800ed10 comp rx=0 tx=0).stop 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.648+0000 7fe62b485700 1 -- 192.168.123.102:0/4042667315 shutdown_connections 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.648+0000 7fe62b485700 1 --2- 192.168.123.102:0/4042667315 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fe61006c5b0 0x7fe61006ea70 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.648+0000 7fe62b485700 1 --2- 192.168.123.102:0/4042667315 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe624103140 0x7fe624198a10 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.648+0000 7fe62b485700 1 --2- 192.168.123.102:0/4042667315 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe624104340 0x7fe624198f50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.648+0000 7fe62b485700 1 -- 192.168.123.102:0/4042667315 >> 192.168.123.102:0/4042667315 conn(0x7fe6240fe6c0 msgr2=0x7fe624100aa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.648+0000 7fe62b485700 1 -- 192.168.123.102:0/4042667315 shutdown_connections 2026-03-10T10:17:43.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.648+0000 7fe62b485700 1 -- 192.168.123.102:0/4042667315 wait complete. 
2026-03-10T10:17:43.728 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.724+0000 7f95f7706700 1 -- 192.168.123.102:0/3220060601 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f95f00759a0 msgr2=0x7f95f0077e30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:43.729 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.724+0000 7f95f7706700 1 --2- 192.168.123.102:0/3220060601 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f95f00759a0 0x7f95f0077e30 secure :-1 s=READY pgs=287 cs=0 l=1 rev1=1 crypto rx=0x7f95ec009230 tx=0x7f95ec009260 comp rx=0 tx=0).stop 2026-03-10T10:17:43.729 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.726+0000 7f95f7706700 1 -- 192.168.123.102:0/3220060601 shutdown_connections 2026-03-10T10:17:43.729 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.726+0000 7f95f7706700 1 --2- 192.168.123.102:0/3220060601 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f95f00759a0 0x7f95f0077e30 unknown :-1 s=CLOSED pgs=287 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.729 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.726+0000 7f95f7706700 1 --2- 192.168.123.102:0/3220060601 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f95f0072ab0 0x7f95f0072ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.729 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.726+0000 7f95f7706700 1 -- 192.168.123.102:0/3220060601 >> 192.168.123.102:0/3220060601 conn(0x7f95f006da90 msgr2=0x7f95f006fef0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:43.729 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.727+0000 7f95f7706700 1 -- 192.168.123.102:0/3220060601 shutdown_connections 2026-03-10T10:17:43.729 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.727+0000 7f95f7706700 1 -- 192.168.123.102:0/3220060601 
wait complete. 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.727+0000 7f95f7706700 1 Processor -- start 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.727+0000 7f95f7706700 1 -- start start 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.727+0000 7f95f7706700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f95f0072ab0 0x7f95f0081550 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.727+0000 7f95f7706700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f95f0081a90 0x7f95f012e0e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.727+0000 7f95f7706700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f95f0081fa0 con 0x7f95f0081a90 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.727+0000 7f95f7706700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f95f0082110 con 0x7f95f0072ab0 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.728+0000 7f95f5f03700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f95f0081a90 0x7f95f012e0e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.728+0000 7f95f5f03700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f95f0081a90 0x7f95f012e0e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 
says I am v2:192.168.123.102:57214/0 (socket says 192.168.123.102:57214) 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.728+0000 7f95f5f03700 1 -- 192.168.123.102:0/1210043377 learned_addr learned my addr 192.168.123.102:0/1210043377 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.728+0000 7f95f5f03700 1 -- 192.168.123.102:0/1210043377 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f95f0072ab0 msgr2=0x7f95f0081550 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.728+0000 7f95f5f03700 1 --2- 192.168.123.102:0/1210043377 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f95f0072ab0 0x7f95f0081550 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.728+0000 7f95f5f03700 1 -- 192.168.123.102:0/1210043377 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f95ec008ee0 con 0x7f95f0081a90 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.728+0000 7f95f5f03700 1 --2- 192.168.123.102:0/1210043377 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f95f0081a90 0x7f95f012e0e0 secure :-1 s=READY pgs=288 cs=0 l=1 rev1=1 crypto rx=0x7f95ec00fea0 tx=0x7f95ec00ff80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:43.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.728+0000 7f95e37fe700 1 -- 192.168.123.102:0/1210043377 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f95ec00d9b0 con 0x7f95f0081a90 2026-03-10T10:17:43.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.728+0000 7f95f7706700 1 -- 
192.168.123.102:0/1210043377 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f95f012e620 con 0x7f95f0081a90 2026-03-10T10:17:43.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.729+0000 7f95f7706700 1 -- 192.168.123.102:0/1210043377 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f95f012eb10 con 0x7f95f0081a90 2026-03-10T10:17:43.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.729+0000 7f95e37fe700 1 -- 192.168.123.102:0/1210043377 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f95ec00e470 con 0x7f95f0081a90 2026-03-10T10:17:43.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.729+0000 7f95e37fe700 1 -- 192.168.123.102:0/1210043377 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f95ec025d80 con 0x7f95f0081a90 2026-03-10T10:17:43.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.730+0000 7f95f7706700 1 -- 192.168.123.102:0/1210043377 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f95d4005320 con 0x7f95f0081a90 2026-03-10T10:17:43.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.731+0000 7f95e37fe700 1 -- 192.168.123.102:0/1210043377 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f95ec025460 con 0x7f95f0081a90 2026-03-10T10:17:43.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.732+0000 7f95e37fe700 1 --2- 192.168.123.102:0/1210043377 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f95dc06c530 0x7f95dc06e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:43.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.732+0000 7f95f6704700 1 --2- 192.168.123.102:0/1210043377 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f95dc06c530 0x7f95dc06e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:43.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.733+0000 7f95f6704700 1 --2- 192.168.123.102:0/1210043377 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f95dc06c530 0x7f95dc06e9f0 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f95f01c0020 tx=0x7f95e4006d20 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:43.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.733+0000 7f95e37fe700 1 -- 192.168.123.102:0/1210043377 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f95ec08d890 con 0x7f95f0081a90 2026-03-10T10:17:43.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.735+0000 7f95e37fe700 1 -- 192.168.123.102:0/1210043377 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f95ec058130 con 0x7f95f0081a90 2026-03-10T10:17:43.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.929+0000 7f95f7706700 1 -- 192.168.123.102:0/1210043377 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f95d4005cc0 con 0x7f95f0081a90 2026-03-10T10:17:43.974 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:17:43.974 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:17:43.974 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 2 2026-03-10T10:17:43.974 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:17:43.974 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:17:43.974 
INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 2 2026-03-10T10:17:43.974 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:17:43.974 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:17:43.974 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 6 2026-03-10T10:17:43.974 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:17:43.974 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 14 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.970+0000 7f95e37fe700 1 -- 192.168.123.102:0/1210043377 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+558 (secure 0 0 0) 0x7f95ec05b750 con 0x7f95f0081a90 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.973+0000 7f95e17fa700 1 -- 192.168.123.102:0/1210043377 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f95dc06c530 msgr2=0x7f95dc06e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.973+0000 7f95e17fa700 1 --2- 192.168.123.102:0/1210043377 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f95dc06c530 0x7f95dc06e9f0 secure :-1 
s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f95f01c0020 tx=0x7f95e4006d20 comp rx=0 tx=0).stop 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.973+0000 7f95e17fa700 1 -- 192.168.123.102:0/1210043377 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f95f0081a90 msgr2=0x7f95f012e0e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.973+0000 7f95e17fa700 1 --2- 192.168.123.102:0/1210043377 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f95f0081a90 0x7f95f012e0e0 secure :-1 s=READY pgs=288 cs=0 l=1 rev1=1 crypto rx=0x7f95ec00fea0 tx=0x7f95ec00ff80 comp rx=0 tx=0).stop 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.973+0000 7f95e17fa700 1 -- 192.168.123.102:0/1210043377 shutdown_connections 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.973+0000 7f95e17fa700 1 --2- 192.168.123.102:0/1210043377 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f95dc06c530 0x7f95dc06e9f0 unknown :-1 s=CLOSED pgs=124 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.973+0000 7f95e17fa700 1 --2- 192.168.123.102:0/1210043377 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f95f0072ab0 0x7f95f0081550 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.973+0000 7f95e17fa700 1 --2- 192.168.123.102:0/1210043377 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f95f0081a90 0x7f95f012e0e0 unknown :-1 s=CLOSED pgs=288 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:43.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.973+0000 7f95e17fa700 1 -- 192.168.123.102:0/1210043377 >> 
192.168.123.102:0/1210043377 conn(0x7f95f006da90 msgr2=0x7f95f00771e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:43.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.974+0000 7f95e17fa700 1 -- 192.168.123.102:0/1210043377 shutdown_connections 2026-03-10T10:17:43.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:43.976+0000 7f95e17fa700 1 -- 192.168.123.102:0/1210043377 wait complete. 2026-03-10T10:17:44.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.111+0000 7f192186b700 1 -- 192.168.123.102:0/3550917831 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f191c107d90 msgr2=0x7f191c10a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:44.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.111+0000 7f192186b700 1 --2- 192.168.123.102:0/3550917831 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f191c107d90 0x7f191c10a1c0 secure :-1 s=READY pgs=289 cs=0 l=1 rev1=1 crypto rx=0x7f190c009b00 tx=0x7f190c009e10 comp rx=0 tx=0).stop 2026-03-10T10:17:44.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.111+0000 7f192186b700 1 -- 192.168.123.102:0/3550917831 shutdown_connections 2026-03-10T10:17:44.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.111+0000 7f192186b700 1 --2- 192.168.123.102:0/3550917831 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f191c10a700 0x7f191c10cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.111+0000 7f192186b700 1 --2- 192.168.123.102:0/3550917831 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f191c107d90 0x7f191c10a1c0 unknown :-1 s=CLOSED pgs=289 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.111+0000 7f192186b700 1 -- 
192.168.123.102:0/3550917831 >> 192.168.123.102:0/3550917831 conn(0x7f191c06dda0 msgr2=0x7f191c070220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:44.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.112+0000 7f192186b700 1 -- 192.168.123.102:0/3550917831 shutdown_connections 2026-03-10T10:17:44.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.112+0000 7f192186b700 1 -- 192.168.123.102:0/3550917831 wait complete. 2026-03-10T10:17:44.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.112+0000 7f192186b700 1 Processor -- start 2026-03-10T10:17:44.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.112+0000 7f192186b700 1 -- start start 2026-03-10T10:17:44.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.112+0000 7f192186b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f191c107d90 0x7f191c116a70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:44.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.112+0000 7f192186b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f191c10a700 0x7f191c116fb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:44.124 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.112+0000 7f192186b700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f191c1175d0 con 0x7f191c10a700 2026-03-10T10:17:44.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.112+0000 7f192186b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f191c117710 con 0x7f191c107d90 2026-03-10T10:17:44.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.113+0000 7f191a7fc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f191c10a700 0x7f191c116fb0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:44.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.113+0000 7f191a7fc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f191c10a700 0x7f191c116fb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:57234/0 (socket says 192.168.123.102:57234) 2026-03-10T10:17:44.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.113+0000 7f191a7fc700 1 -- 192.168.123.102:0/434584199 learned_addr learned my addr 192.168.123.102:0/434584199 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:44.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.113+0000 7f191a7fc700 1 -- 192.168.123.102:0/434584199 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f191c107d90 msgr2=0x7f191c116a70 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:17:44.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.113+0000 7f191a7fc700 1 --2- 192.168.123.102:0/434584199 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f191c107d90 0x7f191c116a70 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.113+0000 7f191a7fc700 1 -- 192.168.123.102:0/434584199 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f190c0097e0 con 0x7f191c10a700 2026-03-10T10:17:44.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.114+0000 7f191a7fc700 1 --2- 192.168.123.102:0/434584199 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f191c10a700 0x7f191c116fb0 secure :-1 s=READY pgs=290 cs=0 l=1 rev1=1 crypto rx=0x7f191000eb10 tx=0x7f191000eed0 comp rx=0 
tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:44.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.114+0000 7f1920869700 1 -- 192.168.123.102:0/434584199 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f191000cca0 con 0x7f191c10a700 2026-03-10T10:17:44.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.114+0000 7f1920869700 1 -- 192.168.123.102:0/434584199 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f191000ce00 con 0x7f191c10a700 2026-03-10T10:17:44.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.114+0000 7f1920869700 1 -- 192.168.123.102:0/434584199 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1910018910 con 0x7f191c10a700 2026-03-10T10:17:44.126 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.116+0000 7f192186b700 1 -- 192.168.123.102:0/434584199 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f191c073020 con 0x7f191c10a700 2026-03-10T10:17:44.126 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.116+0000 7f192186b700 1 -- 192.168.123.102:0/434584199 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f191c0734f0 con 0x7f191c10a700 2026-03-10T10:17:44.126 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.116+0000 7f192186b700 1 -- 192.168.123.102:0/434584199 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f191c110c60 con 0x7f191c10a700 2026-03-10T10:17:44.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.122+0000 7f1920869700 1 -- 192.168.123.102:0/434584199 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f1910018a70 con 0x7f191c10a700 2026-03-10T10:17:44.127 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.125+0000 7f1920869700 1 --2- 192.168.123.102:0/434584199 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f190408e2a0 0x7f1904090760 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:44.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.126+0000 7f1920869700 1 -- 192.168.123.102:0/434584199 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f1910014070 con 0x7f191c10a700 2026-03-10T10:17:44.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.126+0000 7f1920869700 1 -- 192.168.123.102:0/434584199 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f191008c5b0 con 0x7f191c10a700 2026-03-10T10:17:44.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.128+0000 7f191affd700 1 --2- 192.168.123.102:0/434584199 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f190408e2a0 0x7f1904090760 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:44.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.129+0000 7f191affd700 1 --2- 192.168.123.102:0/434584199 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f190408e2a0 0x7f1904090760 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f190c00b5c0 tx=0x7f190c005fb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:44.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:43 vm02.local ceph-mon[50200]: pgmap v101: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:17:44.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:43 vm02.local 
ceph-mon[50200]: from='client.24357 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:17:44.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:43 vm02.local ceph-mon[50200]: from='client.14576 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:17:44.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:43 vm05.local ceph-mon[59051]: pgmap v101: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:17:44.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:43 vm05.local ceph-mon[59051]: from='client.24357 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:17:44.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:43 vm05.local ceph-mon[59051]: from='client.14576 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:17:44.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.297+0000 7f192186b700 1 -- 192.168.123.102:0/434584199 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f191c066e80 con 0x7f191c10a700 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:e15 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout: 
2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:17:44.306 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464} 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:damaged 
2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:17:44.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.304+0000 7f1920869700 1 -- 192.168.123.102:0/434584199 <== mon.0 
v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1854 (secure 0 0 0) 0x7f191005a6c0 con 0x7f191c10a700 2026-03-10T10:17:44.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.307+0000 7f19027fc700 1 -- 192.168.123.102:0/434584199 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f190408e2a0 msgr2=0x7f1904090760 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:44.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.307+0000 7f19027fc700 1 --2- 192.168.123.102:0/434584199 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f190408e2a0 0x7f1904090760 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f190c00b5c0 tx=0x7f190c005fb0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.307+0000 7f19027fc700 1 -- 192.168.123.102:0/434584199 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f191c10a700 msgr2=0x7f191c116fb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:44.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.307+0000 7f19027fc700 1 --2- 192.168.123.102:0/434584199 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f191c10a700 0x7f191c116fb0 secure :-1 s=READY pgs=290 cs=0 l=1 rev1=1 crypto rx=0x7f191000eb10 tx=0x7f191000eed0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.307+0000 7f19027fc700 1 -- 192.168.123.102:0/434584199 shutdown_connections 2026-03-10T10:17:44.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.307+0000 7f19027fc700 1 --2- 192.168.123.102:0/434584199 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f190408e2a0 0x7f1904090760 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.310 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.307+0000 7f19027fc700 1 --2- 192.168.123.102:0/434584199 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f191c107d90 0x7f191c116a70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.307+0000 7f19027fc700 1 --2- 192.168.123.102:0/434584199 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f191c10a700 0x7f191c116fb0 unknown :-1 s=CLOSED pgs=290 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.307+0000 7f19027fc700 1 -- 192.168.123.102:0/434584199 >> 192.168.123.102:0/434584199 conn(0x7f191c06dda0 msgr2=0x7f191c10c150 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:44.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.307+0000 7f19027fc700 1 -- 192.168.123.102:0/434584199 shutdown_connections 2026-03-10T10:17:44.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.307+0000 7f19027fc700 1 -- 192.168.123.102:0/434584199 wait complete. 
2026-03-10T10:17:44.310 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15 2026-03-10T10:17:44.400 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.398+0000 7f5ef3184700 1 -- 192.168.123.102:0/3309490137 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5eec075a10 msgr2=0x7f5eec077ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:44.400 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.398+0000 7f5ef3184700 1 --2- 192.168.123.102:0/3309490137 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5eec075a10 0x7f5eec077ea0 secure :-1 s=READY pgs=291 cs=0 l=1 rev1=1 crypto rx=0x7f5ee400b780 tx=0x7f5ee400ba90 comp rx=0 tx=0).stop 2026-03-10T10:17:44.400 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.398+0000 7f5ef3184700 1 -- 192.168.123.102:0/3309490137 shutdown_connections 2026-03-10T10:17:44.400 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.398+0000 7f5ef3184700 1 --2- 192.168.123.102:0/3309490137 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5eec075a10 0x7f5eec077ea0 unknown :-1 s=CLOSED pgs=291 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.400 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.398+0000 7f5ef3184700 1 --2- 192.168.123.102:0/3309490137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5eec072b20 0x7f5eec072f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.400 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.398+0000 7f5ef3184700 1 -- 192.168.123.102:0/3309490137 >> 192.168.123.102:0/3309490137 conn(0x7f5eec06daa0 msgr2=0x7f5eec06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:44.401 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.399+0000 7f5ef3184700 1 -- 192.168.123.102:0/3309490137 shutdown_connections 2026-03-10T10:17:44.401 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.399+0000 7f5ef3184700 1 -- 192.168.123.102:0/3309490137 wait complete. 2026-03-10T10:17:44.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.399+0000 7f5ef3184700 1 Processor -- start 2026-03-10T10:17:44.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.399+0000 7f5ef3184700 1 -- start start 2026-03-10T10:17:44.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.399+0000 7f5ef3184700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5eec072b20 0x7f5eec082eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:44.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.399+0000 7f5ef3184700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5eec0833f0 0x7f5eec083870 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:44.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.399+0000 7f5ef3184700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5eec12e650 con 0x7f5eec072b20 2026-03-10T10:17:44.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.399+0000 7f5ef3184700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5eec12e7c0 con 0x7f5eec0833f0 2026-03-10T10:17:44.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.400+0000 7f5ef1981700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5eec0833f0 0x7f5eec083870 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:44.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.400+0000 7f5ef1981700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5eec0833f0 0x7f5eec083870 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:42682/0 (socket says 192.168.123.102:42682) 2026-03-10T10:17:44.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.400+0000 7f5ef1981700 1 -- 192.168.123.102:0/2629222390 learned_addr learned my addr 192.168.123.102:0/2629222390 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:44.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.400+0000 7f5ef2182700 1 --2- 192.168.123.102:0/2629222390 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5eec072b20 0x7f5eec082eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:44.404 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.400+0000 7f5ef2182700 1 -- 192.168.123.102:0/2629222390 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5eec0833f0 msgr2=0x7f5eec083870 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:44.404 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.400+0000 7f5ef2182700 1 --2- 192.168.123.102:0/2629222390 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5eec0833f0 0x7f5eec083870 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.404 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.400+0000 7f5ef2182700 1 -- 192.168.123.102:0/2629222390 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5ee400b050 con 0x7f5eec072b20 2026-03-10T10:17:44.404 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.400+0000 7f5ef1981700 1 --2- 192.168.123.102:0/2629222390 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5eec0833f0 0x7f5eec083870 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-10T10:17:44.404 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.400+0000 7f5ef2182700 1 --2- 192.168.123.102:0/2629222390 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5eec072b20 0x7f5eec082eb0 secure :-1 s=READY pgs=292 cs=0 l=1 rev1=1 crypto rx=0x7f5ee800b700 tx=0x7f5ee800ba10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:44.404 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.403+0000 7f5ee37fe700 1 -- 192.168.123.102:0/2629222390 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5ee8011840 con 0x7f5eec072b20 2026-03-10T10:17:44.406 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.403+0000 7f5ef3184700 1 -- 192.168.123.102:0/2629222390 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5eec12eaa0 con 0x7f5eec072b20 2026-03-10T10:17:44.406 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.403+0000 7f5ef3184700 1 -- 192.168.123.102:0/2629222390 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5eec12eff0 con 0x7f5eec072b20 2026-03-10T10:17:44.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.405+0000 7f5ee37fe700 1 -- 192.168.123.102:0/2629222390 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5ee8011e80 con 0x7f5eec072b20 2026-03-10T10:17:44.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.405+0000 7f5ee37fe700 1 -- 192.168.123.102:0/2629222390 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5ee800f550 con 0x7f5eec072b20 2026-03-10T10:17:44.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.406+0000 7f5ee37fe700 1 -- 192.168.123.102:0/2629222390 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 
19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f5ee800f730 con 0x7f5eec072b20 2026-03-10T10:17:44.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.406+0000 7f5ee37fe700 1 --2- 192.168.123.102:0/2629222390 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5ed806c6d0 0x7f5ed806eb90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:44.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.407+0000 7f5ef1981700 1 --2- 192.168.123.102:0/2629222390 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5ed806c6d0 0x7f5ed806eb90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:44.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.407+0000 7f5ef1981700 1 --2- 192.168.123.102:0/2629222390 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5ed806c6d0 0x7f5ed806eb90 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f5ee400b020 tx=0x7f5ee400afb0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:44.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.407+0000 7f5ee37fe700 1 -- 192.168.123.102:0/2629222390 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f5ee808c650 con 0x7f5eec072b20 2026-03-10T10:17:44.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.408+0000 7f5ef3184700 1 -- 192.168.123.102:0/2629222390 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5ed0005320 con 0x7f5eec072b20 2026-03-10T10:17:44.415 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.413+0000 7f5ee37fe700 1 -- 192.168.123.102:0/2629222390 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 
v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f5ee805a980 con 0x7f5eec072b20 2026-03-10T10:17:44.554 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.552+0000 7f5ef3184700 1 -- 192.168.123.102:0/2629222390 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f5ed0000bf0 con 0x7f5ed806c6d0 2026-03-10T10:17:44.557 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.556+0000 7f5ee37fe700 1 -- 192.168.123.102:0/2629222390 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7f5ed0000bf0 con 0x7f5ed806c6d0 2026-03-10T10:17:44.558 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:17:44.558 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-10T10:17:44.558 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:17:44.558 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:17:44.558 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [], 2026-03-10T10:17:44.558 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "", 2026-03-10T10:17:44.558 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image", 2026-03-10T10:17:44.558 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:17:44.558 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:17:44.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.559+0000 7f5ef3184700 1 -- 192.168.123.102:0/2629222390 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5ed806c6d0 msgr2=0x7f5ed806eb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:44.561 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.559+0000 7f5ef3184700 1 --2- 192.168.123.102:0/2629222390 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5ed806c6d0 0x7f5ed806eb90 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f5ee400b020 tx=0x7f5ee400afb0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.560+0000 7f5ef3184700 1 -- 192.168.123.102:0/2629222390 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5eec072b20 msgr2=0x7f5eec082eb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:44.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.560+0000 7f5ef3184700 1 --2- 192.168.123.102:0/2629222390 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5eec072b20 0x7f5eec082eb0 secure :-1 s=READY pgs=292 cs=0 l=1 rev1=1 crypto rx=0x7f5ee800b700 tx=0x7f5ee800ba10 comp rx=0 tx=0).stop 2026-03-10T10:17:44.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.560+0000 7f5ef3184700 1 -- 192.168.123.102:0/2629222390 shutdown_connections 2026-03-10T10:17:44.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.560+0000 7f5ef3184700 1 --2- 192.168.123.102:0/2629222390 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5eec072b20 0x7f5eec082eb0 unknown :-1 s=CLOSED pgs=292 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.560+0000 7f5ef3184700 1 --2- 192.168.123.102:0/2629222390 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f5ed806c6d0 0x7f5ed806eb90 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.561+0000 7f5ef3184700 1 --2- 192.168.123.102:0/2629222390 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5eec0833f0 0x7f5eec083870 
unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.561+0000 7f5ef3184700 1 -- 192.168.123.102:0/2629222390 >> 192.168.123.102:0/2629222390 conn(0x7f5eec06daa0 msgr2=0x7f5eec06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:44.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.561+0000 7f5ef3184700 1 -- 192.168.123.102:0/2629222390 shutdown_connections 2026-03-10T10:17:44.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.561+0000 7f5ef3184700 1 -- 192.168.123.102:0/2629222390 wait complete. 2026-03-10T10:17:44.643 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.641+0000 7f3352127700 1 -- 192.168.123.102:0/2066382178 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f334c075a40 msgr2=0x7f334c077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:44.643 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.641+0000 7f3352127700 1 --2- 192.168.123.102:0/2066382178 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f334c075a40 0x7f334c077ed0 secure :-1 s=READY pgs=293 cs=0 l=1 rev1=1 crypto rx=0x7f334400b780 tx=0x7f334400ba90 comp rx=0 tx=0).stop 2026-03-10T10:17:44.644 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.642+0000 7f3352127700 1 -- 192.168.123.102:0/2066382178 shutdown_connections 2026-03-10T10:17:44.644 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.642+0000 7f3352127700 1 --2- 192.168.123.102:0/2066382178 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f334c075a40 0x7f334c077ed0 unknown :-1 s=CLOSED pgs=293 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.644 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.642+0000 7f3352127700 1 --2- 192.168.123.102:0/2066382178 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f334c072b50 
0x7f334c072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.645 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.642+0000 7f3352127700 1 -- 192.168.123.102:0/2066382178 >> 192.168.123.102:0/2066382178 conn(0x7f334c06dae0 msgr2=0x7f334c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:44.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.643+0000 7f3352127700 1 -- 192.168.123.102:0/2066382178 shutdown_connections 2026-03-10T10:17:44.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.643+0000 7f3352127700 1 -- 192.168.123.102:0/2066382178 wait complete. 2026-03-10T10:17:44.647 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.643+0000 7f3352127700 1 Processor -- start 2026-03-10T10:17:44.647 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.643+0000 7f3352127700 1 -- start start 2026-03-10T10:17:44.647 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.644+0000 7f3352127700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f334c072b50 0x7f334c083100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:44.648 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.644+0000 7f3352127700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f334c083640 0x7f334c1b30f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:44.648 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.644+0000 7f3352127700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f334c083b50 con 0x7f334c072b50 2026-03-10T10:17:44.648 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.644+0000 7f3352127700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f334c083cc0 con 0x7f334c083640 2026-03-10T10:17:44.648 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.644+0000 7f334b7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f334c072b50 0x7f334c083100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:44.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.644+0000 7f334affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f334c083640 0x7f334c1b30f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:44.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.644+0000 7f334affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f334c083640 0x7f334c1b30f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:42702/0 (socket says 192.168.123.102:42702) 2026-03-10T10:17:44.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.644+0000 7f334affd700 1 -- 192.168.123.102:0/1781791409 learned_addr learned my addr 192.168.123.102:0/1781791409 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:17:44.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.645+0000 7f334affd700 1 -- 192.168.123.102:0/1781791409 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f334c072b50 msgr2=0x7f334c083100 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:44.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.645+0000 7f334affd700 1 --2- 192.168.123.102:0/1781791409 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f334c072b50 0x7f334c083100 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.649 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.645+0000 7f334affd700 1 -- 192.168.123.102:0/1781791409 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f334400b050 con 0x7f334c083640 2026-03-10T10:17:44.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.646+0000 7f334affd700 1 --2- 192.168.123.102:0/1781791409 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f334c083640 0x7f334c1b30f0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f334400b750 tx=0x7f3344009d70 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:44.650 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.647+0000 7f3348ff9700 1 -- 192.168.123.102:0/1781791409 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f334401ebf0 con 0x7f334c083640 2026-03-10T10:17:44.650 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.648+0000 7f3352127700 1 -- 192.168.123.102:0/1781791409 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f334c1b3690 con 0x7f334c083640 2026-03-10T10:17:44.650 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.648+0000 7f3352127700 1 -- 192.168.123.102:0/1781791409 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f334c1b3be0 con 0x7f334c083640 2026-03-10T10:17:44.650 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.649+0000 7f3348ff9700 1 -- 192.168.123.102:0/1781791409 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f334401ed50 con 0x7f334c083640 2026-03-10T10:17:44.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.649+0000 7f3348ff9700 1 -- 192.168.123.102:0/1781791409 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3344004020 con 
0x7f334c083640 2026-03-10T10:17:44.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.649+0000 7f3352127700 1 -- 192.168.123.102:0/1781791409 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f334c04ea90 con 0x7f334c083640 2026-03-10T10:17:44.652 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.650+0000 7f3348ff9700 1 -- 192.168.123.102:0/1781791409 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f334404b020 con 0x7f334c083640 2026-03-10T10:17:44.652 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.651+0000 7f3348ff9700 1 --2- 192.168.123.102:0/1781791409 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f333406c600 0x7f333406eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:17:44.653 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.651+0000 7f334b7fe700 1 --2- 192.168.123.102:0/1781791409 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f333406c600 0x7f333406eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:17:44.653 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.652+0000 7f334b7fe700 1 --2- 192.168.123.102:0/1781791409 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f333406c600 0x7f333406eac0 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f333c009bb0 tx=0x7f333c008040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:17:44.653 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.652+0000 7f3348ff9700 1 -- 192.168.123.102:0/1781791409 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f3344022070 con 0x7f334c083640 2026-03-10T10:17:44.655 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.654+0000 7f3348ff9700 1 -- 192.168.123.102:0/1781791409 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f334405d5d0 con 0x7f334c083640 2026-03-10T10:17:44.843 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.840+0000 7f3352127700 1 -- 192.168.123.102:0/1781791409 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f334c1b3fb0 con 0x7f334c083640 2026-03-10T10:17:44.843 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.841+0000 7f3348ff9700 1 -- 192.168.123.102:0/1781791409 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f3344060bf0 con 0x7f334c083640 2026-03-10T10:17:44.846 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK 2026-03-10T10:17:44.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.847+0000 7f3352127700 1 -- 192.168.123.102:0/1781791409 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f333406c600 msgr2=0x7f333406eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:44.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.847+0000 7f3352127700 1 --2- 192.168.123.102:0/1781791409 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f333406c600 0x7f333406eac0 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f333c009bb0 tx=0x7f333c008040 comp rx=0 tx=0).stop 2026-03-10T10:17:44.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.847+0000 7f3352127700 1 -- 192.168.123.102:0/1781791409 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f334c083640 msgr2=0x7f334c1b30f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:17:44.849 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.847+0000 7f3352127700 1 --2- 192.168.123.102:0/1781791409 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f334c083640 0x7f334c1b30f0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f334400b750 tx=0x7f3344009d70 comp rx=0 tx=0).stop 2026-03-10T10:17:44.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.847+0000 7f3352127700 1 -- 192.168.123.102:0/1781791409 shutdown_connections 2026-03-10T10:17:44.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.848+0000 7f3352127700 1 --2- 192.168.123.102:0/1781791409 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f334c072b50 0x7f334c083100 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.848+0000 7f3352127700 1 --2- 192.168.123.102:0/1781791409 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f333406c600 0x7f333406eac0 unknown :-1 s=CLOSED pgs=127 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.848+0000 7f3352127700 1 --2- 192.168.123.102:0/1781791409 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f334c083640 0x7f334c1b30f0 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:17:44.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.848+0000 7f3352127700 1 -- 192.168.123.102:0/1781791409 >> 192.168.123.102:0/1781791409 conn(0x7f334c06dae0 msgr2=0x7f334c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:17:44.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.848+0000 7f3352127700 1 -- 192.168.123.102:0/1781791409 shutdown_connections 2026-03-10T10:17:44.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:17:44.848+0000 7f3352127700 1 -- 192.168.123.102:0/1781791409 wait 
complete. 2026-03-10T10:17:45.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:44 vm02.local ceph-mon[50200]: from='client.24361 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:17:45.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:44 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/1210043377' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:17:45.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:44 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/434584199' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:17:45.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:44 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:17:45.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:44 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/1781791409' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:17:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:44 vm05.local ceph-mon[59051]: from='client.24361 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:17:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:44 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/1210043377' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:17:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:44 vm05.local ceph-mon[59051]: from='client.? 
192.168.123.102:0/434584199' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:17:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:44 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:17:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:44 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/1781791409' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:17:46.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:45 vm02.local ceph-mon[50200]: from='client.14592 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:17:46.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:45 vm02.local ceph-mon[50200]: pgmap v102: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:17:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:45 vm05.local ceph-mon[59051]: from='client.14592 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:17:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:45 vm05.local ceph-mon[59051]: pgmap v102: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:17:48.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:47 vm02.local ceph-mon[50200]: pgmap v103: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:17:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:47 vm05.local ceph-mon[59051]: pgmap v103: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:17:50.240 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:49 vm05.local 
ceph-mon[59051]: pgmap v104: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:17:50.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:49 vm02.local ceph-mon[50200]: pgmap v104: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:17:52.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:51 vm02.local ceph-mon[50200]: pgmap v105: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:17:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:51 vm05.local ceph-mon[59051]: pgmap v105: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:17:54.465 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:54 vm02.local ceph-mon[50200]: pgmap v106: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:17:54.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:54 vm05.local ceph-mon[59051]: pgmap v106: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:17:56.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:56 vm02.local ceph-mon[50200]: pgmap v107: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:17:56.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:56 vm05.local ceph-mon[59051]: pgmap v107: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:17:58.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:57 vm02.local ceph-mon[50200]: pgmap v108: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:17:58.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:57 vm05.local ceph-mon[59051]: pgmap 
v108: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:18:00.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:59 vm02.local ceph-mon[50200]: pgmap v109: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:18:00.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:17:59 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:59 vm05.local ceph-mon[59051]: pgmap v109: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:18:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:17:59 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:01 vm02.local ceph-mon[50200]: pgmap v110: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:18:02.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:01 vm05.local ceph-mon[59051]: pgmap v110: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:18:04.140 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:03 vm02.local ceph-mon[50200]: pgmap v111: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:18:04.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:03 vm05.local ceph-mon[59051]: pgmap v111: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:18:06.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:05 vm02.local 
ceph-mon[50200]: pgmap v112: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:18:06.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:05 vm05.local ceph-mon[59051]: pgmap v112: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:18:08.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:07 vm02.local ceph-mon[50200]: pgmap v113: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:18:08.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:07 vm05.local ceph-mon[59051]: pgmap v113: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:18:10.309 INFO:tasks.workunit.client.0.vm02.stderr:Updating files: 88% (12352/13941) Updating files: 89% (12408/13941) Updating files: 90% (12547/13941) Updating files: 91% (12687/13941) Updating files: 92% (12826/13941) Updating files: 93% (12966/13941) Updating files: 94% (13105/13941) Updating files: 95% (13244/13941) Updating files: 96% (13384/13941) Updating files: 97% (13523/13941) Updating files: 98% (13663/13941) Updating files: 99% (13802/13941) Updating files: 100% (13941/13941) Updating files: 100% (13941/13941), done. 2026-03-10T10:18:10.724 INFO:tasks.workunit.client.1.vm05.stderr:Note: switching to '75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b'. 2026-03-10T10:18:10.724 INFO:tasks.workunit.client.1.vm05.stderr: 2026-03-10T10:18:10.724 INFO:tasks.workunit.client.1.vm05.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-10T10:18:10.724 INFO:tasks.workunit.client.1.vm05.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-10T10:18:10.724 INFO:tasks.workunit.client.1.vm05.stderr:state without impacting any branches by switching back to a branch. 
2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr: 2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr:do so (now or later) by using -c with the switch command. Example: 2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr: 2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr: git switch -c 2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr: 2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr:Or undo this operation with: 2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr: 2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr: git switch - 2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr: 2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr: 2026-03-10T10:18:10.725 INFO:tasks.workunit.client.1.vm05.stderr:HEAD is now at 75a68fd8ca3 qa/suites/orch/cephadm/osds: drop nvme_loop task 2026-03-10T10:18:10.730 DEBUG:teuthology.orchestra.run.vm05:> cd -- /home/ubuntu/cephtest/clone.client.1/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.1 2026-03-10T10:18:10.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:10 vm05.local ceph-mon[59051]: pgmap v114: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:18:10.792 INFO:tasks.workunit.client.1.vm05.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-10T10:18:10.794 INFO:tasks.workunit.client.1.vm05.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.1/qa/workunits/direct_io' 2026-03-10T10:18:10.795 
INFO:tasks.workunit.client.1.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-10T10:18:10.840 INFO:tasks.workunit.client.1.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-10T10:18:10.874 INFO:tasks.workunit.client.1.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-10T10:18:10.902 INFO:tasks.workunit.client.1.vm05.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.1/qa/workunits/direct_io' 2026-03-10T10:18:10.903 INFO:tasks.workunit.client.1.vm05.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.1/qa/workunits/fs' 2026-03-10T10:18:10.903 INFO:tasks.workunit.client.1.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-10T10:18:10.933 INFO:tasks.workunit.client.1.vm05.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.1/qa/workunits/fs' 2026-03-10T10:18:10.953 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T10:18:10.953 DEBUG:teuthology.orchestra.run.vm05:> dd if=/home/ubuntu/cephtest/workunits.list.client.1 of=/dev/stdout 2026-03-10T10:18:11.011 INFO:tasks.workunit:Running workunits matching suites/fsstress.sh on client.1... 2026-03-10T10:18:11.012 INFO:tasks.workunit:Running workunit suites/fsstress.sh... 
2026-03-10T10:18:11.012 DEBUG:teuthology.orchestra.run.vm05:workunit test suites/fsstress.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh 2026-03-10T10:18:11.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:10 vm02.local ceph-mon[50200]: pgmap v114: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:18:11.084 INFO:tasks.workunit.client.1.vm05.stderr:+ mkdir -p fsstress 2026-03-10T10:18:11.086 INFO:tasks.workunit.client.1.vm05.stderr:+ pushd fsstress 2026-03-10T10:18:11.087 INFO:tasks.workunit.client.1.vm05.stdout:~/cephtest/mnt.1/client.1/tmp/fsstress ~/cephtest/mnt.1/client.1/tmp 2026-03-10T10:18:11.087 INFO:tasks.workunit.client.1.vm05.stderr:+ wget -q -O ltp-full.tgz http://download.ceph.com/qa/ltp-full-20091231.tgz 2026-03-10T10:18:11.762 INFO:tasks.workunit.client.0.vm02.stderr:Note: switching to '75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b'. 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr:state without impacting any branches by switching back to a branch. 
2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr:do so (now or later) by using -c with the switch command. Example: 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr: git switch -c 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr:Or undo this operation with: 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr: git switch - 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr: 2026-03-10T10:18:11.763 INFO:tasks.workunit.client.0.vm02.stderr:HEAD is now at 75a68fd8ca3 qa/suites/orch/cephadm/osds: drop nvme_loop task 2026-03-10T10:18:11.769 DEBUG:teuthology.orchestra.run.vm02:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-10T10:18:11.811 INFO:tasks.workunit.client.0.vm02.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-10T10:18:11.814 INFO:tasks.workunit.client.0.vm02.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-10T10:18:11.814 INFO:tasks.workunit.client.0.vm02.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-10T10:18:12.007 INFO:tasks.workunit.client.0.vm02.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 
2026-03-10T10:18:12.083 INFO:tasks.workunit.client.0.vm02.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-10T10:18:12.120 INFO:tasks.workunit.client.0.vm02.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-10T10:18:12.123 INFO:tasks.workunit.client.0.vm02.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-10T10:18:12.123 INFO:tasks.workunit.client.0.vm02.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-10T10:18:12.158 INFO:tasks.workunit.client.0.vm02.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-10T10:18:12.162 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:18:12.162 DEBUG:teuthology.orchestra.run.vm02:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-10T10:18:12.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:11 vm02.local ceph-mon[50200]: pgmap v115: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:18:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:11 vm05.local ceph-mon[59051]: pgmap v115: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:18:12.292 INFO:tasks.workunit:Running workunits matching suites/fsstress.sh on client.0... 2026-03-10T10:18:12.293 INFO:tasks.workunit:Running workunit suites/fsstress.sh... 
2026-03-10T10:18:12.293 DEBUG:teuthology.orchestra.run.vm02:workunit test suites/fsstress.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh 2026-03-10T10:18:12.450 INFO:tasks.workunit.client.0.vm02.stderr:+ mkdir -p fsstress 2026-03-10T10:18:12.466 INFO:tasks.workunit.client.0.vm02.stderr:+ pushd fsstress 2026-03-10T10:18:12.472 INFO:tasks.workunit.client.0.vm02.stdout:~/cephtest/mnt.0/client.0/tmp/fsstress ~/cephtest/mnt.0/client.0/tmp 2026-03-10T10:18:12.472 INFO:tasks.workunit.client.0.vm02.stderr:+ wget -q -O ltp-full.tgz http://download.ceph.com/qa/ltp-full-20091231.tgz 2026-03-10T10:18:12.581 INFO:tasks.workunit.client.1.vm05.stderr:+ tar xzf ltp-full.tgz 2026-03-10T10:18:14.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:14 vm02.local ceph-mon[50200]: pgmap v116: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:18:14.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:14 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:18:14.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:14 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:18:14.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:14 vm05.local ceph-mon[59051]: pgmap v116: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB 
avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:18:14.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:14 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:18:14.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:14 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:18:14.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.964+0000 7f9b287da700 1 -- 192.168.123.102:0/1089985474 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9b20101ec0 msgr2=0x7f9b20102320 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:14.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.964+0000 7f9b287da700 1 --2- 192.168.123.102:0/1089985474 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9b20101ec0 0x7f9b20102320 secure :-1 s=READY pgs=294 cs=0 l=1 rev1=1 crypto rx=0x7f9b10009b00 tx=0x7f9b10009e10 comp rx=0 tx=0).stop 2026-03-10T10:18:14.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.965+0000 7f9b287da700 1 -- 192.168.123.102:0/1089985474 shutdown_connections 2026-03-10T10:18:14.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.965+0000 7f9b287da700 1 --2- 192.168.123.102:0/1089985474 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9b20101ec0 0x7f9b20102320 unknown :-1 s=CLOSED pgs=294 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:14.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.965+0000 7f9b287da700 1 --2- 192.168.123.102:0/1089985474 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b20100cc0 0x7f9b201010e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:14.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.965+0000 7f9b287da700 1 -- 
192.168.123.102:0/1089985474 >> 192.168.123.102:0/1089985474 conn(0x7f9b200fc240 msgr2=0x7f9b200fe6a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:14.966 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.965+0000 7f9b287da700 1 -- 192.168.123.102:0/1089985474 shutdown_connections 2026-03-10T10:18:14.966 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.965+0000 7f9b287da700 1 -- 192.168.123.102:0/1089985474 wait complete. 2026-03-10T10:18:14.967 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.966+0000 7f9b287da700 1 Processor -- start 2026-03-10T10:18:14.967 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.967+0000 7f9b287da700 1 -- start start 2026-03-10T10:18:14.968 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.967+0000 7f9b287da700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b20100cc0 0x7f9b20194390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:14.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.968+0000 7f9b26576700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b20100cc0 0x7f9b20194390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:14.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.968+0000 7f9b26576700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b20100cc0 0x7f9b20194390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:40474/0 (socket says 192.168.123.102:40474) 2026-03-10T10:18:14.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.970+0000 7f9b287da700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9b20101ec0 0x7f9b201948d0 unknown :-1 s=NONE pgs=0 cs=0 
l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:14.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.970+0000 7f9b287da700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9b20194ef0 con 0x7f9b20101ec0 2026-03-10T10:18:14.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.970+0000 7f9b287da700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9b20195030 con 0x7f9b20100cc0 2026-03-10T10:18:14.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.970+0000 7f9b26576700 1 -- 192.168.123.102:0/4204810506 learned_addr learned my addr 192.168.123.102:0/4204810506 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:14.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.970+0000 7f9b26576700 1 -- 192.168.123.102:0/4204810506 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9b20101ec0 msgr2=0x7f9b201948d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:14.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.970+0000 7f9b25d75700 1 --2- 192.168.123.102:0/4204810506 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9b20101ec0 0x7f9b201948d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:14.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.970+0000 7f9b26576700 1 --2- 192.168.123.102:0/4204810506 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9b20101ec0 0x7f9b201948d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:14.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.970+0000 7f9b26576700 1 -- 192.168.123.102:0/4204810506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) 
v3 -- 0x7f9b100097e0 con 0x7f9b20100cc0 2026-03-10T10:18:14.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.970+0000 7f9b25d75700 1 --2- 192.168.123.102:0/4204810506 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9b20101ec0 0x7f9b201948d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:18:14.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.971+0000 7f9b26576700 1 --2- 192.168.123.102:0/4204810506 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b20100cc0 0x7f9b20194390 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f9b1c009fd0 tx=0x7f9b1c00eea0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:14.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.971+0000 7f9b177fe700 1 -- 192.168.123.102:0/4204810506 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9b1c009980 con 0x7f9b20100cc0 2026-03-10T10:18:14.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.971+0000 7f9b287da700 1 -- 192.168.123.102:0/4204810506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9b20199ae0 con 0x7f9b20100cc0 2026-03-10T10:18:14.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.971+0000 7f9b177fe700 1 -- 192.168.123.102:0/4204810506 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9b1c004500 con 0x7f9b20100cc0 2026-03-10T10:18:14.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.972+0000 7f9b177fe700 1 -- 192.168.123.102:0/4204810506 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9b1c010450 con 0x7f9b20100cc0 2026-03-10T10:18:14.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.972+0000 7f9b287da700 1 -- 
192.168.123.102:0/4204810506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9b2019a030 con 0x7f9b20100cc0 2026-03-10T10:18:14.974 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.973+0000 7f9b287da700 1 -- 192.168.123.102:0/4204810506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9b2004ea90 con 0x7f9b20100cc0 2026-03-10T10:18:14.974 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.974+0000 7f9b177fe700 1 -- 192.168.123.102:0/4204810506 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f9b1c00cca0 con 0x7f9b20100cc0 2026-03-10T10:18:14.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.974+0000 7f9b177fe700 1 --2- 192.168.123.102:0/4204810506 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9b0c070740 0x7f9b0c072c00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:14.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.975+0000 7f9b25d75700 1 --2- 192.168.123.102:0/4204810506 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9b0c070740 0x7f9b0c072c00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:14.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.975+0000 7f9b177fe700 1 -- 192.168.123.102:0/4204810506 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f9b1c014070 con 0x7f9b20100cc0 2026-03-10T10:18:14.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.975+0000 7f9b25d75700 1 --2- 192.168.123.102:0/4204810506 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9b0c070740 0x7f9b0c072c00 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f9b100051d0 
tx=0x7f9b10005230 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:14.977 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:14.976+0000 7f9b177fe700 1 -- 192.168.123.102:0/4204810506 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f9b1c059890 con 0x7f9b20100cc0 2026-03-10T10:18:15.020 INFO:tasks.workunit.client.0.vm02.stderr:+ tar xzf ltp-full.tgz 2026-03-10T10:18:15.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.166+0000 7f9b287da700 1 -- 192.168.123.102:0/4204810506 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f9b20106810 con 0x7f9b0c070740 2026-03-10T10:18:15.167 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.167+0000 7f9b177fe700 1 -- 192.168.123.102:0/4204810506 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7f9b20106810 con 0x7f9b0c070740 2026-03-10T10:18:15.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.171+0000 7f9b157fa700 1 -- 192.168.123.102:0/4204810506 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9b0c070740 msgr2=0x7f9b0c072c00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:15.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.171+0000 7f9b157fa700 1 --2- 192.168.123.102:0/4204810506 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9b0c070740 0x7f9b0c072c00 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f9b100051d0 tx=0x7f9b10005230 comp rx=0 tx=0).stop 2026-03-10T10:18:15.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.171+0000 7f9b157fa700 1 -- 192.168.123.102:0/4204810506 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b20100cc0 msgr2=0x7f9b20194390 
secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:15.172 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.171+0000 7f9b157fa700 1 --2- 192.168.123.102:0/4204810506 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b20100cc0 0x7f9b20194390 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f9b1c009fd0 tx=0x7f9b1c00eea0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.172 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.171+0000 7f9b157fa700 1 -- 192.168.123.102:0/4204810506 shutdown_connections 2026-03-10T10:18:15.172 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.171+0000 7f9b157fa700 1 --2- 192.168.123.102:0/4204810506 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f9b0c070740 0x7f9b0c072c00 secure :-1 s=CLOSED pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f9b100051d0 tx=0x7f9b10005230 comp rx=0 tx=0).stop 2026-03-10T10:18:15.172 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.171+0000 7f9b157fa700 1 --2- 192.168.123.102:0/4204810506 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b20100cc0 0x7f9b20194390 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.172 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.171+0000 7f9b157fa700 1 --2- 192.168.123.102:0/4204810506 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9b20101ec0 0x7f9b201948d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.172 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.171+0000 7f9b157fa700 1 -- 192.168.123.102:0/4204810506 >> 192.168.123.102:0/4204810506 conn(0x7f9b200fc240 msgr2=0x7f9b201050f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:15.172 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.171+0000 7f9b157fa700 1 -- 192.168.123.102:0/4204810506 shutdown_connections 2026-03-10T10:18:15.172 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.172+0000 7f9b157fa700 1 -- 192.168.123.102:0/4204810506 wait complete. 2026-03-10T10:18:15.189 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:18:15.289 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.288+0000 7f8d9f974700 1 -- 192.168.123.102:0/3947516697 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d9810a700 msgr2=0x7f8d9810cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:15.289 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.288+0000 7f8d9f974700 1 --2- 192.168.123.102:0/3947516697 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d9810a700 0x7f8d9810cb90 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f8d9000b3a0 tx=0x7f8d9000b6b0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.289 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.288+0000 7f8d9f974700 1 -- 192.168.123.102:0/3947516697 shutdown_connections 2026-03-10T10:18:15.289 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.288+0000 7f8d9f974700 1 --2- 192.168.123.102:0/3947516697 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d9810a700 0x7f8d9810cb90 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.289 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.288+0000 7f8d9f974700 1 --2- 192.168.123.102:0/3947516697 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d98107d90 0x7f8d9810a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.290 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.288+0000 7f8d9f974700 1 -- 192.168.123.102:0/3947516697 >> 192.168.123.102:0/3947516697 conn(0x7f8d9806dae0 msgr2=0x7f8d9806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:15.290 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.288+0000 
7f8d9f974700 1 -- 192.168.123.102:0/3947516697 shutdown_connections 2026-03-10T10:18:15.290 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.289+0000 7f8d9f974700 1 -- 192.168.123.102:0/3947516697 wait complete. 2026-03-10T10:18:15.290 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.289+0000 7f8d9f974700 1 Processor -- start 2026-03-10T10:18:15.290 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.289+0000 7f8d9f974700 1 -- start start 2026-03-10T10:18:15.290 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.290+0000 7f8d9f974700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d98107d90 0x7f8d981a5440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:15.290 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.290+0000 7f8d9f974700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d981a5980 0x7f8d981aa9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:15.290 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.290+0000 7f8d9f974700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8d981a5e90 con 0x7f8d98107d90 2026-03-10T10:18:15.290 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.290+0000 7f8d9f974700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8d981a6000 con 0x7f8d981a5980 2026-03-10T10:18:15.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.291+0000 7f8d9d710700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d98107d90 0x7f8d981a5440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:15.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.291+0000 7f8d9d710700 1 --2- >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d98107d90 0x7f8d981a5440 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:38442/0 (socket says 192.168.123.102:38442) 2026-03-10T10:18:15.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.291+0000 7f8d9d710700 1 -- 192.168.123.102:0/1325584108 learned_addr learned my addr 192.168.123.102:0/1325584108 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:15.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.291+0000 7f8d9cf0f700 1 --2- 192.168.123.102:0/1325584108 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d981a5980 0x7f8d981aa9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:15.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.291+0000 7f8d9d710700 1 -- 192.168.123.102:0/1325584108 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d981a5980 msgr2=0x7f8d981aa9f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:15.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.291+0000 7f8d9d710700 1 --2- 192.168.123.102:0/1325584108 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d981a5980 0x7f8d981aa9f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.291+0000 7f8d9d710700 1 -- 192.168.123.102:0/1325584108 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8d9000b050 con 0x7f8d98107d90 2026-03-10T10:18:15.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.291+0000 7f8d9d710700 1 --2- 192.168.123.102:0/1325584108 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d98107d90 0x7f8d981a5440 secure :-1 s=READY pgs=295 cs=0 l=1 rev1=1 crypto rx=0x7f8d9400eb10 tx=0x7f8d9400ee20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:15.343 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.342+0000 7f8d8e7fc700 1 -- 192.168.123.102:0/1325584108 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8d9400cc40 con 0x7f8d98107d90 2026-03-10T10:18:15.343 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.342+0000 7f8d9f974700 1 -- 192.168.123.102:0/1325584108 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8d981aaf90 con 0x7f8d98107d90 2026-03-10T10:18:15.343 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.342+0000 7f8d9f974700 1 -- 192.168.123.102:0/1325584108 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8d981ab450 con 0x7f8d98107d90 2026-03-10T10:18:15.343 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.343+0000 7f8d8e7fc700 1 -- 192.168.123.102:0/1325584108 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8d9400cda0 con 0x7f8d98107d90 2026-03-10T10:18:15.343 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.343+0000 7f8d9f974700 1 -- 192.168.123.102:0/1325584108 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8d9804ea90 con 0x7f8d98107d90 2026-03-10T10:18:15.345 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.344+0000 7f8d8e7fc700 1 -- 192.168.123.102:0/1325584108 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8d940105e0 con 0x7f8d98107d90 2026-03-10T10:18:15.346 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.345+0000 
7f8d8e7fc700 1 -- 192.168.123.102:0/1325584108 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f8d94010740 con 0x7f8d98107d90 2026-03-10T10:18:15.347 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.346+0000 7f8d8e7fc700 1 --2- 192.168.123.102:0/1325584108 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8d8406c580 0x7f8d8406ea40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:15.347 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.346+0000 7f8d8e7fc700 1 -- 192.168.123.102:0/1325584108 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f8d94014070 con 0x7f8d98107d90 2026-03-10T10:18:15.347 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.346+0000 7f8d9cf0f700 1 --2- 192.168.123.102:0/1325584108 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8d8406c580 0x7f8d8406ea40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:15.347 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.347+0000 7f8d9cf0f700 1 --2- 192.168.123.102:0/1325584108 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8d8406c580 0x7f8d8406ea40 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f8d9000ba80 tx=0x7f8d9000bee0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:15.352 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.351+0000 7f8d8e7fc700 1 -- 192.168.123.102:0/1325584108 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f8d9405add0 con 0x7f8d98107d90 2026-03-10T10:18:15.555 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.554+0000 7f8d9f974700 1 -- 
192.168.123.102:0/1325584108 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f8d9810a1a0 con 0x7f8d8406c580 2026-03-10T10:18:15.555 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:15 vm02.local ceph-mon[50200]: Upgrade: Target is version 19.2.3-678-ge911bdeb (unknown) 2026-03-10T10:18:15.555 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:15 vm02.local ceph-mon[50200]: Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T10:18:15.555 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:15 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:18:15.555 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:15 vm02.local ceph-mon[50200]: Upgrade: Need to upgrade myself (mgr.vm02.zmavgl) 2026-03-10T10:18:15.555 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:15 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:15.557 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.556+0000 7f8d8e7fc700 1 -- 192.168.123.102:0/1325584108 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7f8d9810a1a0 con 0x7f8d8406c580 2026-03-10T10:18:15.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.561+0000 7f8d83fff700 1 -- 192.168.123.102:0/1325584108 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8d8406c580 msgr2=0x7f8d8406ea40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:15.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.561+0000 7f8d83fff700 1 --2- 
192.168.123.102:0/1325584108 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8d8406c580 0x7f8d8406ea40 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f8d9000ba80 tx=0x7f8d9000bee0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.561+0000 7f8d83fff700 1 -- 192.168.123.102:0/1325584108 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d98107d90 msgr2=0x7f8d981a5440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:15.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.561+0000 7f8d83fff700 1 --2- 192.168.123.102:0/1325584108 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d98107d90 0x7f8d981a5440 secure :-1 s=READY pgs=295 cs=0 l=1 rev1=1 crypto rx=0x7f8d9400eb10 tx=0x7f8d9400ee20 comp rx=0 tx=0).stop 2026-03-10T10:18:15.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.561+0000 7f8d83fff700 1 -- 192.168.123.102:0/1325584108 shutdown_connections 2026-03-10T10:18:15.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.561+0000 7f8d83fff700 1 --2- 192.168.123.102:0/1325584108 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d98107d90 0x7f8d981a5440 unknown :-1 s=CLOSED pgs=295 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.561+0000 7f8d83fff700 1 --2- 192.168.123.102:0/1325584108 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8d8406c580 0x7f8d8406ea40 unknown :-1 s=CLOSED pgs=129 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.561+0000 7f8d83fff700 1 --2- 192.168.123.102:0/1325584108 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d981a5980 0x7f8d981aa9f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:18:15.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.561+0000 7f8d83fff700 1 -- 192.168.123.102:0/1325584108 >> 192.168.123.102:0/1325584108 conn(0x7f8d9806dae0 msgr2=0x7f8d9806e7c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:15.572 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.566+0000 7f8d83fff700 1 -- 192.168.123.102:0/1325584108 shutdown_connections 2026-03-10T10:18:15.572 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.566+0000 7f8d83fff700 1 -- 192.168.123.102:0/1325584108 wait complete. 2026-03-10T10:18:15.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.742+0000 7f662c508700 1 -- 192.168.123.102:0/1325199349 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6624101800 msgr2=0x7f6624103bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:15.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.742+0000 7f662c508700 1 --2- 192.168.123.102:0/1325199349 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6624101800 0x7f6624103bf0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f6620009a60 tx=0x7f6620009d70 comp rx=0 tx=0).stop 2026-03-10T10:18:15.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.743+0000 7f662c508700 1 -- 192.168.123.102:0/1325199349 shutdown_connections 2026-03-10T10:18:15.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.743+0000 7f662c508700 1 --2- 192.168.123.102:0/1325199349 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6624104130 0x7f6624106520 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.743+0000 7f662c508700 1 --2- 192.168.123.102:0/1325199349 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6624101800 0x7f6624103bf0 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-10T10:18:15.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.743+0000 7f662c508700 1 -- 192.168.123.102:0/1325199349 >> 192.168.123.102:0/1325199349 conn(0x7f66240fb130 msgr2=0x7f66240fd590 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:15.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.743+0000 7f662c508700 1 -- 192.168.123.102:0/1325199349 shutdown_connections 2026-03-10T10:18:15.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.743+0000 7f662c508700 1 -- 192.168.123.102:0/1325199349 wait complete. 2026-03-10T10:18:15.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.744+0000 7f662c508700 1 Processor -- start 2026-03-10T10:18:15.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.744+0000 7f662c508700 1 -- start start 2026-03-10T10:18:15.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.744+0000 7f662c508700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6624101800 0x7f662419cb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:15.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.744+0000 7f662c508700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6624104130 0x7f662419d0c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:15.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.744+0000 7f662c508700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f662419d6e0 con 0x7f6624101800 2026-03-10T10:18:15.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.744+0000 7f662c508700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f662419d820 con 0x7f6624104130 2026-03-10T10:18:15.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.744+0000 7f662a2a4700 
1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6624101800 0x7f662419cb80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:15.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.744+0000 7f6629aa3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6624104130 0x7f662419d0c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:15.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.744+0000 7f6629aa3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6624104130 0x7f662419d0c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:40502/0 (socket says 192.168.123.102:40502) 2026-03-10T10:18:15.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.744+0000 7f6629aa3700 1 -- 192.168.123.102:0/1636671892 learned_addr learned my addr 192.168.123.102:0/1636671892 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:15.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.745+0000 7f662a2a4700 1 -- 192.168.123.102:0/1636671892 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6624104130 msgr2=0x7f662419d0c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:15.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.745+0000 7f662a2a4700 1 --2- 192.168.123.102:0/1636671892 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6624104130 0x7f662419d0c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.745+0000 7f662a2a4700 1 -- 192.168.123.102:0/1636671892 
--> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6620009710 con 0x7f6624101800 2026-03-10T10:18:15.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.745+0000 7f662a2a4700 1 --2- 192.168.123.102:0/1636671892 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6624101800 0x7f662419cb80 secure :-1 s=READY pgs=296 cs=0 l=1 rev1=1 crypto rx=0x7f6620009a60 tx=0x7f662000f690 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:15.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.745+0000 7f661b7fe700 1 -- 192.168.123.102:0/1636671892 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f662001d070 con 0x7f6624101800 2026-03-10T10:18:15.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.745+0000 7f661b7fe700 1 -- 192.168.123.102:0/1636671892 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f662000fbf0 con 0x7f6624101800 2026-03-10T10:18:15.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.745+0000 7f661b7fe700 1 -- 192.168.123.102:0/1636671892 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f66200176d0 con 0x7f6624101800 2026-03-10T10:18:15.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.746+0000 7f662c508700 1 -- 192.168.123.102:0/1636671892 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f66241a2270 con 0x7f6624101800 2026-03-10T10:18:15.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.746+0000 7f662c508700 1 -- 192.168.123.102:0/1636671892 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f66241a2710 con 0x7f6624101800 2026-03-10T10:18:15.747 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.747+0000 7f662c508700 1 -- 
192.168.123.102:0/1636671892 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6608005320 con 0x7f6624101800 2026-03-10T10:18:15.749 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.748+0000 7f661b7fe700 1 -- 192.168.123.102:0/1636671892 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f6620021a80 con 0x7f6624101800 2026-03-10T10:18:15.749 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.748+0000 7f661b7fe700 1 --2- 192.168.123.102:0/1636671892 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f661006c490 0x7f661006e950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:15.749 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.749+0000 7f6629aa3700 1 --2- 192.168.123.102:0/1636671892 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f661006c490 0x7f661006e950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:15.749 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.749+0000 7f661b7fe700 1 -- 192.168.123.102:0/1636671892 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f662005f070 con 0x7f6624101800 2026-03-10T10:18:15.751 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.750+0000 7f6629aa3700 1 --2- 192.168.123.102:0/1636671892 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f661006c490 0x7f661006e950 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f6614005950 tx=0x7f6614009500 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:15.755 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.754+0000 7f661b7fe700 1 -- 192.168.123.102:0/1636671892 <== mon.0 
v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f662005b0c0 con 0x7f6624101800 2026-03-10T10:18:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:15 vm05.local ceph-mon[59051]: Upgrade: Target is version 19.2.3-678-ge911bdeb (unknown) 2026-03-10T10:18:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:15 vm05.local ceph-mon[59051]: Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T10:18:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:15 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:18:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:15 vm05.local ceph-mon[59051]: Upgrade: Need to upgrade myself (mgr.vm02.zmavgl) 2026-03-10T10:18:15.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:15 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:15.937 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.937+0000 7f662c508700 1 -- 192.168.123.102:0/1636671892 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f6608000bf0 con 0x7f661006c490 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (2m) 73s ago 3m 22.5M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (3m) 73s ago 3m 8154k - 
18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (2m) 74s ago 2m 8166k - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (3m) 73s ago 3m 7415k - 18.2.1 5be31c24972a 51802fb57170 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (2m) 74s ago 2m 7407k - 18.2.1 5be31c24972a f275982dc269 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (2m) 73s ago 3m 78.5M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (79s) 73s ago 79s 12.0M - 18.2.1 5be31c24972a e97c369450c8 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (81s) 73s ago 81s 14.3M - 18.2.1 5be31c24972a 56b76ae59bcb 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (80s) 74s ago 80s 12.2M - 18.2.1 5be31c24972a 02b882918ab0 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (78s) 74s ago 78s 16.6M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:9283,8765,8443 running (4m) 73s ago 4m 502M - 18.2.1 5be31c24972a 8bea583521d3 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (2m) 74s ago 2m 450M - 18.2.1 5be31c24972a ff545ad0664a 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (4m) 73s ago 4m 52.7M 2048M 18.2.1 5be31c24972a ab92d831cc1d 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (2m) 74s ago 2m 45.0M 2048M 18.2.1 5be31c24972a cea7d23f93a6 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 
vm02 *:9100 running (3m) 73s ago 3m 16.0M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (2m) 74s ago 2m 14.6M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (2m) 73s ago 2m 45.5M 4096M 18.2.1 5be31c24972a 9d7f135a3f3b 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (2m) 73s ago 2m 46.2M 4096M 18.2.1 5be31c24972a 1b0a42d8ac01 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (2m) 73s ago 2m 45.2M 4096M 18.2.1 5be31c24972a 567f579c058e 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (2m) 74s ago 2m 43.6M 4096M 18.2.1 5be31c24972a 80ac26035893 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (115s) 74s ago 115s 43.8M 4096M 18.2.1 5be31c24972a c8a0a41b6654 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (105s) 74s ago 105s 43.6M 4096M 18.2.1 5be31c24972a e9be055e12ba 2026-03-10T10:18:15.948 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (2m) 73s ago 3m 34.3M - 2.43.0 a07b618ecd1d a607fd039cb6 2026-03-10T10:18:15.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.945+0000 7f661b7fe700 1 -- 192.168.123.102:0/1636671892 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3288 (secure 0 0 0) 0x7f6608000bf0 con 0x7f661006c490 2026-03-10T10:18:15.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.948+0000 7f66197fa700 1 -- 192.168.123.102:0/1636671892 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f661006c490 msgr2=0x7f661006e950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:15.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.948+0000 7f66197fa700 1 --2- 
192.168.123.102:0/1636671892 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f661006c490 0x7f661006e950 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f6614005950 tx=0x7f6614009500 comp rx=0 tx=0).stop 2026-03-10T10:18:15.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.948+0000 7f66197fa700 1 -- 192.168.123.102:0/1636671892 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6624101800 msgr2=0x7f662419cb80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:15.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.948+0000 7f66197fa700 1 --2- 192.168.123.102:0/1636671892 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6624101800 0x7f662419cb80 secure :-1 s=READY pgs=296 cs=0 l=1 rev1=1 crypto rx=0x7f6620009a60 tx=0x7f662000f690 comp rx=0 tx=0).stop 2026-03-10T10:18:15.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.948+0000 7f66197fa700 1 -- 192.168.123.102:0/1636671892 shutdown_connections 2026-03-10T10:18:15.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.948+0000 7f66197fa700 1 --2- 192.168.123.102:0/1636671892 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6624101800 0x7f662419cb80 unknown :-1 s=CLOSED pgs=296 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.948+0000 7f66197fa700 1 --2- 192.168.123.102:0/1636671892 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f661006c490 0x7f661006e950 unknown :-1 s=CLOSED pgs=130 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:15.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.948+0000 7f66197fa700 1 --2- 192.168.123.102:0/1636671892 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6624104130 0x7f662419d0c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:18:15.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.948+0000 7f66197fa700 1 -- 192.168.123.102:0/1636671892 >> 192.168.123.102:0/1636671892 conn(0x7f66240fb130 msgr2=0x7f6624104da0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:15.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.948+0000 7f66197fa700 1 -- 192.168.123.102:0/1636671892 shutdown_connections 2026-03-10T10:18:15.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:15.948+0000 7f66197fa700 1 -- 192.168.123.102:0/1636671892 wait complete. 2026-03-10T10:18:16.044 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.043+0000 7f44253f4700 1 -- 192.168.123.102:0/903136021 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4420075a40 msgr2=0x7f4420077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:16.044 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.043+0000 7f44253f4700 1 --2- 192.168.123.102:0/903136021 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4420075a40 0x7f4420077ed0 secure :-1 s=READY pgs=297 cs=0 l=1 rev1=1 crypto rx=0x7f441800cd40 tx=0x7f441800a320 comp rx=0 tx=0).stop 2026-03-10T10:18:16.044 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.043+0000 7f44253f4700 1 -- 192.168.123.102:0/903136021 shutdown_connections 2026-03-10T10:18:16.044 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.043+0000 7f44253f4700 1 --2- 192.168.123.102:0/903136021 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4420075a40 0x7f4420077ed0 unknown :-1 s=CLOSED pgs=297 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.044 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.043+0000 7f44253f4700 1 --2- 192.168.123.102:0/903136021 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4420072b50 0x7f4420072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T10:18:16.044 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.043+0000 7f44253f4700 1 -- 192.168.123.102:0/903136021 >> 192.168.123.102:0/903136021 conn(0x7f442006dae0 msgr2=0x7f442006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.043+0000 7f44253f4700 1 -- 192.168.123.102:0/903136021 shutdown_connections 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.043+0000 7f44253f4700 1 -- 192.168.123.102:0/903136021 wait complete. 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.044+0000 7f44253f4700 1 Processor -- start 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.044+0000 7f44253f4700 1 -- start start 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.044+0000 7f44253f4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4420072b50 0x7f4420082f30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.044+0000 7f44253f4700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4420083470 0x7f44200838f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.044+0000 7f44253f4700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f44201b33e0 con 0x7f4420083470 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.044+0000 7f44253f4700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f44201b3550 con 0x7f4420072b50 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.045+0000 7f441e7fc700 1 --2- >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4420083470 0x7f44200838f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.045+0000 7f441e7fc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4420083470 0x7f44200838f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:38470/0 (socket says 192.168.123.102:38470) 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.045+0000 7f441e7fc700 1 -- 192.168.123.102:0/3960836119 learned_addr learned my addr 192.168.123.102:0/3960836119 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.045+0000 7f441effd700 1 --2- 192.168.123.102:0/3960836119 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4420072b50 0x7f4420082f30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.045+0000 7f441e7fc700 1 -- 192.168.123.102:0/3960836119 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4420072b50 msgr2=0x7f4420082f30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.045+0000 7f441e7fc700 1 --2- 192.168.123.102:0/3960836119 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4420072b50 0x7f4420082f30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.045+0000 7f441e7fc700 1 -- 
192.168.123.102:0/3960836119 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f441800c9f0 con 0x7f4420083470 2026-03-10T10:18:16.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.045+0000 7f441e7fc700 1 --2- 192.168.123.102:0/3960836119 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4420083470 0x7f44200838f0 secure :-1 s=READY pgs=298 cs=0 l=1 rev1=1 crypto rx=0x7f441800bb40 tx=0x7f441800bb70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:16.047 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.047+0000 7f4407fff700 1 -- 192.168.123.102:0/3960836119 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f44180047e0 con 0x7f4420083470 2026-03-10T10:18:16.049 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.047+0000 7f44253f4700 1 -- 192.168.123.102:0/3960836119 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f44201b37d0 con 0x7f4420083470 2026-03-10T10:18:16.049 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.047+0000 7f44253f4700 1 -- 192.168.123.102:0/3960836119 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f44201b3d20 con 0x7f4420083470 2026-03-10T10:18:16.049 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.048+0000 7f4407fff700 1 -- 192.168.123.102:0/3960836119 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f4418009d70 con 0x7f4420083470 2026-03-10T10:18:16.049 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.048+0000 7f4407fff700 1 -- 192.168.123.102:0/3960836119 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f441801fb40 con 0x7f4420083470 2026-03-10T10:18:16.049 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.049+0000 7f44253f4700 1 -- 192.168.123.102:0/3960836119 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f440c005320 con 0x7f4420083470 2026-03-10T10:18:16.050 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.050+0000 7f4407fff700 1 -- 192.168.123.102:0/3960836119 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f44180074e0 con 0x7f4420083470 2026-03-10T10:18:16.051 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.050+0000 7f4407fff700 1 --2- 192.168.123.102:0/3960836119 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f440806c600 0x7f440806eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:16.051 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.051+0000 7f4407fff700 1 -- 192.168.123.102:0/3960836119 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f441800de40 con 0x7f4420083470 2026-03-10T10:18:16.051 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.051+0000 7f441effd700 1 --2- 192.168.123.102:0/3960836119 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f440806c600 0x7f440806eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:16.055 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.054+0000 7f441effd700 1 --2- 192.168.123.102:0/3960836119 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f440806c600 0x7f440806eac0 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7f4410009c80 tx=0x7f4410009400 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:16.055 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.054+0000 7f4407fff700 1 -- 192.168.123.102:0/3960836119 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f441805bbb0 con 0x7f4420083470 2026-03-10T10:18:16.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.277+0000 7f44253f4700 1 -- 192.168.123.102:0/3960836119 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f440c005cc0 con 0x7f4420083470 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.280+0000 7f4407fff700 1 -- 192.168.123.102:0/3960836119 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+558 (secure 0 0 0) 0x7f441805b740 con 0x7f4420083470 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 2 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 2 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 6 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 
(7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 14 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:18:16.281 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:18:16.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.282+0000 7f4405ffb700 1 -- 192.168.123.102:0/3960836119 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f440806c600 msgr2=0x7f440806eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:16.284 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.282+0000 7f4405ffb700 1 --2- 192.168.123.102:0/3960836119 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f440806c600 0x7f440806eac0 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7f4410009c80 tx=0x7f4410009400 comp rx=0 tx=0).stop 2026-03-10T10:18:16.284 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.284+0000 7f4405ffb700 1 -- 192.168.123.102:0/3960836119 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4420083470 msgr2=0x7f44200838f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:16.284 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.284+0000 7f4405ffb700 1 --2- 192.168.123.102:0/3960836119 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4420083470 0x7f44200838f0 secure :-1 s=READY pgs=298 cs=0 l=1 rev1=1 crypto rx=0x7f441800bb40 tx=0x7f441800bb70 comp rx=0 tx=0).stop 2026-03-10T10:18:16.284 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.284+0000 7f4405ffb700 1 -- 192.168.123.102:0/3960836119 shutdown_connections 2026-03-10T10:18:16.284 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.284+0000 7f4405ffb700 1 --2- 192.168.123.102:0/3960836119 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f440806c600 0x7f440806eac0 unknown :-1 s=CLOSED pgs=131 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.284 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.284+0000 7f4405ffb700 1 --2- 192.168.123.102:0/3960836119 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4420072b50 0x7f4420082f30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.284 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.284+0000 7f4405ffb700 1 --2- 192.168.123.102:0/3960836119 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4420083470 0x7f44200838f0 unknown :-1 s=CLOSED pgs=298 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.284 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.284+0000 7f4405ffb700 1 -- 192.168.123.102:0/3960836119 >> 192.168.123.102:0/3960836119 conn(0x7f442006dae0 msgr2=0x7f442006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:16.285 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.284+0000 7f4405ffb700 1 -- 192.168.123.102:0/3960836119 shutdown_connections 2026-03-10T10:18:16.285 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.284+0000 7f4405ffb700 1 -- 192.168.123.102:0/3960836119 wait complete. 
2026-03-10T10:18:16.395 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.394+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/833999916 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8bc107d90 msgr2=0x7fc8bc10a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:16.396 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.394+0000 7fc8c1bfb700 1 --2- 192.168.123.102:0/833999916 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8bc107d90 0x7fc8bc10a1c0 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7fc8b8009a60 tx=0x7fc8b8009d70 comp rx=0 tx=0).stop 2026-03-10T10:18:16.396 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.395+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/833999916 shutdown_connections 2026-03-10T10:18:16.396 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.395+0000 7fc8c1bfb700 1 --2- 192.168.123.102:0/833999916 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc8bc10a700 0x7fc8bc10cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.396 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.395+0000 7fc8c1bfb700 1 --2- 192.168.123.102:0/833999916 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8bc107d90 0x7fc8bc10a1c0 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.396 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.395+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/833999916 >> 192.168.123.102:0/833999916 conn(0x7fc8bc06daa0 msgr2=0x7fc8bc06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:16.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.397+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/833999916 shutdown_connections 2026-03-10T10:18:16.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.397+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/833999916 wait 
complete. 2026-03-10T10:18:16.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.397+0000 7fc8c1bfb700 1 Processor -- start 2026-03-10T10:18:16.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.397+0000 7fc8c1bfb700 1 -- start start 2026-03-10T10:18:16.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.397+0000 7fc8c1bfb700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc8bc107d90 0x7fc8bc1beb40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:16.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.397+0000 7fc8c1bfb700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8bc10a700 0x7fc8bc1bf080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:16.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.397+0000 7fc8c1bfb700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc8bc1bf6a0 con 0x7fc8bc107d90 2026-03-10T10:18:16.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.397+0000 7fc8c1bfb700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc8bc1bf7e0 con 0x7fc8bc10a700 2026-03-10T10:18:16.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.397+0000 7fc8c0bf9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc8bc107d90 0x7fc8bc1beb40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:16.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.397+0000 7fc8c0bf9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc8bc107d90 0x7fc8bc1beb40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I 
am v2:192.168.123.102:38498/0 (socket says 192.168.123.102:38498) 2026-03-10T10:18:16.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.397+0000 7fc8c0bf9700 1 -- 192.168.123.102:0/991673671 learned_addr learned my addr 192.168.123.102:0/991673671 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:16.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.397+0000 7fc8b3fff700 1 --2- 192.168.123.102:0/991673671 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8bc10a700 0x7fc8bc1bf080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:16.399 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.398+0000 7fc8b3fff700 1 -- 192.168.123.102:0/991673671 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc8bc107d90 msgr2=0x7fc8bc1beb40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:16.399 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.398+0000 7fc8b3fff700 1 --2- 192.168.123.102:0/991673671 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc8bc107d90 0x7fc8bc1beb40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.399 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.398+0000 7fc8b3fff700 1 -- 192.168.123.102:0/991673671 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc8b8009710 con 0x7fc8bc10a700 2026-03-10T10:18:16.399 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.398+0000 7fc8b3fff700 1 --2- 192.168.123.102:0/991673671 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8bc10a700 0x7fc8bc1bf080 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7fc8ac009fd0 tx=0x7fc8ac00ec30 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:18:16.400 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.399+0000 7fc8b1ffb700 1 -- 192.168.123.102:0/991673671 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc8ac00c970 con 0x7fc8bc10a700 2026-03-10T10:18:16.400 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.399+0000 7fc8b1ffb700 1 -- 192.168.123.102:0/991673671 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc8ac011470 con 0x7fc8bc10a700 2026-03-10T10:18:16.401 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.400+0000 7fc8b1ffb700 1 -- 192.168.123.102:0/991673671 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc8ac01a610 con 0x7fc8bc10a700 2026-03-10T10:18:16.401 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.400+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/991673671 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc8bc1c4290 con 0x7fc8bc10a700 2026-03-10T10:18:16.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.400+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/991673671 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc8bc1c47b0 con 0x7fc8bc10a700 2026-03-10T10:18:16.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.402+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/991673671 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc8bc1b8d60 con 0x7fc8bc10a700 2026-03-10T10:18:16.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.403+0000 7fc8b1ffb700 1 -- 192.168.123.102:0/991673671 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fc8ac00cad0 con 0x7fc8bc10a700 2026-03-10T10:18:16.404 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.403+0000 
7fc8b1ffb700 1 --2- 192.168.123.102:0/991673671 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc8a406c400 0x7fc8a406e8c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:16.404 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.403+0000 7fc8b1ffb700 1 -- 192.168.123.102:0/991673671 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7fc8ac08b730 con 0x7fc8bc10a700 2026-03-10T10:18:16.405 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.404+0000 7fc8c0bf9700 1 --2- 192.168.123.102:0/991673671 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc8a406c400 0x7fc8a406e8c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:16.406 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.406+0000 7fc8c0bf9700 1 --2- 192.168.123.102:0/991673671 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc8a406c400 0x7fc8a406e8c0 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7fc8b800b5c0 tx=0x7fc8b80038f0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:16.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.407+0000 7fc8b1ffb700 1 -- 192.168.123.102:0/991673671 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fc8ac0597d0 con 0x7fc8bc10a700 2026-03-10T10:18:16.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.627+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/991673671 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fc8bc1c40f0 con 0x7fc8bc10a700 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.628+0000 7fc8b1ffb700 1 -- 
192.168.123.102:0/991673671 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1854 (secure 0 0 0) 0x7fc8bc1c40f0 con 0x7fc8bc10a700 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:e15 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 
2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464} 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:18:16.629 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:18:16.630 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:18:16.630 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:18:16.630 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:18:16.630 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:18:16.630 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:18:16.630 
INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:18:16.630 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:18:16.630 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:18:16.630 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:18:16.630 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:18:16.630 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:18:16.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.634+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/991673671 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc8a406c400 msgr2=0x7fc8a406e8c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:16.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.634+0000 7fc8c1bfb700 1 --2- 192.168.123.102:0/991673671 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc8a406c400 0x7fc8a406e8c0 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7fc8b800b5c0 tx=0x7fc8b80038f0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.634+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/991673671 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8bc10a700 msgr2=0x7fc8bc1bf080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:16.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.634+0000 7fc8c1bfb700 1 --2- 192.168.123.102:0/991673671 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8bc10a700 0x7fc8bc1bf080 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7fc8ac009fd0 tx=0x7fc8ac00ec30 
comp rx=0 tx=0).stop 2026-03-10T10:18:16.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.634+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/991673671 shutdown_connections 2026-03-10T10:18:16.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.634+0000 7fc8c1bfb700 1 --2- 192.168.123.102:0/991673671 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc8bc107d90 0x7fc8bc1beb40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.634+0000 7fc8c1bfb700 1 --2- 192.168.123.102:0/991673671 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fc8a406c400 0x7fc8a406e8c0 unknown :-1 s=CLOSED pgs=132 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.634+0000 7fc8c1bfb700 1 --2- 192.168.123.102:0/991673671 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8bc10a700 0x7fc8bc1bf080 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.634+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/991673671 >> 192.168.123.102:0/991673671 conn(0x7fc8bc06daa0 msgr2=0x7fc8bc06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:16.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.635+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/991673671 shutdown_connections 2026-03-10T10:18:16.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.635+0000 7fc8c1bfb700 1 -- 192.168.123.102:0/991673671 wait complete. 
2026-03-10T10:18:16.636 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15 2026-03-10T10:18:16.774 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.773+0000 7fba30fca700 1 -- 192.168.123.102:0/3621664716 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba2c075a40 msgr2=0x7fba2c077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:16.774 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.773+0000 7fba30fca700 1 --2- 192.168.123.102:0/3621664716 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba2c075a40 0x7fba2c077ed0 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7fba2400b210 tx=0x7fba2400b520 comp rx=0 tx=0).stop 2026-03-10T10:18:16.774 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.773+0000 7fba30fca700 1 -- 192.168.123.102:0/3621664716 shutdown_connections 2026-03-10T10:18:16.774 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.773+0000 7fba30fca700 1 --2- 192.168.123.102:0/3621664716 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba2c075a40 0x7fba2c077ed0 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.774 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.773+0000 7fba30fca700 1 --2- 192.168.123.102:0/3621664716 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fba2c072b50 0x7fba2c072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.774 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.773+0000 7fba30fca700 1 -- 192.168.123.102:0/3621664716 >> 192.168.123.102:0/3621664716 conn(0x7fba2c06dae0 msgr2=0x7fba2c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:16.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.773+0000 7fba30fca700 1 -- 192.168.123.102:0/3621664716 shutdown_connections 2026-03-10T10:18:16.775 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.773+0000 7fba30fca700 1 -- 192.168.123.102:0/3621664716 wait complete. 2026-03-10T10:18:16.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.774+0000 7fba30fca700 1 Processor -- start 2026-03-10T10:18:16.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.774+0000 7fba30fca700 1 -- start start 2026-03-10T10:18:16.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.774+0000 7fba30fca700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fba2c072b50 0x7fba2c0830b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:16.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.774+0000 7fba30fca700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba2c0835f0 0x7fba2c12e490 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:16.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.774+0000 7fba30fca700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fba2c083a70 con 0x7fba2c072b50 2026-03-10T10:18:16.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.774+0000 7fba30fca700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fba2c083be0 con 0x7fba2c0835f0 2026-03-10T10:18:16.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.774+0000 7fba29d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba2c0835f0 0x7fba2c12e490 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:16.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.774+0000 7fba29d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba2c0835f0 0x7fba2c12e490 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:46756/0 (socket says 192.168.123.102:46756) 2026-03-10T10:18:16.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.774+0000 7fba29d9b700 1 -- 192.168.123.102:0/2323430370 learned_addr learned my addr 192.168.123.102:0/2323430370 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:16.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.775+0000 7fba29d9b700 1 -- 192.168.123.102:0/2323430370 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fba2c072b50 msgr2=0x7fba2c0830b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:16.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.775+0000 7fba29d9b700 1 --2- 192.168.123.102:0/2323430370 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fba2c072b50 0x7fba2c0830b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.775+0000 7fba29d9b700 1 -- 192.168.123.102:0/2323430370 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fba24009e30 con 0x7fba2c0835f0 2026-03-10T10:18:16.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.775+0000 7fba29d9b700 1 --2- 192.168.123.102:0/2323430370 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba2c0835f0 0x7fba2c12e490 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7fba2400b960 tx=0x7fba24008be0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:16.776 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.776+0000 7fba1b7fe700 1 -- 192.168.123.102:0/2323430370 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fba2400e050 con 
0x7fba2c0835f0 2026-03-10T10:18:16.777 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.776+0000 7fba30fca700 1 -- 192.168.123.102:0/2323430370 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fba2c12e9d0 con 0x7fba2c0835f0 2026-03-10T10:18:16.777 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.776+0000 7fba30fca700 1 -- 192.168.123.102:0/2323430370 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fba2c12ef20 con 0x7fba2c0835f0 2026-03-10T10:18:16.777 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.776+0000 7fba1b7fe700 1 -- 192.168.123.102:0/2323430370 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fba240040f0 con 0x7fba2c0835f0 2026-03-10T10:18:16.777 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.776+0000 7fba1b7fe700 1 -- 192.168.123.102:0/2323430370 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fba2401bc60 con 0x7fba2c0835f0 2026-03-10T10:18:16.777 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.777+0000 7fba30fca700 1 -- 192.168.123.102:0/2323430370 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fba0c005320 con 0x7fba2c0835f0 2026-03-10T10:18:16.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.777+0000 7fba1b7fe700 1 -- 192.168.123.102:0/2323430370 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fba24019040 con 0x7fba2c0835f0 2026-03-10T10:18:16.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.778+0000 7fba1b7fe700 1 --2- 192.168.123.102:0/2323430370 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fba1406c600 0x7fba1406eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:16.778 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.778+0000 7fba1b7fe700 1 -- 192.168.123.102:0/2323430370 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7fba2408ce80 con 0x7fba2c0835f0 2026-03-10T10:18:16.779 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.778+0000 7fba2a59c700 1 --2- 192.168.123.102:0/2323430370 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fba1406c600 0x7fba1406eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:16.779 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.779+0000 7fba2a59c700 1 --2- 192.168.123.102:0/2323430370 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fba1406c600 0x7fba1406eac0 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7fba1c005fd0 tx=0x7fba1c00c040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:16.782 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.781+0000 7fba1b7fe700 1 -- 192.168.123.102:0/2323430370 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fba24057720 con 0x7fba2c0835f0 2026-03-10T10:18:16.967 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:16 vm02.local ceph-mon[50200]: pgmap v117: 65 pgs: 65 active+clean; 768 KiB data, 164 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 29 KiB/s wr, 6 op/s 2026-03-10T10:18:16.967 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:16 vm02.local ceph-mon[50200]: Upgrade: Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc on vm05 2026-03-10T10:18:16.967 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:16 vm02.local ceph-mon[50200]: from='client.24371 -' entity='client.admin' cmd=[{"prefix": 
"orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:16.967 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:16 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/3960836119' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:18:16.967 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.966+0000 7fba30fca700 1 -- 192.168.123.102:0/2323430370 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fba0c000bf0 con 0x7fba1406c600 2026-03-10T10:18:16.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.970+0000 7fba1b7fe700 1 -- 192.168.123.102:0/2323430370 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7fba0c000bf0 con 0x7fba1406c600 2026-03-10T10:18:16.971 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:18:16.971 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T10:18:16.971 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:18:16.971 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:18:16.971 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [], 2026-03-10T10:18:16.971 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "0/23 daemons upgraded", 2026-03-10T10:18:16.971 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc image on host vm05", 2026-03-10T10:18:16.971 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:18:16.971 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:18:16.977 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.976+0000 
7fba30fca700 1 -- 192.168.123.102:0/2323430370 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fba1406c600 msgr2=0x7fba1406eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:16.977 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.976+0000 7fba30fca700 1 --2- 192.168.123.102:0/2323430370 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fba1406c600 0x7fba1406eac0 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7fba1c005fd0 tx=0x7fba1c00c040 comp rx=0 tx=0).stop 2026-03-10T10:18:16.977 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.976+0000 7fba30fca700 1 -- 192.168.123.102:0/2323430370 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba2c0835f0 msgr2=0x7fba2c12e490 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:16.977 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.976+0000 7fba30fca700 1 --2- 192.168.123.102:0/2323430370 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba2c0835f0 0x7fba2c12e490 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7fba2400b960 tx=0x7fba24008be0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.977 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.976+0000 7fba30fca700 1 -- 192.168.123.102:0/2323430370 shutdown_connections 2026-03-10T10:18:16.977 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.976+0000 7fba30fca700 1 --2- 192.168.123.102:0/2323430370 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fba2c072b50 0x7fba2c0830b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.977 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.976+0000 7fba30fca700 1 --2- 192.168.123.102:0/2323430370 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fba1406c600 0x7fba1406eac0 unknown :-1 s=CLOSED pgs=133 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:18:16.977 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.976+0000 7fba30fca700 1 --2- 192.168.123.102:0/2323430370 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba2c0835f0 0x7fba2c12e490 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:16.977 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.976+0000 7fba30fca700 1 -- 192.168.123.102:0/2323430370 >> 192.168.123.102:0/2323430370 conn(0x7fba2c06dae0 msgr2=0x7fba2c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:16.977 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.976+0000 7fba30fca700 1 -- 192.168.123.102:0/2323430370 shutdown_connections 2026-03-10T10:18:16.977 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:16.976+0000 7fba30fca700 1 -- 192.168.123.102:0/2323430370 wait complete. 2026-03-10T10:18:17.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:16 vm05.local ceph-mon[59051]: pgmap v117: 65 pgs: 65 active+clean; 768 KiB data, 164 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 29 KiB/s wr, 6 op/s 2026-03-10T10:18:17.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:16 vm05.local ceph-mon[59051]: Upgrade: Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc on vm05 2026-03-10T10:18:17.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:16 vm05.local ceph-mon[59051]: from='client.24371 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:17.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:16 vm05.local ceph-mon[59051]: from='client.? 
192.168.123.102:0/3960836119' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:18:17.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.081+0000 7f8d083a6700 1 -- 192.168.123.102:0/3870896874 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d00072b50 msgr2=0x7f8d00072f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:17.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.081+0000 7f8d083a6700 1 --2- 192.168.123.102:0/3870896874 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d00072b50 0x7f8d00072f70 secure :-1 s=READY pgs=299 cs=0 l=1 rev1=1 crypto rx=0x7f8cfc008790 tx=0x7f8cfc008aa0 comp rx=0 tx=0).stop 2026-03-10T10:18:17.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.082+0000 7f8d083a6700 1 -- 192.168.123.102:0/3870896874 shutdown_connections 2026-03-10T10:18:17.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.082+0000 7f8d083a6700 1 --2- 192.168.123.102:0/3870896874 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d00075a40 0x7f8d00077ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:17.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.082+0000 7f8d083a6700 1 --2- 192.168.123.102:0/3870896874 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d00072b50 0x7f8d00072f70 unknown :-1 s=CLOSED pgs=299 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:17.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.082+0000 7f8d083a6700 1 -- 192.168.123.102:0/3870896874 >> 192.168.123.102:0/3870896874 conn(0x7f8d0006dae0 msgr2=0x7f8d0006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.082+0000 7f8d083a6700 1 -- 192.168.123.102:0/3870896874 shutdown_connections 2026-03-10T10:18:17.084 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.082+0000 7f8d083a6700 1 -- 192.168.123.102:0/3870896874 wait complete. 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.083+0000 7f8d083a6700 1 Processor -- start 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.083+0000 7f8d083a6700 1 -- start start 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.083+0000 7f8d083a6700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d00075a40 0x7f8d000830d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.083+0000 7f8d083a6700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d00083610 0x7f8d001b30e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.083+0000 7f8d083a6700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8d00083b50 con 0x7f8d00075a40 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.083+0000 7f8d083a6700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8d00083cc0 con 0x7f8d00083610 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.084+0000 7f8d06142700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d00075a40 0x7f8d000830d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.084+0000 7f8d06142700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d00075a40 0x7f8d000830d0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:47068/0 (socket says 192.168.123.102:47068) 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.084+0000 7f8d06142700 1 -- 192.168.123.102:0/3858166959 learned_addr learned my addr 192.168.123.102:0/3858166959 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.084+0000 7f8d05941700 1 --2- 192.168.123.102:0/3858166959 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d00083610 0x7f8d001b30e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.084+0000 7f8d05941700 1 -- 192.168.123.102:0/3858166959 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d00075a40 msgr2=0x7f8d000830d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:17.084 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.084+0000 7f8d05941700 1 --2- 192.168.123.102:0/3858166959 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d00075a40 0x7f8d000830d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:17.085 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.084+0000 7f8d05941700 1 -- 192.168.123.102:0/3858166959 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8cfc008440 con 0x7f8d00083610 2026-03-10T10:18:17.085 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.084+0000 7f8d05941700 1 --2- 192.168.123.102:0/3858166959 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d00083610 0x7f8d001b30e0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto 
rx=0x7f8cf8007ce0 tx=0x7f8cf800d0b0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:17.085 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.085+0000 7f8cf77fe700 1 -- 192.168.123.102:0/3858166959 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8cf800b590 con 0x7f8d00083610 2026-03-10T10:18:17.086 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.085+0000 7f8d083a6700 1 -- 192.168.123.102:0/3858166959 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8d001b3740 con 0x7f8d00083610 2026-03-10T10:18:17.087 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.085+0000 7f8d083a6700 1 -- 192.168.123.102:0/3858166959 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8d001b3c10 con 0x7f8d00083610 2026-03-10T10:18:17.087 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.086+0000 7f8cf77fe700 1 -- 192.168.123.102:0/3858166959 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8cf8016e10 con 0x7f8d00083610 2026-03-10T10:18:17.087 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.086+0000 7f8cf77fe700 1 -- 192.168.123.102:0/3858166959 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8cf8009c00 con 0x7f8d00083610 2026-03-10T10:18:17.088 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.087+0000 7f8d083a6700 1 -- 192.168.123.102:0/3858166959 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8ce4005320 con 0x7f8d00083610 2026-03-10T10:18:17.088 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.088+0000 7f8cf77fe700 1 -- 192.168.123.102:0/3858166959 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 
0x7f8cf8016930 con 0x7f8d00083610 2026-03-10T10:18:17.088 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.088+0000 7f8cf77fe700 1 --2- 192.168.123.102:0/3858166959 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8cec06c600 0x7f8cec06eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:17.088 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.088+0000 7f8cf77fe700 1 -- 192.168.123.102:0/3858166959 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f8cf8093ed0 con 0x7f8d00083610 2026-03-10T10:18:17.089 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.089+0000 7f8d06142700 1 --2- 192.168.123.102:0/3858166959 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8cec06c600 0x7f8cec06eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:17.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.090+0000 7f8d06142700 1 --2- 192.168.123.102:0/3858166959 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8cec06c600 0x7f8cec06eac0 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f8cfc00f7b0 tx=0x7f8cfc00c040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:17.092 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.092+0000 7f8cf77fe700 1 -- 192.168.123.102:0/3858166959 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f8cf805e770 con 0x7f8d00083610 2026-03-10T10:18:17.288 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.288+0000 7f8d083a6700 1 -- 192.168.123.102:0/3858166959 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 
-- 0x7f8ce4005190 con 0x7f8d00083610 2026-03-10T10:18:17.289 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.288+0000 7f8cf77fe700 1 -- 192.168.123.102:0/3858166959 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f8cf8061d90 con 0x7f8d00083610 2026-03-10T10:18:17.289 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK 2026-03-10T10:18:17.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.291+0000 7f8cf57fa700 1 -- 192.168.123.102:0/3858166959 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8cec06c600 msgr2=0x7f8cec06eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:17.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.291+0000 7f8cf57fa700 1 --2- 192.168.123.102:0/3858166959 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8cec06c600 0x7f8cec06eac0 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f8cfc00f7b0 tx=0x7f8cfc00c040 comp rx=0 tx=0).stop 2026-03-10T10:18:17.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.291+0000 7f8cf57fa700 1 -- 192.168.123.102:0/3858166959 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d00083610 msgr2=0x7f8d001b30e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:17.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.291+0000 7f8cf57fa700 1 --2- 192.168.123.102:0/3858166959 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d00083610 0x7f8d001b30e0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f8cf8007ce0 tx=0x7f8cf800d0b0 comp rx=0 tx=0).stop 2026-03-10T10:18:17.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.292+0000 7f8cf57fa700 1 -- 192.168.123.102:0/3858166959 shutdown_connections 2026-03-10T10:18:17.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.292+0000 7f8cf57fa700 1 --2- 
192.168.123.102:0/3858166959 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d00075a40 0x7f8d000830d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:17.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.292+0000 7f8cf57fa700 1 --2- 192.168.123.102:0/3858166959 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f8cec06c600 0x7f8cec06eac0 unknown :-1 s=CLOSED pgs=134 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:17.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.292+0000 7f8cf57fa700 1 --2- 192.168.123.102:0/3858166959 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d00083610 0x7f8d001b30e0 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:17.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.292+0000 7f8cf57fa700 1 -- 192.168.123.102:0/3858166959 >> 192.168.123.102:0/3858166959 conn(0x7f8d0006dae0 msgr2=0x7f8d0006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:17.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.292+0000 7f8cf57fa700 1 -- 192.168.123.102:0/3858166959 shutdown_connections 2026-03-10T10:18:17.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:17.292+0000 7f8cf57fa700 1 -- 192.168.123.102:0/3858166959 wait complete. 2026-03-10T10:18:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:17 vm02.local ceph-mon[50200]: from='client.14602 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:17 vm02.local ceph-mon[50200]: from='client.14606 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:17 vm02.local ceph-mon[50200]: from='client.? 
192.168.123.102:0/991673671' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:18:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:17 vm02.local ceph-mon[50200]: pgmap v118: 65 pgs: 65 active+clean; 16 MiB data, 212 MiB used, 120 GiB / 120 GiB avail; 145 KiB/s rd, 1.3 MiB/s wr, 70 op/s 2026-03-10T10:18:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:17 vm02.local ceph-mon[50200]: from='client.24389 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:17 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/3858166959' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:18:18.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:17 vm05.local ceph-mon[59051]: from='client.14602 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:18.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:17 vm05.local ceph-mon[59051]: from='client.14606 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:18.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:17 vm05.local ceph-mon[59051]: from='client.? 
192.168.123.102:0/991673671' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:18:18.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:17 vm05.local ceph-mon[59051]: pgmap v118: 65 pgs: 65 active+clean; 16 MiB data, 212 MiB used, 120 GiB / 120 GiB avail; 145 KiB/s rd, 1.3 MiB/s wr, 70 op/s 2026-03-10T10:18:18.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:17 vm05.local ceph-mon[59051]: from='client.24389 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:18.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:17 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/3858166959' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:18:20.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:20 vm02.local ceph-mon[50200]: pgmap v119: 65 pgs: 65 active+clean; 28 MiB data, 251 MiB used, 120 GiB / 120 GiB avail; 144 KiB/s rd, 2.4 MiB/s wr, 118 op/s 2026-03-10T10:18:20.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:20 vm05.local ceph-mon[59051]: pgmap v119: 65 pgs: 65 active+clean; 28 MiB data, 251 MiB used, 120 GiB / 120 GiB avail; 144 KiB/s rd, 2.4 MiB/s wr, 118 op/s 2026-03-10T10:18:22.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:21 vm02.local ceph-mon[50200]: pgmap v120: 65 pgs: 65 active+clean; 28 MiB data, 264 MiB used, 120 GiB / 120 GiB avail; 144 KiB/s rd, 2.4 MiB/s wr, 135 op/s 2026-03-10T10:18:22.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:21 vm05.local ceph-mon[59051]: pgmap v120: 65 pgs: 65 active+clean; 28 MiB data, 264 MiB used, 120 GiB / 120 GiB avail; 144 KiB/s rd, 2.4 MiB/s wr, 135 op/s 2026-03-10T10:18:24.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:23 vm02.local ceph-mon[50200]: pgmap v121: 65 pgs: 65 active+clean; 40 MiB data, 316 MiB used, 120 GiB / 120 GiB avail; 721 KiB/s rd, 3.5 MiB/s wr, 277 op/s 2026-03-10T10:18:24.037 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:23 vm05.local ceph-mon[59051]: pgmap v121: 65 pgs: 65 active+clean; 40 MiB data, 316 MiB used, 120 GiB / 120 GiB avail; 721 KiB/s rd, 3.5 MiB/s wr, 277 op/s 2026-03-10T10:18:26.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:25 vm02.local ceph-mon[50200]: pgmap v122: 65 pgs: 65 active+clean; 42 MiB data, 325 MiB used, 120 GiB / 120 GiB avail; 720 KiB/s rd, 3.6 MiB/s wr, 313 op/s 2026-03-10T10:18:26.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:25 vm05.local ceph-mon[59051]: pgmap v122: 65 pgs: 65 active+clean; 42 MiB data, 325 MiB used, 120 GiB / 120 GiB avail; 720 KiB/s rd, 3.6 MiB/s wr, 313 op/s 2026-03-10T10:18:28.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:28 vm02.local ceph-mon[50200]: pgmap v123: 65 pgs: 65 active+clean; 54 MiB data, 452 MiB used, 120 GiB / 120 GiB avail; 1.3 MiB/s rd, 4.7 MiB/s wr, 429 op/s 2026-03-10T10:18:28.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:28 vm05.local ceph-mon[59051]: pgmap v123: 65 pgs: 65 active+clean; 54 MiB data, 452 MiB used, 120 GiB / 120 GiB avail; 1.3 MiB/s rd, 4.7 MiB/s wr, 429 op/s 2026-03-10T10:18:31.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:31 vm02.local ceph-mon[50200]: pgmap v124: 65 pgs: 65 active+clean; 61 MiB data, 548 MiB used, 119 GiB / 120 GiB avail; 1.5 MiB/s rd, 4.1 MiB/s wr, 424 op/s 2026-03-10T10:18:31.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:31 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:31.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:31 vm05.local ceph-mon[59051]: pgmap v124: 65 pgs: 65 active+clean; 61 MiB data, 548 MiB used, 119 GiB / 120 GiB avail; 1.5 MiB/s rd, 4.1 MiB/s wr, 424 op/s 2026-03-10T10:18:31.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:31 vm05.local ceph-mon[59051]: from='mgr.14225 
192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:32.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:32 vm02.local ceph-mon[50200]: pgmap v125: 65 pgs: 65 active+clean; 64 MiB data, 572 MiB used, 119 GiB / 120 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 393 op/s 2026-03-10T10:18:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:32 vm05.local ceph-mon[59051]: pgmap v125: 65 pgs: 65 active+clean; 64 MiB data, 572 MiB used, 119 GiB / 120 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 393 op/s 2026-03-10T10:18:34.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:33 vm02.local ceph-mon[50200]: pgmap v126: 65 pgs: 65 active+clean; 90 MiB data, 727 MiB used, 119 GiB / 120 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 527 op/s 2026-03-10T10:18:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:33 vm05.local ceph-mon[59051]: pgmap v126: 65 pgs: 65 active+clean; 90 MiB data, 727 MiB used, 119 GiB / 120 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 527 op/s 2026-03-10T10:18:36.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:35 vm05.local ceph-mon[59051]: pgmap v127: 65 pgs: 65 active+clean; 92 MiB data, 745 MiB used, 119 GiB / 120 GiB avail; 1.5 MiB/s rd, 4.6 MiB/s wr, 418 op/s 2026-03-10T10:18:36.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:35 vm02.local ceph-mon[50200]: pgmap v127: 65 pgs: 65 active+clean; 92 MiB data, 745 MiB used, 119 GiB / 120 GiB avail; 1.5 MiB/s rd, 4.6 MiB/s wr, 418 op/s 2026-03-10T10:18:38.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:38 vm02.local ceph-mon[50200]: pgmap v128: 65 pgs: 65 active+clean; 103 MiB data, 846 MiB used, 119 GiB / 120 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 500 op/s 2026-03-10T10:18:38.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:38 vm05.local ceph-mon[59051]: pgmap v128: 65 pgs: 65 active+clean; 103 MiB data, 846 MiB used, 119 GiB / 120 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 500 
op/s 2026-03-10T10:18:40.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:40 vm02.local ceph-mon[50200]: pgmap v129: 65 pgs: 65 active+clean; 108 MiB data, 907 MiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 4.8 MiB/s wr, 434 op/s 2026-03-10T10:18:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:40 vm05.local ceph-mon[59051]: pgmap v129: 65 pgs: 65 active+clean; 108 MiB data, 907 MiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 4.8 MiB/s wr, 434 op/s 2026-03-10T10:18:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:42 vm02.local ceph-mon[50200]: pgmap v130: 65 pgs: 65 active+clean; 109 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.4 MiB/s rd, 4.2 MiB/s wr, 390 op/s 2026-03-10T10:18:42.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:42 vm05.local ceph-mon[59051]: pgmap v130: 65 pgs: 65 active+clean; 109 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.4 MiB/s rd, 4.2 MiB/s wr, 390 op/s 2026-03-10T10:18:44.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:43 vm02.local ceph-mon[50200]: pgmap v131: 65 pgs: 65 active+clean; 116 MiB data, 946 MiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 4.7 MiB/s wr, 508 op/s 2026-03-10T10:18:44.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:43 vm05.local ceph-mon[59051]: pgmap v131: 65 pgs: 65 active+clean; 116 MiB data, 946 MiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 4.7 MiB/s wr, 508 op/s 2026-03-10T10:18:45.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:44 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:44 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:46.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:45 vm05.local 
ceph-mon[59051]: pgmap v132: 65 pgs: 65 active+clean; 117 MiB data, 982 MiB used, 119 GiB / 120 GiB avail; 1.5 MiB/s rd, 2.5 MiB/s wr, 387 op/s 2026-03-10T10:18:46.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:45 vm02.local ceph-mon[50200]: pgmap v132: 65 pgs: 65 active+clean; 117 MiB data, 982 MiB used, 119 GiB / 120 GiB avail; 1.5 MiB/s rd, 2.5 MiB/s wr, 387 op/s 2026-03-10T10:18:47.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.815+0000 7f97519e2700 1 -- 192.168.123.102:0/1062083946 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f974c075a40 msgr2=0x7f974c077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:47.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.815+0000 7f97519e2700 1 --2- 192.168.123.102:0/1062083946 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f974c075a40 0x7f974c077ed0 secure :-1 s=READY pgs=300 cs=0 l=1 rev1=1 crypto rx=0x7f974400d3f0 tx=0x7f974400d700 comp rx=0 tx=0).stop 2026-03-10T10:18:47.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.815+0000 7f97519e2700 1 -- 192.168.123.102:0/1062083946 shutdown_connections 2026-03-10T10:18:47.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.815+0000 7f97519e2700 1 --2- 192.168.123.102:0/1062083946 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f974c075a40 0x7f974c077ed0 unknown :-1 s=CLOSED pgs=300 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:47.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.815+0000 7f97519e2700 1 --2- 192.168.123.102:0/1062083946 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f974c072b50 0x7f974c072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:47.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.815+0000 7f97519e2700 1 -- 192.168.123.102:0/1062083946 >> 192.168.123.102:0/1062083946 
conn(0x7f974c06dae0 msgr2=0x7f974c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.815+0000 7f97519e2700 1 -- 192.168.123.102:0/1062083946 shutdown_connections 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.816+0000 7f97519e2700 1 -- 192.168.123.102:0/1062083946 wait complete. 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.816+0000 7f97519e2700 1 Processor -- start 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.816+0000 7f97519e2700 1 -- start start 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.816+0000 7f97519e2700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f974c072b50 0x7f974c0830e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.816+0000 7f97519e2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f974c075a40 0x7f974c083620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.816+0000 7f97519e2700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f974c083c40 con 0x7f974c072b50 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.816+0000 7f97519e2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f974c083d80 con 0x7f974c075a40 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.816+0000 7f974a7fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f974c075a40 0x7f974c083620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.816+0000 7f974a7fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f974c075a40 0x7f974c083620 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:41646/0 (socket says 192.168.123.102:41646) 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.816+0000 7f974a7fc700 1 -- 192.168.123.102:0/1512454763 learned_addr learned my addr 192.168.123.102:0/1512454763 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.816+0000 7f974affd700 1 --2- 192.168.123.102:0/1512454763 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f974c072b50 0x7f974c0830e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.817+0000 7f974a7fc700 1 -- 192.168.123.102:0/1512454763 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f974c072b50 msgr2=0x7f974c0830e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:47.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.817+0000 7f974a7fc700 1 --2- 192.168.123.102:0/1512454763 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f974c072b50 0x7f974c0830e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:47.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.817+0000 7f974a7fc700 1 -- 192.168.123.102:0/1512454763 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9744007ed0 con 0x7f974c075a40 
2026-03-10T10:18:47.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:47.817+0000 7f974a7fc700 1 --2- 192.168.123.102:0/1512454763 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f974c075a40 0x7f974c083620 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f97440060b0 tx=0x7f9744004060 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:48.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.008+0000 7f97509e0700 1 -- 192.168.123.102:0/1512454763 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f974401d070 con 0x7f974c075a40 2026-03-10T10:18:48.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.009+0000 7f97519e2700 1 -- 192.168.123.102:0/1512454763 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f974c1b32b0 con 0x7f974c075a40 2026-03-10T10:18:48.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.009+0000 7f97519e2700 1 -- 192.168.123.102:0/1512454763 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f974c1b37a0 con 0x7f974c075a40 2026-03-10T10:18:48.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.009+0000 7f97509e0700 1 -- 192.168.123.102:0/1512454763 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f974400de50 con 0x7f974c075a40 2026-03-10T10:18:48.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.009+0000 7f97509e0700 1 -- 192.168.123.102:0/1512454763 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9744017b40 con 0x7f974c075a40 2026-03-10T10:18:48.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.011+0000 7f97509e0700 1 -- 192.168.123.102:0/1512454763 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f9744017420 con 
0x7f974c075a40 2026-03-10T10:18:48.012 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.011+0000 7f97509e0700 1 --2- 192.168.123.102:0/1512454763 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f973406c4e0 0x7f973406e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:48.012 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.012+0000 7f97519e2700 1 -- 192.168.123.102:0/1512454763 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9738005320 con 0x7f974c075a40 2026-03-10T10:18:48.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.013+0000 7f974affd700 1 --2- 192.168.123.102:0/1512454763 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f973406c4e0 0x7f973406e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:48.015 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.014+0000 7f974affd700 1 --2- 192.168.123.102:0/1512454763 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f973406c4e0 0x7f973406e9a0 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f973c005950 tx=0x7f973c0058e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:48.015 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.014+0000 7f97509e0700 1 -- 192.168.123.102:0/1512454763 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f9744013070 con 0x7f974c075a40 2026-03-10T10:18:48.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.018+0000 7f97509e0700 1 -- 192.168.123.102:0/1512454763 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f974405ba60 
con 0x7f974c075a40 2026-03-10T10:18:48.189 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.189+0000 7f97519e2700 1 -- 192.168.123.102:0/1512454763 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f9738000bf0 con 0x7f973406c4e0 2026-03-10T10:18:48.189 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:48 vm02.local ceph-mon[50200]: pgmap v133: 65 pgs: 65 active+clean; 126 MiB data, 1.0 GiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 470 op/s 2026-03-10T10:18:48.191 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.191+0000 7f97509e0700 1 -- 192.168.123.102:0/1512454763 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7f9738000bf0 con 0x7f973406c4e0 2026-03-10T10:18:48.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.200+0000 7f97519e2700 1 -- 192.168.123.102:0/1512454763 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f973406c4e0 msgr2=0x7f973406e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:48.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.200+0000 7f97519e2700 1 --2- 192.168.123.102:0/1512454763 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f973406c4e0 0x7f973406e9a0 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f973c005950 tx=0x7f973c0058e0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.201+0000 7f97519e2700 1 -- 192.168.123.102:0/1512454763 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f974c075a40 msgr2=0x7f974c083620 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:48.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.201+0000 7f97519e2700 1 --2- 192.168.123.102:0/1512454763 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f974c075a40 0x7f974c083620 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f97440060b0 tx=0x7f9744004060 comp rx=0 tx=0).stop 2026-03-10T10:18:48.202 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.202+0000 7f97519e2700 1 -- 192.168.123.102:0/1512454763 shutdown_connections 2026-03-10T10:18:48.202 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.202+0000 7f97519e2700 1 --2- 192.168.123.102:0/1512454763 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f974c072b50 0x7f974c0830e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.202 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.202+0000 7f97519e2700 1 --2- 192.168.123.102:0/1512454763 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f973406c4e0 0x7f973406e9a0 unknown :-1 s=CLOSED pgs=135 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.202 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.202+0000 7f97519e2700 1 --2- 192.168.123.102:0/1512454763 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f974c075a40 0x7f974c083620 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.202 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.202+0000 7f97519e2700 1 -- 192.168.123.102:0/1512454763 >> 192.168.123.102:0/1512454763 conn(0x7f974c06dae0 msgr2=0x7f974c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:48.202 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.202+0000 7f97519e2700 1 -- 192.168.123.102:0/1512454763 shutdown_connections 2026-03-10T10:18:48.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.202+0000 7f97519e2700 1 -- 192.168.123.102:0/1512454763 wait complete. 
2026-03-10T10:18:48.217 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:18:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:48 vm05.local ceph-mon[59051]: pgmap v133: 65 pgs: 65 active+clean; 126 MiB data, 1.0 GiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 470 op/s 2026-03-10T10:18:48.348 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.347+0000 7fa771314700 1 -- 192.168.123.102:0/3793525534 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa76c075a10 msgr2=0x7fa76c077ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:48.349 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.347+0000 7fa771314700 1 --2- 192.168.123.102:0/3793525534 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa76c075a10 0x7fa76c077ea0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fa76400a390 tx=0x7fa76400a6a0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.349 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.348+0000 7fa771314700 1 -- 192.168.123.102:0/3793525534 shutdown_connections 2026-03-10T10:18:48.349 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.348+0000 7fa771314700 1 --2- 192.168.123.102:0/3793525534 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa76c075a10 0x7fa76c077ea0 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.349 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.348+0000 7fa771314700 1 --2- 192.168.123.102:0/3793525534 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76c072b20 0x7fa76c072f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.349 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.348+0000 7fa771314700 1 -- 192.168.123.102:0/3793525534 >> 192.168.123.102:0/3793525534 conn(0x7fa76c06daa0 msgr2=0x7fa76c06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-10T10:18:48.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.348+0000 7fa771314700 1 -- 192.168.123.102:0/3793525534 shutdown_connections 2026-03-10T10:18:48.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.348+0000 7fa771314700 1 -- 192.168.123.102:0/3793525534 wait complete. 2026-03-10T10:18:48.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.349+0000 7fa771314700 1 Processor -- start 2026-03-10T10:18:48.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.349+0000 7fa771314700 1 -- start start 2026-03-10T10:18:48.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.349+0000 7fa771314700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa76c072b20 0x7fa76c082fe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:48.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.349+0000 7fa771314700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76c083520 0x7fa76c1b3070 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:48.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.349+0000 7fa771314700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa76c083a30 con 0x7fa76c083520 2026-03-10T10:18:48.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.349+0000 7fa771314700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa76c083ba0 con 0x7fa76c072b20 2026-03-10T10:18:48.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.350+0000 7fa76bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa76c072b20 0x7fa76c082fe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:48.351 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.350+0000 7fa76b7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76c083520 0x7fa76c1b3070 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:48.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.350+0000 7fa76b7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76c083520 0x7fa76c1b3070 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:60990/0 (socket says 192.168.123.102:60990) 2026-03-10T10:18:48.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.350+0000 7fa76b7fe700 1 -- 192.168.123.102:0/1111475759 learned_addr learned my addr 192.168.123.102:0/1111475759 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:48.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.350+0000 7fa76bfff700 1 -- 192.168.123.102:0/1111475759 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76c083520 msgr2=0x7fa76c1b3070 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:48.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.350+0000 7fa76bfff700 1 --2- 192.168.123.102:0/1111475759 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76c083520 0x7fa76c1b3070 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.350+0000 7fa76bfff700 1 -- 192.168.123.102:0/1111475759 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa76400a040 con 0x7fa76c072b20 2026-03-10T10:18:48.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.350+0000 7fa76bfff700 1 --2- 
192.168.123.102:0/1111475759 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa76c072b20 0x7fa76c082fe0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7fa75c010bf0 tx=0x7fa75c010f00 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:48.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.350+0000 7fa7697fa700 1 -- 192.168.123.102:0/1111475759 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa75c00e9a0 con 0x7fa76c072b20 2026-03-10T10:18:48.353 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.350+0000 7fa771314700 1 -- 192.168.123.102:0/1111475759 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa76c1b3610 con 0x7fa76c072b20 2026-03-10T10:18:48.353 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.352+0000 7fa771314700 1 -- 192.168.123.102:0/1111475759 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa76c1b3b60 con 0x7fa76c072b20 2026-03-10T10:18:48.353 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.352+0000 7fa7697fa700 1 -- 192.168.123.102:0/1111475759 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa75c011040 con 0x7fa76c072b20 2026-03-10T10:18:48.353 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.352+0000 7fa7697fa700 1 -- 192.168.123.102:0/1111475759 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa75c01e5f0 con 0x7fa76c072b20 2026-03-10T10:18:48.354 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.353+0000 7fa7697fa700 1 -- 192.168.123.102:0/1111475759 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fa75c01e830 con 0x7fa76c072b20 2026-03-10T10:18:48.355 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.355+0000 
7fa7697fa700 1 --2- 192.168.123.102:0/1111475759 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa75406c530 0x7fa75406e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:48.355 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.355+0000 7fa76b7fe700 1 --2- 192.168.123.102:0/1111475759 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa75406c530 0x7fa75406e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:48.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.355+0000 7fa7697fa700 1 -- 192.168.123.102:0/1111475759 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7fa75c0909d0 con 0x7fa76c072b20 2026-03-10T10:18:48.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.355+0000 7fa771314700 1 -- 192.168.123.102:0/1111475759 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa758005320 con 0x7fa76c072b20 2026-03-10T10:18:48.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.356+0000 7fa76b7fe700 1 --2- 192.168.123.102:0/1111475759 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa75406c530 0x7fa75406e9f0 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7fa76400bfd0 tx=0x7fa76400bbe0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:48.360 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.360+0000 7fa7697fa700 1 -- 192.168.123.102:0/1111475759 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fa75c05ec80 con 0x7fa76c072b20 2026-03-10T10:18:48.556 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.556+0000 7fa771314700 1 -- 192.168.123.102:0/1111475759 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fa758000bf0 con 0x7fa75406c530 2026-03-10T10:18:48.557 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.557+0000 7fa7697fa700 1 -- 192.168.123.102:0/1111475759 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7fa758000bf0 con 0x7fa75406c530 2026-03-10T10:18:48.559 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.559+0000 7fa752ffd700 1 -- 192.168.123.102:0/1111475759 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa75406c530 msgr2=0x7fa75406e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:48.560 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.559+0000 7fa752ffd700 1 --2- 192.168.123.102:0/1111475759 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa75406c530 0x7fa75406e9f0 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7fa76400bfd0 tx=0x7fa76400bbe0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.560 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.559+0000 7fa752ffd700 1 -- 192.168.123.102:0/1111475759 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa76c072b20 msgr2=0x7fa76c082fe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:48.560 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.559+0000 7fa752ffd700 1 --2- 192.168.123.102:0/1111475759 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa76c072b20 0x7fa76c082fe0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7fa75c010bf0 tx=0x7fa75c010f00 comp rx=0 tx=0).stop 2026-03-10T10:18:48.560 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.559+0000 7fa752ffd700 1 -- 
192.168.123.102:0/1111475759 shutdown_connections 2026-03-10T10:18:48.560 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.559+0000 7fa752ffd700 1 --2- 192.168.123.102:0/1111475759 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa75406c530 0x7fa75406e9f0 unknown :-1 s=CLOSED pgs=136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.560 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.559+0000 7fa752ffd700 1 --2- 192.168.123.102:0/1111475759 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa76c072b20 0x7fa76c082fe0 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.560 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.559+0000 7fa752ffd700 1 --2- 192.168.123.102:0/1111475759 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa76c083520 0x7fa76c1b3070 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.560 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.560+0000 7fa752ffd700 1 -- 192.168.123.102:0/1111475759 >> 192.168.123.102:0/1111475759 conn(0x7fa76c06daa0 msgr2=0x7fa76c06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:48.560 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.560+0000 7fa752ffd700 1 -- 192.168.123.102:0/1111475759 shutdown_connections 2026-03-10T10:18:48.560 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.560+0000 7fa752ffd700 1 -- 192.168.123.102:0/1111475759 wait complete. 
2026-03-10T10:18:48.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.673+0000 7f2288d00700 1 -- 192.168.123.102:0/1648793620 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f228410a700 msgr2=0x7f228410cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:48.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.673+0000 7f2288d00700 1 --2- 192.168.123.102:0/1648793620 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f228410a700 0x7f228410cb90 secure :-1 s=READY pgs=301 cs=0 l=1 rev1=1 crypto rx=0x7f227c00b3a0 tx=0x7f227c00b6b0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.673+0000 7f2288d00700 1 -- 192.168.123.102:0/1648793620 shutdown_connections 2026-03-10T10:18:48.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.673+0000 7f2288d00700 1 --2- 192.168.123.102:0/1648793620 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f228410a700 0x7f228410cb90 unknown :-1 s=CLOSED pgs=301 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.673+0000 7f2288d00700 1 --2- 192.168.123.102:0/1648793620 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2284107d90 0x7f228410a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.673+0000 7f2288d00700 1 -- 192.168.123.102:0/1648793620 >> 192.168.123.102:0/1648793620 conn(0x7f228406dae0 msgr2=0x7f228406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:48.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.674+0000 7f2288d00700 1 -- 192.168.123.102:0/1648793620 shutdown_connections 2026-03-10T10:18:48.675 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.674+0000 7f2288d00700 1 -- 192.168.123.102:0/1648793620 
wait complete. 2026-03-10T10:18:48.675 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.674+0000 7f2288d00700 1 Processor -- start 2026-03-10T10:18:48.675 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.674+0000 7f2288d00700 1 -- start start 2026-03-10T10:18:48.675 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.675+0000 7f2288d00700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2284107d90 0x7f2284116cf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:48.675 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.675+0000 7f2288d00700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f228410a700 0x7f2284117230 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:48.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.675+0000 7f2288d00700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2284117850 con 0x7f2284107d90 2026-03-10T10:18:48.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.675+0000 7f2288d00700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2284117990 con 0x7f228410a700 2026-03-10T10:18:48.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.675+0000 7f2281d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f228410a700 0x7f2284117230 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:48.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.675+0000 7f2281d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f228410a700 0x7f2284117230 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.102:41696/0 (socket says 192.168.123.102:41696) 2026-03-10T10:18:48.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.675+0000 7f2281d9b700 1 -- 192.168.123.102:0/4068924612 learned_addr learned my addr 192.168.123.102:0/4068924612 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:48.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.676+0000 7f2281d9b700 1 -- 192.168.123.102:0/4068924612 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2284107d90 msgr2=0x7f2284116cf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:48.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.676+0000 7f2281d9b700 1 --2- 192.168.123.102:0/4068924612 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2284107d90 0x7f2284116cf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.676+0000 7f2281d9b700 1 -- 192.168.123.102:0/4068924612 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f227c00b050 con 0x7f228410a700 2026-03-10T10:18:48.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.676+0000 7f2281d9b700 1 --2- 192.168.123.102:0/4068924612 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f228410a700 0x7f2284117230 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f227c003c30 tx=0x7f227c003d10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:48.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.677+0000 7f22737fe700 1 -- 192.168.123.102:0/4068924612 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f227c00e070 con 0x7f228410a700 2026-03-10T10:18:48.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.677+0000 
7f2288d00700 1 -- 192.168.123.102:0/4068924612 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f22841b3500 con 0x7f228410a700 2026-03-10T10:18:48.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.677+0000 7f2288d00700 1 -- 192.168.123.102:0/4068924612 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f22841b3a50 con 0x7f228410a700 2026-03-10T10:18:48.678 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.678+0000 7f22737fe700 1 -- 192.168.123.102:0/4068924612 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f227c007b10 con 0x7f228410a700 2026-03-10T10:18:48.680 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.678+0000 7f22737fe700 1 -- 192.168.123.102:0/4068924612 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f227c01be70 con 0x7f228410a700 2026-03-10T10:18:48.680 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.679+0000 7f22737fe700 1 -- 192.168.123.102:0/4068924612 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f227c019040 con 0x7f228410a700 2026-03-10T10:18:48.680 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.680+0000 7f22737fe700 1 --2- 192.168.123.102:0/4068924612 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f226c06c380 0x7f226c06e840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:48.680 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.680+0000 7f22737fe700 1 -- 192.168.123.102:0/4068924612 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f227c08c8f0 con 0x7f228410a700 2026-03-10T10:18:48.680 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.680+0000 7f228259c700 1 --2- 192.168.123.102:0/4068924612 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f226c06c380 0x7f226c06e840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:48.680 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.680+0000 7f2288d00700 1 -- 192.168.123.102:0/4068924612 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2264005320 con 0x7f228410a700 2026-03-10T10:18:48.681 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.680+0000 7f228259c700 1 --2- 192.168.123.102:0/4068924612 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f226c06c380 0x7f226c06e840 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7f2274005950 tx=0x7f22740058e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:48.684 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.683+0000 7f22737fe700 1 -- 192.168.123.102:0/4068924612 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f227c05ac20 con 0x7f228410a700 2026-03-10T10:18:48.855 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.855+0000 7f2288d00700 1 -- 192.168.123.102:0/4068924612 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f2264000bf0 con 0x7f226c06c380 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.863+0000 7f22737fe700 1 -- 192.168.123.102:0/4068924612 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3288 (secure 0 0 0) 0x7f2264000bf0 con 0x7f226c06c380 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 
2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (3m) 106s ago 3m 22.5M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (4m) 106s ago 4m 8154k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (3m) 107s ago 3m 8166k - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (4m) 106s ago 4m 7415k - 18.2.1 5be31c24972a 51802fb57170 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (3m) 107s ago 3m 7407k - 18.2.1 5be31c24972a f275982dc269 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (3m) 106s ago 3m 78.5M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (112s) 106s ago 112s 12.0M - 18.2.1 5be31c24972a e97c369450c8 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (114s) 106s ago 114s 14.3M - 18.2.1 5be31c24972a 56b76ae59bcb 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (113s) 107s ago 113s 12.2M - 18.2.1 5be31c24972a 02b882918ab0 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (111s) 107s ago 111s 16.6M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:9283,8765,8443 running (4m) 106s ago 4m 502M - 18.2.1 5be31c24972a 8bea583521d3 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (3m) 107s ago 3m 450M - 18.2.1 5be31c24972a ff545ad0664a 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 
vm02 running (4m) 106s ago 4m 52.7M 2048M 18.2.1 5be31c24972a ab92d831cc1d 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (3m) 107s ago 3m 45.0M 2048M 18.2.1 5be31c24972a cea7d23f93a6 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (4m) 106s ago 4m 16.0M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (3m) 107s ago 3m 14.6M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:18:48.864 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (3m) 106s ago 3m 45.5M 4096M 18.2.1 5be31c24972a 9d7f135a3f3b 2026-03-10T10:18:48.865 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (2m) 106s ago 2m 46.2M 4096M 18.2.1 5be31c24972a 1b0a42d8ac01 2026-03-10T10:18:48.865 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (2m) 106s ago 2m 45.2M 4096M 18.2.1 5be31c24972a 567f579c058e 2026-03-10T10:18:48.865 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (2m) 107s ago 2m 43.6M 4096M 18.2.1 5be31c24972a 80ac26035893 2026-03-10T10:18:48.865 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (2m) 107s ago 2m 43.8M 4096M 18.2.1 5be31c24972a c8a0a41b6654 2026-03-10T10:18:48.865 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (2m) 107s ago 2m 43.6M 4096M 18.2.1 5be31c24972a e9be055e12ba 2026-03-10T10:18:48.865 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (3m) 106s ago 3m 34.3M - 2.43.0 a07b618ecd1d a607fd039cb6 2026-03-10T10:18:48.866 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.866+0000 7f22717fa700 1 -- 192.168.123.102:0/4068924612 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f226c06c380 msgr2=0x7f226c06e840 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:48.866 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.866+0000 7f22717fa700 1 
--2- 192.168.123.102:0/4068924612 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f226c06c380 0x7f226c06e840 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7f2274005950 tx=0x7f22740058e0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.866 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.866+0000 7f22717fa700 1 -- 192.168.123.102:0/4068924612 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f228410a700 msgr2=0x7f2284117230 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:48.866 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.866+0000 7f22717fa700 1 --2- 192.168.123.102:0/4068924612 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f228410a700 0x7f2284117230 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f227c003c30 tx=0x7f227c003d10 comp rx=0 tx=0).stop 2026-03-10T10:18:48.866 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.866+0000 7f22717fa700 1 -- 192.168.123.102:0/4068924612 shutdown_connections 2026-03-10T10:18:48.866 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.866+0000 7f22717fa700 1 --2- 192.168.123.102:0/4068924612 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2284107d90 0x7f2284116cf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.866 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.866+0000 7f22717fa700 1 --2- 192.168.123.102:0/4068924612 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f226c06c380 0x7f226c06e840 secure :-1 s=CLOSED pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7f2274005950 tx=0x7f22740058e0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.866 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.866+0000 7f22717fa700 1 --2- 192.168.123.102:0/4068924612 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f228410a700 0x7f2284117230 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-10T10:18:48.866 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.866+0000 7f22717fa700 1 -- 192.168.123.102:0/4068924612 >> 192.168.123.102:0/4068924612 conn(0x7f228406dae0 msgr2=0x7f228406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:48.867 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.866+0000 7f22717fa700 1 -- 192.168.123.102:0/4068924612 shutdown_connections 2026-03-10T10:18:48.867 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.866+0000 7f22717fa700 1 -- 192.168.123.102:0/4068924612 wait complete. 2026-03-10T10:18:48.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.972+0000 7f00e359e700 1 -- 192.168.123.102:0/2298171986 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00e4107d90 msgr2=0x7f00e410a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:48.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.972+0000 7f00e359e700 1 --2- 192.168.123.102:0/2298171986 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00e4107d90 0x7f00e410a1c0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f00cc009a60 tx=0x7f00cc009d70 comp rx=0 tx=0).stop 2026-03-10T10:18:48.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.973+0000 7f00e359e700 1 -- 192.168.123.102:0/2298171986 shutdown_connections 2026-03-10T10:18:48.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.973+0000 7f00e359e700 1 --2- 192.168.123.102:0/2298171986 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f00e410a700 0x7f00e410cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.973+0000 7f00e359e700 1 --2- 192.168.123.102:0/2298171986 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00e4107d90 0x7f00e410a1c0 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.973+0000 7f00e359e700 1 -- 192.168.123.102:0/2298171986 >> 192.168.123.102:0/2298171986 conn(0x7f00e406daa0 msgr2=0x7f00e406ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:48.974 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.973+0000 7f00e359e700 1 -- 192.168.123.102:0/2298171986 shutdown_connections 2026-03-10T10:18:48.974 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.973+0000 7f00e359e700 1 -- 192.168.123.102:0/2298171986 wait complete. 2026-03-10T10:18:48.974 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.974+0000 7f00e359e700 1 Processor -- start 2026-03-10T10:18:48.974 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.974+0000 7f00e359e700 1 -- start start 2026-03-10T10:18:48.974 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.974+0000 7f00e359e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00e4107d90 0x7f00e4116d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:48.974 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.974+0000 7f00e359e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f00e410a700 0x7f00e4117260 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:48.974 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.974+0000 7f00e359e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f00e4117880 con 0x7f00e410a700 2026-03-10T10:18:48.974 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.974+0000 7f00e359e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f00e41b3390 con 0x7f00e4107d90 2026-03-10T10:18:48.975 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.975+0000 7f00e259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00e4107d90 0x7f00e4116d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:48.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.975+0000 7f00e259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00e4107d90 0x7f00e4116d20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:41722/0 (socket says 192.168.123.102:41722) 2026-03-10T10:18:48.975 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.975+0000 7f00e259c700 1 -- 192.168.123.102:0/2176955637 learned_addr learned my addr 192.168.123.102:0/2176955637 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:48.976 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.975+0000 7f00e259c700 1 -- 192.168.123.102:0/2176955637 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f00e410a700 msgr2=0x7f00e4117260 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:48.976 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.975+0000 7f00e259c700 1 --2- 192.168.123.102:0/2176955637 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f00e410a700 0x7f00e4117260 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:48.976 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.975+0000 7f00e259c700 1 -- 192.168.123.102:0/2176955637 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f00cc009710 con 0x7f00e4107d90 2026-03-10T10:18:48.976 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.976+0000 7f00e259c700 1 --2- 
192.168.123.102:0/2176955637 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00e4107d90 0x7f00e4116d20 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f00cc009a30 tx=0x7f00cc0038b0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:48.978 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.977+0000 7f00db7fe700 1 -- 192.168.123.102:0/2176955637 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f00cc01d070 con 0x7f00e4107d90 2026-03-10T10:18:48.978 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.977+0000 7f00e359e700 1 -- 192.168.123.102:0/2176955637 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f00e41b3530 con 0x7f00e4107d90 2026-03-10T10:18:48.978 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.977+0000 7f00e359e700 1 -- 192.168.123.102:0/2176955637 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f00e41b39d0 con 0x7f00e4107d90 2026-03-10T10:18:48.978 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.978+0000 7f00db7fe700 1 -- 192.168.123.102:0/2176955637 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f00cc0043c0 con 0x7f00e4107d90 2026-03-10T10:18:48.978 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.978+0000 7f00db7fe700 1 -- 192.168.123.102:0/2176955637 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f00cc00f7c0 con 0x7f00e4107d90 2026-03-10T10:18:48.980 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.979+0000 7f00db7fe700 1 -- 192.168.123.102:0/2176955637 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f00cc004530 con 0x7f00e4107d90 2026-03-10T10:18:48.980 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.980+0000 
7f00db7fe700 1 --2- 192.168.123.102:0/2176955637 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f00d006c530 0x7f00d006e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:48.980 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.980+0000 7f00db7fe700 1 -- 192.168.123.102:0/2176955637 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f00cc08cb50 con 0x7f00e4107d90 2026-03-10T10:18:48.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.980+0000 7f00e359e700 1 -- 192.168.123.102:0/2176955637 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f00c4005320 con 0x7f00e4107d90 2026-03-10T10:18:48.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.981+0000 7f00e1d9b700 1 --2- 192.168.123.102:0/2176955637 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f00d006c530 0x7f00d006e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:48.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.982+0000 7f00e1d9b700 1 --2- 192.168.123.102:0/2176955637 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f00d006c530 0x7f00d006e9f0 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f00d4005950 tx=0x7f00d40058e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:48.988 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:48.987+0000 7f00db7fe700 1 -- 192.168.123.102:0/2176955637 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f00cc057420 con 0x7f00e4107d90 2026-03-10T10:18:49.183 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.182+0000 7f00e359e700 1 -- 192.168.123.102:0/2176955637 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f00c4006200 con 0x7f00e4107d90 2026-03-10T10:18:49.183 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:49 vm02.local ceph-mon[50200]: from='client.24397 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:49.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.184+0000 7f00db7fe700 1 -- 192.168.123.102:0/2176955637 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+558 (secure 0 0 0) 0x7f00cc027070 con 0x7f00e4107d90 2026-03-10T10:18:49.185 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:18:49.185 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:18:49.185 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 2 2026-03-10T10:18:49.185 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:18:49.185 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:18:49.185 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 2 2026-03-10T10:18:49.185 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:18:49.185 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:18:49.185 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 6 2026-03-10T10:18:49.185 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:18:49.185 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:18:49.186 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4 2026-03-10T10:18:49.186 
INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:18:49.186 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:18:49.186 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 14 2026-03-10T10:18:49.186 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:18:49.186 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:18:49.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.188+0000 7f00d97fa700 1 -- 192.168.123.102:0/2176955637 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f00d006c530 msgr2=0x7f00d006e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.188+0000 7f00d97fa700 1 --2- 192.168.123.102:0/2176955637 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f00d006c530 0x7f00d006e9f0 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f00d4005950 tx=0x7f00d40058e0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.188+0000 7f00d97fa700 1 -- 192.168.123.102:0/2176955637 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00e4107d90 msgr2=0x7f00e4116d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.188+0000 7f00d97fa700 1 --2- 192.168.123.102:0/2176955637 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00e4107d90 0x7f00e4116d20 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f00cc009a30 tx=0x7f00cc0038b0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.189 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.189+0000 7f00d97fa700 1 -- 192.168.123.102:0/2176955637 shutdown_connections 2026-03-10T10:18:49.189 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.189+0000 7f00d97fa700 1 --2- 192.168.123.102:0/2176955637 
>> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f00d006c530 0x7f00d006e9f0 unknown :-1 s=CLOSED pgs=138 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.189 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.189+0000 7f00d97fa700 1 --2- 192.168.123.102:0/2176955637 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00e4107d90 0x7f00e4116d20 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.189 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.189+0000 7f00d97fa700 1 --2- 192.168.123.102:0/2176955637 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f00e410a700 0x7f00e4117260 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.189 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.189+0000 7f00d97fa700 1 -- 192.168.123.102:0/2176955637 >> 192.168.123.102:0/2176955637 conn(0x7f00e406daa0 msgr2=0x7f00e406ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:49.190 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.190+0000 7f00d97fa700 1 -- 192.168.123.102:0/2176955637 shutdown_connections 2026-03-10T10:18:49.190 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.190+0000 7f00d97fa700 1 -- 192.168.123.102:0/2176955637 wait complete. 
2026-03-10T10:18:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:49 vm05.local ceph-mon[59051]: from='client.24397 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:49.288 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.287+0000 7fbb0da79700 1 -- 192.168.123.102:0/3752888694 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb08075a10 msgr2=0x7fbb08077ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.288 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.287+0000 7fbb0da79700 1 --2- 192.168.123.102:0/3752888694 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb08075a10 0x7fbb08077ea0 secure :-1 s=READY pgs=302 cs=0 l=1 rev1=1 crypto rx=0x7fbaf8008790 tx=0x7fbaf8008aa0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.288 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.288+0000 7fbb0da79700 1 -- 192.168.123.102:0/3752888694 shutdown_connections 2026-03-10T10:18:49.288 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.288+0000 7fbb0da79700 1 --2- 192.168.123.102:0/3752888694 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb08075a10 0x7fbb08077ea0 unknown :-1 s=CLOSED pgs=302 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.288 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.288+0000 7fbb0da79700 1 --2- 192.168.123.102:0/3752888694 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbb08072b20 0x7fbb08072f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.288 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.288+0000 7fbb0da79700 1 -- 192.168.123.102:0/3752888694 >> 192.168.123.102:0/3752888694 conn(0x7fbb0806daa0 msgr2=0x7fbb0806ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:49.290 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.289+0000 7fbb0da79700 1 -- 192.168.123.102:0/3752888694 shutdown_connections 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.290+0000 7fbb0da79700 1 -- 192.168.123.102:0/3752888694 wait complete. 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.290+0000 7fbb0da79700 1 Processor -- start 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.290+0000 7fbb0da79700 1 -- start start 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.290+0000 7fbb0da79700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbb08072b20 0x7fbb08083040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.290+0000 7fbb0da79700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb08075a10 0x7fbb08083580 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.290+0000 7fbb0da79700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbb08083ba0 con 0x7fbb08075a10 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.290+0000 7fbb0da79700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbb08083ce0 con 0x7fbb08072b20 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.290+0000 7fbb07fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb08075a10 0x7fbb08083580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:49.291 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.290+0000 7fbb07fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb08075a10 0x7fbb08083580 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:32816/0 (socket says 192.168.123.102:32816) 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.290+0000 7fbb07fff700 1 -- 192.168.123.102:0/3275237908 learned_addr learned my addr 192.168.123.102:0/3275237908 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.290+0000 7fbb0ca77700 1 --2- 192.168.123.102:0/3275237908 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbb08072b20 0x7fbb08083040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.291+0000 7fbb07fff700 1 -- 192.168.123.102:0/3275237908 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbb08072b20 msgr2=0x7fbb08083040 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.291+0000 7fbb07fff700 1 --2- 192.168.123.102:0/3275237908 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbb08072b20 0x7fbb08083040 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.291+0000 7fbb07fff700 1 -- 192.168.123.102:0/3275237908 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbaf8008440 con 0x7fbb08075a10 2026-03-10T10:18:49.292 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.291+0000 7fbb07fff700 1 --2- 192.168.123.102:0/3275237908 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb08075a10 0x7fbb08083580 secure :-1 s=READY pgs=303 cs=0 l=1 rev1=1 crypto rx=0x7fbaf800a300 tx=0x7fbaf800a3e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:49.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.291+0000 7fbb05ffb700 1 -- 192.168.123.102:0/3275237908 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbaf8008dd0 con 0x7fbb08075a10 2026-03-10T10:18:49.294 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.292+0000 7fbb0da79700 1 -- 192.168.123.102:0/3275237908 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbb0812e500 con 0x7fbb08075a10 2026-03-10T10:18:49.294 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.292+0000 7fbb0da79700 1 -- 192.168.123.102:0/3275237908 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbb0812ea50 con 0x7fbb08075a10 2026-03-10T10:18:49.295 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.295+0000 7fbb05ffb700 1 -- 192.168.123.102:0/3275237908 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fbaf8005d40 con 0x7fbb08075a10 2026-03-10T10:18:49.295 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.295+0000 7fbb05ffb700 1 -- 192.168.123.102:0/3275237908 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbaf8011660 con 0x7fbb08075a10 2026-03-10T10:18:49.296 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.295+0000 7fbb05ffb700 1 -- 192.168.123.102:0/3275237908 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fbaf8020070 con 0x7fbb08075a10 
2026-03-10T10:18:49.296 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.296+0000 7fbb05ffb700 1 --2- 192.168.123.102:0/3275237908 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbaf006c5b0 0x7fbaf006ea70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:49.297 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.296+0000 7fbb05ffb700 1 -- 192.168.123.102:0/3275237908 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7fbaf808e2b0 con 0x7fbb08075a10 2026-03-10T10:18:49.297 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.296+0000 7fbb0da79700 1 -- 192.168.123.102:0/3275237908 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbaf4005320 con 0x7fbb08075a10 2026-03-10T10:18:49.297 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.297+0000 7fbb0ca77700 1 --2- 192.168.123.102:0/3275237908 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbaf006c5b0 0x7fbaf006ea70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:49.298 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.297+0000 7fbb0ca77700 1 --2- 192.168.123.102:0/3275237908 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbaf006c5b0 0x7fbaf006ea70 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7fbb000060b0 tx=0x7fbb00007b80 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:49.303 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.301+0000 7fbb05ffb700 1 -- 192.168.123.102:0/3275237908 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fbaf805c5e0 con 
0x7fbb08075a10 2026-03-10T10:18:49.470 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.470+0000 7fbb0da79700 1 -- 192.168.123.102:0/3275237908 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fbaf4006200 con 0x7fbb08075a10 2026-03-10T10:18:49.474 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.474+0000 7fbb05ffb700 1 -- 192.168.123.102:0/3275237908 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1854 (secure 0 0 0) 0x7fbaf805c170 con 0x7fbb08075a10 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:e15 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:18:49.475 
INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:18:49.475 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464} 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 
addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:18:49.476 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:18:49.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.477+0000 7fbb0da79700 1 -- 192.168.123.102:0/3275237908 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbaf006c5b0 msgr2=0x7fbaf006ea70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.477+0000 7fbb0da79700 1 --2- 192.168.123.102:0/3275237908 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbaf006c5b0 0x7fbaf006ea70 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7fbb000060b0 tx=0x7fbb00007b80 comp rx=0 tx=0).stop 2026-03-10T10:18:49.479 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.477+0000 7fbb0da79700 1 -- 192.168.123.102:0/3275237908 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb08075a10 
msgr2=0x7fbb08083580 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.479 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.477+0000 7fbb0da79700 1 --2- 192.168.123.102:0/3275237908 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb08075a10 0x7fbb08083580 secure :-1 s=READY pgs=303 cs=0 l=1 rev1=1 crypto rx=0x7fbaf800a300 tx=0x7fbaf800a3e0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.479 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.479+0000 7fbb0da79700 1 -- 192.168.123.102:0/3275237908 shutdown_connections 2026-03-10T10:18:49.479 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.479+0000 7fbb0da79700 1 --2- 192.168.123.102:0/3275237908 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fbaf006c5b0 0x7fbaf006ea70 unknown :-1 s=CLOSED pgs=139 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.479 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.479+0000 7fbb0da79700 1 --2- 192.168.123.102:0/3275237908 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbb08072b20 0x7fbb08083040 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.479 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.479+0000 7fbb0da79700 1 --2- 192.168.123.102:0/3275237908 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbb08075a10 0x7fbb08083580 unknown :-1 s=CLOSED pgs=303 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.479 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.479+0000 7fbb0da79700 1 -- 192.168.123.102:0/3275237908 >> 192.168.123.102:0/3275237908 conn(0x7fbb0806daa0 msgr2=0x7fbb0806ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:49.480 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.479+0000 7fbb0da79700 1 -- 192.168.123.102:0/3275237908 shutdown_connections 2026-03-10T10:18:49.480 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.479+0000 7fbb0da79700 1 -- 192.168.123.102:0/3275237908 wait complete. 2026-03-10T10:18:49.482 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15 2026-03-10T10:18:49.565 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.564+0000 7f6d11e8d700 1 -- 192.168.123.102:0/357704884 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6d0c075a40 msgr2=0x7f6d0c077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.565 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.564+0000 7f6d11e8d700 1 --2- 192.168.123.102:0/357704884 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6d0c075a40 0x7f6d0c077ed0 secure :-1 s=READY pgs=304 cs=0 l=1 rev1=1 crypto rx=0x7f6d0400a390 tx=0x7f6d0400a6a0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.565 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.564+0000 7f6d11e8d700 1 -- 192.168.123.102:0/357704884 shutdown_connections 2026-03-10T10:18:49.565 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.564+0000 7f6d11e8d700 1 --2- 192.168.123.102:0/357704884 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6d0c075a40 0x7f6d0c077ed0 unknown :-1 s=CLOSED pgs=304 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.565 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.564+0000 7f6d11e8d700 1 --2- 192.168.123.102:0/357704884 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6d0c072b50 0x7f6d0c072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.565 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.564+0000 7f6d11e8d700 1 -- 192.168.123.102:0/357704884 >> 192.168.123.102:0/357704884 conn(0x7f6d0c06dae0 msgr2=0x7f6d0c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:49.565 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.564+0000 7f6d11e8d700 1 -- 192.168.123.102:0/357704884 shutdown_connections 2026-03-10T10:18:49.565 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.564+0000 7f6d11e8d700 1 -- 192.168.123.102:0/357704884 wait complete. 2026-03-10T10:18:49.565 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.565+0000 7f6d11e8d700 1 Processor -- start 2026-03-10T10:18:49.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.565+0000 7f6d11e8d700 1 -- start start 2026-03-10T10:18:49.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.565+0000 7f6d11e8d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6d0c072b50 0x7f6d0c083100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:49.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.565+0000 7f6d11e8d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6d0c083640 0x7f6d0c1b30f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:49.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.565+0000 7f6d11e8d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6d0c083b50 con 0x7f6d0c072b50 2026-03-10T10:18:49.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.565+0000 7f6d11e8d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6d0c083cc0 con 0x7f6d0c083640 2026-03-10T10:18:49.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.567+0000 7f6d0affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6d0c083640 0x7f6d0c1b30f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:49.567 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.567+0000 7f6d0affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6d0c083640 0x7f6d0c1b30f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:41746/0 (socket says 192.168.123.102:41746) 2026-03-10T10:18:49.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.567+0000 7f6d0affd700 1 -- 192.168.123.102:0/1747833608 learned_addr learned my addr 192.168.123.102:0/1747833608 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:49.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.567+0000 7f6d0b7fe700 1 --2- 192.168.123.102:0/1747833608 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6d0c072b50 0x7f6d0c083100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:49.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.567+0000 7f6d0affd700 1 -- 192.168.123.102:0/1747833608 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6d0c072b50 msgr2=0x7f6d0c083100 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.567+0000 7f6d0affd700 1 --2- 192.168.123.102:0/1747833608 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6d0c072b50 0x7f6d0c083100 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.567+0000 7f6d0affd700 1 -- 192.168.123.102:0/1747833608 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6d0400a040 con 0x7f6d0c083640 2026-03-10T10:18:49.567 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.567+0000 7f6d0affd700 1 --2- 192.168.123.102:0/1747833608 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6d0c083640 0x7f6d0c1b30f0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f6d04009750 tx=0x7f6d040093b0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:49.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.567+0000 7f6d08ff9700 1 -- 192.168.123.102:0/1747833608 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6d0400a820 con 0x7f6d0c083640 2026-03-10T10:18:49.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.567+0000 7f6d11e8d700 1 -- 192.168.123.102:0/1747833608 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6d0c1b3690 con 0x7f6d0c083640 2026-03-10T10:18:49.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.567+0000 7f6d11e8d700 1 -- 192.168.123.102:0/1747833608 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6d0c1b3be0 con 0x7f6d0c083640 2026-03-10T10:18:49.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.568+0000 7f6d08ff9700 1 -- 192.168.123.102:0/1747833608 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f6d04018070 con 0x7f6d0c083640 2026-03-10T10:18:49.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.568+0000 7f6d08ff9700 1 -- 192.168.123.102:0/1747833608 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6d04007de0 con 0x7f6d0c083640 2026-03-10T10:18:49.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.569+0000 7f6d08ff9700 1 -- 192.168.123.102:0/1747833608 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f6d0401a030 con 0x7f6d0c083640 
2026-03-10T10:18:49.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.570+0000 7f6d08ff9700 1 --2- 192.168.123.102:0/1747833608 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6cf406c530 0x7f6cf406e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:49.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.570+0000 7f6d08ff9700 1 -- 192.168.123.102:0/1747833608 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f6d0408bb10 con 0x7f6d0c083640 2026-03-10T10:18:49.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.570+0000 7f6d11e8d700 1 -- 192.168.123.102:0/1747833608 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6cf8005320 con 0x7f6d0c083640 2026-03-10T10:18:49.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.570+0000 7f6d0b7fe700 1 --2- 192.168.123.102:0/1747833608 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6cf406c530 0x7f6cf406e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:49.571 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.571+0000 7f6d0b7fe700 1 --2- 192.168.123.102:0/1747833608 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6cf406c530 0x7f6cf406e9f0 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7f6cfc005950 tx=0x7f6cfc0058e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:49.577 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.573+0000 7f6d08ff9700 1 -- 192.168.123.102:0/1747833608 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f6d04056330 con 
0x7f6d0c083640 2026-03-10T10:18:49.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.702+0000 7f6d11e8d700 1 -- 192.168.123.102:0/1747833608 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f6cf8000bf0 con 0x7f6cf406c530 2026-03-10T10:18:49.704 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.704+0000 7f6d08ff9700 1 -- 192.168.123.102:0/1747833608 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7f6cf8000bf0 con 0x7f6cf406c530 2026-03-10T10:18:49.704 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:18:49.705 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T10:18:49.705 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:18:49.705 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:18:49.705 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [], 2026-03-10T10:18:49.705 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "0/23 daemons upgraded", 2026-03-10T10:18:49.705 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc image on host vm05", 2026-03-10T10:18:49.705 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:18:49.705 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:18:49.707 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.707+0000 7f6cf27fc700 1 -- 192.168.123.102:0/1747833608 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6cf406c530 msgr2=0x7f6cf406e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.707 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.707+0000 7f6cf27fc700 1 --2- 192.168.123.102:0/1747833608 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6cf406c530 0x7f6cf406e9f0 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7f6cfc005950 tx=0x7f6cfc0058e0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.708 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.707+0000 7f6cf27fc700 1 -- 192.168.123.102:0/1747833608 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6d0c083640 msgr2=0x7f6d0c1b30f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.708 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.707+0000 7f6cf27fc700 1 --2- 192.168.123.102:0/1747833608 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6d0c083640 0x7f6d0c1b30f0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f6d04009750 tx=0x7f6d040093b0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.708 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.707+0000 7f6cf27fc700 1 -- 192.168.123.102:0/1747833608 shutdown_connections 2026-03-10T10:18:49.708 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.707+0000 7f6cf27fc700 1 --2- 192.168.123.102:0/1747833608 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6d0c072b50 0x7f6d0c083100 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.708 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.707+0000 7f6cf27fc700 1 --2- 192.168.123.102:0/1747833608 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f6cf406c530 0x7f6cf406e9f0 unknown :-1 s=CLOSED pgs=140 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.708 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.707+0000 7f6cf27fc700 1 --2- 192.168.123.102:0/1747833608 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6d0c083640 0x7f6d0c1b30f0 
unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.708 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.707+0000 7f6cf27fc700 1 -- 192.168.123.102:0/1747833608 >> 192.168.123.102:0/1747833608 conn(0x7f6d0c06dae0 msgr2=0x7f6d0c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:49.708 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.708+0000 7f6cf27fc700 1 -- 192.168.123.102:0/1747833608 shutdown_connections 2026-03-10T10:18:49.708 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.708+0000 7f6cf27fc700 1 -- 192.168.123.102:0/1747833608 wait complete. 2026-03-10T10:18:49.788 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.787+0000 7f1607443700 1 -- 192.168.123.102:0/1144948919 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f160010a700 msgr2=0x7f160010cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.789 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.787+0000 7f1607443700 1 --2- 192.168.123.102:0/1144948919 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f160010a700 0x7f160010cb90 secure :-1 s=READY pgs=305 cs=0 l=1 rev1=1 crypto rx=0x7f15f4009b00 tx=0x7f15f4009e10 comp rx=0 tx=0).stop 2026-03-10T10:18:49.789 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.788+0000 7f1607443700 1 -- 192.168.123.102:0/1144948919 shutdown_connections 2026-03-10T10:18:49.789 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.788+0000 7f1607443700 1 --2- 192.168.123.102:0/1144948919 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f160010a700 0x7f160010cb90 unknown :-1 s=CLOSED pgs=305 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.789 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.788+0000 7f1607443700 1 --2- 192.168.123.102:0/1144948919 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1600107d90 
0x7f160010a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.789 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.788+0000 7f1607443700 1 -- 192.168.123.102:0/1144948919 >> 192.168.123.102:0/1144948919 conn(0x7f160006dae0 msgr2=0x7f160006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:49.790 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.789+0000 7f1607443700 1 -- 192.168.123.102:0/1144948919 shutdown_connections 2026-03-10T10:18:49.790 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.789+0000 7f1607443700 1 -- 192.168.123.102:0/1144948919 wait complete. 2026-03-10T10:18:49.790 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.790+0000 7f1607443700 1 Processor -- start 2026-03-10T10:18:49.791 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.791+0000 7f1607443700 1 -- start start 2026-03-10T10:18:49.791 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.791+0000 7f1607443700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1600107d90 0x7f16001a5620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:49.791 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.791+0000 7f1607443700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f160010a700 0x7f16001a5b60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:49.792 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.791+0000 7f1607443700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f16001a6180 con 0x7f1600107d90 2026-03-10T10:18:49.792 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.791+0000 7f1607443700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f16001a62c0 con 0x7f160010a700 2026-03-10T10:18:49.792 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.791+0000 7f16051df700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1600107d90 0x7f16001a5620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:49.792 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.791+0000 7f16051df700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1600107d90 0x7f16001a5620 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:32854/0 (socket says 192.168.123.102:32854) 2026-03-10T10:18:49.792 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.791+0000 7f16051df700 1 -- 192.168.123.102:0/605153983 learned_addr learned my addr 192.168.123.102:0/605153983 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:18:49.792 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.792+0000 7f16049de700 1 --2- 192.168.123.102:0/605153983 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f160010a700 0x7f16001a5b60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:49.792 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.792+0000 7f16049de700 1 -- 192.168.123.102:0/605153983 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1600107d90 msgr2=0x7f16001a5620 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.792 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.792+0000 7f16049de700 1 --2- 192.168.123.102:0/605153983 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1600107d90 0x7f16001a5620 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.792 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.792+0000 7f16049de700 1 -- 192.168.123.102:0/605153983 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f15f40097e0 con 0x7f160010a700 2026-03-10T10:18:49.792 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.792+0000 7f16049de700 1 --2- 192.168.123.102:0/605153983 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f160010a700 0x7f16001a5b60 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f15f4009b00 tx=0x7f15f4005340 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:49.793 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.792+0000 7f15fe7fc700 1 -- 192.168.123.102:0/605153983 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f15f401d070 con 0x7f160010a700 2026-03-10T10:18:49.793 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.793+0000 7f1607443700 1 -- 192.168.123.102:0/605153983 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f160010f600 con 0x7f160010a700 2026-03-10T10:18:49.793 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.793+0000 7f1607443700 1 -- 192.168.123.102:0/605153983 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f160010faf0 con 0x7f160010a700 2026-03-10T10:18:49.794 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.793+0000 7f15fe7fc700 1 -- 192.168.123.102:0/605153983 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f15f40053b0 con 0x7f160010a700 2026-03-10T10:18:49.794 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.794+0000 7f15fe7fc700 1 -- 192.168.123.102:0/605153983 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f15f400fe30 con 0x7f160010a700 
2026-03-10T10:18:49.794 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.794+0000 7f1607443700 1 -- 192.168.123.102:0/605153983 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f15ec005320 con 0x7f160010a700 2026-03-10T10:18:49.795 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.795+0000 7f15fe7fc700 1 -- 192.168.123.102:0/605153983 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f15f400f460 con 0x7f160010a700 2026-03-10T10:18:49.796 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.795+0000 7f15fe7fc700 1 --2- 192.168.123.102:0/605153983 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f15e806c330 0x7f15e806e7f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:18:49.796 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.796+0000 7f16051df700 1 --2- 192.168.123.102:0/605153983 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f15e806c330 0x7f15e806e7f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:18:49.796 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.796+0000 7f15fe7fc700 1 -- 192.168.123.102:0/605153983 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f15f408cc70 con 0x7f160010a700 2026-03-10T10:18:49.797 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.797+0000 7f16051df700 1 --2- 192.168.123.102:0/605153983 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f15e806c330 0x7f15e806e7f0 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7f15f0009f10 tx=0x7f15f0009450 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:18:49.797 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.797+0000 7f15fe7fc700 1 -- 192.168.123.102:0/605153983 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f15f405afa0 con 0x7f160010a700 2026-03-10T10:18:49.952 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.951+0000 7f1607443700 1 -- 192.168.123.102:0/605153983 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f15ec005cc0 con 0x7f160010a700 2026-03-10T10:18:49.952 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.952+0000 7f15fe7fc700 1 -- 192.168.123.102:0/605153983 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f15f4005c00 con 0x7f160010a700 2026-03-10T10:18:49.952 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK 2026-03-10T10:18:49.956 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.955+0000 7f15e7fff700 1 -- 192.168.123.102:0/605153983 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f15e806c330 msgr2=0x7f15e806e7f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.956 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.955+0000 7f15e7fff700 1 --2- 192.168.123.102:0/605153983 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f15e806c330 0x7f15e806e7f0 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7f15f0009f10 tx=0x7f15f0009450 comp rx=0 tx=0).stop 2026-03-10T10:18:49.956 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.955+0000 7f15e7fff700 1 -- 192.168.123.102:0/605153983 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f160010a700 msgr2=0x7f16001a5b60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:18:49.956 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.955+0000 7f15e7fff700 1 --2- 192.168.123.102:0/605153983 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f160010a700 0x7f16001a5b60 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f15f4009b00 tx=0x7f15f4005340 comp rx=0 tx=0).stop 2026-03-10T10:18:49.956 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.955+0000 7f15e7fff700 1 -- 192.168.123.102:0/605153983 shutdown_connections 2026-03-10T10:18:49.956 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.956+0000 7f15e7fff700 1 --2- 192.168.123.102:0/605153983 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1600107d90 0x7f16001a5620 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.956 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.956+0000 7f15e7fff700 1 --2- 192.168.123.102:0/605153983 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f15e806c330 0x7f15e806e7f0 unknown :-1 s=CLOSED pgs=141 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.956 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.956+0000 7f15e7fff700 1 --2- 192.168.123.102:0/605153983 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f160010a700 0x7f16001a5b60 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:18:49.956 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.956+0000 7f15e7fff700 1 -- 192.168.123.102:0/605153983 >> 192.168.123.102:0/605153983 conn(0x7f160006dae0 msgr2=0x7f160006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:18:49.957 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.957+0000 7f15e7fff700 1 -- 192.168.123.102:0/605153983 shutdown_connections 2026-03-10T10:18:49.957 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:18:49.957+0000 7f15e7fff700 1 -- 192.168.123.102:0/605153983 wait 
complete. 2026-03-10T10:18:50.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:50 vm02.local ceph-mon[50200]: from='client.24401 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:50.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:50 vm02.local ceph-mon[50200]: pgmap v134: 65 pgs: 65 active+clean; 133 MiB data, 1.1 GiB used, 119 GiB / 120 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 411 op/s 2026-03-10T10:18:50.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:50 vm02.local ceph-mon[50200]: from='client.24405 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:50.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:50 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/2176955637' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:18:50.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:50 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/3275237908' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:18:50.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:50 vm02.local ceph-mon[50200]: from='client.? 
192.168.123.102:0/605153983' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:18:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:50 vm05.local ceph-mon[59051]: from='client.24401 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:50 vm05.local ceph-mon[59051]: pgmap v134: 65 pgs: 65 active+clean; 133 MiB data, 1.1 GiB used, 119 GiB / 120 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 411 op/s 2026-03-10T10:18:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:50 vm05.local ceph-mon[59051]: from='client.24405 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:50 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/2176955637' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:18:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:50 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/3275237908' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:18:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:50 vm05.local ceph-mon[59051]: from='client.? 
192.168.123.102:0/605153983' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:18:51.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:51 vm02.local ceph-mon[50200]: from='client.24415 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:51.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:51 vm05.local ceph-mon[59051]: from='client.24415 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:18:53.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:53 vm02.local ceph-mon[50200]: pgmap v135: 65 pgs: 65 active+clean; 134 MiB data, 1.1 GiB used, 119 GiB / 120 GiB avail; 1.3 MiB/s rd, 2.4 MiB/s wr, 367 op/s 2026-03-10T10:18:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:53 vm05.local ceph-mon[59051]: pgmap v135: 65 pgs: 65 active+clean; 134 MiB data, 1.1 GiB used, 119 GiB / 120 GiB avail; 1.3 MiB/s rd, 2.4 MiB/s wr, 367 op/s 2026-03-10T10:18:54.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:54 vm02.local ceph-mon[50200]: pgmap v136: 65 pgs: 65 active+clean; 154 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 446 op/s 2026-03-10T10:18:54.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:54 vm05.local ceph-mon[59051]: pgmap v136: 65 pgs: 65 active+clean; 154 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 446 op/s 2026-03-10T10:18:55.860 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:55 vm05.local ceph-mon[59051]: pgmap v137: 65 pgs: 65 active+clean; 155 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 340 op/s 2026-03-10T10:18:56.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:55 vm02.local ceph-mon[50200]: pgmap v137: 65 pgs: 65 active+clean; 155 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 340 op/s 
2026-03-10T10:18:59.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:58 vm02.local ceph-mon[50200]: pgmap v138: 65 pgs: 65 active+clean; 165 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 424 op/s 2026-03-10T10:18:59.039 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:58 vm05.local ceph-mon[59051]: pgmap v138: 65 pgs: 65 active+clean; 165 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 424 op/s 2026-03-10T10:19:00.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:18:59 vm02.local ceph-mon[50200]: pgmap v139: 65 pgs: 65 active+clean; 170 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 363 op/s 2026-03-10T10:19:00.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:18:59 vm05.local ceph-mon[59051]: pgmap v139: 65 pgs: 65 active+clean; 170 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 363 op/s 2026-03-10T10:19:01.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:00 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:01.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:00 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:02.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:02 vm02.local ceph-mon[50200]: pgmap v140: 65 pgs: 65 active+clean; 171 MiB data, 1.4 GiB used, 119 GiB / 120 GiB avail; 1.4 MiB/s rd, 3.3 MiB/s wr, 319 op/s 2026-03-10T10:19:02.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:02 vm05.local ceph-mon[59051]: pgmap v140: 65 pgs: 65 active+clean; 171 MiB data, 1.4 GiB used, 119 GiB / 120 GiB avail; 1.4 MiB/s rd, 3.3 MiB/s wr, 319 op/s 2026-03-10T10:19:04.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:03 vm02.local 
ceph-mon[50200]: pgmap v141: 65 pgs: 65 active+clean; 187 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 423 op/s 2026-03-10T10:19:04.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:03 vm05.local ceph-mon[59051]: pgmap v141: 65 pgs: 65 active+clean; 187 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 423 op/s 2026-03-10T10:19:06.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:05 vm02.local ceph-mon[50200]: pgmap v142: 65 pgs: 65 active+clean; 189 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 354 op/s 2026-03-10T10:19:06.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:05 vm05.local ceph-mon[59051]: pgmap v142: 65 pgs: 65 active+clean; 189 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 354 op/s 2026-03-10T10:19:06.089 INFO:tasks.workunit.client.0.vm02.stderr:+ pushd ltp-full-20091231/testcases/kernel/fs/fsstress 2026-03-10T10:19:06.092 INFO:tasks.workunit.client.0.vm02.stdout:~/cephtest/mnt.0/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress ~/cephtest/mnt.0/client.0/tmp/fsstress ~/cephtest/mnt.0/client.0/tmp 2026-03-10T10:19:06.092 INFO:tasks.workunit.client.0.vm02.stderr:+ make 2026-03-10T10:19:06.352 INFO:tasks.workunit.client.0.vm02.stdout:cc -DNO_XFS -I/home/ubuntu/cephtest/mnt.0/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress -D_LARGEFILE64_SOURCE -D_GNU_SOURCE -I../../../../include -I../../../../include -L../../../../lib fsstress.c -o fsstress 2026-03-10T10:19:06.789 INFO:tasks.workunit.client.0.vm02.stderr:++ readlink -f fsstress 2026-03-10T10:19:06.791 INFO:tasks.workunit.client.0.vm02.stderr:+ BIN=/home/ubuntu/cephtest/mnt.0/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress 2026-03-10T10:19:06.791 INFO:tasks.workunit.client.0.vm02.stderr:+ popd 2026-03-10T10:19:06.792 
INFO:tasks.workunit.client.0.vm02.stdout:~/cephtest/mnt.0/client.0/tmp/fsstress ~/cephtest/mnt.0/client.0/tmp 2026-03-10T10:19:06.793 INFO:tasks.workunit.client.0.vm02.stderr:+ popd 2026-03-10T10:19:06.793 INFO:tasks.workunit.client.0.vm02.stdout:~/cephtest/mnt.0/client.0/tmp 2026-03-10T10:19:06.793 INFO:tasks.workunit.client.0.vm02.stderr:++ mktemp -d -p . 2026-03-10T10:19:06.796 INFO:tasks.workunit.client.0.vm02.stderr:+ T=./tmp.mQ3rEgPuFu 2026-03-10T10:19:06.796 INFO:tasks.workunit.client.0.vm02.stderr:+ /home/ubuntu/cephtest/mnt.0/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress -d ./tmp.mQ3rEgPuFu -l 1 -n 1000 -p 10 -v 2026-03-10T10:19:06.799 INFO:tasks.workunit.client.0.vm02.stdout:seed = 1772340564 2026-03-10T10:19:06.822 INFO:tasks.workunit.client.0.vm02.stdout:0/0: dwrite - no filename 2026-03-10T10:19:06.822 INFO:tasks.workunit.client.0.vm02.stdout:0/1: chown . 17008 1 2026-03-10T10:19:06.822 INFO:tasks.workunit.client.0.vm02.stdout:0/2: rename - no filename 2026-03-10T10:19:06.822 INFO:tasks.workunit.client.0.vm02.stdout:0/3: write - no filename 2026-03-10T10:19:06.822 INFO:tasks.workunit.client.0.vm02.stdout:0/4: chown . 610282 1 2026-03-10T10:19:06.822 INFO:tasks.workunit.client.0.vm02.stdout:0/5: rmdir - no directory 2026-03-10T10:19:06.822 INFO:tasks.workunit.client.0.vm02.stdout:0/6: chown . 
11708150 1 2026-03-10T10:19:06.822 INFO:tasks.workunit.client.0.vm02.stdout:0/7: truncate - no filename 2026-03-10T10:19:06.823 INFO:tasks.workunit.client.0.vm02.stdout:4/0: rename - no filename 2026-03-10T10:19:06.823 INFO:tasks.workunit.client.0.vm02.stdout:4/1: dwrite - no filename 2026-03-10T10:19:06.824 INFO:tasks.workunit.client.0.vm02.stdout:0/8: symlink l0 0 2026-03-10T10:19:06.825 INFO:tasks.workunit.client.0.vm02.stdout:0/9: chown l0 2985 1 2026-03-10T10:19:06.825 INFO:tasks.workunit.client.0.vm02.stdout:0/10: read - no filename 2026-03-10T10:19:06.825 INFO:tasks.workunit.client.0.vm02.stdout:0/11: dwrite - no filename 2026-03-10T10:19:06.826 INFO:tasks.workunit.client.0.vm02.stdout:4/2: creat f0 x:0 0 0 2026-03-10T10:19:06.840 INFO:tasks.workunit.client.0.vm02.stdout:0/12: mkdir d1 0 2026-03-10T10:19:06.840 INFO:tasks.workunit.client.0.vm02.stdout:7/0: link - no file 2026-03-10T10:19:06.840 INFO:tasks.workunit.client.0.vm02.stdout:4/3: mkdir d1 0 2026-03-10T10:19:06.840 INFO:tasks.workunit.client.0.vm02.stdout:9/0: rmdir - no directory 2026-03-10T10:19:06.840 INFO:tasks.workunit.client.0.vm02.stdout:6/0: mkdir d0 0 2026-03-10T10:19:06.840 INFO:tasks.workunit.client.0.vm02.stdout:6/1: chown d0 26700 1 2026-03-10T10:19:06.840 INFO:tasks.workunit.client.0.vm02.stdout:6/2: read - no filename 2026-03-10T10:19:06.840 INFO:tasks.workunit.client.0.vm02.stdout:6/3: dwrite - no filename 2026-03-10T10:19:06.840 INFO:tasks.workunit.client.0.vm02.stdout:6/4: truncate - no filename 2026-03-10T10:19:06.840 INFO:tasks.workunit.client.0.vm02.stdout:8/0: stat - no entries 2026-03-10T10:19:06.840 INFO:tasks.workunit.client.0.vm02.stdout:7/1: symlink l0 0 2026-03-10T10:19:06.842 INFO:tasks.workunit.client.0.vm02.stdout:7/2: chown l0 99356183 1 2026-03-10T10:19:06.843 INFO:tasks.workunit.client.0.vm02.stdout:7/3: chown l0 1279 1 2026-03-10T10:19:06.843 INFO:tasks.workunit.client.0.vm02.stdout:4/4: mkdir d1/d2 0 2026-03-10T10:19:06.845 
INFO:tasks.workunit.client.0.vm02.stdout:0/13: rmdir d1 0 2026-03-10T10:19:06.849 INFO:tasks.workunit.client.0.vm02.stdout:5/0: write - no filename 2026-03-10T10:19:06.849 INFO:tasks.workunit.client.0.vm02.stdout:5/1: rename - no filename 2026-03-10T10:19:06.850 INFO:tasks.workunit.client.0.vm02.stdout:9/1: mknod c0 0 2026-03-10T10:19:06.850 INFO:tasks.workunit.client.0.vm02.stdout:9/2: readlink - no filename 2026-03-10T10:19:06.850 INFO:tasks.workunit.client.0.vm02.stdout:9/3: write - no filename 2026-03-10T10:19:06.854 INFO:tasks.workunit.client.0.vm02.stdout:6/5: mknod d0/c1 0 2026-03-10T10:19:06.854 INFO:tasks.workunit.client.0.vm02.stdout:6/6: dread - no filename 2026-03-10T10:19:06.854 INFO:tasks.workunit.client.0.vm02.stdout:6/7: truncate - no filename 2026-03-10T10:19:06.854 INFO:tasks.workunit.client.0.vm02.stdout:6/8: write - no filename 2026-03-10T10:19:06.854 INFO:tasks.workunit.client.0.vm02.stdout:6/9: dread - no filename 2026-03-10T10:19:06.856 INFO:tasks.workunit.client.0.vm02.stdout:7/4: mkdir d1 0 2026-03-10T10:19:06.856 INFO:tasks.workunit.client.0.vm02.stdout:7/5: chown d1 5 1 2026-03-10T10:19:06.856 INFO:tasks.workunit.client.0.vm02.stdout:7/6: dread - no filename 2026-03-10T10:19:06.856 INFO:tasks.workunit.client.0.vm02.stdout:7/7: dwrite - no filename 2026-03-10T10:19:06.856 INFO:tasks.workunit.client.0.vm02.stdout:7/8: dread - no filename 2026-03-10T10:19:06.856 INFO:tasks.workunit.client.0.vm02.stdout:7/9: readlink l0 0 2026-03-10T10:19:06.856 INFO:tasks.workunit.client.0.vm02.stdout:7/10: fsync - no filename 2026-03-10T10:19:06.860 INFO:tasks.workunit.client.0.vm02.stdout:0/14: creat f2 x:0 0 0 2026-03-10T10:19:06.866 INFO:tasks.workunit.client.0.vm02.stdout:3/0: rename - no filename 2026-03-10T10:19:06.866 INFO:tasks.workunit.client.0.vm02.stdout:3/1: stat - no entries 2026-03-10T10:19:06.866 INFO:tasks.workunit.client.0.vm02.stdout:3/2: rmdir - no directory 2026-03-10T10:19:06.866 INFO:tasks.workunit.client.0.vm02.stdout:3/3: chown . 
454 1 2026-03-10T10:19:06.867 INFO:tasks.workunit.client.0.vm02.stdout:0/15: dwrite f2 [0,4194304] 0 2026-03-10T10:19:06.872 INFO:tasks.workunit.client.0.vm02.stdout:0/16: dread f2 [0,4194304] 0 2026-03-10T10:19:06.872 INFO:tasks.workunit.client.0.vm02.stdout:0/17: truncate f2 4413011 0 2026-03-10T10:19:06.873 INFO:tasks.workunit.client.0.vm02.stdout:0/18: readlink l0 0 2026-03-10T10:19:06.874 INFO:tasks.workunit.client.0.vm02.stdout:0/19: write f2 [298391,122162] 0 2026-03-10T10:19:06.886 INFO:tasks.workunit.client.0.vm02.stdout:8/1: mknod c0 0 2026-03-10T10:19:06.886 INFO:tasks.workunit.client.0.vm02.stdout:8/2: rmdir - no directory 2026-03-10T10:19:06.886 INFO:tasks.workunit.client.0.vm02.stdout:8/3: chown c0 5 1 2026-03-10T10:19:06.886 INFO:tasks.workunit.client.0.vm02.stdout:6/10: creat d0/f2 x:0 0 0 2026-03-10T10:19:06.890 INFO:tasks.workunit.client.0.vm02.stdout:5/2: getdents . 0 2026-03-10T10:19:06.890 INFO:tasks.workunit.client.0.vm02.stdout:5/3: fdatasync - no filename 2026-03-10T10:19:06.890 INFO:tasks.workunit.client.0.vm02.stdout:5/4: fdatasync - no filename 2026-03-10T10:19:06.890 INFO:tasks.workunit.client.0.vm02.stdout:5/5: stat - no entries 2026-03-10T10:19:06.890 INFO:tasks.workunit.client.0.vm02.stdout:5/6: dwrite - no filename 2026-03-10T10:19:06.890 INFO:tasks.workunit.client.0.vm02.stdout:4/5: link f0 d1/f3 0 2026-03-10T10:19:06.896 INFO:tasks.workunit.client.0.vm02.stdout:8/4: mkdir d1 0 2026-03-10T10:19:06.899 INFO:tasks.workunit.client.0.vm02.stdout:6/11: symlink d0/l3 0 2026-03-10T10:19:06.900 INFO:tasks.workunit.client.0.vm02.stdout:6/12: write d0/f2 [370575,26033] 0 2026-03-10T10:19:06.902 INFO:tasks.workunit.client.0.vm02.stdout:7/11: mkdir d1/d2 0 2026-03-10T10:19:06.902 INFO:tasks.workunit.client.0.vm02.stdout:7/12: dwrite - no filename 2026-03-10T10:19:06.902 INFO:tasks.workunit.client.0.vm02.stdout:7/13: write - no filename 2026-03-10T10:19:06.907 INFO:tasks.workunit.client.0.vm02.stdout:3/4: creat f0 x:0 0 0 2026-03-10T10:19:06.908 
INFO:tasks.workunit.client.0.vm02.stdout:9/4: link c0 c1 0 2026-03-10T10:19:06.909 INFO:tasks.workunit.client.0.vm02.stdout:9/5: dread - no filename 2026-03-10T10:19:06.911 INFO:tasks.workunit.client.0.vm02.stdout:4/6: dwrite f0 [0,4194304] 0 2026-03-10T10:19:06.912 INFO:tasks.workunit.client.0.vm02.stdout:3/5: fdatasync f0 0 2026-03-10T10:19:06.925 INFO:tasks.workunit.client.0.vm02.stdout:0/20: link l0 l3 0 2026-03-10T10:19:06.927 INFO:tasks.workunit.client.0.vm02.stdout:5/7: creat f0 x:0 0 0 2026-03-10T10:19:06.927 INFO:tasks.workunit.client.0.vm02.stdout:9/6: symlink l2 0 2026-03-10T10:19:06.927 INFO:tasks.workunit.client.0.vm02.stdout:2/0: mkdir d0 0 2026-03-10T10:19:06.927 INFO:tasks.workunit.client.0.vm02.stdout:2/1: dread - no filename 2026-03-10T10:19:06.928 INFO:tasks.workunit.client.0.vm02.stdout:2/2: rename d0 to d0/d1 22 2026-03-10T10:19:06.928 INFO:tasks.workunit.client.0.vm02.stdout:2/3: dread - no filename 2026-03-10T10:19:06.928 INFO:tasks.workunit.client.0.vm02.stdout:8/5: mkdir d1/d2 0 2026-03-10T10:19:06.928 INFO:tasks.workunit.client.0.vm02.stdout:8/6: truncate - no filename 2026-03-10T10:19:06.928 INFO:tasks.workunit.client.0.vm02.stdout:2/4: stat d0 0 2026-03-10T10:19:06.928 INFO:tasks.workunit.client.0.vm02.stdout:2/5: dwrite - no filename 2026-03-10T10:19:06.928 INFO:tasks.workunit.client.0.vm02.stdout:2/6: truncate - no filename 2026-03-10T10:19:06.928 INFO:tasks.workunit.client.0.vm02.stdout:2/7: fdatasync - no filename 2026-03-10T10:19:06.928 INFO:tasks.workunit.client.0.vm02.stdout:8/7: chown d1 1604 1 2026-03-10T10:19:06.928 INFO:tasks.workunit.client.0.vm02.stdout:8/8: dread - no filename 2026-03-10T10:19:06.928 INFO:tasks.workunit.client.0.vm02.stdout:8/9: truncate - no filename 2026-03-10T10:19:06.936 INFO:tasks.workunit.client.0.vm02.stdout:5/8: dwrite f0 [0,4194304] 0 2026-03-10T10:19:06.936 INFO:tasks.workunit.client.0.vm02.stdout:7/14: creat d1/d2/f3 x:0 0 0 2026-03-10T10:19:06.939 INFO:tasks.workunit.client.0.vm02.stdout:7/15: 
write d1/d2/f3 [93270,27473] 0 2026-03-10T10:19:06.939 INFO:tasks.workunit.client.0.vm02.stdout:7/16: write d1/d2/f3 [69289,99308] 0 2026-03-10T10:19:06.943 INFO:tasks.workunit.client.0.vm02.stdout:7/17: write d1/d2/f3 [659466,116745] 0 2026-03-10T10:19:06.950 INFO:tasks.workunit.client.0.vm02.stdout:3/6: mkdir d1 0 2026-03-10T10:19:06.950 INFO:tasks.workunit.client.0.vm02.stdout:3/7: readlink - no filename 2026-03-10T10:19:06.951 INFO:tasks.workunit.client.0.vm02.stdout:0/21: write f2 [2801219,49053] 0 2026-03-10T10:19:06.956 INFO:tasks.workunit.client.0.vm02.stdout:1/0: creat f0 x:0 0 0 2026-03-10T10:19:06.975 INFO:tasks.workunit.client.0.vm02.stdout:7/18: symlink d1/l4 0 2026-03-10T10:19:06.975 INFO:tasks.workunit.client.0.vm02.stdout:2/8: symlink d0/l2 0 2026-03-10T10:19:06.975 INFO:tasks.workunit.client.0.vm02.stdout:2/9: stat d0/l2 0 2026-03-10T10:19:06.975 INFO:tasks.workunit.client.0.vm02.stdout:2/10: dread - no filename 2026-03-10T10:19:06.975 INFO:tasks.workunit.client.0.vm02.stdout:2/11: dwrite - no filename 2026-03-10T10:19:06.975 INFO:tasks.workunit.client.0.vm02.stdout:2/12: write - no filename 2026-03-10T10:19:06.975 INFO:tasks.workunit.client.0.vm02.stdout:8/10: getdents d1/d2 0 2026-03-10T10:19:06.975 INFO:tasks.workunit.client.0.vm02.stdout:8/11: chown d1/d2 4187680 1 2026-03-10T10:19:06.975 INFO:tasks.workunit.client.0.vm02.stdout:7/19: creat d1/f5 x:0 0 0 2026-03-10T10:19:06.975 INFO:tasks.workunit.client.0.vm02.stdout:7/20: chown d1/d2 55533 1 2026-03-10T10:19:06.979 INFO:tasks.workunit.client.0.vm02.stdout:9/7: link c1 c3 0 2026-03-10T10:19:06.982 INFO:tasks.workunit.client.0.vm02.stdout:2/13: symlink d0/l3 0 2026-03-10T10:19:06.983 INFO:tasks.workunit.client.0.vm02.stdout:2/14: chown d0/l3 113 1 2026-03-10T10:19:06.985 INFO:tasks.workunit.client.0.vm02.stdout:1/1: link f0 f1 0 2026-03-10T10:19:06.986 INFO:tasks.workunit.client.0.vm02.stdout:1/2: truncate f1 51066 0 2026-03-10T10:19:06.987 INFO:tasks.workunit.client.0.vm02.stdout:7/21: rename 
l0 to d1/l6 0 2026-03-10T10:19:06.995 INFO:tasks.workunit.client.0.vm02.stdout:8/12: creat d1/d2/f3 x:0 0 0 2026-03-10T10:19:06.995 INFO:tasks.workunit.client.0.vm02.stdout:8/13: dread - d1/d2/f3 zero size 2026-03-10T10:19:06.995 INFO:tasks.workunit.client.0.vm02.stdout:8/14: dread - d1/d2/f3 zero size 2026-03-10T10:19:06.995 INFO:tasks.workunit.client.0.vm02.stdout:1/3: rename f1 to f2 0 2026-03-10T10:19:06.995 INFO:tasks.workunit.client.0.vm02.stdout:1/4: chown f2 1143 1 2026-03-10T10:19:06.995 INFO:tasks.workunit.client.0.vm02.stdout:9/8: rename c3 to c4 0 2026-03-10T10:19:06.995 INFO:tasks.workunit.client.0.vm02.stdout:9/9: write - no filename 2026-03-10T10:19:06.995 INFO:tasks.workunit.client.0.vm02.stdout:9/10: dwrite - no filename 2026-03-10T10:19:06.995 INFO:tasks.workunit.client.0.vm02.stdout:9/11: fdatasync - no filename 2026-03-10T10:19:06.998 INFO:tasks.workunit.client.0.vm02.stdout:8/15: creat d1/f4 x:0 0 0 2026-03-10T10:19:06.998 INFO:tasks.workunit.client.0.vm02.stdout:8/16: write d1/f4 [992546,125200] 0 2026-03-10T10:19:06.999 INFO:tasks.workunit.client.0.vm02.stdout:8/17: truncate d1/d2/f3 315687 0 2026-03-10T10:19:06.999 INFO:tasks.workunit.client.0.vm02.stdout:8/18: stat d1 0 2026-03-10T10:19:06.999 INFO:tasks.workunit.client.0.vm02.stdout:8/19: stat d1/d2 0 2026-03-10T10:19:07.002 INFO:tasks.workunit.client.0.vm02.stdout:8/20: dread d1/d2/f3 [0,4194304] 0 2026-03-10T10:19:07.003 INFO:tasks.workunit.client.0.vm02.stdout:9/12: mknod c5 0 2026-03-10T10:19:07.003 INFO:tasks.workunit.client.0.vm02.stdout:9/13: truncate - no filename 2026-03-10T10:19:07.004 INFO:tasks.workunit.client.0.vm02.stdout:2/15: link d0/l2 d0/l4 0 2026-03-10T10:19:07.004 INFO:tasks.workunit.client.0.vm02.stdout:2/16: dwrite - no filename 2026-03-10T10:19:07.004 INFO:tasks.workunit.client.0.vm02.stdout:2/17: write - no filename 2026-03-10T10:19:07.004 INFO:tasks.workunit.client.0.vm02.stdout:2/18: dread - no filename 2026-03-10T10:19:07.004 
INFO:tasks.workunit.client.0.vm02.stdout:2/19: truncate - no filename 2026-03-10T10:19:07.007 INFO:tasks.workunit.client.0.vm02.stdout:2/20: symlink d0/l5 0 2026-03-10T10:19:07.014 INFO:tasks.workunit.client.0.vm02.stdout:2/21: read - no filename 2026-03-10T10:19:07.014 INFO:tasks.workunit.client.0.vm02.stdout:2/22: write - no filename 2026-03-10T10:19:07.014 INFO:tasks.workunit.client.0.vm02.stdout:2/23: write - no filename 2026-03-10T10:19:07.014 INFO:tasks.workunit.client.0.vm02.stdout:2/24: mknod d0/c6 0 2026-03-10T10:19:07.014 INFO:tasks.workunit.client.0.vm02.stdout:9/14: link l2 l6 0 2026-03-10T10:19:07.017 INFO:tasks.workunit.client.0.vm02.stdout:9/15: creat f7 x:0 0 0 2026-03-10T10:19:07.019 INFO:tasks.workunit.client.0.vm02.stdout:2/25: chown d0/l2 36 1 2026-03-10T10:19:07.022 INFO:tasks.workunit.client.0.vm02.stdout:9/16: rename c5 to c8 0 2026-03-10T10:19:07.022 INFO:tasks.workunit.client.0.vm02.stdout:9/17: rmdir - no directory 2026-03-10T10:19:07.028 INFO:tasks.workunit.client.0.vm02.stdout:2/26: rename d0/l4 to d0/l7 0 2026-03-10T10:19:07.030 INFO:tasks.workunit.client.0.vm02.stdout:9/18: rename c1 to c9 0 2026-03-10T10:19:07.034 INFO:tasks.workunit.client.0.vm02.stdout:2/27: creat d0/f8 x:0 0 0 2026-03-10T10:19:07.036 INFO:tasks.workunit.client.0.vm02.stdout:9/19: mkdir da 0 2026-03-10T10:19:07.036 INFO:tasks.workunit.client.0.vm02.stdout:9/20: dread - f7 zero size 2026-03-10T10:19:07.039 INFO:tasks.workunit.client.0.vm02.stdout:2/28: creat d0/f9 x:0 0 0 2026-03-10T10:19:07.043 INFO:tasks.workunit.client.0.vm02.stdout:9/21: link f7 da/fb 0 2026-03-10T10:19:07.050 INFO:tasks.workunit.client.0.vm02.stdout:9/22: rmdir da 39 2026-03-10T10:19:07.051 INFO:tasks.workunit.client.0.vm02.stdout:9/23: dread - f7 zero size 2026-03-10T10:19:07.051 INFO:tasks.workunit.client.0.vm02.stdout:4/7: fdatasync f0 0 2026-03-10T10:19:07.053 INFO:tasks.workunit.client.0.vm02.stdout:6/13: dread d0/f2 [0,4194304] 0 2026-03-10T10:19:07.055 
INFO:tasks.workunit.client.0.vm02.stdout:7/22: dread d1/d2/f3 [0,4194304] 0 2026-03-10T10:19:07.080 INFO:tasks.workunit.client.0.vm02.stdout:9/24: write f7 [89758,10239] 0 2026-03-10T10:19:07.080 INFO:tasks.workunit.client.0.vm02.stdout:4/8: rename d1/f3 to d1/d2/f4 0 2026-03-10T10:19:07.082 INFO:tasks.workunit.client.0.vm02.stdout:6/14: dwrite d0/f2 [0,4194304] 0 2026-03-10T10:19:07.083 INFO:tasks.workunit.client.0.vm02.stdout:6/15: readlink d0/l3 0 2026-03-10T10:19:07.089 INFO:tasks.workunit.client.0.vm02.stdout:6/16: dwrite d0/f2 [0,4194304] 0 2026-03-10T10:19:07.093 INFO:tasks.workunit.client.0.vm02.stdout:9/25: fdatasync da/fb 0 2026-03-10T10:19:07.104 INFO:tasks.workunit.client.0.vm02.stdout:9/26: dwrite da/fb [0,4194304] 0 2026-03-10T10:19:07.127 INFO:tasks.workunit.client.0.vm02.stdout:6/17: mknod d0/c4 0 2026-03-10T10:19:07.129 INFO:tasks.workunit.client.0.vm02.stdout:9/27: symlink da/lc 0 2026-03-10T10:19:07.130 INFO:tasks.workunit.client.0.vm02.stdout:6/18: symlink d0/l5 0 2026-03-10T10:19:07.135 INFO:tasks.workunit.client.0.vm02.stdout:6/19: dwrite d0/f2 [0,4194304] 0 2026-03-10T10:19:07.150 INFO:tasks.workunit.client.0.vm02.stdout:6/20: symlink d0/l6 0 2026-03-10T10:19:07.153 INFO:tasks.workunit.client.0.vm02.stdout:9/28: link l6 da/ld 0 2026-03-10T10:19:07.155 INFO:tasks.workunit.client.0.vm02.stdout:6/21: mkdir d0/d7 0 2026-03-10T10:19:07.325 INFO:tasks.workunit.client.0.vm02.stdout:3/8: fsync f0 0 2026-03-10T10:19:07.335 INFO:tasks.workunit.client.0.vm02.stdout:3/9: fsync f0 0 2026-03-10T10:19:07.335 INFO:tasks.workunit.client.0.vm02.stdout:3/10: dread - f0 zero size 2026-03-10T10:19:07.336 INFO:tasks.workunit.client.0.vm02.stdout:3/11: read - f0 zero size 2026-03-10T10:19:07.336 INFO:tasks.workunit.client.0.vm02.stdout:3/12: truncate f0 310714 0 2026-03-10T10:19:07.336 INFO:tasks.workunit.client.0.vm02.stdout:3/13: mknod d1/c2 0 2026-03-10T10:19:07.336 INFO:tasks.workunit.client.0.vm02.stdout:3/14: creat d1/f3 x:0 0 0 2026-03-10T10:19:07.336 
INFO:tasks.workunit.client.0.vm02.stdout:3/15: dwrite f0 [0,4194304] 0 2026-03-10T10:19:07.338 INFO:tasks.workunit.client.0.vm02.stdout:5/9: fsync f0 0 2026-03-10T10:19:07.346 INFO:tasks.workunit.client.0.vm02.stdout:3/16: symlink d1/l4 0 2026-03-10T10:19:07.348 INFO:tasks.workunit.client.0.vm02.stdout:5/10: mkdir d1 0 2026-03-10T10:19:07.348 INFO:tasks.workunit.client.0.vm02.stdout:5/11: write f0 [2037526,3421] 0 2026-03-10T10:19:07.348 INFO:tasks.workunit.client.0.vm02.stdout:5/12: stat f0 0 2026-03-10T10:19:07.349 INFO:tasks.workunit.client.0.vm02.stdout:5/13: truncate f0 4313279 0 2026-03-10T10:19:07.351 INFO:tasks.workunit.client.0.vm02.stdout:3/17: creat d1/f5 x:0 0 0 2026-03-10T10:19:07.353 INFO:tasks.workunit.client.0.vm02.stdout:7/23: rmdir d1/d2 39 2026-03-10T10:19:07.363 INFO:tasks.workunit.client.0.vm02.stdout:0/22: truncate f2 4346511 0 2026-03-10T10:19:07.364 INFO:tasks.workunit.client.0.vm02.stdout:0/23: symlink l4 0 2026-03-10T10:19:07.365 INFO:tasks.workunit.client.0.vm02.stdout:0/24: chown l4 0 1 2026-03-10T10:19:07.365 INFO:tasks.workunit.client.0.vm02.stdout:0/25: rmdir - no directory 2026-03-10T10:19:07.372 INFO:tasks.workunit.client.0.vm02.stdout:8/21: fsync d1/d2/f3 0 2026-03-10T10:19:07.373 INFO:tasks.workunit.client.0.vm02.stdout:8/22: dread d1/d2/f3 [0,4194304] 0 2026-03-10T10:19:07.374 INFO:tasks.workunit.client.0.vm02.stdout:8/23: fsync d1/d2/f3 0 2026-03-10T10:19:07.375 INFO:tasks.workunit.client.0.vm02.stdout:1/5: rename f2 to f3 0 2026-03-10T10:19:07.377 INFO:tasks.workunit.client.0.vm02.stdout:8/24: mknod d1/c5 0 2026-03-10T10:19:07.381 INFO:tasks.workunit.client.0.vm02.stdout:8/25: dwrite d1/d2/f3 [0,4194304] 0 2026-03-10T10:19:07.385 INFO:tasks.workunit.client.0.vm02.stdout:8/26: dread d1/d2/f3 [0,4194304] 0 2026-03-10T10:19:07.385 INFO:tasks.workunit.client.0.vm02.stdout:8/27: fdatasync d1/f4 0 2026-03-10T10:19:07.386 INFO:tasks.workunit.client.0.vm02.stdout:8/28: write d1/d2/f3 [1084238,121050] 0 2026-03-10T10:19:07.387 
INFO:tasks.workunit.client.0.vm02.stdout:1/6: mkdir d4 0 2026-03-10T10:19:07.394 INFO:tasks.workunit.client.0.vm02.stdout:8/29: dwrite d1/d2/f3 [0,4194304] 0 2026-03-10T10:19:07.424 INFO:tasks.workunit.client.0.vm02.stdout:8/30: mkdir d1/d2/d6 0 2026-03-10T10:19:07.424 INFO:tasks.workunit.client.0.vm02.stdout:8/31: dwrite d1/f4 [0,4194304] 0 2026-03-10T10:19:07.424 INFO:tasks.workunit.client.0.vm02.stdout:2/29: rename d0/l7 to d0/la 0 2026-03-10T10:19:07.424 INFO:tasks.workunit.client.0.vm02.stdout:2/30: read - d0/f8 zero size 2026-03-10T10:19:07.424 INFO:tasks.workunit.client.0.vm02.stdout:2/31: chown d0/f8 393899 1 2026-03-10T10:19:07.424 INFO:tasks.workunit.client.0.vm02.stdout:2/32: mkdir d0/db 0 2026-03-10T10:19:07.424 INFO:tasks.workunit.client.0.vm02.stdout:2/33: write d0/f9 [29540,94315] 0 2026-03-10T10:19:07.444 INFO:tasks.workunit.client.0.vm02.stdout:9/29: write da/fb [4231675,104884] 0 2026-03-10T10:19:07.471 INFO:tasks.workunit.client.0.vm02.stdout:9/30: creat da/fe x:0 0 0 2026-03-10T10:19:07.474 INFO:tasks.workunit.client.0.vm02.stdout:9/31: dwrite f7 [0,4194304] 0 2026-03-10T10:19:07.479 INFO:tasks.workunit.client.0.vm02.stdout:9/32: creat da/ff x:0 0 0 2026-03-10T10:19:07.479 INFO:tasks.workunit.client.0.vm02.stdout:9/33: stat f7 0 2026-03-10T10:19:07.479 INFO:tasks.workunit.client.0.vm02.stdout:9/34: write da/fe [557833,117875] 0 2026-03-10T10:19:07.482 INFO:tasks.workunit.client.0.vm02.stdout:9/35: chown c0 16 1 2026-03-10T10:19:07.482 INFO:tasks.workunit.client.0.vm02.stdout:9/36: chown da/fe 36982340 1 2026-03-10T10:19:07.496 INFO:tasks.workunit.client.0.vm02.stdout:9/37: mkdir da/d10 0 2026-03-10T10:19:07.498 INFO:tasks.workunit.client.0.vm02.stdout:9/38: unlink da/fe 0 2026-03-10T10:19:07.502 INFO:tasks.workunit.client.0.vm02.stdout:9/39: mknod da/c11 0 2026-03-10T10:19:07.506 INFO:tasks.workunit.client.0.vm02.stdout:9/40: mknod da/d10/c12 0 2026-03-10T10:19:07.550 INFO:tasks.workunit.client.0.vm02.stdout:9/41: creat da/f13 x:0 0 0 
2026-03-10T10:19:07.550 INFO:tasks.workunit.client.0.vm02.stdout:9/42: chown da/d10 2248747 1 2026-03-10T10:19:07.550 INFO:tasks.workunit.client.0.vm02.stdout:9/43: dwrite f7 [4194304,4194304] 0 2026-03-10T10:19:07.550 INFO:tasks.workunit.client.0.vm02.stdout:3/18: rmdir d1 39 2026-03-10T10:19:07.550 INFO:tasks.workunit.client.0.vm02.stdout:7/24: truncate d1/d2/f3 692042 0 2026-03-10T10:19:07.550 INFO:tasks.workunit.client.0.vm02.stdout:3/19: stat d1 0 2026-03-10T10:19:07.550 INFO:tasks.workunit.client.0.vm02.stdout:3/20: readlink d1/l4 0 2026-03-10T10:19:07.550 INFO:tasks.workunit.client.0.vm02.stdout:3/21: write d1/f3 [123091,96464] 0 2026-03-10T10:19:07.550 INFO:tasks.workunit.client.0.vm02.stdout:0/26: dread f2 [0,4194304] 0 2026-03-10T10:19:07.550 INFO:tasks.workunit.client.0.vm02.stdout:0/27: rename l0 to l5 0 2026-03-10T10:19:07.550 INFO:tasks.workunit.client.0.vm02.stdout:0/28: mknod c6 0 2026-03-10T10:19:07.551 INFO:tasks.workunit.client.0.vm02.stdout:0/29: fdatasync f2 0 2026-03-10T10:19:07.560 INFO:tasks.workunit.client.0.vm02.stdout:0/30: dwrite f2 [0,4194304] 0 2026-03-10T10:19:07.592 INFO:tasks.workunit.client.0.vm02.stdout:0/31: dwrite f2 [0,4194304] 0 2026-03-10T10:19:07.732 INFO:tasks.workunit.client.0.vm02.stdout:2/34: dread d0/f9 [0,4194304] 0 2026-03-10T10:19:07.744 INFO:tasks.workunit.client.0.vm02.stdout:2/35: mkdir d0/dc 0 2026-03-10T10:19:07.796 INFO:tasks.workunit.client.0.vm02.stdout:2/36: readlink d0/l2 0 2026-03-10T10:19:07.796 INFO:tasks.workunit.client.0.vm02.stdout:2/37: dread - d0/f8 zero size 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/38: chown d0/dc 326191 1 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/39: dwrite d0/f9 [0,4194304] 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/40: rename d0/c6 to d0/cd 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/41: creat d0/fe x:0 0 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/42: 
dwrite d0/f8 [0,4194304] 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/43: rename d0 to d0/df 22 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/44: rename d0/dc to d0/d10 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/45: dread d0/f8 [0,4194304] 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/46: getdents d0/db 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/47: write d0/f9 [1761894,80131] 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/48: chown d0/f9 106008738 1 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/49: truncate d0/fe 550031 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/50: dread d0/f8 [0,4194304] 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/51: read d0/f9 [523532,3174] 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/52: symlink d0/l11 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/53: write d0/fe [576715,43002] 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/54: dread d0/f9 [0,4194304] 0 2026-03-10T10:19:07.797 INFO:tasks.workunit.client.0.vm02.stdout:2/55: link d0/l11 d0/db/l12 0 2026-03-10T10:19:07.836 INFO:tasks.workunit.client.1.vm05.stderr:+ pushd ltp-full-20091231/testcases/kernel/fs/fsstress 2026-03-10T10:19:07.848 INFO:tasks.workunit.client.1.vm05.stdout:~/cephtest/mnt.1/client.1/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress ~/cephtest/mnt.1/client.1/tmp/fsstress ~/cephtest/mnt.1/client.1/tmp 2026-03-10T10:19:07.848 INFO:tasks.workunit.client.1.vm05.stderr:+ make 2026-03-10T10:19:07.921 INFO:tasks.workunit.client.0.vm02.stdout:9/44: fdatasync da/fb 0 2026-03-10T10:19:07.933 INFO:tasks.workunit.client.0.vm02.stdout:9/45: unlink c9 0 2026-03-10T10:19:07.934 INFO:tasks.workunit.client.0.vm02.stdout:9/46: read da/fb [1697286,125708] 0 2026-03-10T10:19:07.934 
INFO:tasks.workunit.client.0.vm02.stdout:1/7: write f0 [547198,74167] 0
2026-03-10T10:19:07.947 INFO:tasks.workunit.client.0.vm02.stdout:9/47: read - da/f13 zero size
2026-03-10T10:19:07.947 INFO:tasks.workunit.client.0.vm02.stdout:9/48: creat da/f14 x:0 0 0
2026-03-10T10:19:07.947 INFO:tasks.workunit.client.0.vm02.stdout:1/8: creat d4/f5 x:0 0 0
2026-03-10T10:19:07.947 INFO:tasks.workunit.client.0.vm02.stdout:1/9: stat f3 0
2026-03-10T10:19:07.997 INFO:tasks.workunit.client.0.vm02.stdout:0/32: fsync f2 0
2026-03-10T10:19:07.997 INFO:tasks.workunit.client.0.vm02.stdout:0/33: read f2 [1027982,16564] 0
2026-03-10T10:19:07.999 INFO:tasks.workunit.client.0.vm02.stdout:0/34: symlink l7 0
2026-03-10T10:19:08.001 INFO:tasks.workunit.client.0.vm02.stdout:0/35: rename c6 to c8 0
2026-03-10T10:19:08.001 INFO:tasks.workunit.client.0.vm02.stdout:0/36: rmdir - no directory
2026-03-10T10:19:08.001 INFO:tasks.workunit.client.0.vm02.stdout:0/37: read f2 [403616,111171] 0
2026-03-10T10:19:08.001 INFO:tasks.workunit.client.0.vm02.stdout:0/38: chown f2 82957813 1
2026-03-10T10:19:08.004 INFO:tasks.workunit.client.0.vm02.stdout:0/39: dread f2 [0,4194304] 0
2026-03-10T10:19:08.004 INFO:tasks.workunit.client.0.vm02.stdout:0/40: fsync f2 0
2026-03-10T10:19:08.004 INFO:tasks.workunit.client.0.vm02.stdout:0/41: fdatasync f2 0
2026-03-10T10:19:08.005 INFO:tasks.workunit.client.0.vm02.stdout:0/42: write f2 [3306721,10556] 0
2026-03-10T10:19:08.006 INFO:tasks.workunit.client.0.vm02.stdout:0/43: read f2 [223738,107972] 0
2026-03-10T10:19:08.022 INFO:tasks.workunit.client.0.vm02.stdout:0/44: mkdir d9 0
2026-03-10T10:19:08.033 INFO:tasks.workunit.client.0.vm02.stdout:0/45: chown l4 626108 1
2026-03-10T10:19:08.033 INFO:tasks.workunit.client.0.vm02.stdout:0/46: link l3 d9/la 0
2026-03-10T10:19:08.033 INFO:tasks.workunit.client.0.vm02.stdout:0/47: unlink l3 0
2026-03-10T10:19:08.033 INFO:tasks.workunit.client.0.vm02.stdout:0/48: rmdir d9 39
2026-03-10T10:19:08.033 INFO:tasks.workunit.client.0.vm02.stdout:0/49: write f2 [3662029,43538] 0
2026-03-10T10:19:08.036 INFO:tasks.workunit.client.0.vm02.stdout:0/50: dread f2 [0,4194304] 0
2026-03-10T10:19:08.037 INFO:tasks.workunit.client.0.vm02.stdout:0/51: readlink l4 0
2026-03-10T10:19:08.037 INFO:tasks.workunit.client.0.vm02.stdout:0/52: chown d9 316 1
2026-03-10T10:19:08.040 INFO:tasks.workunit.client.0.vm02.stdout:0/53: dread f2 [0,4194304] 0
2026-03-10T10:19:08.043 INFO:tasks.workunit.client.0.vm02.stdout:0/54: dread f2 [0,4194304] 0
2026-03-10T10:19:08.048 INFO:tasks.workunit.client.0.vm02.stdout:0/55: link l4 d9/lb 0
2026-03-10T10:19:08.054 INFO:tasks.workunit.client.0.vm02.stdout:0/56: creat d9/fc x:0 0 0
2026-03-10T10:19:08.093 INFO:tasks.workunit.client.0.vm02.stdout:0/57: mknod d9/cd 0
2026-03-10T10:19:08.093 INFO:tasks.workunit.client.0.vm02.stdout:0/58: dread - d9/fc zero size
2026-03-10T10:19:08.097 INFO:tasks.workunit.client.0.vm02.stdout:1/10: fsync f3 0
2026-03-10T10:19:08.098 INFO:tasks.workunit.client.0.vm02.stdout:1/11: rename d4 to d4/d6 22
2026-03-10T10:19:08.098 INFO:tasks.workunit.client.0.vm02.stdout:1/12: dread - d4/f5 zero size
2026-03-10T10:19:08.098 INFO:tasks.workunit.client.0.vm02.stdout:1/13: stat d4 0
2026-03-10T10:19:08.099 INFO:tasks.workunit.client.0.vm02.stdout:1/14: write f0 [775103,51498] 0
2026-03-10T10:19:08.128 INFO:tasks.workunit.client.0.vm02.stdout:1/15: fsync f3 0
2026-03-10T10:19:08.129 INFO:tasks.workunit.client.0.vm02.stdout:1/16: creat d4/f7 x:0 0 0
2026-03-10T10:19:08.130 INFO:tasks.workunit.client.0.vm02.stdout:1/17: rename d4/f7 to d4/f8 0
2026-03-10T10:19:08.130 INFO:tasks.workunit.client.0.vm02.stdout:1/18: chown f0 143973158 1
2026-03-10T10:19:08.131 INFO:tasks.workunit.client.0.vm02.stdout:1/19: mknod d4/c9 0
2026-03-10T10:19:08.132 INFO:tasks.workunit.client.0.vm02.stdout:1/20: write d4/f5 [849392,95161] 0
2026-03-10T10:19:08.134 INFO:tasks.workunit.client.0.vm02.stdout:1/21: mkdir d4/da 0
2026-03-10T10:19:08.134 INFO:tasks.workunit.client.0.vm02.stdout:1/22: fdatasync f0 0
2026-03-10T10:19:08.135 INFO:tasks.workunit.client.0.vm02.stdout:1/23: read f3 [42947,42880] 0
2026-03-10T10:19:08.139 INFO:tasks.workunit.client.0.vm02.stdout:1/24: dwrite d4/f8 [0,4194304] 0
2026-03-10T10:19:08.143 INFO:tasks.workunit.client.0.vm02.stdout:1/25: symlink d4/da/lb 0
2026-03-10T10:19:08.144 INFO:tasks.workunit.client.0.vm02.stdout:1/26: creat d4/da/fc x:0 0 0
2026-03-10T10:19:08.151 INFO:tasks.workunit.client.0.vm02.stdout:1/27: write f0 [13106,120789] 0
2026-03-10T10:19:08.152 INFO:tasks.workunit.client.0.vm02.stdout:1/28: symlink d4/ld 0
2026-03-10T10:19:08.153 INFO:tasks.workunit.client.0.vm02.stdout:1/29: creat d4/fe x:0 0 0
2026-03-10T10:19:08.192 INFO:tasks.workunit.client.0.vm02.stdout:4/9: sync
2026-03-10T10:19:08.192 INFO:tasks.workunit.client.0.vm02.stdout:6/22: sync
2026-03-10T10:19:08.192 INFO:tasks.workunit.client.0.vm02.stdout:5/14: sync
2026-03-10T10:19:08.192 INFO:tasks.workunit.client.0.vm02.stdout:8/32: sync
2026-03-10T10:19:08.193 INFO:tasks.workunit.client.0.vm02.stdout:5/15: write f0 [1922067,64316] 0
2026-03-10T10:19:08.196 INFO:tasks.workunit.client.0.vm02.stdout:6/23: unlink d0/l6 0
2026-03-10T10:19:08.197 INFO:tasks.workunit.client.0.vm02.stdout:5/16: dwrite f0 [0,4194304] 0
2026-03-10T10:19:08.197 INFO:tasks.workunit.client.0.vm02.stdout:8/33: rename d1/c5 to d1/c7 0
2026-03-10T10:19:08.199 INFO:tasks.workunit.client.0.vm02.stdout:8/34: chown d1/d2/f3 13 1
2026-03-10T10:19:08.200 INFO:tasks.workunit.client.0.vm02.stdout:8/35: fsync d1/f4 0
2026-03-10T10:19:08.215 INFO:tasks.workunit.client.0.vm02.stdout:6/24: mkdir d0/d8 0
2026-03-10T10:19:08.231 INFO:tasks.workunit.client.0.vm02.stdout:3/22: truncate f0 2179342 0
2026-03-10T10:19:08.232 INFO:tasks.workunit.client.0.vm02.stdout:5/17: mknod d1/c2 0
2026-03-10T10:19:08.232 INFO:tasks.workunit.client.0.vm02.stdout:4/10: getdents d1/d2 0
2026-03-10T10:19:08.232 INFO:tasks.workunit.client.0.vm02.stdout:5/18: rename f0 to d1/f3 0
2026-03-10T10:19:08.232 INFO:tasks.workunit.client.0.vm02.stdout:5/19: rename d1 to d1/d4 22
2026-03-10T10:19:08.232 INFO:tasks.workunit.client.0.vm02.stdout:4/11: mkdir d1/d2/d5 0
2026-03-10T10:19:08.232 INFO:tasks.workunit.client.0.vm02.stdout:4/12: write d1/d2/f4 [336566,56802] 0
2026-03-10T10:19:08.232 INFO:tasks.workunit.client.0.vm02.stdout:4/13: creat d1/d2/d5/f6 x:0 0 0
2026-03-10T10:19:08.232 INFO:tasks.workunit.client.0.vm02.stdout:5/20: dwrite d1/f3 [4194304,4194304] 0
2026-03-10T10:19:08.232 INFO:tasks.workunit.client.0.vm02.stdout:4/14: stat d1/d2 0
2026-03-10T10:19:08.232 INFO:tasks.workunit.client.0.vm02.stdout:4/15: chown f0 42211 1
2026-03-10T10:19:08.233 INFO:tasks.workunit.client.0.vm02.stdout:5/21: creat d1/f5 x:0 0 0
2026-03-10T10:19:08.234 INFO:tasks.workunit.client.0.vm02.stdout:5/22: stat d1/f3 0
2026-03-10T10:19:08.237 INFO:tasks.workunit.client.0.vm02.stdout:4/16: symlink d1/l7 0
2026-03-10T10:19:08.240 INFO:tasks.workunit.client.0.vm02.stdout:4/17: dread d1/d2/f4 [0,4194304] 0
2026-03-10T10:19:08.240 INFO:tasks.workunit.client.0.vm02.stdout:4/18: readlink d1/l7 0
2026-03-10T10:19:08.241 INFO:tasks.workunit.client.0.vm02.stdout:4/19: dread - d1/d2/d5/f6 zero size
2026-03-10T10:19:08.241 INFO:tasks.workunit.client.0.vm02.stdout:4/20: stat d1 0
2026-03-10T10:19:08.242 INFO:tasks.workunit.client.0.vm02.stdout:4/21: creat d1/d2/d5/f8 x:0 0 0
2026-03-10T10:19:08.246 INFO:tasks.workunit.client.0.vm02.stdout:4/22: dwrite d1/d2/f4 [0,4194304] 0
2026-03-10T10:19:08.284 INFO:tasks.workunit.client.0.vm02.stdout:4/23: rename d1/l7 to d1/l9 0
2026-03-10T10:19:08.337 INFO:tasks.workunit.client.0.vm02.stdout:1/30: fdatasync f0 0
2026-03-10T10:19:08.338 INFO:tasks.workunit.client.0.vm02.stdout:1/31: truncate d4/da/fc 595231 0
2026-03-10T10:19:08.338 INFO:tasks.workunit.client.0.vm02.stdout:1/32: chown d4/f8 0 1
2026-03-10T10:19:08.339 INFO:tasks.workunit.client.0.vm02.stdout:1/33: chown f3 2552 1
2026-03-10T10:19:08.339 INFO:tasks.workunit.client.0.vm02.stdout:1/34: write d4/f8 [3020858,70314] 0
2026-03-10T10:19:08.344 INFO:tasks.workunit.client.0.vm02.stdout:1/35: dwrite d4/f5 [0,4194304] 0
2026-03-10T10:19:08.346 INFO:tasks.workunit.client.0.vm02.stdout:1/36: write d4/da/fc [1317453,29744] 0
2026-03-10T10:19:08.356 INFO:tasks.workunit.client.0.vm02.stdout:3/23: sync
2026-03-10T10:19:08.357 INFO:tasks.workunit.client.0.vm02.stdout:3/24: write d1/f3 [775944,41423] 0
2026-03-10T10:19:08.359 INFO:tasks.workunit.client.0.vm02.stdout:3/25: mkdir d1/d6 0
2026-03-10T10:19:08.360 INFO:tasks.workunit.client.0.vm02.stdout:3/26: rename d1/l4 to d1/l7 0
2026-03-10T10:19:08.362 INFO:tasks.workunit.client.0.vm02.stdout:3/27: mkdir d1/d8 0
2026-03-10T10:19:08.363 INFO:tasks.workunit.client.0.vm02.stdout:3/28: creat d1/d8/f9 x:0 0 0
2026-03-10T10:19:08.402 INFO:tasks.workunit.client.0.vm02.stdout:4/24: fsync d1/d2/f4 0
2026-03-10T10:19:08.403 INFO:tasks.workunit.client.0.vm02.stdout:4/25: mknod d1/ca 0
2026-03-10T10:19:08.404 INFO:tasks.workunit.client.0.vm02.stdout:4/26: truncate d1/d2/d5/f8 482104 0
2026-03-10T10:19:08.407 INFO:tasks.workunit.client.0.vm02.stdout:4/27: dread f0 [0,4194304] 0
2026-03-10T10:19:08.408 INFO:tasks.workunit.client.0.vm02.stdout:4/28: chown d1/d2/d5/f6 501 1
2026-03-10T10:19:08.410 INFO:tasks.workunit.client.0.vm02.stdout:4/29: dread d1/d2/d5/f8 [0,4194304] 0
2026-03-10T10:19:08.413 INFO:tasks.workunit.client.0.vm02.stdout:4/30: mkdir d1/d2/d5/db 0
2026-03-10T10:19:08.416 INFO:tasks.workunit.client.0.vm02.stdout:4/31: dread f0 [0,4194304] 0
2026-03-10T10:19:08.419 INFO:tasks.workunit.client.0.vm02.stdout:4/32: dread d1/d2/f4 [0,4194304] 0
2026-03-10T10:19:08.426 INFO:tasks.workunit.client.0.vm02.stdout:4/33: dwrite d1/d2/d5/f8 [0,4194304] 0
2026-03-10T10:19:08.524 INFO:tasks.workunit.client.0.vm02.stdout:2/56: truncate d0/f9 1333050 0
2026-03-10T10:19:08.525 INFO:tasks.workunit.client.0.vm02.stdout:2/57: write d0/f8 [1901526,75568] 0
2026-03-10T10:19:08.525 INFO:tasks.workunit.client.0.vm02.stdout:2/58: stat d0/l3 0
2026-03-10T10:19:08.526 INFO:tasks.workunit.client.0.vm02.stdout:2/59: read d0/f8 [64168,33015] 0
2026-03-10T10:19:08.530 INFO:tasks.workunit.client.0.vm02.stdout:2/60: chown d0/cd 4 1
2026-03-10T10:19:08.548 INFO:tasks.workunit.client.0.vm02.stdout:9/49: rmdir da 39
2026-03-10T10:19:08.549 INFO:tasks.workunit.client.0.vm02.stdout:9/50: unlink c8 0
2026-03-10T10:19:08.551 INFO:tasks.workunit.client.0.vm02.stdout:9/51: creat da/f15 x:0 0 0
2026-03-10T10:19:08.551 INFO:tasks.workunit.client.0.vm02.stdout:9/52: readlink da/lc 0
2026-03-10T10:19:08.551 INFO:tasks.workunit.client.1.vm05.stdout:cc -DNO_XFS -I/home/ubuntu/cephtest/mnt.1/client.1/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress -D_LARGEFILE64_SOURCE -D_GNU_SOURCE -I../../../../include -I../../../../include -L../../../../lib fsstress.c -o fsstress
2026-03-10T10:19:08.552 INFO:tasks.workunit.client.0.vm02.stdout:9/53: chown c4 407310287 1
2026-03-10T10:19:08.554 INFO:tasks.workunit.client.0.vm02.stdout:9/54: rename l6 to da/l16 0
2026-03-10T10:19:08.556 INFO:tasks.workunit.client.0.vm02.stdout:9/55: getdents da/d10 0
2026-03-10T10:19:08.556 INFO:tasks.workunit.client.0.vm02.stdout:9/56: chown c4 927800 1
2026-03-10T10:19:08.557 INFO:tasks.workunit.client.0.vm02.stdout:9/57: creat da/d10/f17 x:0 0 0
2026-03-10T10:19:08.576 INFO:tasks.workunit.client.0.vm02.stdout:0/59: truncate f2 1757860 0
2026-03-10T10:19:08.577 INFO:tasks.workunit.client.0.vm02.stdout:0/60: mknod d9/ce 0
2026-03-10T10:19:08.577 INFO:tasks.workunit.client.0.vm02.stdout:0/61: stat f2 0
2026-03-10T10:19:08.584 INFO:tasks.workunit.client.0.vm02.stdout:1/37: truncate f0 252766 0
2026-03-10T10:19:08.588 INFO:tasks.workunit.client.0.vm02.stdout:7/25: dwrite d1/d2/f3 [0,4194304] 0
2026-03-10T10:19:08.589 INFO:tasks.workunit.client.0.vm02.stdout:9/58: sync
2026-03-10T10:19:08.593 INFO:tasks.workunit.client.0.vm02.stdout:9/59: dwrite da/f15 [0,4194304] 0
2026-03-10T10:19:08.598 INFO:tasks.workunit.client.0.vm02.stdout:7/26: getdents d1 0
2026-03-10T10:19:08.598 INFO:tasks.workunit.client.0.vm02.stdout:7/27: dread - d1/f5 zero size
2026-03-10T10:19:08.600 INFO:tasks.workunit.client.0.vm02.stdout:7/28: creat d1/f7 x:0 0 0
2026-03-10T10:19:08.603 INFO:tasks.workunit.client.0.vm02.stdout:8/36: write d1/f4 [4421838,82598] 0
2026-03-10T10:19:08.605 INFO:tasks.workunit.client.0.vm02.stdout:8/37: getdents d1 0
2026-03-10T10:19:08.606 INFO:tasks.workunit.client.0.vm02.stdout:8/38: creat d1/f8 x:0 0 0
2026-03-10T10:19:08.607 INFO:tasks.workunit.client.0.vm02.stdout:8/39: mknod d1/d2/d6/c9 0
2026-03-10T10:19:08.610 INFO:tasks.workunit.client.0.vm02.stdout:6/25: truncate d0/f2 4009769 0
2026-03-10T10:19:08.610 INFO:tasks.workunit.client.0.vm02.stdout:8/40: dread d1/d2/f3 [0,4194304] 0
2026-03-10T10:19:08.612 INFO:tasks.workunit.client.0.vm02.stdout:8/41: write d1/d2/f3 [4961751,89159] 0
2026-03-10T10:19:08.613 INFO:tasks.workunit.client.0.vm02.stdout:8/42: write d1/f8 [94332,85886] 0
2026-03-10T10:19:08.627 INFO:tasks.workunit.client.0.vm02.stdout:8/43: dread d1/d2/f3 [0,4194304] 0
2026-03-10T10:19:08.631 INFO:tasks.workunit.client.0.vm02.stdout:5/23: getdents d1 0
2026-03-10T10:19:08.642 INFO:tasks.workunit.client.0.vm02.stdout:3/29: readlink d1/l7 0
2026-03-10T10:19:08.642 INFO:tasks.workunit.client.0.vm02.stdout:3/30: write d1/f5 [356276,80358] 0
2026-03-10T10:19:08.651 INFO:tasks.workunit.client.0.vm02.stdout:4/34: dwrite d1/d2/d5/f8 [4194304,4194304] 0
2026-03-10T10:19:08.655 INFO:tasks.workunit.client.0.vm02.stdout:2/61: truncate d0/f9 1908518 0
2026-03-10T10:19:08.658 INFO:tasks.workunit.client.0.vm02.stdout:5/24: sync
2026-03-10T10:19:08.665 INFO:tasks.workunit.client.0.vm02.stdout:9/60: rmdir da/d10 39
2026-03-10T10:19:08.666 INFO:tasks.workunit.client.0.vm02.stdout:0/62: write f2 [1246091,73926] 0
2026-03-10T10:19:08.666 INFO:tasks.workunit.client.0.vm02.stdout:0/63: dread - d9/fc zero size
2026-03-10T10:19:08.667 INFO:tasks.workunit.client.0.vm02.stdout:0/64: chown d9/cd 602 1
2026-03-10T10:19:08.676 INFO:tasks.workunit.client.0.vm02.stdout:1/38: rename f0 to d4/ff 0
2026-03-10T10:19:08.677 INFO:tasks.workunit.client.0.vm02.stdout:1/39: chown d4/da/lb 62163157 1
2026-03-10T10:19:08.677 INFO:tasks.workunit.client.0.vm02.stdout:1/40: truncate d4/fe 277559 0
2026-03-10T10:19:08.678 INFO:tasks.workunit.client.0.vm02.stdout:1/41: write d4/da/fc [1978503,107578] 0
2026-03-10T10:19:08.707 INFO:tasks.workunit.client.0.vm02.stdout:6/26: dwrite d0/f2 [0,4194304] 0
2026-03-10T10:19:08.746 INFO:tasks.workunit.client.0.vm02.stdout:5/25: rmdir d1 39
2026-03-10T10:19:08.747 INFO:tasks.workunit.client.0.vm02.stdout:0/65: mknod d9/cf 0
2026-03-10T10:19:08.747 INFO:tasks.workunit.client.0.vm02.stdout:0/66: read - d9/fc zero size
2026-03-10T10:19:08.750 INFO:tasks.workunit.client.0.vm02.stdout:2/62: dwrite d0/f8 [0,4194304] 0
2026-03-10T10:19:08.752 INFO:tasks.workunit.client.0.vm02.stdout:2/63: readlink d0/l3 0
2026-03-10T10:19:08.752 INFO:tasks.workunit.client.0.vm02.stdout:8/44: rename d1/c7 to d1/d2/ca 0
2026-03-10T10:19:08.752 INFO:tasks.workunit.client.0.vm02.stdout:2/64: readlink d0/l5 0
2026-03-10T10:19:08.753 INFO:tasks.workunit.client.0.vm02.stdout:4/35: rename d1 to d1/d2/d5/db/dc 22
2026-03-10T10:19:08.753 INFO:tasks.workunit.client.0.vm02.stdout:4/36: read d1/d2/f4 [3856539,103391] 0
2026-03-10T10:19:08.759 INFO:tasks.workunit.client.0.vm02.stdout:8/45: dwrite d1/f4 [0,4194304] 0
2026-03-10T10:19:08.759 INFO:tasks.workunit.client.0.vm02.stdout:8/46: fsync d1/f8 0
2026-03-10T10:19:08.760 INFO:tasks.workunit.client.0.vm02.stdout:8/47: chown d1/f8 28 1
2026-03-10T10:19:08.771 INFO:tasks.workunit.client.0.vm02.stdout:1/42: mknod d4/da/c10 0
2026-03-10T10:19:08.773 INFO:tasks.workunit.client.0.vm02.stdout:7/29: chown d1/l6 6 1
2026-03-10T10:19:08.777 INFO:tasks.workunit.client.0.vm02.stdout:5/26: write d1/f3 [6470693,98040] 0
2026-03-10T10:19:08.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:08 vm02.local ceph-mon[50200]: pgmap v143: 65 pgs: 65 active+clean; 211 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.9 MiB/s rd, 4.9 MiB/s wr, 429 op/s
2026-03-10T10:19:08.796 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:08 vm05.local ceph-mon[59051]: pgmap v143: 65 pgs: 65 active+clean; 211 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.9 MiB/s rd, 4.9 MiB/s wr, 429 op/s
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:0/67: creat d9/f10 x:0 0 0
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:2/65: symlink d0/d10/l13 0
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:8/48: mknod d1/cb 0
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:1/43: creat d4/da/f11 x:0 0 0
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:1/44: fsync d4/da/fc 0
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:1/45: fsync d4/da/fc 0
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:1/46: read d4/da/fc [11178,57109] 0
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:1/47: read d4/da/fc [2022737,10793] 0
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:7/30: stat d1/l4 0
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:6/27: mkdir d0/d8/d9 0
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:5/27: symlink d1/l6 0
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:9/61: creat da/f18 x:0 0 0
2026-03-10T10:19:08.796 INFO:tasks.workunit.client.0.vm02.stdout:5/28: dwrite d1/f5 [0,4194304] 0
2026-03-10T10:19:08.797 INFO:tasks.workunit.client.0.vm02.stdout:2/66: creat d0/d10/f14 x:0 0 0
2026-03-10T10:19:08.808 INFO:tasks.workunit.client.0.vm02.stdout:8/49: dwrite d1/d2/f3 [4194304,4194304] 0
2026-03-10T10:19:08.809 INFO:tasks.workunit.client.0.vm02.stdout:8/50: truncate d1/f4 4612694 0
2026-03-10T10:19:08.815 INFO:tasks.workunit.client.0.vm02.stdout:8/51: dwrite d1/f8 [0,4194304] 0
2026-03-10T10:19:08.828 INFO:tasks.workunit.client.0.vm02.stdout:0/68: unlink l5 0
2026-03-10T10:19:08.829 INFO:tasks.workunit.client.0.vm02.stdout:0/69: truncate d9/fc 803243 0
2026-03-10T10:19:08.829 INFO:tasks.workunit.client.0.vm02.stdout:0/70: stat l7 0
2026-03-10T10:19:08.829 INFO:tasks.workunit.client.0.vm02.stdout:0/71: fsync d9/fc 0
2026-03-10T10:19:08.834 INFO:tasks.workunit.client.0.vm02.stdout:6/28: sync
2026-03-10T10:19:08.834 INFO:tasks.workunit.client.0.vm02.stdout:5/29: mknod d1/c7 0
2026-03-10T10:19:08.834 INFO:tasks.workunit.client.0.vm02.stdout:2/67: symlink d0/d10/l15 0
2026-03-10T10:19:08.837 INFO:tasks.workunit.client.0.vm02.stdout:6/29: chown d0/d8 211731180 1
2026-03-10T10:19:08.838 INFO:tasks.workunit.client.0.vm02.stdout:8/52: unlink d1/f8 0
2026-03-10T10:19:08.839 INFO:tasks.workunit.client.0.vm02.stdout:7/31: mknod d1/c8 0
2026-03-10T10:19:08.839 INFO:tasks.workunit.client.0.vm02.stdout:9/62: rmdir da/d10 39
2026-03-10T10:19:08.839 INFO:tasks.workunit.client.0.vm02.stdout:6/30: write d0/f2 [3649257,128169] 0
2026-03-10T10:19:08.841 INFO:tasks.workunit.client.0.vm02.stdout:4/37: getdents d1/d2/d5 0
2026-03-10T10:19:08.841 INFO:tasks.workunit.client.0.vm02.stdout:4/38: chown d1/d2/d5/f6 997389426 1
2026-03-10T10:19:08.842 INFO:tasks.workunit.client.0.vm02.stdout:2/68: rename d0/l3 to d0/l16 0
2026-03-10T10:19:08.847 INFO:tasks.workunit.client.0.vm02.stdout:7/32: read d1/d2/f3 [2199037,15384] 0
2026-03-10T10:19:08.851 INFO:tasks.workunit.client.0.vm02.stdout:0/72: dwrite f2 [0,4194304] 0
2026-03-10T10:19:08.851 INFO:tasks.workunit.client.0.vm02.stdout:7/33: write d1/f7 [529440,112043] 0
2026-03-10T10:19:08.851 INFO:tasks.workunit.client.0.vm02.stdout:8/53: dwrite d1/d2/f3 [0,4194304] 0
2026-03-10T10:19:08.853 INFO:tasks.workunit.client.0.vm02.stdout:6/31: symlink d0/d8/la 0
2026-03-10T10:19:08.854 INFO:tasks.workunit.client.0.vm02.stdout:0/73: dread f2 [0,4194304] 0
2026-03-10T10:19:08.855 INFO:tasks.workunit.client.0.vm02.stdout:0/74: dread - d9/f10 zero size
2026-03-10T10:19:08.855 INFO:tasks.workunit.client.0.vm02.stdout:0/75: fsync d9/fc 0
2026-03-10T10:19:08.859 INFO:tasks.workunit.client.0.vm02.stdout:6/32: dwrite d0/f2 [0,4194304] 0
2026-03-10T10:19:08.859 INFO:tasks.workunit.client.0.vm02.stdout:6/33: write d0/f2 [2188405,80058] 0
2026-03-10T10:19:08.883 INFO:tasks.workunit.client.0.vm02.stdout:7/34: truncate d1/f5 17232 0
2026-03-10T10:19:08.884 INFO:tasks.workunit.client.0.vm02.stdout:8/54: mknod d1/d2/d6/cc 0
2026-03-10T10:19:08.888 INFO:tasks.workunit.client.0.vm02.stdout:8/55: dread d1/f4 [0,4194304] 0
2026-03-10T10:19:08.888 INFO:tasks.workunit.client.0.vm02.stdout:8/56: rename d1 to d1/d2/d6/dd 22
2026-03-10T10:19:08.889 INFO:tasks.workunit.client.0.vm02.stdout:8/57: write d1/d2/f3 [9241197,123887] 0
2026-03-10T10:19:08.892 INFO:tasks.workunit.client.0.vm02.stdout:8/58: dread d1/f4 [0,4194304] 0
2026-03-10T10:19:08.892 INFO:tasks.workunit.client.0.vm02.stdout:8/59: stat d1/f4 0
2026-03-10T10:19:08.893 INFO:tasks.workunit.client.0.vm02.stdout:8/60: chown d1/f4 192218894 1
2026-03-10T10:19:08.895 INFO:tasks.workunit.client.0.vm02.stdout:8/61: dread d1/d2/f3 [4194304,4194304] 0
2026-03-10T10:19:08.896 INFO:tasks.workunit.client.0.vm02.stdout:0/76: chown l4 85 1
2026-03-10T10:19:08.897 INFO:tasks.workunit.client.0.vm02.stdout:0/77: fdatasync d9/f10 0
2026-03-10T10:19:08.900 INFO:tasks.workunit.client.0.vm02.stdout:4/39: link d1/d2/f4 d1/fd 0
2026-03-10T10:19:08.902 INFO:tasks.workunit.client.0.vm02.stdout:7/35: fsync d1/d2/f3 0
2026-03-10T10:19:08.905 INFO:tasks.workunit.client.0.vm02.stdout:8/62: rename d1/d2/d6/c9 to d1/ce 0
2026-03-10T10:19:08.908 INFO:tasks.workunit.client.0.vm02.stdout:4/40: mkdir d1/d2/de 0
2026-03-10T10:19:08.912 INFO:tasks.workunit.client.0.vm02.stdout:4/41: dwrite d1/d2/d5/f8 [4194304,4194304] 0
2026-03-10T10:19:08.919 INFO:tasks.workunit.client.0.vm02.stdout:0/78: rename l4 to d9/l11 0
2026-03-10T10:19:08.919 INFO:tasks.workunit.client.0.vm02.stdout:6/34: link d0/c4 d0/cb 0
2026-03-10T10:19:08.919 INFO:tasks.workunit.client.0.vm02.stdout:9/63: getdents da/d10 0
2026-03-10T10:19:08.919 INFO:tasks.workunit.client.0.vm02.stdout:4/42: creat d1/ff x:0 0 0
2026-03-10T10:19:08.920 INFO:tasks.workunit.client.0.vm02.stdout:8/63: dwrite d1/f4 [4194304,4194304] 0
2026-03-10T10:19:08.921 INFO:tasks.workunit.client.0.vm02.stdout:4/43: read - d1/ff zero size
2026-03-10T10:19:08.924 INFO:tasks.workunit.client.0.vm02.stdout:9/64: creat da/d10/f19 x:0 0 0
2026-03-10T10:19:08.926 INFO:tasks.workunit.client.0.vm02.stdout:9/65: fdatasync da/f14 0
2026-03-10T10:19:08.928 INFO:tasks.workunit.client.0.vm02.stdout:9/66: read - da/f18 zero size
2026-03-10T10:19:08.930 INFO:tasks.workunit.client.0.vm02.stdout:8/64: dwrite d1/f4 [4194304,4194304] 0
2026-03-10T10:19:08.935 INFO:tasks.workunit.client.0.vm02.stdout:4/44: rename d1/d2/d5 to d1/d10 0
2026-03-10T10:19:08.936 INFO:tasks.workunit.client.0.vm02.stdout:7/36: link d1/l6 d1/d2/l9 0
2026-03-10T10:19:08.941 INFO:tasks.workunit.client.0.vm02.stdout:9/67: mknod da/d10/c1a 0
2026-03-10T10:19:08.943 INFO:tasks.workunit.client.0.vm02.stdout:9/68: write da/f15 [1721055,7578] 0
2026-03-10T10:19:08.943 INFO:tasks.workunit.client.0.vm02.stdout:4/45: dwrite d1/d10/f8 [4194304,4194304] 0
2026-03-10T10:19:08.946 INFO:tasks.workunit.client.0.vm02.stdout:9/69: read - da/d10/f19 zero size
2026-03-10T10:19:08.948 INFO:tasks.workunit.client.0.vm02.stdout:9/70: dread - da/f13 zero size
2026-03-10T10:19:08.953 INFO:tasks.workunit.client.0.vm02.stdout:0/79: link d9/ce d9/c12 0
2026-03-10T10:19:08.959 INFO:tasks.workunit.client.0.vm02.stdout:9/71: dwrite da/d10/f17 [0,4194304] 0
2026-03-10T10:19:08.961 INFO:tasks.workunit.client.0.vm02.stdout:9/72: dread da/fb [4194304,4194304] 0
2026-03-10T10:19:08.966 INFO:tasks.workunit.client.0.vm02.stdout:3/31: link f0 d1/d6/fa 0
2026-03-10T10:19:08.966 INFO:tasks.workunit.client.0.vm02.stdout:3/32: chown d1 0 1
2026-03-10T10:19:08.967 INFO:tasks.workunit.client.0.vm02.stdout:1/48: rmdir d4 39
2026-03-10T10:19:08.969 INFO:tasks.workunit.client.0.vm02.stdout:3/33: dread d1/f5 [0,4194304] 0
2026-03-10T10:19:08.969 INFO:tasks.workunit.client.0.vm02.stdout:3/34: dread - d1/d8/f9 zero size
2026-03-10T10:19:08.975 INFO:tasks.workunit.client.0.vm02.stdout:4/46: readlink d1/l9 0
2026-03-10T10:19:08.983 INFO:tasks.workunit.client.0.vm02.stdout:4/47: dwrite d1/d10/f8 [4194304,4194304] 0
2026-03-10T10:19:08.987 INFO:tasks.workunit.client.0.vm02.stdout:2/69: truncate d0/f9 1769920 0
2026-03-10T10:19:08.993 INFO:tasks.workunit.client.0.vm02.stdout:9/73: sync
2026-03-10T10:19:08.999 INFO:tasks.workunit.client.0.vm02.stdout:2/70: dread d0/fe [0,4194304] 0
2026-03-10T10:19:09.008 INFO:tasks.workunit.client.0.vm02.stdout:1/49: rmdir d4/da 39
2026-03-10T10:19:09.008 INFO:tasks.workunit.client.0.vm02.stdout:1/50: read d4/f8 [2057320,18088] 0
2026-03-10T10:19:09.010 INFO:tasks.workunit.client.0.vm02.stdout:3/35: rename d1/d6/fa to d1/d8/fb 0
2026-03-10T10:19:09.012 INFO:tasks.workunit.client.0.vm02.stdout:0/80: link d9/lb d9/l13 0
2026-03-10T10:19:09.020 INFO:tasks.workunit.client.0.vm02.stdout:0/81: unlink d9/f10 0
2026-03-10T10:19:09.021 INFO:tasks.workunit.client.0.vm02.stdout:9/74: creat da/f1b x:0 0 0
2026-03-10T10:19:09.022 INFO:tasks.workunit.client.0.vm02.stdout:9/75: write da/f1b [994365,94945] 0
2026-03-10T10:19:09.027 INFO:tasks.workunit.client.0.vm02.stdout:2/71: link d0/d10/l15 d0/l17 0
2026-03-10T10:19:09.029 INFO:tasks.workunit.client.0.vm02.stdout:9/76: symlink da/d10/l1c 0
2026-03-10T10:19:09.033 INFO:tasks.workunit.client.0.vm02.stdout:2/72: symlink d0/db/l18 0
2026-03-10T10:19:09.051 INFO:tasks.workunit.client.0.vm02.stdout:5/30: dwrite d1/f5 [4194304,4194304] 0
2026-03-10T10:19:09.066 INFO:tasks.workunit.client.0.vm02.stdout:5/31: dwrite d1/f3 [8388608,4194304] 0
2026-03-10T10:19:09.069 INFO:tasks.workunit.client.0.vm02.stdout:6/35: fsync d0/f2 0
2026-03-10T10:19:09.076 INFO:tasks.workunit.client.0.vm02.stdout:6/36: dread d0/f2 [0,4194304] 0
2026-03-10T10:19:09.088 INFO:tasks.workunit.client.0.vm02.stdout:6/37: unlink d0/d8/la 0
2026-03-10T10:19:09.094 INFO:tasks.workunit.client.0.vm02.stdout:6/38: symlink d0/d7/lc 0
2026-03-10T10:19:09.099 INFO:tasks.workunit.client.0.vm02.stdout:6/39: dwrite d0/f2 [4194304,4194304] 0
2026-03-10T10:19:09.110 INFO:tasks.workunit.client.0.vm02.stdout:6/40: dread d0/f2 [0,4194304] 0
2026-03-10T10:19:09.128 INFO:tasks.workunit.client.0.vm02.stdout:8/65: chown d1/ce 0 1
2026-03-10T10:19:09.128 INFO:tasks.workunit.client.0.vm02.stdout:6/41: symlink d0/d8/ld 0
2026-03-10T10:19:09.133 INFO:tasks.workunit.client.0.vm02.stdout:6/42: unlink d0/d7/lc 0
2026-03-10T10:19:09.133 INFO:tasks.workunit.client.0.vm02.stdout:6/43: chown d0/d8/ld 0 1
2026-03-10T10:19:09.135 INFO:tasks.workunit.client.0.vm02.stdout:7/37: rename d1/d2 to d1/da 0
2026-03-10T10:19:09.138 INFO:tasks.workunit.client.0.vm02.stdout:4/48: rmdir d1 39
2026-03-10T10:19:09.139 INFO:tasks.workunit.client.0.vm02.stdout:7/38: mkdir d1/da/db 0
2026-03-10T10:19:09.144 INFO:tasks.workunit.client.0.vm02.stdout:6/44: link d0/l5 d0/d8/d9/le 0
2026-03-10T10:19:09.146 INFO:tasks.workunit.client.0.vm02.stdout:6/45: creat d0/d7/ff x:0 0 0
2026-03-10T10:19:09.150 INFO:tasks.workunit.client.0.vm02.stdout:6/46: dwrite d0/d7/ff [0,4194304] 0
2026-03-10T10:19:09.173 INFO:tasks.workunit.client.0.vm02.stdout:9/77: getdents da 0
2026-03-10T10:19:09.178 INFO:tasks.workunit.client.0.vm02.stdout:1/51: dwrite d4/ff [0,4194304] 0
2026-03-10T10:19:09.180 INFO:tasks.workunit.client.0.vm02.stdout:9/78: dwrite da/f18 [0,4194304] 0
2026-03-10T10:19:09.196 INFO:tasks.workunit.client.0.vm02.stdout:9/79: rmdir da 39
2026-03-10T10:19:09.197 INFO:tasks.workunit.client.0.vm02.stdout:1/52: truncate d4/f8 493008 0
2026-03-10T10:19:09.202 INFO:tasks.workunit.client.0.vm02.stdout:9/80: creat da/d10/f1d x:0 0 0
2026-03-10T10:19:09.205 INFO:tasks.workunit.client.0.vm02.stdout:9/81: unlink da/f18 0
2026-03-10T10:19:09.209 INFO:tasks.workunit.client.0.vm02.stdout:2/73: dread d0/f9 [0,4194304] 0
2026-03-10T10:19:09.220 INFO:tasks.workunit.client.1.vm05.stderr:++ readlink -f fsstress
2026-03-10T10:19:09.222 INFO:tasks.workunit.client.1.vm05.stderr:+ BIN=/home/ubuntu/cephtest/mnt.1/client.1/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress
2026-03-10T10:19:09.222 INFO:tasks.workunit.client.1.vm05.stderr:+ popd
2026-03-10T10:19:09.223 INFO:tasks.workunit.client.1.vm05.stdout:~/cephtest/mnt.1/client.1/tmp/fsstress ~/cephtest/mnt.1/client.1/tmp
2026-03-10T10:19:09.223 INFO:tasks.workunit.client.1.vm05.stderr:+ popd
2026-03-10T10:19:09.224 INFO:tasks.workunit.client.1.vm05.stdout:~/cephtest/mnt.1/client.1/tmp
2026-03-10T10:19:09.224 INFO:tasks.workunit.client.1.vm05.stderr:++ mktemp -d -p .
2026-03-10T10:19:09.227 INFO:tasks.workunit.client.1.vm05.stderr:+ T=./tmp.FLwvv0QMyN
2026-03-10T10:19:09.227 INFO:tasks.workunit.client.1.vm05.stderr:+ /home/ubuntu/cephtest/mnt.1/client.1/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress -d ./tmp.FLwvv0QMyN -l 1 -n 1000 -p 10 -v
2026-03-10T10:19:09.230 INFO:tasks.workunit.client.1.vm05.stdout:seed = 1772910096
2026-03-10T10:19:09.235 INFO:tasks.workunit.client.1.vm05.stdout:8/0: chown . 1917896 1
2026-03-10T10:19:09.236 INFO:tasks.workunit.client.1.vm05.stdout:8/1: dwrite - no filename
2026-03-10T10:19:09.236 INFO:tasks.workunit.client.1.vm05.stdout:8/2: rename - no filename
2026-03-10T10:19:09.236 INFO:tasks.workunit.client.1.vm05.stdout:8/3: truncate - no filename
2026-03-10T10:19:09.236 INFO:tasks.workunit.client.1.vm05.stdout:8/4: chown . 64689568 1
2026-03-10T10:19:09.236 INFO:tasks.workunit.client.1.vm05.stdout:8/5: dwrite - no filename
2026-03-10T10:19:09.236 INFO:tasks.workunit.client.1.vm05.stdout:8/6: write - no filename
2026-03-10T10:19:09.241 INFO:tasks.workunit.client.1.vm05.stdout:0/0: getdents . 0
2026-03-10T10:19:09.241 INFO:tasks.workunit.client.1.vm05.stdout:0/1: dwrite - no filename
2026-03-10T10:19:09.246 INFO:tasks.workunit.client.1.vm05.stdout:7/0: stat - no entries
2026-03-10T10:19:09.246 INFO:tasks.workunit.client.1.vm05.stdout:7/1: readlink - no filename
2026-03-10T10:19:09.246 INFO:tasks.workunit.client.1.vm05.stdout:7/2: fdatasync - no filename
2026-03-10T10:19:09.246 INFO:tasks.workunit.client.1.vm05.stdout:7/3: rename - no filename
2026-03-10T10:19:09.247 INFO:tasks.workunit.client.1.vm05.stdout:0/2: creat f0 x:0 0 0
2026-03-10T10:19:09.247 INFO:tasks.workunit.client.1.vm05.stdout:9/0: getdents . 0
2026-03-10T10:19:09.247 INFO:tasks.workunit.client.1.vm05.stdout:9/1: dwrite - no filename
2026-03-10T10:19:09.247 INFO:tasks.workunit.client.1.vm05.stdout:9/2: unlink - no file
2026-03-10T10:19:09.248 INFO:tasks.workunit.client.1.vm05.stdout:0/3: write f0 [433036,9615] 0
2026-03-10T10:19:09.251 INFO:tasks.workunit.client.1.vm05.stdout:7/4: mknod c0 0
2026-03-10T10:19:09.251 INFO:tasks.workunit.client.1.vm05.stdout:7/5: write - no filename
2026-03-10T10:19:09.251 INFO:tasks.workunit.client.0.vm02.stdout:1/53: sync
2026-03-10T10:19:09.252 INFO:tasks.workunit.client.0.vm02.stdout:2/74: sync
2026-03-10T10:19:09.256 INFO:tasks.workunit.client.1.vm05.stdout:9/3: mkdir d0 0
2026-03-10T10:19:09.256 INFO:tasks.workunit.client.1.vm05.stdout:9/4: read - no filename
2026-03-10T10:19:09.257 INFO:tasks.workunit.client.1.vm05.stdout:9/5: unlink - no file
2026-03-10T10:19:09.257 INFO:tasks.workunit.client.0.vm02.stdout:5/32: truncate d1/f5 6641936 0
2026-03-10T10:19:09.260 INFO:tasks.workunit.client.1.vm05.stdout:4/0: rename - no filename
2026-03-10T10:19:09.260 INFO:tasks.workunit.client.1.vm05.stdout:4/1: dwrite - no filename
2026-03-10T10:19:09.260 INFO:tasks.workunit.client.1.vm05.stdout:4/2: stat - no entries
2026-03-10T10:19:09.260 INFO:tasks.workunit.client.1.vm05.stdout:4/3: dwrite - no filename
2026-03-10T10:19:09.262 INFO:tasks.workunit.client.0.vm02.stdout:2/75: truncate d0/fe 1168142 0
2026-03-10T10:19:09.263 INFO:tasks.workunit.client.0.vm02.stdout:5/33: symlink d1/l8 0
2026-03-10T10:19:09.263 INFO:tasks.workunit.client.0.vm02.stdout:5/34: readlink d1/l6 0
2026-03-10T10:19:09.265 INFO:tasks.workunit.client.0.vm02.stdout:1/54: creat d4/da/f12 x:0 0 0
2026-03-10T10:19:09.268 INFO:tasks.workunit.client.0.vm02.stdout:2/76: creat d0/d10/f19 x:0 0 0
2026-03-10T10:19:09.268 INFO:tasks.workunit.client.0.vm02.stdout:2/77: write d0/f8 [1083753,110934] 0
2026-03-10T10:19:09.272 INFO:tasks.workunit.client.0.vm02.stdout:1/55: sync
2026-03-10T10:19:09.273 INFO:tasks.workunit.client.0.vm02.stdout:1/56: dread d4/fe [0,4194304] 0
2026-03-10T10:19:09.279 INFO:tasks.workunit.client.1.vm05.stdout:6/0: symlink l0 0
2026-03-10T10:19:09.279 INFO:tasks.workunit.client.0.vm02.stdout:8/66: truncate d1/f4 2013811 0
2026-03-10T10:19:09.284 INFO:tasks.workunit.client.1.vm05.stdout:9/6: mkdir d0/d1 0
2026-03-10T10:19:09.284 INFO:tasks.workunit.client.1.vm05.stdout:9/7: truncate - no filename
2026-03-10T10:19:09.284 INFO:tasks.workunit.client.1.vm05.stdout:9/8: dwrite - no filename
2026-03-10T10:19:09.285 INFO:tasks.workunit.client.1.vm05.stdout:4/4: getdents . 0
2026-03-10T10:19:09.285 INFO:tasks.workunit.client.1.vm05.stdout:4/5: stat - no entries
2026-03-10T10:19:09.285 INFO:tasks.workunit.client.1.vm05.stdout:4/6: fdatasync - no filename
2026-03-10T10:19:09.285 INFO:tasks.workunit.client.0.vm02.stdout:1/57: creat d4/da/f13 x:0 0 0
2026-03-10T10:19:09.286 INFO:tasks.workunit.client.1.vm05.stdout:4/7: chown . 101901718 1
2026-03-10T10:19:09.286 INFO:tasks.workunit.client.1.vm05.stdout:4/8: rmdir - no directory
2026-03-10T10:19:09.286 INFO:tasks.workunit.client.1.vm05.stdout:4/9: dread - no filename
2026-03-10T10:19:09.286 INFO:tasks.workunit.client.1.vm05.stdout:4/10: write - no filename
2026-03-10T10:19:09.286 INFO:tasks.workunit.client.1.vm05.stdout:4/11: read - no filename
2026-03-10T10:19:09.286 INFO:tasks.workunit.client.1.vm05.stdout:4/12: dwrite - no filename
2026-03-10T10:19:09.287 INFO:tasks.workunit.client.0.vm02.stdout:1/58: truncate d4/da/f12 423203 0
2026-03-10T10:19:09.288 INFO:tasks.workunit.client.0.vm02.stdout:7/39: rename d1/da to d1/dc 0
2026-03-10T10:19:09.293 INFO:tasks.workunit.client.0.vm02.stdout:1/59: dread d4/f5 [0,4194304] 0
2026-03-10T10:19:09.295 INFO:tasks.workunit.client.0.vm02.stdout:4/49: dwrite f0 [0,4194304] 0
2026-03-10T10:19:09.296 INFO:tasks.workunit.client.0.vm02.stdout:5/35: link d1/c2 d1/c9 0
2026-03-10T10:19:09.296 INFO:tasks.workunit.client.1.vm05.stdout:5/0: symlink l0 0
2026-03-10T10:19:09.296 INFO:tasks.workunit.client.1.vm05.stdout:5/1: dread - no filename
2026-03-10T10:19:09.296 INFO:tasks.workunit.client.1.vm05.stdout:5/2: readlink l0 0
2026-03-10T10:19:09.296 INFO:tasks.workunit.client.1.vm05.stdout:5/3: dread - no filename
2026-03-10T10:19:09.296 INFO:tasks.workunit.client.1.vm05.stdout:5/4: dread - no filename
2026-03-10T10:19:09.296 INFO:tasks.workunit.client.1.vm05.stdout:5/5: stat l0 0
2026-03-10T10:19:09.296 INFO:tasks.workunit.client.1.vm05.stdout:5/6: read - no filename
2026-03-10T10:19:09.296 INFO:tasks.workunit.client.1.vm05.stdout:5/7: write - no filename
2026-03-10T10:19:09.297 INFO:tasks.workunit.client.0.vm02.stdout:6/47: fsync d0/d7/ff 0
2026-03-10T10:19:09.297 INFO:tasks.workunit.client.1.vm05.stdout:5/8: chown l0 87407 1
2026-03-10T10:19:09.297 INFO:tasks.workunit.client.1.vm05.stdout:6/1: creat f1 x:0 0 0
2026-03-10T10:19:09.302 INFO:tasks.workunit.client.1.vm05.stdout:2/0: fdatasync - no filename
2026-03-10T10:19:09.313 INFO:tasks.workunit.client.1.vm05.stdout:9/9: mkdir d0/d2 0
2026-03-10T10:19:09.314 INFO:tasks.workunit.client.0.vm02.stdout:2/78: dwrite d0/f9 [0,4194304] 0
2026-03-10T10:19:09.314 INFO:tasks.workunit.client.0.vm02.stdout:2/79: dwrite d0/f8 [0,4194304] 0
2026-03-10T10:19:09.323 INFO:tasks.workunit.client.1.vm05.stdout:3/0: symlink l0 0
2026-03-10T10:19:09.323 INFO:tasks.workunit.client.1.vm05.stdout:3/1: write - no filename
2026-03-10T10:19:09.323 INFO:tasks.workunit.client.1.vm05.stdout:3/2: write - no filename
2026-03-10T10:19:09.323 INFO:tasks.workunit.client.1.vm05.stdout:3/3: truncate - no filename
2026-03-10T10:19:09.323 INFO:tasks.workunit.client.1.vm05.stdout:3/4: write - no filename
2026-03-10T10:19:09.325 INFO:tasks.workunit.client.0.vm02.stdout:3/36: dwrite d1/d8/fb [0,4194304] 0
2026-03-10T10:19:09.327 INFO:tasks.workunit.client.0.vm02.stdout:1/60: mkdir d4/da/d14 0
2026-03-10T10:19:09.330 INFO:tasks.workunit.client.1.vm05.stdout:5/9: mknod c1 0
2026-03-10T10:19:09.353 INFO:tasks.workunit.client.0.vm02.stdout:4/50: rename d1/d2 to d1/d2/de/d11 22
2026-03-10T10:19:09.353 INFO:tasks.workunit.client.0.vm02.stdout:6/48: dwrite d0/f2 [4194304,4194304] 0
2026-03-10T10:19:09.353 INFO:tasks.workunit.client.0.vm02.stdout:2/80: rmdir d0 39
2026-03-10T10:19:09.353 INFO:tasks.workunit.client.1.vm05.stdout:6/2: rename f1 to f2 0
2026-03-10T10:19:09.353 INFO:tasks.workunit.client.1.vm05.stdout:9/10: creat d0/f3 x:0 0 0
2026-03-10T10:19:09.353 INFO:tasks.workunit.client.1.vm05.stdout:9/11: chown d0/d2 5634126 1
2026-03-10T10:19:09.353 INFO:tasks.workunit.client.1.vm05.stdout:9/12: dread - d0/f3 zero size
2026-03-10T10:19:09.353 INFO:tasks.workunit.client.1.vm05.stdout:4/13: getdents . 0
2026-03-10T10:19:09.354 INFO:tasks.workunit.client.1.vm05.stdout:4/14: fsync - no filename
2026-03-10T10:19:09.354 INFO:tasks.workunit.client.1.vm05.stdout:3/5: creat f1 x:0 0 0
2026-03-10T10:19:09.354 INFO:tasks.workunit.client.1.vm05.stdout:5/10: mknod c2 0
2026-03-10T10:19:09.354 INFO:tasks.workunit.client.1.vm05.stdout:5/11: rmdir - no directory
2026-03-10T10:19:09.354 INFO:tasks.workunit.client.1.vm05.stdout:5/12: dwrite - no filename
2026-03-10T10:19:09.354 INFO:tasks.workunit.client.1.vm05.stdout:9/13: dwrite d0/f3 [0,4194304] 0
2026-03-10T10:19:09.354 INFO:tasks.workunit.client.1.vm05.stdout:6/3: write f2 [531017,121083] 0
2026-03-10T10:19:09.357 INFO:tasks.workunit.client.0.vm02.stdout:3/37: creat d1/fc x:0 0 0
2026-03-10T10:19:09.357 INFO:tasks.workunit.client.1.vm05.stdout:2/1: creat f0 x:0 0 0
2026-03-10T10:19:09.357 INFO:tasks.workunit.client.0.vm02.stdout:3/38: stat d1 0
2026-03-10T10:19:09.357 INFO:tasks.workunit.client.0.vm02.stdout:3/39: write f0 [2288468,8245] 0
2026-03-10T10:19:09.357 INFO:tasks.workunit.client.1.vm05.stdout:2/2: truncate f0 449628 0
2026-03-10T10:19:09.359 INFO:tasks.workunit.client.0.vm02.stdout:9/82: fdatasync da/f1b 0
2026-03-10T10:19:09.359 INFO:tasks.workunit.client.1.vm05.stdout:3/6: creat f2 x:0 0 0
2026-03-10T10:19:09.359 INFO:tasks.workunit.client.1.vm05.stdout:3/7: dread - f1 zero size
2026-03-10T10:19:09.360 INFO:tasks.workunit.client.1.vm05.stdout:3/8: dread - f1 zero size
2026-03-10T10:19:09.361 INFO:tasks.workunit.client.1.vm05.stdout:3/9: rmdir - no directory
2026-03-10T10:19:09.363 INFO:tasks.workunit.client.0.vm02.stdout:4/51: mknod d1/c12 0
2026-03-10T10:19:09.365 INFO:tasks.workunit.client.0.vm02.stdout:4/52: dread d1/d10/f8 [4194304,4194304] 0
2026-03-10T10:19:09.365 INFO:tasks.workunit.client.0.vm02.stdout:5/36: chown d1/c9 658 1
2026-03-10T10:19:09.369 INFO:tasks.workunit.client.1.vm05.stdout:3/10: dwrite f2 [0,4194304] 0
2026-03-10T10:19:09.369 INFO:tasks.workunit.client.0.vm02.stdout:5/37: dwrite d1/f3
[0,4194304] 0 2026-03-10T10:19:09.371 INFO:tasks.workunit.client.1.vm05.stdout:3/11: dread - f1 zero size 2026-03-10T10:19:09.372 INFO:tasks.workunit.client.1.vm05.stdout:2/3: rename f0 to f1 0 2026-03-10T10:19:09.372 INFO:tasks.workunit.client.1.vm05.stdout:3/12: stat f2 0 2026-03-10T10:19:09.373 INFO:tasks.workunit.client.1.vm05.stdout:3/13: chown l0 13887 1 2026-03-10T10:19:09.381 INFO:tasks.workunit.client.0.vm02.stdout:7/40: creat d1/fd x:0 0 0 2026-03-10T10:19:09.430 INFO:tasks.workunit.client.0.vm02.stdout:7/41: chown d1/dc/db 415020 1 2026-03-10T10:19:09.430 INFO:tasks.workunit.client.0.vm02.stdout:2/81: dread - d0/d10/f14 zero size 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:9/83: rmdir da 39 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:4/53: rename d1/l9 to d1/d2/l13 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:4/54: read - d1/d10/f6 zero size 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:7/42: unlink d1/f7 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:2/82: rename d0/db to d0/d1a 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:7/43: chown d1/dc/l9 21 1 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:0/82: link d9/l11 d9/l14 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:6/49: getdents d0/d8 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:5/38: link d1/c9 d1/ca 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:2/83: fdatasync d0/fe 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:9/84: rmdir da/d10 39 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:6/50: mknod d0/d7/c10 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:5/39: mkdir d1/db 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:6/51: dwrite d0/f2 [0,4194304] 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:5/40: dwrite 
d1/f3 [0,4194304] 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.0.vm02.stdout:2/84: creat d0/f1b x:0 0 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:1/0: creat f0 x:0 0 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:9/14: creat d0/d2/f4 x:0 0 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:6/4: link f2 f3 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:2/4: write f1 [1112008,105318] 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:3/14: creat f3 x:0 0 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:1/1: creat f1 x:0 0 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:1/2: chown f1 41508800 1 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:5/13: link l0 l3 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:5/14: chown c2 4553 1 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:6/5: write f3 [302451,12611] 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:2/5: write f1 [1188653,105034] 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:1/3: mkdir d2 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:3/15: dwrite f1 [0,4194304] 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:1/4: write f1 [391540,84337] 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:5/15: rename l3 to l4 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:5/16: truncate - no filename 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:5/17: stat c1 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:5/18: dwrite - no filename 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:5/19: dread - no filename 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:2/6: dwrite f1 [0,4194304] 0 2026-03-10T10:19:09.431 INFO:tasks.workunit.client.1.vm05.stdout:3/16: write f1 
[3829840,5895] 0 2026-03-10T10:19:09.432 INFO:tasks.workunit.client.1.vm05.stdout:1/5: dwrite f0 [0,4194304] 0 2026-03-10T10:19:09.432 INFO:tasks.workunit.client.1.vm05.stdout:5/20: creat f5 x:0 0 0 2026-03-10T10:19:09.432 INFO:tasks.workunit.client.1.vm05.stdout:1/6: write f1 [686383,81743] 0 2026-03-10T10:19:09.432 INFO:tasks.workunit.client.1.vm05.stdout:3/17: mknod c4 0 2026-03-10T10:19:09.432 INFO:tasks.workunit.client.0.vm02.stdout:7/44: rmdir d1/dc/db 0 2026-03-10T10:19:09.433 INFO:tasks.workunit.client.1.vm05.stdout:5/21: dwrite f5 [0,4194304] 0 2026-03-10T10:19:09.435 INFO:tasks.workunit.client.0.vm02.stdout:5/41: rename d1/l6 to d1/lc 0 2026-03-10T10:19:09.436 INFO:tasks.workunit.client.0.vm02.stdout:7/45: dwrite d1/dc/f3 [0,4194304] 0 2026-03-10T10:19:09.436 INFO:tasks.workunit.client.0.vm02.stdout:7/46: stat d1/c8 0 2026-03-10T10:19:09.436 INFO:tasks.workunit.client.1.vm05.stdout:6/6: dwrite f2 [0,4194304] 0 2026-03-10T10:19:09.436 INFO:tasks.workunit.client.0.vm02.stdout:7/47: chown d1/dc/f3 2081072 1 2026-03-10T10:19:09.439 INFO:tasks.workunit.client.0.vm02.stdout:2/85: symlink d0/l1c 0 2026-03-10T10:19:09.463 INFO:tasks.workunit.client.1.vm05.stdout:3/18: dread f2 [0,4194304] 0 2026-03-10T10:19:09.463 INFO:tasks.workunit.client.1.vm05.stdout:6/7: mknod c4 0 2026-03-10T10:19:09.463 INFO:tasks.workunit.client.1.vm05.stdout:3/19: write f2 [2732135,1875] 0 2026-03-10T10:19:09.463 INFO:tasks.workunit.client.1.vm05.stdout:6/8: mkdir d5 0 2026-03-10T10:19:09.463 INFO:tasks.workunit.client.1.vm05.stdout:3/20: dread f1 [0,4194304] 0 2026-03-10T10:19:09.463 INFO:tasks.workunit.client.1.vm05.stdout:6/9: dread f3 [0,4194304] 0 2026-03-10T10:19:09.463 INFO:tasks.workunit.client.1.vm05.stdout:3/21: unlink c4 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/86: mknod d0/c1d 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/87: dread - d0/d10/f14 zero size 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:5/42: 
creat d1/db/fd x:0 0 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/88: symlink d0/d10/l1e 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:5/43: creat d1/fe x:0 0 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/89: creat d0/d10/f1f x:0 0 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/90: write d0/f1b [633025,24896] 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:5/44: chown d1/ca 45531855 1 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/91: creat d0/d1a/f20 x:0 0 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/92: mkdir d0/d10/d21 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/93: symlink d0/d1a/l22 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/94: dread - d0/d10/f1f zero size 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/95: write d0/f9 [2801914,87983] 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/96: stat d0/la 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/97: symlink d0/d10/l23 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.1.vm05.stdout:3/22: creat f5 x:0 0 0 2026-03-10T10:19:09.464 INFO:tasks.workunit.client.0.vm02.stdout:2/98: rmdir d0/d10/d21 0 2026-03-10T10:19:09.465 INFO:tasks.workunit.client.0.vm02.stdout:2/99: chown d0/f9 2238749 1 2026-03-10T10:19:09.465 INFO:tasks.workunit.client.0.vm02.stdout:2/100: write d0/d10/f1f [122552,37097] 0 2026-03-10T10:19:09.466 INFO:tasks.workunit.client.1.vm05.stdout:3/23: chown f3 6246 1 2026-03-10T10:19:09.466 INFO:tasks.workunit.client.1.vm05.stdout:3/24: read f1 [3134139,19562] 0 2026-03-10T10:19:09.466 INFO:tasks.workunit.client.1.vm05.stdout:6/10: rmdir d5 0 2026-03-10T10:19:09.468 INFO:tasks.workunit.client.1.vm05.stdout:6/11: write f3 [4035383,121090] 0 2026-03-10T10:19:09.468 INFO:tasks.workunit.client.1.vm05.stdout:6/12: write f3 [2155189,98281] 0 
2026-03-10T10:19:09.469 INFO:tasks.workunit.client.1.vm05.stdout:6/13: stat l0 0 2026-03-10T10:19:09.470 INFO:tasks.workunit.client.0.vm02.stdout:2/101: dwrite d0/f1b [0,4194304] 0 2026-03-10T10:19:09.470 INFO:tasks.workunit.client.0.vm02.stdout:2/102: read - d0/d10/f14 zero size 2026-03-10T10:19:09.471 INFO:tasks.workunit.client.0.vm02.stdout:2/103: read d0/f1b [3396928,71519] 0 2026-03-10T10:19:09.473 INFO:tasks.workunit.client.1.vm05.stdout:6/14: mknod c6 0 2026-03-10T10:19:09.473 INFO:tasks.workunit.client.0.vm02.stdout:2/104: mkdir d0/d1a/d24 0 2026-03-10T10:19:09.477 INFO:tasks.workunit.client.0.vm02.stdout:2/105: dwrite d0/d10/f14 [0,4194304] 0 2026-03-10T10:19:09.480 INFO:tasks.workunit.client.0.vm02.stdout:2/106: dread - d0/d1a/f20 zero size 2026-03-10T10:19:09.481 INFO:tasks.workunit.client.0.vm02.stdout:2/107: creat d0/d1a/f25 x:0 0 0 2026-03-10T10:19:09.482 INFO:tasks.workunit.client.0.vm02.stdout:2/108: write d0/d1a/f25 [941775,108312] 0 2026-03-10T10:19:09.684 INFO:tasks.workunit.client.0.vm02.stdout:7/48: sync 2026-03-10T10:19:09.684 INFO:tasks.workunit.client.0.vm02.stdout:0/83: sync 2026-03-10T10:19:09.692 INFO:tasks.workunit.client.0.vm02.stdout:7/49: symlink d1/dc/le 0 2026-03-10T10:19:09.695 INFO:tasks.workunit.client.0.vm02.stdout:0/84: unlink d9/l14 0 2026-03-10T10:19:09.696 INFO:tasks.workunit.client.0.vm02.stdout:8/67: dread d1/f4 [0,4194304] 0 2026-03-10T10:19:09.699 INFO:tasks.workunit.client.0.vm02.stdout:8/68: fdatasync d1/d2/f3 0 2026-03-10T10:19:09.700 INFO:tasks.workunit.client.0.vm02.stdout:8/69: truncate d1/f4 2069892 0 2026-03-10T10:19:09.700 INFO:tasks.workunit.client.0.vm02.stdout:8/70: chown d1/ce 422 1 2026-03-10T10:19:09.701 INFO:tasks.workunit.client.0.vm02.stdout:0/85: symlink d9/l15 0 2026-03-10T10:19:09.702 INFO:tasks.workunit.client.0.vm02.stdout:8/71: symlink d1/d2/d6/lf 0 2026-03-10T10:19:09.702 INFO:tasks.workunit.client.0.vm02.stdout:0/86: rmdir d9 39 2026-03-10T10:19:09.702 
INFO:tasks.workunit.client.0.vm02.stdout:8/72: chown d1/d2/d6/lf 169128 1 2026-03-10T10:19:09.703 INFO:tasks.workunit.client.0.vm02.stdout:8/73: chown d1/d2 713 1 2026-03-10T10:19:09.704 INFO:tasks.workunit.client.0.vm02.stdout:0/87: write f2 [269710,97192] 0 2026-03-10T10:19:09.707 INFO:tasks.workunit.client.0.vm02.stdout:8/74: dwrite d1/d2/f3 [4194304,4194304] 0 2026-03-10T10:19:09.711 INFO:tasks.workunit.client.1.vm05.stdout:8/7: sync 2026-03-10T10:19:09.718 INFO:tasks.workunit.client.1.vm05.stdout:8/8: creat f0 x:0 0 0 2026-03-10T10:19:09.718 INFO:tasks.workunit.client.1.vm05.stdout:8/9: creat f1 x:0 0 0 2026-03-10T10:19:09.718 INFO:tasks.workunit.client.0.vm02.stdout:8/75: creat d1/f10 x:0 0 0 2026-03-10T10:19:09.718 INFO:tasks.workunit.client.0.vm02.stdout:4/55: fdatasync d1/fd 0 2026-03-10T10:19:09.718 INFO:tasks.workunit.client.0.vm02.stdout:0/88: dwrite d9/fc [0,4194304] 0 2026-03-10T10:19:09.719 INFO:tasks.workunit.client.0.vm02.stdout:8/76: dwrite d1/f10 [0,4194304] 0 2026-03-10T10:19:09.721 INFO:tasks.workunit.client.1.vm05.stdout:0/4: sync 2026-03-10T10:19:09.721 INFO:tasks.workunit.client.1.vm05.stdout:7/6: sync 2026-03-10T10:19:09.721 INFO:tasks.workunit.client.1.vm05.stdout:7/7: rmdir - no directory 2026-03-10T10:19:09.721 INFO:tasks.workunit.client.1.vm05.stdout:7/8: write - no filename 2026-03-10T10:19:09.721 INFO:tasks.workunit.client.1.vm05.stdout:7/9: dread - no filename 2026-03-10T10:19:09.721 INFO:tasks.workunit.client.1.vm05.stdout:7/10: write - no filename 2026-03-10T10:19:09.721 INFO:tasks.workunit.client.1.vm05.stdout:7/11: dread - no filename 2026-03-10T10:19:09.722 INFO:tasks.workunit.client.1.vm05.stdout:7/12: dread - no filename 2026-03-10T10:19:09.722 INFO:tasks.workunit.client.1.vm05.stdout:7/13: dwrite - no filename 2026-03-10T10:19:09.722 INFO:tasks.workunit.client.1.vm05.stdout:4/15: sync 2026-03-10T10:19:09.722 INFO:tasks.workunit.client.1.vm05.stdout:4/16: fsync - no filename 2026-03-10T10:19:09.723 
INFO:tasks.workunit.client.1.vm05.stdout:8/10: write f1 [198299,4469] 0 2026-03-10T10:19:09.726 INFO:tasks.workunit.client.0.vm02.stdout:8/77: dread d1/f10 [0,4194304] 0 2026-03-10T10:19:09.729 INFO:tasks.workunit.client.0.vm02.stdout:8/78: dwrite d1/f10 [0,4194304] 0 2026-03-10T10:19:09.731 INFO:tasks.workunit.client.0.vm02.stdout:0/89: dread f2 [0,4194304] 0 2026-03-10T10:19:09.741 INFO:tasks.workunit.client.1.vm05.stdout:0/5: dwrite f0 [0,4194304] 0 2026-03-10T10:19:09.760 INFO:tasks.workunit.client.1.vm05.stdout:7/14: symlink l1 0 2026-03-10T10:19:09.760 INFO:tasks.workunit.client.0.vm02.stdout:4/56: rename f0 to d1/d2/de/f14 0 2026-03-10T10:19:09.769 INFO:tasks.workunit.client.1.vm05.stdout:8/11: creat f2 x:0 0 0 2026-03-10T10:19:09.769 INFO:tasks.workunit.client.1.vm05.stdout:0/6: mkdir d1 0 2026-03-10T10:19:09.769 INFO:tasks.workunit.client.1.vm05.stdout:4/17: creat f0 x:0 0 0 2026-03-10T10:19:09.769 INFO:tasks.workunit.client.1.vm05.stdout:0/7: chown d1 0 1 2026-03-10T10:19:09.770 INFO:tasks.workunit.client.1.vm05.stdout:4/18: truncate f0 685089 0 2026-03-10T10:19:09.772 INFO:tasks.workunit.client.0.vm02.stdout:4/57: getdents d1/d10/db 0 2026-03-10T10:19:09.778 INFO:tasks.workunit.client.1.vm05.stdout:4/19: mkdir d1 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.0.vm02.stdout:4/58: unlink d1/c12 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.0.vm02.stdout:4/59: dwrite d1/d10/f8 [4194304,4194304] 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.0.vm02.stdout:4/60: creat d1/d10/db/f15 x:0 0 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.0.vm02.stdout:4/61: chown d1/ff 1584892 1 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.0.vm02.stdout:4/62: unlink d1/ff 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:7/15: link l1 l2 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:4/20: dread f0 [0,4194304] 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:4/21: readlink - no filename 
2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:0/8: mkdir d1/d2 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:0/9: readlink - no filename 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:7/16: chown c0 275670 1 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:4/22: symlink d1/l2 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:7/17: rename l1 to l3 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:7/18: dread - no filename 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:7/19: write - no filename 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:4/23: mkdir d1/d3 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:0/10: dwrite f0 [0,4194304] 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.1.vm05.stdout:4/24: dwrite f0 [0,4194304] 0 2026-03-10T10:19:09.799 INFO:tasks.workunit.client.0.vm02.stdout:4/63: dwrite d1/d10/db/f15 [0,4194304] 0 2026-03-10T10:19:09.801 INFO:tasks.workunit.client.0.vm02.stdout:4/64: dread d1/d2/de/f14 [0,4194304] 0 2026-03-10T10:19:09.804 INFO:tasks.workunit.client.1.vm05.stdout:4/25: write f0 [243489,88215] 0 2026-03-10T10:19:09.805 INFO:tasks.workunit.client.1.vm05.stdout:4/26: write f0 [1092598,97220] 0 2026-03-10T10:19:09.805 INFO:tasks.workunit.client.1.vm05.stdout:4/27: write f0 [2761519,22127] 0 2026-03-10T10:19:09.810 INFO:tasks.workunit.client.0.vm02.stdout:8/79: sync 2026-03-10T10:19:09.810 INFO:tasks.workunit.client.1.vm05.stdout:8/12: sync 2026-03-10T10:19:09.811 INFO:tasks.workunit.client.1.vm05.stdout:8/13: write f0 [710472,16131] 0 2026-03-10T10:19:09.821 INFO:tasks.workunit.client.0.vm02.stdout:8/80: rename d1/ce to d1/d2/d6/c11 0 2026-03-10T10:19:09.821 INFO:tasks.workunit.client.0.vm02.stdout:4/65: link d1/d10/f8 d1/d10/db/f16 0 2026-03-10T10:19:09.822 INFO:tasks.workunit.client.0.vm02.stdout:4/66: write d1/d10/db/f15 [204484,56761] 0 2026-03-10T10:19:09.827 
INFO:tasks.workunit.client.0.vm02.stdout:4/67: dwrite d1/d10/f8 [0,4194304] 0 2026-03-10T10:19:09.829 INFO:tasks.workunit.client.1.vm05.stdout:8/14: dwrite f1 [0,4194304] 0 2026-03-10T10:19:09.833 INFO:tasks.workunit.client.1.vm05.stdout:4/28: dwrite f0 [0,4194304] 0 2026-03-10T10:19:09.835 INFO:tasks.workunit.client.1.vm05.stdout:8/15: chown f1 460 1 2026-03-10T10:19:09.848 INFO:tasks.workunit.client.1.vm05.stdout:8/16: dwrite f0 [0,4194304] 0 2026-03-10T10:19:09.917 INFO:tasks.workunit.client.0.vm02.stdout:7/50: dread d1/f5 [0,4194304] 0 2026-03-10T10:19:09.918 INFO:tasks.workunit.client.0.vm02.stdout:3/40: fsync f0 0 2026-03-10T10:19:09.956 INFO:tasks.workunit.client.1.vm05.stdout:5/22: fdatasync f5 0 2026-03-10T10:19:09.956 INFO:tasks.workunit.client.1.vm05.stdout:5/23: fdatasync f5 0 2026-03-10T10:19:09.958 INFO:tasks.workunit.client.0.vm02.stdout:7/51: sync 2026-03-10T10:19:09.960 INFO:tasks.workunit.client.0.vm02.stdout:7/52: creat d1/dc/ff x:0 0 0 2026-03-10T10:19:09.961 INFO:tasks.workunit.client.0.vm02.stdout:7/53: mkdir d1/dc/d10 0 2026-03-10T10:19:09.962 INFO:tasks.workunit.client.0.vm02.stdout:7/54: truncate d1/f5 641396 0 2026-03-10T10:19:09.969 INFO:tasks.workunit.client.0.vm02.stdout:7/55: dwrite d1/dc/f3 [0,4194304] 0 2026-03-10T10:19:09.975 INFO:tasks.workunit.client.0.vm02.stdout:7/56: dwrite d1/fd [0,4194304] 0 2026-03-10T10:19:09.980 INFO:tasks.workunit.client.0.vm02.stdout:7/57: symlink d1/dc/d10/l11 0 2026-03-10T10:19:09.981 INFO:tasks.workunit.client.0.vm02.stdout:7/58: mknod d1/dc/c12 0 2026-03-10T10:19:09.982 INFO:tasks.workunit.client.0.vm02.stdout:7/59: creat d1/dc/d10/f13 x:0 0 0 2026-03-10T10:19:10.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:09 vm02.local ceph-mon[50200]: pgmap v144: 65 pgs: 65 active+clean; 215 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.5 MiB/s rd, 4.4 MiB/s wr, 360 op/s 2026-03-10T10:19:10.075 INFO:tasks.workunit.client.1.vm05.stdout:8/17: fdatasync f0 0 2026-03-10T10:19:10.106 
INFO:tasks.workunit.client.1.vm05.stdout:0/11: fsync f0 0 2026-03-10T10:19:10.112 INFO:tasks.workunit.client.1.vm05.stdout:0/12: mkdir d1/d2/d3 0 2026-03-10T10:19:10.113 INFO:tasks.workunit.client.1.vm05.stdout:0/13: creat d1/d2/d3/f4 x:0 0 0 2026-03-10T10:19:10.114 INFO:tasks.workunit.client.1.vm05.stdout:0/14: symlink d1/d2/d3/l5 0 2026-03-10T10:19:10.118 INFO:tasks.workunit.client.1.vm05.stdout:0/15: dwrite f0 [0,4194304] 0 2026-03-10T10:19:10.131 INFO:tasks.workunit.client.0.vm02.stdout:9/85: dread da/f1b [0,4194304] 0 2026-03-10T10:19:10.132 INFO:tasks.workunit.client.0.vm02.stdout:9/86: rmdir da/d10 39 2026-03-10T10:19:10.133 INFO:tasks.workunit.client.0.vm02.stdout:9/87: chown da/l16 113 1 2026-03-10T10:19:10.133 INFO:tasks.workunit.client.0.vm02.stdout:9/88: write da/fb [6766321,55434] 0 2026-03-10T10:19:10.135 INFO:tasks.workunit.client.0.vm02.stdout:9/89: rmdir da/d10 39 2026-03-10T10:19:10.141 INFO:tasks.workunit.client.0.vm02.stdout:9/90: creat da/f1e x:0 0 0 2026-03-10T10:19:10.152 INFO:tasks.workunit.client.0.vm02.stdout:9/91: truncate da/f1e 559494 0 2026-03-10T10:19:10.152 INFO:tasks.workunit.client.0.vm02.stdout:9/92: dwrite da/ff [0,4194304] 0 2026-03-10T10:19:10.152 INFO:tasks.workunit.client.0.vm02.stdout:9/93: stat da/lc 0 2026-03-10T10:19:10.152 INFO:tasks.workunit.client.0.vm02.stdout:9/94: dread - da/f13 zero size 2026-03-10T10:19:10.259 INFO:tasks.workunit.client.0.vm02.stdout:1/61: rmdir d4/da 39 2026-03-10T10:19:10.259 INFO:tasks.workunit.client.0.vm02.stdout:1/62: rename d4 to d4/d15 22 2026-03-10T10:19:10.268 INFO:tasks.workunit.client.1.vm05.stdout:9/15: getdents d0/d2 0 2026-03-10T10:19:10.269 INFO:tasks.workunit.client.1.vm05.stdout:9/16: stat d0 0 2026-03-10T10:19:10.272 INFO:tasks.workunit.client.0.vm02.stdout:4/68: readlink d1/d2/l13 0 2026-03-10T10:19:10.275 INFO:tasks.workunit.client.1.vm05.stdout:9/17: creat d0/d1/f5 x:0 0 0 2026-03-10T10:19:10.275 INFO:tasks.workunit.client.1.vm05.stdout:9/18: write d0/f3 [4718625,68793] 0 
2026-03-10T10:19:10.275 INFO:tasks.workunit.client.0.vm02.stdout:4/69: rmdir d1/d2 39 2026-03-10T10:19:10.278 INFO:tasks.workunit.client.0.vm02.stdout:2/109: dread d0/d10/f1f [0,4194304] 0 2026-03-10T10:19:10.278 INFO:tasks.workunit.client.0.vm02.stdout:2/110: fsync d0/d10/f14 0 2026-03-10T10:19:10.283 INFO:tasks.workunit.client.1.vm05.stdout:9/19: rmdir d0/d2 39 2026-03-10T10:19:10.287 INFO:tasks.workunit.client.0.vm02.stdout:4/70: readlink d1/d2/l13 0 2026-03-10T10:19:10.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:09 vm05.local ceph-mon[59051]: pgmap v144: 65 pgs: 65 active+clean; 215 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.5 MiB/s rd, 4.4 MiB/s wr, 360 op/s 2026-03-10T10:19:10.289 INFO:tasks.workunit.client.1.vm05.stdout:1/7: write f0 [5203642,121795] 0 2026-03-10T10:19:10.290 INFO:tasks.workunit.client.0.vm02.stdout:2/111: unlink d0/d10/l15 0 2026-03-10T10:19:10.292 INFO:tasks.workunit.client.0.vm02.stdout:4/71: symlink d1/d2/de/l17 0 2026-03-10T10:19:10.293 INFO:tasks.workunit.client.1.vm05.stdout:9/20: symlink d0/d2/l6 0 2026-03-10T10:19:10.293 INFO:tasks.workunit.client.1.vm05.stdout:2/7: write f1 [4240189,29213] 0 2026-03-10T10:19:10.295 INFO:tasks.workunit.client.1.vm05.stdout:9/21: write d0/f3 [5635284,39543] 0 2026-03-10T10:19:10.298 INFO:tasks.workunit.client.0.vm02.stdout:2/112: readlink d0/d1a/l12 0 2026-03-10T10:19:10.299 INFO:tasks.workunit.client.0.vm02.stdout:2/113: dread d0/d10/f1f [0,4194304] 0 2026-03-10T10:19:10.302 INFO:tasks.workunit.client.0.vm02.stdout:4/72: mknod d1/d10/c18 0 2026-03-10T10:19:10.304 INFO:tasks.workunit.client.1.vm05.stdout:5/24: write f5 [4960600,39142] 0 2026-03-10T10:19:10.305 INFO:tasks.workunit.client.1.vm05.stdout:2/8: dwrite f1 [4194304,4194304] 0 2026-03-10T10:19:10.305 INFO:tasks.workunit.client.0.vm02.stdout:6/52: truncate d0/f2 3005622 0 2026-03-10T10:19:10.305 INFO:tasks.workunit.client.0.vm02.stdout:6/53: chown d0 67884 1 2026-03-10T10:19:10.306 
INFO:tasks.workunit.client.1.vm05.stdout:2/9: write f1 [7056652,15810] 0 2026-03-10T10:19:10.306 INFO:tasks.workunit.client.1.vm05.stdout:2/10: rmdir - no directory 2026-03-10T10:19:10.306 INFO:tasks.workunit.client.1.vm05.stdout:2/11: rmdir - no directory 2026-03-10T10:19:10.306 INFO:tasks.workunit.client.1.vm05.stdout:2/12: stat f1 0 2026-03-10T10:19:10.307 INFO:tasks.workunit.client.0.vm02.stdout:2/114: creat d0/d1a/f26 x:0 0 0 2026-03-10T10:19:10.307 INFO:tasks.workunit.client.0.vm02.stdout:2/115: write d0/d1a/f25 [1817686,99222] 0 2026-03-10T10:19:10.316 INFO:tasks.workunit.client.0.vm02.stdout:4/73: mknod d1/d2/de/c19 0 2026-03-10T10:19:10.318 INFO:tasks.workunit.client.0.vm02.stdout:4/74: dread d1/fd [0,4194304] 0 2026-03-10T10:19:10.319 INFO:tasks.workunit.client.0.vm02.stdout:4/75: dread - d1/d10/f6 zero size 2026-03-10T10:19:10.319 INFO:tasks.workunit.client.0.vm02.stdout:4/76: chown d1/d10/c18 1 1 2026-03-10T10:19:10.321 INFO:tasks.workunit.client.1.vm05.stdout:5/25: mknod c6 0 2026-03-10T10:19:10.322 INFO:tasks.workunit.client.0.vm02.stdout:6/54: mknod d0/d7/c11 0 2026-03-10T10:19:10.324 INFO:tasks.workunit.client.1.vm05.stdout:1/8: rmdir d2 0 2026-03-10T10:19:10.325 INFO:tasks.workunit.client.0.vm02.stdout:2/116: symlink d0/l27 0 2026-03-10T10:19:10.325 INFO:tasks.workunit.client.0.vm02.stdout:2/117: fdatasync d0/d1a/f26 0 2026-03-10T10:19:10.326 INFO:tasks.workunit.client.1.vm05.stdout:5/26: dread f5 [0,4194304] 0 2026-03-10T10:19:10.329 INFO:tasks.workunit.client.0.vm02.stdout:4/77: mkdir d1/d2/d1a 0 2026-03-10T10:19:10.329 INFO:tasks.workunit.client.0.vm02.stdout:4/78: fdatasync d1/d10/f6 0 2026-03-10T10:19:10.329 INFO:tasks.workunit.client.1.vm05.stdout:6/15: dwrite f2 [4194304,4194304] 0 2026-03-10T10:19:10.333 INFO:tasks.workunit.client.0.vm02.stdout:4/79: dwrite d1/d10/db/f15 [0,4194304] 0 2026-03-10T10:19:10.341 INFO:tasks.workunit.client.1.vm05.stdout:3/25: truncate f2 1405983 0 2026-03-10T10:19:10.353 
INFO:tasks.workunit.client.1.vm05.stdout:6/16: symlink l7 0 2026-03-10T10:19:10.353 INFO:tasks.workunit.client.1.vm05.stdout:6/17: write f2 [2214091,68428] 0 2026-03-10T10:19:10.353 INFO:tasks.workunit.client.1.vm05.stdout:6/18: chown f2 38925 1 2026-03-10T10:19:10.353 INFO:tasks.workunit.client.0.vm02.stdout:5/45: link d1/lc d1/lf 0 2026-03-10T10:19:10.353 INFO:tasks.workunit.client.0.vm02.stdout:5/46: write d1/f3 [7643814,113711] 0 2026-03-10T10:19:10.353 INFO:tasks.workunit.client.0.vm02.stdout:4/80: creat d1/d2/de/f1b x:0 0 0 2026-03-10T10:19:10.353 INFO:tasks.workunit.client.0.vm02.stdout:4/81: write d1/d2/de/f1b [736820,40378] 0 2026-03-10T10:19:10.353 INFO:tasks.workunit.client.0.vm02.stdout:2/118: rename d0/l17 to d0/d10/l28 0 2026-03-10T10:19:10.354 INFO:tasks.workunit.client.1.vm05.stdout:9/22: sync 2026-03-10T10:19:10.354 INFO:tasks.workunit.client.1.vm05.stdout:2/13: sync 2026-03-10T10:19:10.356 INFO:tasks.workunit.client.1.vm05.stdout:9/23: write d0/f3 [5437404,64545] 0 2026-03-10T10:19:10.359 INFO:tasks.workunit.client.1.vm05.stdout:2/14: creat f2 x:0 0 0 2026-03-10T10:19:10.362 INFO:tasks.workunit.client.1.vm05.stdout:2/15: write f2 [917999,55000] 0 2026-03-10T10:19:10.363 INFO:tasks.workunit.client.1.vm05.stdout:2/16: chown f1 0 1 2026-03-10T10:19:10.363 INFO:tasks.workunit.client.1.vm05.stdout:2/17: truncate f1 8663027 0 2026-03-10T10:19:10.364 INFO:tasks.workunit.client.1.vm05.stdout:2/18: write f1 [5948679,92502] 0 2026-03-10T10:19:10.364 INFO:tasks.workunit.client.1.vm05.stdout:2/19: write f2 [826266,22604] 0 2026-03-10T10:19:10.377 INFO:tasks.workunit.client.1.vm05.stdout:2/20: creat f3 x:0 0 0 2026-03-10T10:19:10.412 INFO:tasks.workunit.client.0.vm02.stdout:4/82: sync 2026-03-10T10:19:10.473 INFO:tasks.workunit.client.0.vm02.stdout:0/90: getdents d9 0 2026-03-10T10:19:10.476 INFO:tasks.workunit.client.0.vm02.stdout:0/91: dread f2 [0,4194304] 0 2026-03-10T10:19:10.476 INFO:tasks.workunit.client.0.vm02.stdout:8/81: fsync d1/f10 0 
2026-03-10T10:19:10.477 INFO:tasks.workunit.client.1.vm05.stdout:8/18: fsync f1 0 2026-03-10T10:19:10.478 INFO:tasks.workunit.client.0.vm02.stdout:8/82: write d1/f10 [4854354,94791] 0 2026-03-10T10:19:10.482 INFO:tasks.workunit.client.0.vm02.stdout:0/92: mknod d9/c16 0 2026-03-10T10:19:10.483 INFO:tasks.workunit.client.1.vm05.stdout:8/19: rename f1 to f3 0 2026-03-10T10:19:10.486 INFO:tasks.workunit.client.0.vm02.stdout:0/93: stat d9/la 0 2026-03-10T10:19:10.487 INFO:tasks.workunit.client.0.vm02.stdout:4/83: write d1/d10/f8 [6620966,109592] 0 2026-03-10T10:19:10.491 INFO:tasks.workunit.client.1.vm05.stdout:1/9: dread f1 [0,4194304] 0 2026-03-10T10:19:10.495 INFO:tasks.workunit.client.0.vm02.stdout:4/84: mknod d1/d2/d1a/c1c 0 2026-03-10T10:19:10.495 INFO:tasks.workunit.client.0.vm02.stdout:0/94: creat d9/f17 x:0 0 0 2026-03-10T10:19:10.495 INFO:tasks.workunit.client.1.vm05.stdout:1/10: rename f1 to f3 0 2026-03-10T10:19:10.495 INFO:tasks.workunit.client.1.vm05.stdout:1/11: chown f3 13226 1 2026-03-10T10:19:10.495 INFO:tasks.workunit.client.1.vm05.stdout:1/12: chown f0 798592 1 2026-03-10T10:19:10.495 INFO:tasks.workunit.client.1.vm05.stdout:1/13: read f3 [128246,124435] 0 2026-03-10T10:19:10.496 INFO:tasks.workunit.client.1.vm05.stdout:7/20: unlink l3 0 2026-03-10T10:19:10.496 INFO:tasks.workunit.client.0.vm02.stdout:8/83: sync 2026-03-10T10:19:10.498 INFO:tasks.workunit.client.0.vm02.stdout:0/95: write d9/fc [4556585,44468] 0 2026-03-10T10:19:10.501 INFO:tasks.workunit.client.0.vm02.stdout:0/96: write d9/fc [4619147,50585] 0 2026-03-10T10:19:10.501 INFO:tasks.workunit.client.0.vm02.stdout:0/97: write d9/fc [1815054,61634] 0 2026-03-10T10:19:10.507 INFO:tasks.workunit.client.1.vm05.stdout:1/14: mkdir d4 0 2026-03-10T10:19:10.518 INFO:tasks.workunit.client.1.vm05.stdout:1/15: creat d4/f5 x:0 0 0 2026-03-10T10:19:10.519 INFO:tasks.workunit.client.1.vm05.stdout:7/21: link l2 l4 0 2026-03-10T10:19:10.519 INFO:tasks.workunit.client.1.vm05.stdout:7/22: dread - no filename 
2026-03-10T10:19:10.519 INFO:tasks.workunit.client.1.vm05.stdout:7/23: dwrite - no filename
2026-03-10T10:19:10.519 INFO:tasks.workunit.client.1.vm05.stdout:7/24: read - no filename
2026-03-10T10:19:10.519 INFO:tasks.workunit.client.1.vm05.stdout:7/25: truncate - no filename
2026-03-10T10:19:10.522 INFO:tasks.workunit.client.1.vm05.stdout:1/16: symlink d4/l6 0
2026-03-10T10:19:10.523 INFO:tasks.workunit.client.1.vm05.stdout:4/29: truncate f0 598134 0
2026-03-10T10:19:10.523 INFO:tasks.workunit.client.0.vm02.stdout:4/85: link d1/d10/f6 d1/f1d 0
2026-03-10T10:19:10.525 INFO:tasks.workunit.client.1.vm05.stdout:1/17: dread f0 [0,4194304] 0
2026-03-10T10:19:10.527 INFO:tasks.workunit.client.0.vm02.stdout:3/41: dwrite d1/f5 [0,4194304] 0
2026-03-10T10:19:10.528 INFO:tasks.workunit.client.0.vm02.stdout:0/98: mkdir d9/d18 0
2026-03-10T10:19:10.533 INFO:tasks.workunit.client.0.vm02.stdout:4/86: creat d1/d10/db/f1e x:0 0 0
2026-03-10T10:19:10.533 INFO:tasks.workunit.client.0.vm02.stdout:4/87: stat d1/d10/db/f15 0
2026-03-10T10:19:10.535 INFO:tasks.workunit.client.1.vm05.stdout:7/26: chown l2 534330 1
2026-03-10T10:19:10.536 INFO:tasks.workunit.client.0.vm02.stdout:3/42: mknod d1/cd 0
2026-03-10T10:19:10.537 INFO:tasks.workunit.client.0.vm02.stdout:0/99: stat d9/ce 0
2026-03-10T10:19:10.539 INFO:tasks.workunit.client.0.vm02.stdout:3/43: dwrite d1/f3 [0,4194304] 0
2026-03-10T10:19:10.542 INFO:tasks.workunit.client.0.vm02.stdout:8/84: getdents d1/d2/d6 0
2026-03-10T10:19:10.553 INFO:tasks.workunit.client.1.vm05.stdout:7/27: mkdir d5 0
2026-03-10T10:19:10.553 INFO:tasks.workunit.client.1.vm05.stdout:7/28: mknod d5/c6 0
2026-03-10T10:19:10.553 INFO:tasks.workunit.client.0.vm02.stdout:3/44: dwrite d1/d8/fb [0,4194304] 0
2026-03-10T10:19:10.553 INFO:tasks.workunit.client.0.vm02.stdout:8/85: rename d1/f4 to d1/f12 0
2026-03-10T10:19:10.553 INFO:tasks.workunit.client.0.vm02.stdout:8/86: truncate d1/f12 2319424 0
2026-03-10T10:19:10.553 INFO:tasks.workunit.client.0.vm02.stdout:0/100: unlink d9/c12 0
2026-03-10T10:19:10.553 INFO:tasks.workunit.client.0.vm02.stdout:3/45: creat d1/fe x:0 0 0
2026-03-10T10:19:10.554 INFO:tasks.workunit.client.0.vm02.stdout:4/88: sync
2026-03-10T10:19:10.556 INFO:tasks.workunit.client.1.vm05.stdout:7/29: readlink l4 0
2026-03-10T10:19:10.557 INFO:tasks.workunit.client.0.vm02.stdout:8/87: mknod d1/c13 0
2026-03-10T10:19:10.558 INFO:tasks.workunit.client.0.vm02.stdout:3/46: dwrite d1/d8/f9 [0,4194304] 0
2026-03-10T10:19:10.563 INFO:tasks.workunit.client.0.vm02.stdout:4/89: rename d1/d10/c18 to d1/d2/c1f 0
2026-03-10T10:19:10.566 INFO:tasks.workunit.client.0.vm02.stdout:8/88: dwrite d1/d2/f3 [0,4194304] 0
2026-03-10T10:19:10.581 INFO:tasks.workunit.client.1.vm05.stdout:7/30: chown l4 126867962 1
2026-03-10T10:19:10.581 INFO:tasks.workunit.client.0.vm02.stdout:3/47: mknod d1/d6/cf 0
2026-03-10T10:19:10.582 INFO:tasks.workunit.client.0.vm02.stdout:0/101: mknod d9/c19 0
2026-03-10T10:19:10.583 INFO:tasks.workunit.client.0.vm02.stdout:3/48: dread d1/d8/f9 [0,4194304] 0
2026-03-10T10:19:10.584 INFO:tasks.workunit.client.1.vm05.stdout:7/31: mknod d5/c7 0
2026-03-10T10:19:10.584 INFO:tasks.workunit.client.1.vm05.stdout:7/32: dread - no filename
2026-03-10T10:19:10.584 INFO:tasks.workunit.client.0.vm02.stdout:4/90: creat d1/d10/db/f20 x:0 0 0
2026-03-10T10:19:10.585 INFO:tasks.workunit.client.0.vm02.stdout:8/89: creat d1/d2/d6/f14 x:0 0 0
2026-03-10T10:19:10.586 INFO:tasks.workunit.client.0.vm02.stdout:3/49: symlink d1/d8/l10 0
2026-03-10T10:19:10.586 INFO:tasks.workunit.client.0.vm02.stdout:3/50: chown d1/f5 46360 1
2026-03-10T10:19:10.588 INFO:tasks.workunit.client.0.vm02.stdout:0/102: mkdir d9/d18/d1a 0
2026-03-10T10:19:10.592 INFO:tasks.workunit.client.0.vm02.stdout:3/51: dwrite d1/d8/fb [0,4194304] 0
2026-03-10T10:19:10.593 INFO:tasks.workunit.client.0.vm02.stdout:0/103: unlink d9/cd 0
2026-03-10T10:19:10.595 INFO:tasks.workunit.client.0.vm02.stdout:0/104: creat d9/f1b x:0 0 0
2026-03-10T10:19:10.597 INFO:tasks.workunit.client.0.vm02.stdout:0/105: rename d9/l15 to d9/l1c 0
2026-03-10T10:19:10.599 INFO:tasks.workunit.client.0.vm02.stdout:0/106: dread f2 [0,4194304] 0
2026-03-10T10:19:10.601 INFO:tasks.workunit.client.0.vm02.stdout:0/107: unlink d9/l11 0
2026-03-10T10:19:10.602 INFO:tasks.workunit.client.0.vm02.stdout:0/108: unlink d9/cf 0
2026-03-10T10:19:10.603 INFO:tasks.workunit.client.0.vm02.stdout:0/109: rename d9/la to d9/d18/d1a/l1d 0
2026-03-10T10:19:10.630 INFO:tasks.workunit.client.0.vm02.stdout:0/110: write d9/fc [1039592,62507] 0
2026-03-10T10:19:10.631 INFO:tasks.workunit.client.0.vm02.stdout:0/111: dwrite f2 [0,4194304] 0
2026-03-10T10:19:10.631 INFO:tasks.workunit.client.0.vm02.stdout:0/112: dread - d9/f1b zero size
2026-03-10T10:19:10.631 INFO:tasks.workunit.client.0.vm02.stdout:0/113: dwrite d9/f17 [0,4194304] 0
2026-03-10T10:19:10.631 INFO:tasks.workunit.client.0.vm02.stdout:0/114: chown d9/f17 25673 1
2026-03-10T10:19:10.631 INFO:tasks.workunit.client.0.vm02.stdout:0/115: creat d9/d18/f1e x:0 0 0
2026-03-10T10:19:10.631 INFO:tasks.workunit.client.0.vm02.stdout:0/116: unlink d9/c19 0
2026-03-10T10:19:10.631 INFO:tasks.workunit.client.0.vm02.stdout:4/91: sync
2026-03-10T10:19:10.632 INFO:tasks.workunit.client.0.vm02.stdout:4/92: write d1/d2/de/f1b [1671429,98364] 0
2026-03-10T10:19:10.637 INFO:tasks.workunit.client.0.vm02.stdout:4/93: dwrite d1/d10/db/f1e [0,4194304] 0
2026-03-10T10:19:10.638 INFO:tasks.workunit.client.0.vm02.stdout:4/94: dread - d1/f1d zero size
2026-03-10T10:19:10.642 INFO:tasks.workunit.client.1.vm05.stdout:7/33: sync
2026-03-10T10:19:10.642 INFO:tasks.workunit.client.1.vm05.stdout:7/34: write - no filename
2026-03-10T10:19:10.642 INFO:tasks.workunit.client.1.vm05.stdout:7/35: dwrite - no filename
2026-03-10T10:19:10.642 INFO:tasks.workunit.client.0.vm02.stdout:4/95: dread - d1/d10/f6 zero size
2026-03-10T10:19:10.654 INFO:tasks.workunit.client.1.vm05.stdout:7/36: mknod d5/c8 0
2026-03-10T10:19:10.654 INFO:tasks.workunit.client.1.vm05.stdout:7/37: dwrite - no filename
2026-03-10T10:19:10.655 INFO:tasks.workunit.client.1.vm05.stdout:7/38: chown c0 1 1
2026-03-10T10:19:10.655 INFO:tasks.workunit.client.1.vm05.stdout:7/39: write - no filename
2026-03-10T10:19:10.655 INFO:tasks.workunit.client.1.vm05.stdout:7/40: dwrite - no filename
2026-03-10T10:19:10.657 INFO:tasks.workunit.client.1.vm05.stdout:7/41: symlink d5/l9 0
2026-03-10T10:19:10.657 INFO:tasks.workunit.client.1.vm05.stdout:7/42: dwrite - no filename
2026-03-10T10:19:10.657 INFO:tasks.workunit.client.1.vm05.stdout:7/43: stat d5/l9 0
2026-03-10T10:19:10.658 INFO:tasks.workunit.client.1.vm05.stdout:7/44: chown d5 2 1
2026-03-10T10:19:10.658 INFO:tasks.workunit.client.1.vm05.stdout:7/45: truncate - no filename
2026-03-10T10:19:10.658 INFO:tasks.workunit.client.1.vm05.stdout:7/46: dread - no filename
2026-03-10T10:19:10.658 INFO:tasks.workunit.client.1.vm05.stdout:7/47: fdatasync - no filename
2026-03-10T10:19:10.660 INFO:tasks.workunit.client.1.vm05.stdout:7/48: creat d5/fa x:0 0 0
2026-03-10T10:19:10.663 INFO:tasks.workunit.client.1.vm05.stdout:7/49: creat d5/fb x:0 0 0
2026-03-10T10:19:10.665 INFO:tasks.workunit.client.1.vm05.stdout:7/50: mknod d5/cc 0
2026-03-10T10:19:10.666 INFO:tasks.workunit.client.1.vm05.stdout:7/51: dread - d5/fa zero size
2026-03-10T10:19:10.667 INFO:tasks.workunit.client.1.vm05.stdout:7/52: mkdir d5/dd 0
2026-03-10T10:19:10.667 INFO:tasks.workunit.client.1.vm05.stdout:7/53: chown c0 133644 1
2026-03-10T10:19:10.668 INFO:tasks.workunit.client.1.vm05.stdout:7/54: creat d5/fe x:0 0 0
2026-03-10T10:19:10.669 INFO:tasks.workunit.client.1.vm05.stdout:7/55: creat d5/ff x:0 0 0
2026-03-10T10:19:10.670 INFO:tasks.workunit.client.1.vm05.stdout:7/56: truncate d5/fb 586836 0
2026-03-10T10:19:10.671 INFO:tasks.workunit.client.1.vm05.stdout:7/57: truncate d5/fa 479691 0
2026-03-10T10:19:10.686 INFO:tasks.workunit.client.1.vm05.stdout:7/58: sync
2026-03-10T10:19:10.687 INFO:tasks.workunit.client.1.vm05.stdout:7/59: dread - d5/fe zero size
2026-03-10T10:19:10.695 INFO:tasks.workunit.client.1.vm05.stdout:7/60: symlink d5/dd/l10 0
2026-03-10T10:19:10.695 INFO:tasks.workunit.client.1.vm05.stdout:7/61: write d5/fe [183403,7013] 0
2026-03-10T10:19:10.696 INFO:tasks.workunit.client.0.vm02.stdout:0/117: chown d9/l13 22 1
2026-03-10T10:19:10.696 INFO:tasks.workunit.client.1.vm05.stdout:7/62: write d5/fe [316707,40198] 0
2026-03-10T10:19:10.696 INFO:tasks.workunit.client.0.vm02.stdout:0/118: read - d9/f1b zero size
2026-03-10T10:19:10.697 INFO:tasks.workunit.client.1.vm05.stdout:7/63: write d5/fb [313137,68193] 0
2026-03-10T10:19:10.702 INFO:tasks.workunit.client.0.vm02.stdout:0/119: dwrite d9/f1b [0,4194304] 0
2026-03-10T10:19:10.706 INFO:tasks.workunit.client.1.vm05.stdout:7/64: symlink d5/l11 0
2026-03-10T10:19:10.709 INFO:tasks.workunit.client.0.vm02.stdout:0/120: chown d9/l13 24218072 1
2026-03-10T10:19:10.710 INFO:tasks.workunit.client.1.vm05.stdout:7/65: dwrite d5/fb [0,4194304] 0
2026-03-10T10:19:10.710 INFO:tasks.workunit.client.0.vm02.stdout:0/121: read d9/f17 [1991154,51707] 0
2026-03-10T10:19:10.714 INFO:tasks.workunit.client.1.vm05.stdout:7/66: creat d5/dd/f12 x:0 0 0
2026-03-10T10:19:10.714 INFO:tasks.workunit.client.0.vm02.stdout:4/96: dwrite d1/f1d [0,4194304] 0
2026-03-10T10:19:10.716 INFO:tasks.workunit.client.0.vm02.stdout:0/122: creat d9/d18/d1a/f1f x:0 0 0
2026-03-10T10:19:10.727 INFO:tasks.workunit.client.1.vm05.stdout:7/67: dwrite d5/fb [0,4194304] 0
2026-03-10T10:19:10.732 INFO:tasks.workunit.client.0.vm02.stdout:4/97: write d1/fd [1961537,120269] 0
2026-03-10T10:19:10.748 INFO:tasks.workunit.client.1.vm05.stdout:7/68: dwrite d5/fb [0,4194304] 0
2026-03-10T10:19:10.748 INFO:tasks.workunit.client.1.vm05.stdout:7/69: rename d5/fb to d5/f13 0
2026-03-10T10:19:10.748 INFO:tasks.workunit.client.1.vm05.stdout:7/70: dwrite d5/f13 [0,4194304] 0
2026-03-10T10:19:10.748 INFO:tasks.workunit.client.0.vm02.stdout:4/98: symlink d1/d2/de/l21 0
2026-03-10T10:19:10.748 INFO:tasks.workunit.client.0.vm02.stdout:4/99: rename d1/d2/de/c19 to d1/d10/db/c22 0
2026-03-10T10:19:10.749 INFO:tasks.workunit.client.0.vm02.stdout:4/100: write d1/d2/de/f14 [3444610,6924] 0
2026-03-10T10:19:10.749 INFO:tasks.workunit.client.0.vm02.stdout:4/101: rename d1/ca to d1/c23 0
2026-03-10T10:19:10.753 INFO:tasks.workunit.client.1.vm05.stdout:7/71: write d5/f13 [707825,118112] 0
2026-03-10T10:19:10.754 INFO:tasks.workunit.client.1.vm05.stdout:7/72: truncate d5/fe 1306253 0
2026-03-10T10:19:10.754 INFO:tasks.workunit.client.1.vm05.stdout:7/73: write d5/fa [802486,2400] 0
2026-03-10T10:19:10.759 INFO:tasks.workunit.client.0.vm02.stdout:0/123: sync
2026-03-10T10:19:10.788 INFO:tasks.workunit.client.0.vm02.stdout:7/60: rmdir d1/dc 39
2026-03-10T10:19:10.788 INFO:tasks.workunit.client.1.vm05.stdout:7/74: read d5/fe [466248,55260] 0
2026-03-10T10:19:10.788 INFO:tasks.workunit.client.0.vm02.stdout:7/61: readlink d1/dc/d10/l11 0
2026-03-10T10:19:10.790 INFO:tasks.workunit.client.0.vm02.stdout:7/62: write d1/f5 [53814,12670] 0
2026-03-10T10:19:10.795 INFO:tasks.workunit.client.0.vm02.stdout:7/63: dwrite d1/dc/ff [0,4194304] 0
2026-03-10T10:19:10.800 INFO:tasks.workunit.client.0.vm02.stdout:7/64: rename d1/l6 to d1/dc/d10/l14 0
2026-03-10T10:19:10.800 INFO:tasks.workunit.client.0.vm02.stdout:7/65: chown d1/f5 97 1
2026-03-10T10:19:10.800 INFO:tasks.workunit.client.0.vm02.stdout:7/66: fsync d1/dc/f3 0
2026-03-10T10:19:10.804 INFO:tasks.workunit.client.0.vm02.stdout:7/67: dwrite d1/fd [0,4194304] 0
2026-03-10T10:19:10.805 INFO:tasks.workunit.client.0.vm02.stdout:7/68: fdatasync d1/dc/f3 0
2026-03-10T10:19:10.805 INFO:tasks.workunit.client.0.vm02.stdout:7/69: write d1/dc/f3 [4763936,22619] 0
2026-03-10T10:19:10.808 INFO:tasks.workunit.client.0.vm02.stdout:7/70: creat d1/f15 x:0 0 0
2026-03-10T10:19:10.826 INFO:tasks.workunit.client.0.vm02.stdout:7/71: sync
2026-03-10T10:19:10.827 INFO:tasks.workunit.client.0.vm02.stdout:7/72: readlink d1/dc/l9 0
2026-03-10T10:19:10.835 INFO:tasks.workunit.client.0.vm02.stdout:7/73: dwrite d1/dc/ff [0,4194304] 0
2026-03-10T10:19:10.844 INFO:tasks.workunit.client.0.vm02.stdout:7/74: truncate d1/f5 674272 0
2026-03-10T10:19:10.844 INFO:tasks.workunit.client.0.vm02.stdout:7/75: mkdir d1/dc/d16 0
2026-03-10T10:19:10.844 INFO:tasks.workunit.client.0.vm02.stdout:7/76: creat d1/f17 x:0 0 0
2026-03-10T10:19:10.844 INFO:tasks.workunit.client.0.vm02.stdout:7/77: creat d1/dc/d10/f18 x:0 0 0
2026-03-10T10:19:10.844 INFO:tasks.workunit.client.0.vm02.stdout:7/78: readlink d1/dc/d10/l11 0
2026-03-10T10:19:10.844 INFO:tasks.workunit.client.0.vm02.stdout:7/79: truncate d1/f15 935311 0
2026-03-10T10:19:10.844 INFO:tasks.workunit.client.0.vm02.stdout:7/80: mkdir d1/dc/d19 0
2026-03-10T10:19:10.844 INFO:tasks.workunit.client.0.vm02.stdout:7/81: write d1/dc/f3 [865588,85248] 0
2026-03-10T10:19:10.854 INFO:tasks.workunit.client.0.vm02.stdout:4/102: read d1/d2/de/f1b [1365980,75640] 0
2026-03-10T10:19:10.856 INFO:tasks.workunit.client.0.vm02.stdout:7/82: link d1/dc/d10/l11 d1/dc/d10/l1a 0
2026-03-10T10:19:10.857 INFO:tasks.workunit.client.0.vm02.stdout:4/103: creat d1/d10/db/f24 x:0 0 0
2026-03-10T10:19:10.858 INFO:tasks.workunit.client.0.vm02.stdout:7/83: mkdir d1/d1b 0
2026-03-10T10:19:10.860 INFO:tasks.workunit.client.0.vm02.stdout:4/104: dread d1/d10/f6 [0,4194304] 0
2026-03-10T10:19:10.861 INFO:tasks.workunit.client.0.vm02.stdout:7/84: symlink d1/l1c 0
2026-03-10T10:19:10.862 INFO:tasks.workunit.client.0.vm02.stdout:4/105: symlink d1/d2/de/l25 0
2026-03-10T10:19:10.863 INFO:tasks.workunit.client.0.vm02.stdout:4/106: mknod d1/d2/d1a/c26 0
2026-03-10T10:19:10.871 INFO:tasks.workunit.client.0.vm02.stdout:4/107: dwrite d1/d2/de/f1b [0,4194304] 0
2026-03-10T10:19:10.878 INFO:tasks.workunit.client.0.vm02.stdout:7/85: dwrite d1/f5 [0,4194304] 0
2026-03-10T10:19:10.878 INFO:tasks.workunit.client.0.vm02.stdout:4/108: symlink d1/d10/db/l27 0
2026-03-10T10:19:10.887 INFO:tasks.workunit.client.0.vm02.stdout:0/124: fdatasync d9/fc 0
2026-03-10T10:19:10.943 INFO:tasks.workunit.client.0.vm02.stdout:0/125: sync
2026-03-10T10:19:10.947 INFO:tasks.workunit.client.0.vm02.stdout:0/126: write d9/f17 [3254929,110514] 0
2026-03-10T10:19:10.947 INFO:tasks.workunit.client.0.vm02.stdout:0/127: rename d9/d18 to d9/d18/d1a/d20 22
2026-03-10T10:19:10.995 INFO:tasks.workunit.client.1.vm05.stdout:0/16: truncate f0 365183 0
2026-03-10T10:19:10.996 INFO:tasks.workunit.client.1.vm05.stdout:0/17: write d1/d2/d3/f4 [406478,99534] 0
2026-03-10T10:19:11.004 INFO:tasks.workunit.client.0.vm02.stdout:9/95: truncate da/f15 1296064 0
2026-03-10T10:19:11.005 INFO:tasks.workunit.client.0.vm02.stdout:9/96: readlink da/l16 0
2026-03-10T10:19:11.038 INFO:tasks.workunit.client.1.vm05.stdout:0/18: sync
2026-03-10T10:19:11.039 INFO:tasks.workunit.client.0.vm02.stdout:9/97: sync
2026-03-10T10:19:11.039 INFO:tasks.workunit.client.0.vm02.stdout:9/98: stat c4 0
2026-03-10T10:19:11.040 INFO:tasks.workunit.client.0.vm02.stdout:9/99: chown f7 1105 1
2026-03-10T10:19:11.044 INFO:tasks.workunit.client.1.vm05.stdout:0/19: sync
2026-03-10T10:19:11.053 INFO:tasks.workunit.client.0.vm02.stdout:9/100: dwrite da/f1b [0,4194304] 0
2026-03-10T10:19:11.057 INFO:tasks.workunit.client.1.vm05.stdout:0/20: mknod d1/d2/c6 0
2026-03-10T10:19:11.065 INFO:tasks.workunit.client.0.vm02.stdout:9/101: dread da/d10/f17 [0,4194304] 0
2026-03-10T10:19:11.065 INFO:tasks.workunit.client.1.vm05.stdout:9/24: fsync d0/d1/f5 0
2026-03-10T10:19:11.067 INFO:tasks.workunit.client.0.vm02.stdout:9/102: dread da/ff [0,4194304] 0
2026-03-10T10:19:11.067 INFO:tasks.workunit.client.0.vm02.stdout:9/103: write da/f13 [772362,12517] 0
2026-03-10T10:19:11.068 INFO:tasks.workunit.client.0.vm02.stdout:9/104: write da/d10/f19 [640332,90503] 0
2026-03-10T10:19:11.073 INFO:tasks.workunit.client.1.vm05.stdout:9/25: creat d0/f7 x:0 0 0
2026-03-10T10:19:11.077 INFO:tasks.workunit.client.1.vm05.stdout:9/26: creat d0/d2/f8 x:0 0 0
2026-03-10T10:19:11.077 INFO:tasks.workunit.client.0.vm02.stdout:9/105: creat da/f1f x:0 0 0
2026-03-10T10:19:11.077 INFO:tasks.workunit.client.1.vm05.stdout:9/27: write d0/d2/f4 [217181,90192] 0
2026-03-10T10:19:11.078 INFO:tasks.workunit.client.1.vm05.stdout:9/28: creat d0/d1/f9 x:0 0 0
2026-03-10T10:19:11.079 INFO:tasks.workunit.client.0.vm02.stdout:9/106: link da/d10/f1d da/d10/f20 0
2026-03-10T10:19:11.080 INFO:tasks.workunit.client.0.vm02.stdout:9/107: read da/f13 [285523,34309] 0
2026-03-10T10:19:11.083 INFO:tasks.workunit.client.0.vm02.stdout:9/108: mknod da/c21 0
2026-03-10T10:19:11.084 INFO:tasks.workunit.client.1.vm05.stdout:9/29: creat d0/fa x:0 0 0
2026-03-10T10:19:11.085 INFO:tasks.workunit.client.1.vm05.stdout:9/30: unlink d0/f3 0
2026-03-10T10:19:11.095 INFO:tasks.workunit.client.1.vm05.stdout:5/27: getdents . 0
2026-03-10T10:19:11.097 INFO:tasks.workunit.client.0.vm02.stdout:6/55: dwrite d0/f2 [0,4194304] 0
2026-03-10T10:19:11.100 INFO:tasks.workunit.client.0.vm02.stdout:6/56: read d0/d7/ff [1966630,99151] 0
2026-03-10T10:19:11.100 INFO:tasks.workunit.client.0.vm02.stdout:6/57: read d0/d7/ff [2870446,31507] 0
2026-03-10T10:19:11.113 INFO:tasks.workunit.client.1.vm05.stdout:6/19: getdents . 0
2026-03-10T10:19:11.114 INFO:tasks.workunit.client.1.vm05.stdout:6/20: fdatasync f3 0
2026-03-10T10:19:11.117 INFO:tasks.workunit.client.1.vm05.stdout:5/28: link c1 c7 0
2026-03-10T10:19:11.117 INFO:tasks.workunit.client.0.vm02.stdout:5/47: truncate d1/f3 11693808 0
2026-03-10T10:19:11.119 INFO:tasks.workunit.client.0.vm02.stdout:5/48: dread d1/f5 [0,4194304] 0
2026-03-10T10:19:11.119 INFO:tasks.workunit.client.0.vm02.stdout:5/49: fdatasync d1/db/fd 0
2026-03-10T10:19:11.120 INFO:tasks.workunit.client.0.vm02.stdout:5/50: write d1/db/fd [423391,51650] 0
2026-03-10T10:19:11.125 INFO:tasks.workunit.client.0.vm02.stdout:2/119: truncate d0/d1a/f25 921733 0
2026-03-10T10:19:11.130 INFO:tasks.workunit.client.1.vm05.stdout:2/21: truncate f1 101681 0
2026-03-10T10:19:11.130 INFO:tasks.workunit.client.1.vm05.stdout:6/21: symlink l8 0
2026-03-10T10:19:11.131 INFO:tasks.workunit.client.0.vm02.stdout:6/58: sync
2026-03-10T10:19:11.132 INFO:tasks.workunit.client.1.vm05.stdout:8/20: getdents . 0
2026-03-10T10:19:11.133 INFO:tasks.workunit.client.1.vm05.stdout:2/22: symlink l4 0
2026-03-10T10:19:11.134 INFO:tasks.workunit.client.1.vm05.stdout:2/23: write f2 [975090,4997] 0
2026-03-10T10:19:11.135 INFO:tasks.workunit.client.0.vm02.stdout:6/59: dwrite d0/f2 [0,4194304] 0
2026-03-10T10:19:11.135 INFO:tasks.workunit.client.1.vm05.stdout:2/24: dread - f3 zero size
2026-03-10T10:19:11.137 INFO:tasks.workunit.client.0.vm02.stdout:6/60: write d0/d7/ff [836394,84024] 0
2026-03-10T10:19:11.137 INFO:tasks.workunit.client.1.vm05.stdout:5/29: sync
2026-03-10T10:19:11.138 INFO:tasks.workunit.client.0.vm02.stdout:6/61: truncate d0/f2 4549793 0
2026-03-10T10:19:11.142 INFO:tasks.workunit.client.0.vm02.stdout:6/62: dwrite d0/d7/ff [0,4194304] 0
2026-03-10T10:19:11.145 INFO:tasks.workunit.client.1.vm05.stdout:6/22: mknod c9 0
2026-03-10T10:19:11.149 INFO:tasks.workunit.client.1.vm05.stdout:8/21: symlink l4 0
2026-03-10T10:19:11.150 INFO:tasks.workunit.client.0.vm02.stdout:6/63: stat d0/cb 0
2026-03-10T10:19:11.153 INFO:tasks.workunit.client.1.vm05.stdout:2/25: rename f2 to f5 0
2026-03-10T10:19:11.154 INFO:tasks.workunit.client.1.vm05.stdout:1/18: rmdir d4 39
2026-03-10T10:19:11.154 INFO:tasks.workunit.client.0.vm02.stdout:6/64: symlink d0/d8/l12 0
2026-03-10T10:19:11.156 INFO:tasks.workunit.client.1.vm05.stdout:4/30: write f0 [600127,126590] 0
2026-03-10T10:19:11.156 INFO:tasks.workunit.client.1.vm05.stdout:5/30: symlink l8 0
2026-03-10T10:19:11.162 INFO:tasks.workunit.client.0.vm02.stdout:6/65: creat d0/d8/d9/f13 x:0 0 0
2026-03-10T10:19:11.162 INFO:tasks.workunit.client.0.vm02.stdout:3/52: write d1/f5 [4977829,11591] 0
2026-03-10T10:19:11.162 INFO:tasks.workunit.client.0.vm02.stdout:3/53: fsync f0 0
2026-03-10T10:19:11.162 INFO:tasks.workunit.client.1.vm05.stdout:8/22: fsync f2 0
2026-03-10T10:19:11.162 INFO:tasks.workunit.client.1.vm05.stdout:6/23: symlink la 0
2026-03-10T10:19:11.162 INFO:tasks.workunit.client.0.vm02.stdout:6/66: write d0/d8/d9/f13 [847521,38916] 0
2026-03-10T10:19:11.165 INFO:tasks.workunit.client.1.vm05.stdout:1/19: dread f0 [0,4194304] 0
2026-03-10T10:19:11.171 INFO:tasks.workunit.client.0.vm02.stdout:3/54: mknod d1/c11 0
2026-03-10T10:19:11.171 INFO:tasks.workunit.client.0.vm02.stdout:3/55: truncate d1/fe 946122 0
2026-03-10T10:19:11.172 INFO:tasks.workunit.client.0.vm02.stdout:3/56: stat d1/f3 0
2026-03-10T10:19:11.173 INFO:tasks.workunit.client.1.vm05.stdout:2/26: symlink l6 0
2026-03-10T10:19:11.178 INFO:tasks.workunit.client.0.vm02.stdout:3/57: creat d1/f12 x:0 0 0
2026-03-10T10:19:11.179 INFO:tasks.workunit.client.1.vm05.stdout:6/24: dwrite f3 [4194304,4194304] 0
2026-03-10T10:19:11.183 INFO:tasks.workunit.client.0.vm02.stdout:8/90: getdents d1/d2/d6 0
2026-03-10T10:19:11.185 INFO:tasks.workunit.client.0.vm02.stdout:4/109: chown d1/d2/c1f 6213 1
2026-03-10T10:19:11.189 INFO:tasks.workunit.client.1.vm05.stdout:5/31: chown c1 0 1
2026-03-10T10:19:11.190 INFO:tasks.workunit.client.0.vm02.stdout:8/91: rename d1/cb to d1/c15 0
2026-03-10T10:19:11.192 INFO:tasks.workunit.client.0.vm02.stdout:8/92: dread d1/f10 [0,4194304] 0
2026-03-10T10:19:11.193 INFO:tasks.workunit.client.1.vm05.stdout:1/20: dwrite f0 [4194304,4194304] 0
2026-03-10T10:19:11.195 INFO:tasks.workunit.client.0.vm02.stdout:8/93: dwrite d1/d2/f3 [4194304,4194304] 0
2026-03-10T10:19:11.205 INFO:tasks.workunit.client.1.vm05.stdout:5/32: dread f5 [0,4194304] 0
2026-03-10T10:19:11.223 INFO:tasks.workunit.client.1.vm05.stdout:6/25: creat fb x:0 0 0
2026-03-10T10:19:11.226 INFO:tasks.workunit.client.0.vm02.stdout:4/110: creat d1/d2/de/f28 x:0 0 0
2026-03-10T10:19:11.226 INFO:tasks.workunit.client.0.vm02.stdout:4/111: write d1/d2/f4 [1623655,43400] 0
2026-03-10T10:19:11.228 INFO:tasks.workunit.client.1.vm05.stdout:8/23: unlink f3 0
2026-03-10T10:19:11.228 INFO:tasks.workunit.client.1.vm05.stdout:8/24: truncate f2 766032 0
2026-03-10T10:19:11.231 INFO:tasks.workunit.client.1.vm05.stdout:6/26: dread f3 [4194304,4194304] 0
2026-03-10T10:19:11.243 INFO:tasks.workunit.client.1.vm05.stdout:5/33: creat f9 x:0 0 0
2026-03-10T10:19:11.243 INFO:tasks.workunit.client.1.vm05.stdout:5/34: stat c6 0
2026-03-10T10:19:11.244 INFO:tasks.workunit.client.1.vm05.stdout:5/35: write f9 [267447,86171] 0
2026-03-10T10:19:11.244 INFO:tasks.workunit.client.1.vm05.stdout:1/21: mknod d4/c7 0
2026-03-10T10:19:11.244 INFO:tasks.workunit.client.0.vm02.stdout:8/94: creat d1/f16 x:0 0 0
2026-03-10T10:19:11.244 INFO:tasks.workunit.client.0.vm02.stdout:8/95: readlink d1/d2/d6/lf 0
2026-03-10T10:19:11.244 INFO:tasks.workunit.client.0.vm02.stdout:4/112: symlink d1/d2/l29 0
2026-03-10T10:19:11.244 INFO:tasks.workunit.client.0.vm02.stdout:4/113: chown d1/d2/de/f28 106636 1
2026-03-10T10:19:11.244 INFO:tasks.workunit.client.0.vm02.stdout:4/114: dread - d1/d2/de/f28 zero size
2026-03-10T10:19:11.244 INFO:tasks.workunit.client.0.vm02.stdout:4/115: read d1/d2/de/f1b [858854,90516] 0
2026-03-10T10:19:11.244 INFO:tasks.workunit.client.0.vm02.stdout:4/116: dread - d1/d2/de/f28 zero size
2026-03-10T10:19:11.244 INFO:tasks.workunit.client.1.vm05.stdout:8/25: dread f0 [0,4194304] 0
2026-03-10T10:19:11.244 INFO:tasks.workunit.client.1.vm05.stdout:6/27: dwrite fb [0,4194304] 0
2026-03-10T10:19:11.244 INFO:tasks.workunit.client.1.vm05.stdout:6/28: rmdir - no directory
2026-03-10T10:19:11.247 INFO:tasks.workunit.client.1.vm05.stdout:5/36: mkdir da 0
2026-03-10T10:19:11.261 INFO:tasks.workunit.client.0.vm02.stdout:4/117: symlink d1/d2/d1a/l2a 0
2026-03-10T10:19:11.262 INFO:tasks.workunit.client.1.vm05.stdout:1/22: symlink d4/l8 0
2026-03-10T10:19:11.262 INFO:tasks.workunit.client.0.vm02.stdout:4/118: write d1/d10/db/f15 [4401824,56752] 0
2026-03-10T10:19:11.267 INFO:tasks.workunit.client.0.vm02.stdout:4/119: rmdir d1/d10 39
2026-03-10T10:19:11.268 INFO:tasks.workunit.client.1.vm05.stdout:1/23: symlink d4/l9 0
2026-03-10T10:19:11.271 INFO:tasks.workunit.client.1.vm05.stdout:5/37: mkdir da/db 0
2026-03-10T10:19:11.271 INFO:tasks.workunit.client.0.vm02.stdout:4/120: symlink d1/d10/l2b 0
2026-03-10T10:19:11.273 INFO:tasks.workunit.client.1.vm05.stdout:8/26: creat f5 x:0 0 0
2026-03-10T10:19:11.282 INFO:tasks.workunit.client.1.vm05.stdout:5/38: dwrite f5 [0,4194304] 0
2026-03-10T10:19:11.285 INFO:tasks.workunit.client.1.vm05.stdout:8/27: dwrite f0 [0,4194304] 0
2026-03-10T10:19:11.286 INFO:tasks.workunit.client.1.vm05.stdout:8/28: chown f2 28013409 1
2026-03-10T10:19:11.290 INFO:tasks.workunit.client.1.vm05.stdout:5/39: rename c2 to da/db/cc 0
2026-03-10T10:19:11.296 INFO:tasks.workunit.client.1.vm05.stdout:5/40: read f5 [4692017,49538] 0
2026-03-10T10:19:11.300 INFO:tasks.workunit.client.0.vm02.stdout:8/96: fsync d1/d2/f3 0
2026-03-10T10:19:11.303 INFO:tasks.workunit.client.1.vm05.stdout:8/29: rename f5 to f6 0
2026-03-10T10:19:11.305 INFO:tasks.workunit.client.1.vm05.stdout:8/30: mkdir d7 0
2026-03-10T10:19:11.310 INFO:tasks.workunit.client.1.vm05.stdout:4/31: dread f0 [0,4194304] 0
2026-03-10T10:19:11.315 INFO:tasks.workunit.client.0.vm02.stdout:4/121: dwrite d1/d10/f6 [4194304,4194304] 0
2026-03-10T10:19:11.317 INFO:tasks.workunit.client.0.vm02.stdout:4/122: write d1/d2/de/f14 [162722,103522] 0
2026-03-10T10:19:11.318 INFO:tasks.workunit.client.0.vm02.stdout:4/123: chown d1/d2/d1a 10125 1
2026-03-10T10:19:11.318 INFO:tasks.workunit.client.1.vm05.stdout:4/32: dwrite f0 [0,4194304] 0
2026-03-10T10:19:11.318 INFO:tasks.workunit.client.0.vm02.stdout:4/124: rename d1 to d1/d10/d2c 22
2026-03-10T10:19:11.321 INFO:tasks.workunit.client.0.vm02.stdout:4/125: dwrite d1/f1d [4194304,4194304] 0
2026-03-10T10:19:11.337 INFO:tasks.workunit.client.1.vm05.stdout:7/75: rmdir d5 39
2026-03-10T10:19:11.342 INFO:tasks.workunit.client.0.vm02.stdout:0/128: truncate d9/fc 2302375 0
2026-03-10T10:19:11.361 INFO:tasks.workunit.client.0.vm02.stdout:7/86: truncate d1/dc/f3 2849621 0
2026-03-10T10:19:11.382 INFO:tasks.workunit.client.0.vm02.stdout:7/87: rmdir d1/dc/d19 0
2026-03-10T10:19:11.389 INFO:tasks.workunit.client.1.vm05.stdout:5/41: fdatasync f9 0
2026-03-10T10:19:11.393 INFO:tasks.workunit.client.1.vm05.stdout:5/42: dread f5 [0,4194304] 0
2026-03-10T10:19:11.393 INFO:tasks.workunit.client.1.vm05.stdout:5/43: readlink l4 0
2026-03-10T10:19:11.396 INFO:tasks.workunit.client.1.vm05.stdout:5/44: creat da/db/fd x:0 0 0
2026-03-10T10:19:11.405 INFO:tasks.workunit.client.0.vm02.stdout:4/126: sync
2026-03-10T10:19:11.405 INFO:tasks.workunit.client.0.vm02.stdout:4/127: fsync d1/d10/db/f1e 0
2026-03-10T10:19:11.415 INFO:tasks.workunit.client.1.vm05.stdout:0/21: dwrite f0 [0,4194304] 0
2026-03-10T10:19:11.421 INFO:tasks.workunit.client.1.vm05.stdout:4/33: sync
2026-03-10T10:19:11.422 INFO:tasks.workunit.client.1.vm05.stdout:0/22: dread f0 [0,4194304] 0
2026-03-10T10:19:11.424 INFO:tasks.workunit.client.0.vm02.stdout:1/63: dwrite d4/f8 [0,4194304] 0
2026-03-10T10:19:11.427 INFO:tasks.workunit.client.0.vm02.stdout:3/58: fdatasync d1/f5 0
2026-03-10T10:19:11.437 INFO:tasks.workunit.client.0.vm02.stdout:1/64: dread - d4/da/f13 zero size
2026-03-10T10:19:11.438 INFO:tasks.workunit.client.0.vm02.stdout:3/59: rename d1/d8/l10 to d1/d6/l13 0
2026-03-10T10:19:11.439 INFO:tasks.workunit.client.0.vm02.stdout:3/60: fdatasync d1/fe 0
2026-03-10T10:19:11.444 INFO:tasks.workunit.client.0.vm02.stdout:3/61: dwrite d1/d8/f9 [0,4194304] 0
2026-03-10T10:19:11.449 INFO:tasks.workunit.client.1.vm05.stdout:0/23: rename d1/d2/d3 to d1/d7 0
2026-03-10T10:19:11.449 INFO:tasks.workunit.client.1.vm05.stdout:4/34: getdents d1/d3 0
2026-03-10T10:19:11.449 INFO:tasks.workunit.client.1.vm05.stdout:4/35: dread f0 [0,4194304] 0
2026-03-10T10:19:11.453 INFO:tasks.workunit.client.0.vm02.stdout:1/65: mknod d4/da/d14/c16 0
2026-03-10T10:19:11.457 INFO:tasks.workunit.client.0.vm02.stdout:1/66: symlink d4/da/l17 0
2026-03-10T10:19:11.465 INFO:tasks.workunit.client.0.vm02.stdout:9/109: getdents da 0
2026-03-10T10:19:11.465 INFO:tasks.workunit.client.1.vm05.stdout:4/36: dwrite f0 [0,4194304] 0
2026-03-10T10:19:11.465 INFO:tasks.workunit.client.1.vm05.stdout:9/31: getdents d0 0
2026-03-10T10:19:11.465 INFO:tasks.workunit.client.0.vm02.stdout:5/51: rmdir d1 39
2026-03-10T10:19:11.466 INFO:tasks.workunit.client.1.vm05.stdout:0/24: getdents d1/d7 0
2026-03-10T10:19:11.466 INFO:tasks.workunit.client.0.vm02.stdout:1/67: creat d4/f18 x:0 0 0
2026-03-10T10:19:11.471 INFO:tasks.workunit.client.0.vm02.stdout:5/52: dread d1/f3 [4194304,4194304] 0
2026-03-10T10:19:11.471 INFO:tasks.workunit.client.1.vm05.stdout:0/25: dwrite d1/d7/f4 [0,4194304] 0
2026-03-10T10:19:11.474 INFO:tasks.workunit.client.0.vm02.stdout:2/120: write d0/d10/f1f [195288,13610] 0
2026-03-10T10:19:11.475 INFO:tasks.workunit.client.1.vm05.stdout:0/26: write d1/d7/f4 [3556506,74494] 0
2026-03-10T10:19:11.479 INFO:tasks.workunit.client.0.vm02.stdout:2/121: dwrite d0/d10/f14 [0,4194304] 0
2026-03-10T10:19:11.481 INFO:tasks.workunit.client.0.vm02.stdout:5/53: getdents d1 0
2026-03-10T10:19:11.482 INFO:tasks.workunit.client.0.vm02.stdout:5/54: write d1/fe [447735,32302] 0
2026-03-10T10:19:11.483 INFO:tasks.workunit.client.0.vm02.stdout:5/55: readlink d1/lc 0
2026-03-10T10:19:11.483 INFO:tasks.workunit.client.0.vm02.stdout:5/56: chown d1/l8 71766583 1
2026-03-10T10:19:11.486 INFO:tasks.workunit.client.0.vm02.stdout:2/122: rename d0/d10/l1e to d0/l29 0
2026-03-10T10:19:11.487 INFO:tasks.workunit.client.0.vm02.stdout:5/57: dwrite d1/fe [0,4194304] 0
2026-03-10T10:19:11.489 INFO:tasks.workunit.client.0.vm02.stdout:2/123: readlink d0/la 0
2026-03-10T10:19:11.493 INFO:tasks.workunit.client.0.vm02.stdout:5/58: dwrite d1/f3 [8388608,4194304] 0
2026-03-10T10:19:11.507 INFO:tasks.workunit.client.0.vm02.stdout:5/59: write d1/f5 [4334137,129902] 0
2026-03-10T10:19:11.511 INFO:tasks.workunit.client.0.vm02.stdout:2/124: dwrite d0/d1a/f26 [0,4194304] 0
2026-03-10T10:19:11.515 INFO:tasks.workunit.client.0.vm02.stdout:5/60: dwrite d1/f3 [12582912,4194304] 0
2026-03-10T10:19:11.517 INFO:tasks.workunit.client.0.vm02.stdout:2/125: readlink d0/l11 0
2026-03-10T10:19:11.518 INFO:tasks.workunit.client.0.vm02.stdout:5/61: creat d1/f10 x:0 0 0
2026-03-10T10:19:11.522 INFO:tasks.workunit.client.0.vm02.stdout:5/62: mkdir d1/db/d11 0
2026-03-10T10:19:11.523 INFO:tasks.workunit.client.0.vm02.stdout:2/126: mknod d0/c2a 0
2026-03-10T10:19:11.529 INFO:tasks.workunit.client.0.vm02.stdout:5/63: dwrite d1/f10 [0,4194304] 0
2026-03-10T10:19:11.530 INFO:tasks.workunit.client.1.vm05.stdout:5/45: dread f9 [0,4194304] 0
2026-03-10T10:19:11.531 INFO:tasks.workunit.client.1.vm05.stdout:5/46: write da/db/fd [930450,81401] 0
2026-03-10T10:19:11.533 INFO:tasks.workunit.client.0.vm02.stdout:2/127: dread d0/f1b [0,4194304] 0
2026-03-10T10:19:11.534 INFO:tasks.workunit.client.0.vm02.stdout:5/64: creat d1/f12 x:0 0 0
2026-03-10T10:19:11.535 INFO:tasks.workunit.client.1.vm05.stdout:5/47: dwrite da/db/fd [0,4194304] 0
2026-03-10T10:19:11.544 INFO:tasks.workunit.client.1.vm05.stdout:9/32: sync
2026-03-10T10:19:11.548 INFO:tasks.workunit.client.1.vm05.stdout:9/33: write d0/d1/f9 [568540,107910] 0
2026-03-10T10:19:11.563 INFO:tasks.workunit.client.1.vm05.stdout:9/34: creat d0/d1/fb x:0 0 0
2026-03-10T10:19:11.563 INFO:tasks.workunit.client.1.vm05.stdout:9/35: stat d0/fa 0
2026-03-10T10:19:11.563 INFO:tasks.workunit.client.1.vm05.stdout:9/36: mkdir d0/d2/dc 0
2026-03-10T10:19:11.563 INFO:tasks.workunit.client.1.vm05.stdout:9/37: dread - d0/fa zero size
2026-03-10T10:19:11.678 INFO:tasks.workunit.client.1.vm05.stdout:1/24: fdatasync f0 0
2026-03-10T10:19:11.684 INFO:tasks.workunit.client.1.vm05.stdout:2/27: unlink f5 0
2026-03-10T10:19:11.687 INFO:tasks.workunit.client.0.vm02.stdout:3/62: truncate f0 3670652 0
2026-03-10T10:19:11.687 INFO:tasks.workunit.client.0.vm02.stdout:6/67: truncate d0/d7/ff 902423 0
2026-03-10T10:19:11.689 INFO:tasks.workunit.client.1.vm05.stdout:1/25: unlink d4/c7 0
2026-03-10T10:19:11.689 INFO:tasks.workunit.client.0.vm02.stdout:3/63: creat d1/f14 x:0 0 0
2026-03-10T10:19:11.692 INFO:tasks.workunit.client.0.vm02.stdout:3/64: dread d1/f3 [0,4194304] 0
2026-03-10T10:19:11.692 INFO:tasks.workunit.client.0.vm02.stdout:3/65: chown d1/d8/fb 7 1
2026-03-10T10:19:11.694 INFO:tasks.workunit.client.0.vm02.stdout:6/68: unlink d0/d8/ld 0
2026-03-10T10:19:11.700 INFO:tasks.workunit.client.0.vm02.stdout:6/69: dwrite d0/f2 [0,4194304] 0
2026-03-10T10:19:11.700 INFO:tasks.workunit.client.1.vm05.stdout:2/28: rename f3 to f7 0
2026-03-10T10:19:11.700 INFO:tasks.workunit.client.1.vm05.stdout:1/26: write f3 [605384,8611] 0
2026-03-10T10:19:11.700 INFO:tasks.workunit.client.0.vm02.stdout:3/66: symlink d1/l15 0
2026-03-10T10:19:11.701 INFO:tasks.workunit.client.0.vm02.stdout:3/67: readlink d1/l15 0
2026-03-10T10:19:11.710 INFO:tasks.workunit.client.1.vm05.stdout:0/27: fsync f0 0
2026-03-10T10:19:11.717 INFO:tasks.workunit.client.0.vm02.stdout:6/70: unlink d0/d8/l12 0
2026-03-10T10:19:11.718 INFO:tasks.workunit.client.1.vm05.stdout:1/27: mknod d4/ca 0
2026-03-10T10:19:11.718 INFO:tasks.workunit.client.1.vm05.stdout:0/28: mknod d1/c8 0
2026-03-10T10:19:11.718 INFO:tasks.workunit.client.1.vm05.stdout:1/28: chown d4 169603 1
2026-03-10T10:19:11.718 INFO:tasks.workunit.client.1.vm05.stdout:1/29: fdatasync f0 0
2026-03-10T10:19:11.718 INFO:tasks.workunit.client.0.vm02.stdout:6/71: creat d0/d8/d9/f14 x:0 0 0
2026-03-10T10:19:11.719 INFO:tasks.workunit.client.0.vm02.stdout:6/72: fsync d0/d8/d9/f13 0
2026-03-10T10:19:11.719 INFO:tasks.workunit.client.1.vm05.stdout:1/30: write f3 [1742019,111709] 0
2026-03-10T10:19:11.719 INFO:tasks.workunit.client.0.vm02.stdout:6/73: chown d0/d8 30 1
2026-03-10T10:19:11.726 INFO:tasks.workunit.client.1.vm05.stdout:1/31: dwrite f0 [4194304,4194304] 0
2026-03-10T10:19:11.733 INFO:tasks.workunit.client.0.vm02.stdout:6/74: link d0/l3 d0/d7/l15 0
2026-03-10T10:19:11.735 INFO:tasks.workunit.client.1.vm05.stdout:1/32: dwrite f3 [0,4194304] 0
2026-03-10T10:19:11.735 INFO:tasks.workunit.client.0.vm02.stdout:6/75: mknod d0/d8/d9/c16 0
2026-03-10T10:19:11.736 INFO:tasks.workunit.client.0.vm02.stdout:6/76: creat d0/d7/f17 x:0 0 0
2026-03-10T10:19:11.737 INFO:tasks.workunit.client.0.vm02.stdout:6/77: mknod d0/d8/c18 0
2026-03-10T10:19:11.737 INFO:tasks.workunit.client.0.vm02.stdout:6/78: read d0/f2 [3685765,30892] 0
2026-03-10T10:19:11.738 INFO:tasks.workunit.client.1.vm05.stdout:1/33: rename d4 to d4/db 22
2026-03-10T10:19:11.738 INFO:tasks.workunit.client.0.vm02.stdout:6/79: symlink d0/d8/l19 0
2026-03-10T10:19:11.743 INFO:tasks.workunit.client.0.vm02.stdout:8/97: unlink d1/c15 0
2026-03-10T10:19:11.748 INFO:tasks.workunit.client.0.vm02.stdout:8/98: dread d1/f12 [0,4194304] 0
2026-03-10T10:19:11.749 INFO:tasks.workunit.client.1.vm05.stdout:1/34: dwrite d4/f5 [0,4194304] 0
2026-03-10T10:19:11.751 INFO:tasks.workunit.client.0.vm02.stdout:8/99: dread d1/f10 [0,4194304] 0
2026-03-10T10:19:11.759 INFO:tasks.workunit.client.0.vm02.stdout:8/100: symlink d1/l17 0
2026-03-10T10:19:11.763 INFO:tasks.workunit.client.0.vm02.stdout:8/101: write d1/f12 [73825,80938] 0
2026-03-10T10:19:11.763 INFO:tasks.workunit.client.0.vm02.stdout:8/102: dread d1/f10 [4194304,4194304] 0
2026-03-10T10:19:11.763 INFO:tasks.workunit.client.0.vm02.stdout:8/103: dread - d1/d2/d6/f14 zero size
2026-03-10T10:19:11.765 INFO:tasks.workunit.client.0.vm02.stdout:8/104: creat d1/d2/f18 x:0 0 0
2026-03-10T10:19:11.765 INFO:tasks.workunit.client.1.vm05.stdout:1/35: link d4/l8 d4/lc 0
2026-03-10T10:19:11.766 INFO:tasks.workunit.client.0.vm02.stdout:8/105: creat d1/f19 x:0 0 0
2026-03-10T10:19:11.767 INFO:tasks.workunit.client.1.vm05.stdout:1/36: mkdir d4/dd 0
2026-03-10T10:19:11.769 INFO:tasks.workunit.client.0.vm02.stdout:8/106: link d1/d2/ca d1/d2/d6/c1a 0
2026-03-10T10:19:11.773 INFO:tasks.workunit.client.0.vm02.stdout:8/107: dwrite d1/f19 [0,4194304] 0
2026-03-10T10:19:11.787 INFO:tasks.workunit.client.0.vm02.stdout:8/108: chown d1/d2/d6/c11 605 1
2026-03-10T10:19:11.787 INFO:tasks.workunit.client.0.vm02.stdout:8/109: creat d1/f1b x:0 0 0
2026-03-10T10:19:11.787 INFO:tasks.workunit.client.0.vm02.stdout:8/110: stat d1/d2/d6/c1a 0
2026-03-10T10:19:11.795 INFO:tasks.workunit.client.1.vm05.stdout:0/29: fsync d1/d7/f4 0
2026-03-10T10:19:11.796 INFO:tasks.workunit.client.0.vm02.stdout:6/80: sync
2026-03-10T10:19:11.797 INFO:tasks.workunit.client.1.vm05.stdout:0/30: dread d1/d7/f4 [0,4194304] 0
2026-03-10T10:19:11.798 INFO:tasks.workunit.client.0.vm02.stdout:6/81: chown d0/c1 82511 1
2026-03-10T10:19:11.800 INFO:tasks.workunit.client.0.vm02.stdout:2/128: dread d0/d1a/f25 [0,4194304] 0
2026-03-10T10:19:11.802 INFO:tasks.workunit.client.1.vm05.stdout:0/31: mkdir d1/d2/d9 0
2026-03-10T10:19:11.812 INFO:tasks.workunit.client.0.vm02.stdout:6/82: symlink d0/d8/l1a 0
2026-03-10T10:19:11.812 INFO:tasks.workunit.client.1.vm05.stdout:6/29: truncate f3 1153887 0
2026-03-10T10:19:11.812 INFO:tasks.workunit.client.1.vm05.stdout:0/32: dread d1/d7/f4 [0,4194304] 0
2026-03-10T10:19:11.812 INFO:tasks.workunit.client.1.vm05.stdout:6/30: rename l0 to lc 0
2026-03-10T10:19:11.812 INFO:tasks.workunit.client.1.vm05.stdout:0/33: write d1/d7/f4 [3952986,102448] 0
2026-03-10T10:19:11.812 INFO:tasks.workunit.client.1.vm05.stdout:5/48: truncate f5 4727643 0
2026-03-10T10:19:11.813 INFO:tasks.workunit.client.1.vm05.stdout:5/49: chown l4 0 1
2026-03-10T10:19:11.814 INFO:tasks.workunit.client.1.vm05.stdout:8/31: truncate f0 1577423 0
2026-03-10T10:19:11.816 INFO:tasks.workunit.client.1.vm05.stdout:7/76: readlink l2 0
2026-03-10T10:19:11.819 INFO:tasks.workunit.client.0.vm02.stdout:0/129: unlink d9/fc 0
2026-03-10T10:19:11.825 INFO:tasks.workunit.client.1.vm05.stdout:5/50: mkdir da/db/de 0
2026-03-10T10:19:11.826 INFO:tasks.workunit.client.0.vm02.stdout:7/88: rmdir d1 39
2026-03-10T10:19:11.829 INFO:tasks.workunit.client.0.vm02.stdout:4/128: truncate d1/d2/f4 2047707 0
2026-03-10T10:19:11.829 INFO:tasks.workunit.client.0.vm02.stdout:4/129: write d1/d10/f6 [4539429,986] 0
2026-03-10T10:19:11.830 INFO:tasks.workunit.client.0.vm02.stdout:7/89: fdatasync d1/dc/f3 0
2026-03-10T10:19:11.831 INFO:tasks.workunit.client.0.vm02.stdout:7/90: write d1/f5 [3833430,93378] 0
2026-03-10T10:19:11.832 INFO:tasks.workunit.client.1.vm05.stdout:7/77: dread - d5/dd/f12 zero size
2026-03-10T10:19:11.832 INFO:tasks.workunit.client.0.vm02.stdout:4/130: read d1/d2/de/f1b [1211255,87671] 0
2026-03-10T10:19:11.835 INFO:tasks.workunit.client.0.vm02.stdout:4/131: symlink d1/d10/l2d 0
2026-03-10T10:19:11.836 INFO:tasks.workunit.client.0.vm02.stdout:4/132: creat d1/d2/de/f2e x:0 0 0
2026-03-10T10:19:11.837 INFO:tasks.workunit.client.0.vm02.stdout:4/133: mknod d1/d2/d1a/c2f 0
2026-03-10T10:19:11.838 INFO:tasks.workunit.client.0.vm02.stdout:4/134: fdatasync d1/d2/de/f1b 0
2026-03-10T10:19:11.839 INFO:tasks.workunit.client.1.vm05.stdout:3/26: truncate f2 829559 0
2026-03-10T10:19:11.839 INFO:tasks.workunit.client.1.vm05.stdout:0/34: link d1/d2/c6 d1/d7/ca 0
2026-03-10T10:19:11.839 INFO:tasks.workunit.client.1.vm05.stdout:5/51: dwrite f9 [0,4194304] 0
2026-03-10T10:19:11.840 INFO:tasks.workunit.client.0.vm02.stdout:4/135: getdents d1/d2 0
2026-03-10T10:19:11.841 INFO:tasks.workunit.client.1.vm05.stdout:0/35: fsync f0 0
2026-03-10T10:19:11.842 INFO:tasks.workunit.client.1.vm05.stdout:0/36: chown d1/d7/l5 3 1
2026-03-10T10:19:11.843 INFO:tasks.workunit.client.0.vm02.stdout:4/136: dread d1/d10/f8 [0,4194304] 0
2026-03-10T10:19:11.858 INFO:tasks.workunit.client.0.vm02.stdout:4/137: unlink d1/d10/db/c22 0
2026-03-10T10:19:11.858 INFO:tasks.workunit.client.1.vm05.stdout:3/27: rename f5 to f6 0
2026-03-10T10:19:11.858 INFO:tasks.workunit.client.1.vm05.stdout:0/37: mkdir d1/d7/db 0
2026-03-10T10:19:11.858 INFO:tasks.workunit.client.1.vm05.stdout:0/38: creat d1/d2/fc x:0 0 0
2026-03-10T10:19:11.858 INFO:tasks.workunit.client.1.vm05.stdout:3/28: dread f1 [0,4194304] 0
2026-03-10T10:19:11.860 INFO:tasks.workunit.client.1.vm05.stdout:3/29: fdatasync f3 0
2026-03-10T10:19:11.861 INFO:tasks.workunit.client.1.vm05.stdout:5/52: link c6 da/db/de/cf 0
2026-03-10T10:19:11.863 INFO:tasks.workunit.client.1.vm05.stdout:0/39: dwrite d1/d2/fc [0,4194304] 0
2026-03-10T10:19:11.871 INFO:tasks.workunit.client.0.vm02.stdout:1/68: truncate f3 840215 0
2026-03-10T10:19:11.871 INFO:tasks.workunit.client.1.vm05.stdout:4/37: truncate f0 4052235 0
2026-03-10T10:19:11.874 INFO:tasks.workunit.client.1.vm05.stdout:0/40: creat d1/d2/d9/fd x:0 0 0
2026-03-10T10:19:11.874 INFO:tasks.workunit.client.1.vm05.stdout:5/53: unlink c7 0
2026-03-10T10:19:11.874 INFO:tasks.workunit.client.1.vm05.stdout:3/30: link l0 l7 0
2026-03-10T10:19:11.874 INFO:tasks.workunit.client.1.vm05.stdout:3/31: rmdir - no directory
2026-03-10T10:19:11.876 INFO:tasks.workunit.client.1.vm05.stdout:5/54: creat da/f10 x:0 0 0
2026-03-10T10:19:11.886 INFO:tasks.workunit.client.1.vm05.stdout:0/41: dwrite d1/d7/f4 [0,4194304] 0
2026-03-10T10:19:11.888 INFO:tasks.workunit.client.1.vm05.stdout:0/42: write f0
[935861,59367] 0 2026-03-10T10:19:11.890 INFO:tasks.workunit.client.1.vm05.stdout:0/43: read f0 [2613497,127512] 0 2026-03-10T10:19:11.891 INFO:tasks.workunit.client.1.vm05.stdout:0/44: rename d1/d2 to d1/d2/de 22 2026-03-10T10:19:11.897 INFO:tasks.workunit.client.1.vm05.stdout:0/45: mknod d1/cf 0 2026-03-10T10:19:11.899 INFO:tasks.workunit.client.1.vm05.stdout:0/46: read f0 [1979718,5254] 0 2026-03-10T10:19:11.990 INFO:tasks.workunit.client.1.vm05.stdout:7/78: sync 2026-03-10T10:19:11.993 INFO:tasks.workunit.client.1.vm05.stdout:7/79: dread d5/f13 [0,4194304] 0 2026-03-10T10:19:11.997 INFO:tasks.workunit.client.1.vm05.stdout:7/80: dwrite d5/ff [0,4194304] 0 2026-03-10T10:19:12.010 INFO:tasks.workunit.client.0.vm02.stdout:4/138: fsync d1/d10/f6 0 2026-03-10T10:19:12.011 INFO:tasks.workunit.client.1.vm05.stdout:0/47: fsync d1/d7/f4 0 2026-03-10T10:19:12.012 INFO:tasks.workunit.client.0.vm02.stdout:4/139: creat d1/d10/f30 x:0 0 0 2026-03-10T10:19:12.014 INFO:tasks.workunit.client.0.vm02.stdout:5/65: write d1/f10 [4327568,38333] 0 2026-03-10T10:19:12.017 INFO:tasks.workunit.client.0.vm02.stdout:5/66: dread d1/fe [0,4194304] 0 2026-03-10T10:19:12.020 INFO:tasks.workunit.client.0.vm02.stdout:4/140: dwrite d1/d10/db/f15 [0,4194304] 0 2026-03-10T10:19:12.021 INFO:tasks.workunit.client.0.vm02.stdout:4/141: fdatasync d1/d2/de/f2e 0 2026-03-10T10:19:12.024 INFO:tasks.workunit.client.0.vm02.stdout:4/142: creat d1/d2/f31 x:0 0 0 2026-03-10T10:19:12.026 INFO:tasks.workunit.client.0.vm02.stdout:5/67: mkdir d1/db/d11/d13 0 2026-03-10T10:19:12.027 INFO:tasks.workunit.client.0.vm02.stdout:5/68: write d1/f12 [154136,104400] 0 2026-03-10T10:19:12.028 INFO:tasks.workunit.client.0.vm02.stdout:5/69: mknod d1/c14 0 2026-03-10T10:19:12.029 INFO:tasks.workunit.client.0.vm02.stdout:4/143: mkdir d1/d32 0 2026-03-10T10:19:12.030 INFO:tasks.workunit.client.0.vm02.stdout:4/144: write d1/d2/de/f28 [921895,109216] 0 2026-03-10T10:19:12.031 INFO:tasks.workunit.client.0.vm02.stdout:4/145: chown 
d1/d2/d1a/c26 98 1 2026-03-10T10:19:12.032 INFO:tasks.workunit.client.0.vm02.stdout:5/70: creat d1/db/f15 x:0 0 0 2026-03-10T10:19:12.036 INFO:tasks.workunit.client.0.vm02.stdout:5/71: dwrite d1/f5 [4194304,4194304] 0 2026-03-10T10:19:12.040 INFO:tasks.workunit.client.0.vm02.stdout:5/72: write d1/db/f15 [1032754,29153] 0 2026-03-10T10:19:12.040 INFO:tasks.workunit.client.0.vm02.stdout:5/73: chown d1/c7 196417 1 2026-03-10T10:19:12.046 INFO:tasks.workunit.client.0.vm02.stdout:4/146: symlink d1/d10/db/l33 0 2026-03-10T10:19:12.046 INFO:tasks.workunit.client.0.vm02.stdout:4/147: chown d1/d2/d1a 7 1 2026-03-10T10:19:12.049 INFO:tasks.workunit.client.0.vm02.stdout:5/74: read d1/db/f15 [824179,99569] 0 2026-03-10T10:19:12.050 INFO:tasks.workunit.client.0.vm02.stdout:4/148: creat d1/d2/f34 x:0 0 0 2026-03-10T10:19:12.050 INFO:tasks.workunit.client.0.vm02.stdout:4/149: stat d1/d2/d1a 0 2026-03-10T10:19:12.055 INFO:tasks.workunit.client.0.vm02.stdout:5/75: mkdir d1/db/d11/d16 0 2026-03-10T10:19:12.056 INFO:tasks.workunit.client.0.vm02.stdout:4/150: creat d1/d10/db/f35 x:0 0 0 2026-03-10T10:19:12.065 INFO:tasks.workunit.client.0.vm02.stdout:4/151: symlink d1/d32/l36 0 2026-03-10T10:19:12.065 INFO:tasks.workunit.client.0.vm02.stdout:4/152: truncate d1/d2/f31 370831 0 2026-03-10T10:19:12.066 INFO:tasks.workunit.client.0.vm02.stdout:5/76: mkdir d1/db/d11/d13/d17 0 2026-03-10T10:19:12.068 INFO:tasks.workunit.client.0.vm02.stdout:5/77: creat d1/db/f18 x:0 0 0 2026-03-10T10:19:12.070 INFO:tasks.workunit.client.0.vm02.stdout:5/78: creat d1/db/d11/d16/f19 x:0 0 0 2026-03-10T10:19:12.070 INFO:tasks.workunit.client.0.vm02.stdout:5/79: dread - d1/db/f18 zero size 2026-03-10T10:19:12.081 INFO:tasks.workunit.client.0.vm02.stdout:5/80: mkdir d1/db/d11/d1a 0 2026-03-10T10:19:12.081 INFO:tasks.workunit.client.0.vm02.stdout:5/81: dread - d1/db/f18 zero size 2026-03-10T10:19:12.081 INFO:tasks.workunit.client.0.vm02.stdout:5/82: mknod d1/db/d11/d16/c1b 0 2026-03-10T10:19:12.081 
INFO:tasks.workunit.client.0.vm02.stdout:5/83: chown d1/db/d11/d16/f19 1038 1 2026-03-10T10:19:12.081 INFO:tasks.workunit.client.0.vm02.stdout:5/84: dwrite d1/f3 [12582912,4194304] 0 2026-03-10T10:19:12.085 INFO:tasks.workunit.client.0.vm02.stdout:5/85: dread d1/db/f15 [0,4194304] 0 2026-03-10T10:19:12.086 INFO:tasks.workunit.client.0.vm02.stdout:5/86: stat d1/db/f18 0 2026-03-10T10:19:12.089 INFO:tasks.workunit.client.0.vm02.stdout:5/87: write d1/f3 [10645570,46127] 0 2026-03-10T10:19:12.090 INFO:tasks.workunit.client.0.vm02.stdout:5/88: chown d1/f3 174188451 1 2026-03-10T10:19:12.095 INFO:tasks.workunit.client.0.vm02.stdout:5/89: write d1/db/f18 [662781,24118] 0 2026-03-10T10:19:12.096 INFO:tasks.workunit.client.0.vm02.stdout:5/90: creat d1/db/d11/d13/f1c x:0 0 0 2026-03-10T10:19:12.098 INFO:tasks.workunit.client.1.vm05.stdout:9/38: rmdir d0/d1 39 2026-03-10T10:19:12.100 INFO:tasks.workunit.client.0.vm02.stdout:5/91: mknod d1/db/c1d 0 2026-03-10T10:19:12.101 INFO:tasks.workunit.client.0.vm02.stdout:5/92: truncate d1/db/f15 1489495 0 2026-03-10T10:19:12.101 INFO:tasks.workunit.client.0.vm02.stdout:5/93: truncate d1/f12 471188 0 2026-03-10T10:19:12.121 INFO:tasks.workunit.client.1.vm05.stdout:9/39: rmdir d0/d2/dc 0 2026-03-10T10:19:12.131 INFO:tasks.workunit.client.0.vm02.stdout:3/68: fsync f0 0 2026-03-10T10:19:12.139 INFO:tasks.workunit.client.1.vm05.stdout:2/29: truncate f1 936302 0 2026-03-10T10:19:12.139 INFO:tasks.workunit.client.1.vm05.stdout:2/30: dread - f7 zero size 2026-03-10T10:19:12.156 INFO:tasks.workunit.client.0.vm02.stdout:5/94: sync 2026-03-10T10:19:12.158 INFO:tasks.workunit.client.0.vm02.stdout:5/95: unlink d1/db/f18 0 2026-03-10T10:19:12.162 INFO:tasks.workunit.client.0.vm02.stdout:5/96: dwrite d1/f3 [0,4194304] 0 2026-03-10T10:19:12.164 INFO:tasks.workunit.client.0.vm02.stdout:5/97: fdatasync d1/db/d11/d16/f19 0 2026-03-10T10:19:12.166 INFO:tasks.workunit.client.0.vm02.stdout:5/98: creat d1/db/f1e x:0 0 0 2026-03-10T10:19:12.166 
INFO:tasks.workunit.client.0.vm02.stdout:5/99: stat d1/db/d11/d16/c1b 0 2026-03-10T10:19:12.171 INFO:tasks.workunit.client.0.vm02.stdout:5/100: creat d1/db/d11/d13/f1f x:0 0 0 2026-03-10T10:19:12.187 INFO:tasks.workunit.client.0.vm02.stdout:5/101: rmdir d1/db/d11/d13/d17 0 2026-03-10T10:19:12.196 INFO:tasks.workunit.client.1.vm05.stdout:1/37: rmdir d4 39 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.1.vm05.stdout:1/38: dwrite f3 [0,4194304] 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.1.vm05.stdout:6/31: dread f2 [0,4194304] 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.1.vm05.stdout:6/32: readlink l8 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.1.vm05.stdout:8/32: dread f0 [0,4194304] 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.0.vm02.stdout:8/111: rename d1/d2/d6 to d1/d1c 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.0.vm02.stdout:8/112: dread - d1/d1c/f14 zero size 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.0.vm02.stdout:6/83: rename d0/cb to d0/d8/d9/c1b 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.0.vm02.stdout:6/84: dwrite d0/d7/f17 [0,4194304] 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.0.vm02.stdout:7/91: rename d1/l4 to d1/d1b/l1d 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.0.vm02.stdout:6/85: read d0/f2 [2994746,69781] 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.0.vm02.stdout:2/129: write d0/fe [612069,73884] 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.0.vm02.stdout:6/86: dwrite d0/d8/d9/f14 [0,4194304] 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.0.vm02.stdout:2/130: write d0/f9 [2196469,28856] 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.0.vm02.stdout:2/131: dwrite d0/d10/f14 [0,4194304] 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.0.vm02.stdout:2/132: fsync d0/d1a/f20 0 2026-03-10T10:19:12.238 INFO:tasks.workunit.client.0.vm02.stdout:3/69: rename d1/d6/l13 to d1/l16 0 2026-03-10T10:19:12.238 
INFO:tasks.workunit.client.0.vm02.stdout:3/70: truncate d1/f12 217064 0 2026-03-10T10:19:12.239 INFO:tasks.workunit.client.0.vm02.stdout:3/71: dwrite d1/f12 [0,4194304] 0 2026-03-10T10:19:12.242 INFO:tasks.workunit.client.0.vm02.stdout:7/92: creat d1/dc/d16/f1e x:0 0 0 2026-03-10T10:19:12.242 INFO:tasks.workunit.client.0.vm02.stdout:7/93: chown d1/dc 3470764 1 2026-03-10T10:19:12.243 INFO:tasks.workunit.client.0.vm02.stdout:7/94: write d1/dc/ff [1497016,58028] 0 2026-03-10T10:19:12.245 INFO:tasks.workunit.client.0.vm02.stdout:4/153: fsync d1/fd 0 2026-03-10T10:19:12.249 INFO:tasks.workunit.client.0.vm02.stdout:7/95: dwrite d1/dc/d16/f1e [0,4194304] 0 2026-03-10T10:19:12.252 INFO:tasks.workunit.client.1.vm05.stdout:6/33: write f2 [362562,3400] 0 2026-03-10T10:19:12.255 INFO:tasks.workunit.client.1.vm05.stdout:8/33: write f6 [407647,26676] 0 2026-03-10T10:19:12.258 INFO:tasks.workunit.client.1.vm05.stdout:2/31: sync 2026-03-10T10:19:12.258 INFO:tasks.workunit.client.1.vm05.stdout:2/32: chown f7 19 1 2026-03-10T10:19:12.264 INFO:tasks.workunit.client.1.vm05.stdout:6/34: dwrite fb [4194304,4194304] 0 2026-03-10T10:19:12.267 INFO:tasks.workunit.client.0.vm02.stdout:8/113: rename d1/d2/f3 to d1/d1c/f1d 0 2026-03-10T10:19:12.267 INFO:tasks.workunit.client.0.vm02.stdout:8/114: dread - d1/f1b zero size 2026-03-10T10:19:12.275 INFO:tasks.workunit.client.1.vm05.stdout:6/35: mkdir dd 0 2026-03-10T10:19:12.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:11 vm02.local ceph-mon[50200]: pgmap v145: 65 pgs: 65 active+clean; 238 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 1.7 MiB/s rd, 7.3 MiB/s wr, 317 op/s 2026-03-10T10:19:12.284 INFO:tasks.workunit.client.1.vm05.stdout:8/34: getdents d7 0 2026-03-10T10:19:12.285 INFO:tasks.workunit.client.1.vm05.stdout:5/55: truncate da/db/fd 1021413 0 2026-03-10T10:19:12.285 INFO:tasks.workunit.client.1.vm05.stdout:0/48: truncate d1/d7/f4 924248 0 2026-03-10T10:19:12.285 INFO:tasks.workunit.client.1.vm05.stdout:1/39: getdents d4 
0 2026-03-10T10:19:12.285 INFO:tasks.workunit.client.1.vm05.stdout:2/33: dwrite f7 [0,4194304] 0 2026-03-10T10:19:12.285 INFO:tasks.workunit.client.0.vm02.stdout:9/110: truncate da/f15 880633 0 2026-03-10T10:19:12.287 INFO:tasks.workunit.client.1.vm05.stdout:2/34: chown l4 18575305 1 2026-03-10T10:19:12.287 INFO:tasks.workunit.client.1.vm05.stdout:4/38: dread f0 [0,4194304] 0 2026-03-10T10:19:12.287 INFO:tasks.workunit.client.1.vm05.stdout:8/35: dread f0 [0,4194304] 0 2026-03-10T10:19:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:12 vm05.local ceph-mon[59051]: pgmap v145: 65 pgs: 65 active+clean; 238 MiB data, 1.8 GiB used, 118 GiB / 120 GiB avail; 1.7 MiB/s rd, 7.3 MiB/s wr, 317 op/s 2026-03-10T10:19:12.287 INFO:tasks.workunit.client.1.vm05.stdout:1/40: write d4/f5 [1579688,11245] 0 2026-03-10T10:19:12.288 INFO:tasks.workunit.client.1.vm05.stdout:5/56: mknod da/c11 0 2026-03-10T10:19:12.288 INFO:tasks.workunit.client.1.vm05.stdout:6/36: dwrite f2 [0,4194304] 0 2026-03-10T10:19:12.288 INFO:tasks.workunit.client.1.vm05.stdout:1/41: read f0 [621173,89002] 0 2026-03-10T10:19:12.295 INFO:tasks.workunit.client.0.vm02.stdout:8/115: rename d1/d2/f18 to d1/d1c/f1e 0 2026-03-10T10:19:12.296 INFO:tasks.workunit.client.1.vm05.stdout:0/49: mknod d1/d2/d9/c10 0 2026-03-10T10:19:12.296 INFO:tasks.workunit.client.1.vm05.stdout:8/36: fsync f0 0 2026-03-10T10:19:12.297 INFO:tasks.workunit.client.1.vm05.stdout:8/37: chown f0 508485580 1 2026-03-10T10:19:12.300 INFO:tasks.workunit.client.1.vm05.stdout:5/57: write f5 [3835434,8445] 0 2026-03-10T10:19:12.302 INFO:tasks.workunit.client.0.vm02.stdout:7/96: link d1/dc/d10/f18 d1/dc/d16/f1f 0 2026-03-10T10:19:12.304 INFO:tasks.workunit.client.0.vm02.stdout:7/97: dread d1/dc/f3 [0,4194304] 0 2026-03-10T10:19:12.310 INFO:tasks.workunit.client.1.vm05.stdout:1/42: rename d4/f5 to d4/fe 0 2026-03-10T10:19:12.310 INFO:tasks.workunit.client.1.vm05.stdout:1/43: write f3 [1908282,67096] 0 2026-03-10T10:19:12.310 
INFO:tasks.workunit.client.0.vm02.stdout:7/98: truncate d1/f5 5101827 0 2026-03-10T10:19:12.310 INFO:tasks.workunit.client.0.vm02.stdout:9/111: truncate da/f13 466182 0 2026-03-10T10:19:12.319 INFO:tasks.workunit.client.0.vm02.stdout:8/116: stat d1/d1c/c1a 0 2026-03-10T10:19:12.329 INFO:tasks.workunit.client.1.vm05.stdout:0/50: write f0 [3525821,35677] 0 2026-03-10T10:19:12.329 INFO:tasks.workunit.client.1.vm05.stdout:4/39: symlink d1/d3/l4 0 2026-03-10T10:19:12.329 INFO:tasks.workunit.client.0.vm02.stdout:8/117: dread d1/f10 [0,4194304] 0 2026-03-10T10:19:12.329 INFO:tasks.workunit.client.0.vm02.stdout:8/118: readlink d1/d1c/lf 0 2026-03-10T10:19:12.329 INFO:tasks.workunit.client.0.vm02.stdout:8/119: read - d1/f1b zero size 2026-03-10T10:19:12.329 INFO:tasks.workunit.client.0.vm02.stdout:7/99: mknod d1/dc/d10/c20 0 2026-03-10T10:19:12.329 INFO:tasks.workunit.client.0.vm02.stdout:9/112: symlink da/d10/l22 0 2026-03-10T10:19:12.329 INFO:tasks.workunit.client.0.vm02.stdout:2/133: getdents d0/d1a 0 2026-03-10T10:19:12.330 INFO:tasks.workunit.client.1.vm05.stdout:6/37: creat dd/fe x:0 0 0 2026-03-10T10:19:12.330 INFO:tasks.workunit.client.1.vm05.stdout:6/38: stat c6 0 2026-03-10T10:19:12.331 INFO:tasks.workunit.client.0.vm02.stdout:0/130: rename d9/lb to d9/d18/d1a/l21 0 2026-03-10T10:19:12.334 INFO:tasks.workunit.client.0.vm02.stdout:8/120: mknod d1/d1c/c1f 0 2026-03-10T10:19:12.335 INFO:tasks.workunit.client.0.vm02.stdout:8/121: write d1/d1c/f14 [121115,31906] 0 2026-03-10T10:19:12.336 INFO:tasks.workunit.client.1.vm05.stdout:6/39: dwrite fb [0,4194304] 0 2026-03-10T10:19:12.337 INFO:tasks.workunit.client.0.vm02.stdout:5/102: fdatasync d1/f3 0 2026-03-10T10:19:12.339 INFO:tasks.workunit.client.1.vm05.stdout:6/40: write fb [366493,11169] 0 2026-03-10T10:19:12.342 INFO:tasks.workunit.client.1.vm05.stdout:6/41: write f2 [4522271,78060] 0 2026-03-10T10:19:12.348 INFO:tasks.workunit.client.1.vm05.stdout:6/42: dwrite f3 [0,4194304] 0 2026-03-10T10:19:12.357 
INFO:tasks.workunit.client.1.vm05.stdout:6/43: dwrite f3 [4194304,4194304] 0 2026-03-10T10:19:12.361 INFO:tasks.workunit.client.1.vm05.stdout:1/44: mkdir d4/df 0 2026-03-10T10:19:12.363 INFO:tasks.workunit.client.1.vm05.stdout:1/45: chown d4/df 1448360 1 2026-03-10T10:19:12.365 INFO:tasks.workunit.client.1.vm05.stdout:6/44: dwrite dd/fe [0,4194304] 0 2026-03-10T10:19:12.385 INFO:tasks.workunit.client.1.vm05.stdout:0/51: creat d1/f11 x:0 0 0 2026-03-10T10:19:12.385 INFO:tasks.workunit.client.1.vm05.stdout:4/40: creat d1/d3/f5 x:0 0 0 2026-03-10T10:19:12.385 INFO:tasks.workunit.client.0.vm02.stdout:8/122: creat d1/d1c/f20 x:0 0 0 2026-03-10T10:19:12.390 INFO:tasks.workunit.client.1.vm05.stdout:5/58: mkdir da/db/de/d12 0 2026-03-10T10:19:12.391 INFO:tasks.workunit.client.0.vm02.stdout:7/100: mknod d1/c21 0 2026-03-10T10:19:12.400 INFO:tasks.workunit.client.0.vm02.stdout:8/123: creat d1/f21 x:0 0 0 2026-03-10T10:19:12.403 INFO:tasks.workunit.client.0.vm02.stdout:8/124: dread d1/f10 [0,4194304] 0 2026-03-10T10:19:12.405 INFO:tasks.workunit.client.0.vm02.stdout:7/101: unlink d1/dc/d10/f18 0 2026-03-10T10:19:12.406 INFO:tasks.workunit.client.0.vm02.stdout:7/102: stat d1/dc/d16/f1e 0 2026-03-10T10:19:12.410 INFO:tasks.workunit.client.0.vm02.stdout:8/125: symlink d1/l22 0 2026-03-10T10:19:12.417 INFO:tasks.workunit.client.0.vm02.stdout:7/103: unlink d1/dc/d10/c20 0 2026-03-10T10:19:12.419 INFO:tasks.workunit.client.1.vm05.stdout:4/41: creat d1/f6 x:0 0 0 2026-03-10T10:19:12.419 INFO:tasks.workunit.client.0.vm02.stdout:2/134: getdents d0/d10 0 2026-03-10T10:19:12.427 INFO:tasks.workunit.client.1.vm05.stdout:5/59: mknod da/c13 0 2026-03-10T10:19:12.427 INFO:tasks.workunit.client.0.vm02.stdout:8/126: mkdir d1/d1c/d23 0 2026-03-10T10:19:12.428 INFO:tasks.workunit.client.1.vm05.stdout:0/52: mkdir d1/d7/db/d12 0 2026-03-10T10:19:12.428 INFO:tasks.workunit.client.0.vm02.stdout:5/103: link d1/c2 d1/c20 0 2026-03-10T10:19:12.429 INFO:tasks.workunit.client.0.vm02.stdout:5/104: chown 
d1/db/d11/d13 1572909 1 2026-03-10T10:19:12.429 INFO:tasks.workunit.client.0.vm02.stdout:5/105: fdatasync d1/f10 0 2026-03-10T10:19:12.430 INFO:tasks.workunit.client.1.vm05.stdout:4/42: symlink d1/d3/l7 0 2026-03-10T10:19:12.431 INFO:tasks.workunit.client.1.vm05.stdout:4/43: dread - d1/f6 zero size 2026-03-10T10:19:12.432 INFO:tasks.workunit.client.0.vm02.stdout:2/135: rmdir d0 39 2026-03-10T10:19:12.433 INFO:tasks.workunit.client.1.vm05.stdout:0/53: mkdir d1/d7/db/d13 0 2026-03-10T10:19:12.433 INFO:tasks.workunit.client.0.vm02.stdout:8/127: mkdir d1/d1c/d24 0 2026-03-10T10:19:12.435 INFO:tasks.workunit.client.1.vm05.stdout:0/54: symlink d1/d2/d9/l14 0 2026-03-10T10:19:12.437 INFO:tasks.workunit.client.0.vm02.stdout:2/136: fdatasync d0/fe 0 2026-03-10T10:19:12.439 INFO:tasks.workunit.client.1.vm05.stdout:0/55: mkdir d1/d7/db/d13/d15 0 2026-03-10T10:19:12.439 INFO:tasks.workunit.client.1.vm05.stdout:0/56: stat d1/d2/d9/fd 0 2026-03-10T10:19:12.441 INFO:tasks.workunit.client.0.vm02.stdout:8/128: mkdir d1/d1c/d23/d25 0 2026-03-10T10:19:12.443 INFO:tasks.workunit.client.0.vm02.stdout:2/137: dwrite d0/d1a/f25 [0,4194304] 0 2026-03-10T10:19:12.463 INFO:tasks.workunit.client.0.vm02.stdout:8/129: creat d1/d1c/d23/d25/f26 x:0 0 0 2026-03-10T10:19:12.465 INFO:tasks.workunit.client.0.vm02.stdout:2/138: getdents d0/d1a 0 2026-03-10T10:19:12.467 INFO:tasks.workunit.client.0.vm02.stdout:2/139: write d0/d10/f14 [1592386,43553] 0 2026-03-10T10:19:12.470 INFO:tasks.workunit.client.0.vm02.stdout:2/140: mknod d0/d10/c2b 0 2026-03-10T10:19:12.476 INFO:tasks.workunit.client.1.vm05.stdout:7/81: write d5/fe [2225598,53374] 0 2026-03-10T10:19:12.477 INFO:tasks.workunit.client.1.vm05.stdout:7/82: dwrite d5/ff [0,4194304] 0 2026-03-10T10:19:12.480 INFO:tasks.workunit.client.0.vm02.stdout:5/106: rmdir d1/db/d11/d16 39 2026-03-10T10:19:12.481 INFO:tasks.workunit.client.0.vm02.stdout:5/107: stat d1/c14 0 2026-03-10T10:19:12.481 INFO:tasks.workunit.client.0.vm02.stdout:5/108: fsync d1/f10 0 
2026-03-10T10:19:12.496 INFO:tasks.workunit.client.0.vm02.stdout:5/109: creat d1/db/d11/d13/f21 x:0 0 0 2026-03-10T10:19:12.499 INFO:tasks.workunit.client.0.vm02.stdout:5/110: creat d1/db/f22 x:0 0 0 2026-03-10T10:19:12.504 INFO:tasks.workunit.client.1.vm05.stdout:9/40: rmdir d0/d2 39 2026-03-10T10:19:12.505 INFO:tasks.workunit.client.0.vm02.stdout:3/72: dwrite f0 [0,4194304] 0 2026-03-10T10:19:12.507 INFO:tasks.workunit.client.0.vm02.stdout:5/111: stat d1/c9 0 2026-03-10T10:19:12.508 INFO:tasks.workunit.client.0.vm02.stdout:5/112: readlink d1/lc 0 2026-03-10T10:19:12.509 INFO:tasks.workunit.client.0.vm02.stdout:3/73: symlink d1/d6/l17 0 2026-03-10T10:19:12.511 INFO:tasks.workunit.client.0.vm02.stdout:5/113: unlink d1/f5 0 2026-03-10T10:19:12.512 INFO:tasks.workunit.client.0.vm02.stdout:5/114: truncate d1/db/d11/d13/f1c 52762 0 2026-03-10T10:19:12.513 INFO:tasks.workunit.client.0.vm02.stdout:3/74: mkdir d1/d18 0 2026-03-10T10:19:12.514 INFO:tasks.workunit.client.0.vm02.stdout:3/75: chown d1/d18 646454 1 2026-03-10T10:19:12.514 INFO:tasks.workunit.client.1.vm05.stdout:9/41: dwrite d0/f7 [0,4194304] 0 2026-03-10T10:19:12.519 INFO:tasks.workunit.client.1.vm05.stdout:9/42: dread d0/f7 [0,4194304] 0 2026-03-10T10:19:12.521 INFO:tasks.workunit.client.0.vm02.stdout:5/115: mknod d1/db/d11/d13/c23 0 2026-03-10T10:19:12.522 INFO:tasks.workunit.client.0.vm02.stdout:5/116: dread - d1/db/d11/d13/f21 zero size 2026-03-10T10:19:12.523 INFO:tasks.workunit.client.0.vm02.stdout:5/117: write d1/db/d11/d13/f21 [531636,5017] 0 2026-03-10T10:19:12.528 INFO:tasks.workunit.client.0.vm02.stdout:3/76: rename d1/fc to d1/f19 0 2026-03-10T10:19:12.542 INFO:tasks.workunit.client.0.vm02.stdout:5/118: rmdir d1/db/d11 39 2026-03-10T10:19:12.542 INFO:tasks.workunit.client.0.vm02.stdout:5/119: write d1/db/f1e [730598,93911] 0 2026-03-10T10:19:12.543 INFO:tasks.workunit.client.0.vm02.stdout:3/77: creat d1/d8/f1a x:0 0 0 2026-03-10T10:19:12.543 INFO:tasks.workunit.client.0.vm02.stdout:3/78: dwrite 
d1/d8/f9 [4194304,4194304] 0 2026-03-10T10:19:12.543 INFO:tasks.workunit.client.1.vm05.stdout:9/43: mknod d0/d2/cd 0 2026-03-10T10:19:12.543 INFO:tasks.workunit.client.1.vm05.stdout:9/44: mkdir d0/d2/de 0 2026-03-10T10:19:12.543 INFO:tasks.workunit.client.1.vm05.stdout:9/45: chown d0/d1/fb 15 1 2026-03-10T10:19:12.543 INFO:tasks.workunit.client.1.vm05.stdout:9/46: mkdir d0/df 0 2026-03-10T10:19:12.543 INFO:tasks.workunit.client.1.vm05.stdout:9/47: dread d0/f7 [0,4194304] 0 2026-03-10T10:19:12.543 INFO:tasks.workunit.client.0.vm02.stdout:3/79: rmdir d1/d18 0 2026-03-10T10:19:12.543 INFO:tasks.workunit.client.0.vm02.stdout:3/80: write d1/d8/f1a [226428,63138] 0 2026-03-10T10:19:12.545 INFO:tasks.workunit.client.0.vm02.stdout:3/81: creat d1/d6/f1b x:0 0 0 2026-03-10T10:19:12.547 INFO:tasks.workunit.client.1.vm05.stdout:9/48: link d0/d2/l6 d0/d1/l10 0 2026-03-10T10:19:12.548 INFO:tasks.workunit.client.1.vm05.stdout:9/49: truncate d0/fa 557387 0 2026-03-10T10:19:12.549 INFO:tasks.workunit.client.1.vm05.stdout:9/50: readlink d0/d1/l10 0 2026-03-10T10:19:12.550 INFO:tasks.workunit.client.1.vm05.stdout:9/51: truncate d0/d1/fb 355186 0 2026-03-10T10:19:12.551 INFO:tasks.workunit.client.1.vm05.stdout:9/52: mkdir d0/df/d11 0 2026-03-10T10:19:12.552 INFO:tasks.workunit.client.1.vm05.stdout:9/53: chown d0/d2/l6 14 1 2026-03-10T10:19:12.552 INFO:tasks.workunit.client.1.vm05.stdout:9/54: chown d0/d2 207435 1 2026-03-10T10:19:12.600 INFO:tasks.workunit.client.1.vm05.stdout:7/83: sync 2026-03-10T10:19:12.602 INFO:tasks.workunit.client.1.vm05.stdout:7/84: rename c0 to d5/c14 0 2026-03-10T10:19:12.603 INFO:tasks.workunit.client.1.vm05.stdout:7/85: chown l2 180286395 1 2026-03-10T10:19:12.604 INFO:tasks.workunit.client.1.vm05.stdout:7/86: rmdir d5/dd 39 2026-03-10T10:19:12.662 INFO:tasks.workunit.client.0.vm02.stdout:6/87: write d0/d8/d9/f14 [4650032,113993] 0 2026-03-10T10:19:12.662 INFO:tasks.workunit.client.0.vm02.stdout:6/88: fsync d0/d8/d9/f13 0 2026-03-10T10:19:12.663 
INFO:tasks.workunit.client.1.vm05.stdout:8/38: read f6 [147445,25677] 0 2026-03-10T10:19:12.663 INFO:tasks.workunit.client.1.vm05.stdout:8/39: dread f0 [0,4194304] 0 2026-03-10T10:19:12.664 INFO:tasks.workunit.client.0.vm02.stdout:6/89: dread d0/d7/ff [0,4194304] 0 2026-03-10T10:19:12.665 INFO:tasks.workunit.client.1.vm05.stdout:3/32: write f2 [1510799,2375] 0 2026-03-10T10:19:12.666 INFO:tasks.workunit.client.0.vm02.stdout:1/69: write f3 [551644,96784] 0 2026-03-10T10:19:12.669 INFO:tasks.workunit.client.1.vm05.stdout:8/40: read f6 [269346,116378] 0 2026-03-10T10:19:12.671 INFO:tasks.workunit.client.0.vm02.stdout:6/90: dwrite d0/d8/d9/f13 [0,4194304] 0 2026-03-10T10:19:12.679 INFO:tasks.workunit.client.1.vm05.stdout:7/87: fdatasync d5/ff 0 2026-03-10T10:19:12.680 INFO:tasks.workunit.client.0.vm02.stdout:5/120: read d1/db/d11/d13/f21 [198579,8177] 0 2026-03-10T10:19:12.691 INFO:tasks.workunit.client.1.vm05.stdout:7/88: fsync d5/fe 0 2026-03-10T10:19:12.691 INFO:tasks.workunit.client.1.vm05.stdout:7/89: readlink l2 0 2026-03-10T10:19:12.692 INFO:tasks.workunit.client.1.vm05.stdout:3/33: symlink l8 0 2026-03-10T10:19:12.696 INFO:tasks.workunit.client.1.vm05.stdout:8/41: rename f0 to d7/f8 0 2026-03-10T10:19:12.720 INFO:tasks.workunit.client.0.vm02.stdout:6/91: rename d0/d7/ff to d0/f1c 0 2026-03-10T10:19:12.720 INFO:tasks.workunit.client.0.vm02.stdout:5/121: creat d1/f24 x:0 0 0 2026-03-10T10:19:12.720 INFO:tasks.workunit.client.0.vm02.stdout:5/122: dwrite d1/f12 [0,4194304] 0 2026-03-10T10:19:12.720 INFO:tasks.workunit.client.0.vm02.stdout:5/123: chown d1/db/d11/d1a 7672516 1 2026-03-10T10:19:12.720 INFO:tasks.workunit.client.0.vm02.stdout:6/92: dwrite d0/f1c [0,4194304] 0 2026-03-10T10:19:12.720 INFO:tasks.workunit.client.0.vm02.stdout:4/154: truncate d1/d10/db/f16 7409718 0 2026-03-10T10:19:12.720 INFO:tasks.workunit.client.0.vm02.stdout:4/155: dread - d1/d10/db/f20 zero size 2026-03-10T10:19:12.720 INFO:tasks.workunit.client.0.vm02.stdout:5/124: rename d1/f24 to 
d1/db/d11/d13/f25 0 2026-03-10T10:19:12.720 INFO:tasks.workunit.client.0.vm02.stdout:6/93: symlink d0/d8/l1d 0 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:5/60: dwrite da/db/fd [0,4194304] 0 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:5/61: stat l8 0 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:7/90: dread d5/ff [0,4194304] 0 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:7/91: chown d5/cc 181 1 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:3/34: readlink l0 0 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:5/62: dwrite da/db/fd [0,4194304] 0 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:5/63: rename da to da/d14 22 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:3/35: write f3 [1023950,114770] 0 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:5/64: write da/f10 [56086,20253] 0 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:3/36: dread - f6 zero size 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:5/65: write da/db/fd [497156,16212] 0 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:3/37: readlink l7 0 2026-03-10T10:19:12.721 INFO:tasks.workunit.client.1.vm05.stdout:3/38: readlink l0 0 2026-03-10T10:19:12.723 INFO:tasks.workunit.client.0.vm02.stdout:4/156: rename d1/d2/de to d1/d2/d37 0 2026-03-10T10:19:12.727 INFO:tasks.workunit.client.1.vm05.stdout:7/92: dread d5/f13 [0,4194304] 0 2026-03-10T10:19:12.727 INFO:tasks.workunit.client.0.vm02.stdout:4/157: fsync d1/d10/db/f15 0 2026-03-10T10:19:12.727 INFO:tasks.workunit.client.0.vm02.stdout:5/125: dwrite d1/db/f22 [0,4194304] 0 2026-03-10T10:19:12.728 INFO:tasks.workunit.client.0.vm02.stdout:1/70: sync 2026-03-10T10:19:12.729 INFO:tasks.workunit.client.0.vm02.stdout:1/71: chown d4/da/fc 1414 1 2026-03-10T10:19:12.735 INFO:tasks.workunit.client.0.vm02.stdout:6/94: symlink d0/d7/l1e 0 
2026-03-10T10:19:12.736 INFO:tasks.workunit.client.1.vm05.stdout:5/66: creat da/f15 x:0 0 0 2026-03-10T10:19:12.739 INFO:tasks.workunit.client.0.vm02.stdout:4/158: symlink d1/d10/l38 0 2026-03-10T10:19:12.740 INFO:tasks.workunit.client.0.vm02.stdout:4/159: truncate d1/d2/f34 893230 0 2026-03-10T10:19:12.740 INFO:tasks.workunit.client.0.vm02.stdout:5/126: creat d1/f26 x:0 0 0 2026-03-10T10:19:12.743 INFO:tasks.workunit.client.0.vm02.stdout:4/160: dread d1/d10/f6 [4194304,4194304] 0 2026-03-10T10:19:12.751 INFO:tasks.workunit.client.1.vm05.stdout:7/93: write d5/dd/f12 [852976,4936] 0 2026-03-10T10:19:12.751 INFO:tasks.workunit.client.1.vm05.stdout:3/39: dwrite f1 [4194304,4194304] 0 2026-03-10T10:19:12.751 INFO:tasks.workunit.client.0.vm02.stdout:6/95: symlink d0/d8/l1f 0 2026-03-10T10:19:12.751 INFO:tasks.workunit.client.0.vm02.stdout:1/72: link d4/ff d4/da/d14/f19 0 2026-03-10T10:19:12.751 INFO:tasks.workunit.client.0.vm02.stdout:4/161: unlink d1/fd 0 2026-03-10T10:19:12.751 INFO:tasks.workunit.client.0.vm02.stdout:5/127: creat d1/db/d11/d1a/f27 x:0 0 0 2026-03-10T10:19:12.751 INFO:tasks.workunit.client.0.vm02.stdout:6/96: stat d0/l5 0 2026-03-10T10:19:12.751 INFO:tasks.workunit.client.0.vm02.stdout:1/73: rmdir d4/da 39 2026-03-10T10:19:12.752 INFO:tasks.workunit.client.0.vm02.stdout:1/74: dread d4/ff [0,4194304] 0 2026-03-10T10:19:12.754 INFO:tasks.workunit.client.1.vm05.stdout:5/67: rmdir da/db/de/d12 0 2026-03-10T10:19:12.754 INFO:tasks.workunit.client.0.vm02.stdout:5/128: mkdir d1/db/d11/d13/d28 0 2026-03-10T10:19:12.760 INFO:tasks.workunit.client.0.vm02.stdout:6/97: creat d0/f20 x:0 0 0 2026-03-10T10:19:12.761 INFO:tasks.workunit.client.0.vm02.stdout:6/98: readlink d0/d8/l19 0 2026-03-10T10:19:12.761 INFO:tasks.workunit.client.0.vm02.stdout:6/99: dread - d0/f20 zero size 2026-03-10T10:19:12.772 INFO:tasks.workunit.client.1.vm05.stdout:3/40: link f2 f9 0 2026-03-10T10:19:12.772 INFO:tasks.workunit.client.1.vm05.stdout:5/68: rmdir da/db/de 39 
2026-03-10T10:19:12.774 INFO:tasks.workunit.client.1.vm05.stdout:7/94: rename d5/c8 to d5/c15 0 2026-03-10T10:19:12.774 INFO:tasks.workunit.client.0.vm02.stdout:6/100: creat d0/f21 x:0 0 0 2026-03-10T10:19:12.777 INFO:tasks.workunit.client.0.vm02.stdout:5/129: mkdir d1/db/d11/d16/d29 0 2026-03-10T10:19:12.777 INFO:tasks.workunit.client.0.vm02.stdout:5/130: dread - d1/f26 zero size 2026-03-10T10:19:12.779 INFO:tasks.workunit.client.1.vm05.stdout:3/41: creat fa x:0 0 0 2026-03-10T10:19:12.780 INFO:tasks.workunit.client.0.vm02.stdout:5/131: dread d1/f10 [0,4194304] 0 2026-03-10T10:19:12.782 INFO:tasks.workunit.client.1.vm05.stdout:5/69: symlink da/db/de/l16 0 2026-03-10T10:19:12.782 INFO:tasks.workunit.client.0.vm02.stdout:5/132: creat d1/f2a x:0 0 0 2026-03-10T10:19:12.783 INFO:tasks.workunit.client.0.vm02.stdout:5/133: read - d1/db/d11/d16/f19 zero size 2026-03-10T10:19:12.785 INFO:tasks.workunit.client.1.vm05.stdout:5/70: mkdir da/db/d17 0 2026-03-10T10:19:12.786 INFO:tasks.workunit.client.0.vm02.stdout:5/134: rename d1/ca to d1/db/d11/d16/c2b 0 2026-03-10T10:19:12.786 INFO:tasks.workunit.client.0.vm02.stdout:5/135: chown d1/fe 80859087 1 2026-03-10T10:19:12.788 INFO:tasks.workunit.client.1.vm05.stdout:5/71: dread da/db/fd [0,4194304] 0 2026-03-10T10:19:12.788 INFO:tasks.workunit.client.0.vm02.stdout:5/136: creat d1/db/d11/d13/d28/f2c x:0 0 0 2026-03-10T10:19:12.789 INFO:tasks.workunit.client.0.vm02.stdout:5/137: mknod d1/c2d 0 2026-03-10T10:19:12.789 INFO:tasks.workunit.client.0.vm02.stdout:5/138: chown d1/lc 841326 1 2026-03-10T10:19:12.791 INFO:tasks.workunit.client.0.vm02.stdout:5/139: creat d1/db/d11/d13/d28/f2e x:0 0 0 2026-03-10T10:19:12.791 INFO:tasks.workunit.client.1.vm05.stdout:5/72: symlink da/db/de/l18 0 2026-03-10T10:19:12.791 INFO:tasks.workunit.client.0.vm02.stdout:5/140: write d1/db/d11/d13/f1c [708064,103707] 0 2026-03-10T10:19:12.794 INFO:tasks.workunit.client.1.vm05.stdout:5/73: write da/f10 [118353,26738] 0 2026-03-10T10:19:12.795 
INFO:tasks.workunit.client.1.vm05.stdout:5/74: symlink da/db/de/l19 0 2026-03-10T10:19:12.797 INFO:tasks.workunit.client.1.vm05.stdout:5/75: unlink da/db/de/l16 0 2026-03-10T10:19:12.800 INFO:tasks.workunit.client.1.vm05.stdout:5/76: dwrite f9 [0,4194304] 0 2026-03-10T10:19:12.815 INFO:tasks.workunit.client.1.vm05.stdout:5/77: dwrite da/f15 [0,4194304] 0 2026-03-10T10:19:12.822 INFO:tasks.workunit.client.1.vm05.stdout:5/78: rename da/db/cc to da/db/c1a 0 2026-03-10T10:19:12.825 INFO:tasks.workunit.client.1.vm05.stdout:5/79: unlink l4 0 2026-03-10T10:19:12.826 INFO:tasks.workunit.client.1.vm05.stdout:5/80: unlink l8 0 2026-03-10T10:19:12.835 INFO:tasks.workunit.client.1.vm05.stdout:5/81: link c1 da/db/de/c1b 0 2026-03-10T10:19:12.850 INFO:tasks.workunit.client.1.vm05.stdout:5/82: stat c1 0 2026-03-10T10:19:12.850 INFO:tasks.workunit.client.1.vm05.stdout:5/83: chown da/f10 123541 1 2026-03-10T10:19:12.850 INFO:tasks.workunit.client.1.vm05.stdout:5/84: creat da/db/d17/f1c x:0 0 0 2026-03-10T10:19:12.850 INFO:tasks.workunit.client.1.vm05.stdout:5/85: dread f5 [0,4194304] 0 2026-03-10T10:19:12.850 INFO:tasks.workunit.client.1.vm05.stdout:5/86: dread f9 [0,4194304] 0 2026-03-10T10:19:12.850 INFO:tasks.workunit.client.1.vm05.stdout:5/87: dwrite f5 [0,4194304] 0 2026-03-10T10:19:12.893 INFO:tasks.workunit.client.1.vm05.stdout:7/95: sync 2026-03-10T10:19:12.894 INFO:tasks.workunit.client.1.vm05.stdout:7/96: symlink d5/l16 0 2026-03-10T10:19:12.898 INFO:tasks.workunit.client.1.vm05.stdout:7/97: dread d5/dd/f12 [0,4194304] 0 2026-03-10T10:19:12.901 INFO:tasks.workunit.client.1.vm05.stdout:7/98: dread d5/f13 [0,4194304] 0 2026-03-10T10:19:12.902 INFO:tasks.workunit.client.1.vm05.stdout:7/99: mkdir d5/d17 0 2026-03-10T10:19:12.904 INFO:tasks.workunit.client.1.vm05.stdout:7/100: creat d5/d17/f18 x:0 0 0 2026-03-10T10:19:12.905 INFO:tasks.workunit.client.1.vm05.stdout:7/101: creat d5/d17/f19 x:0 0 0 2026-03-10T10:19:12.943 INFO:tasks.workunit.client.0.vm02.stdout:7/104: fsync 
d1/dc/d16/f1f 0 2026-03-10T10:19:12.944 INFO:tasks.workunit.client.0.vm02.stdout:5/141: fsync d1/db/d11/d13/f1c 0 2026-03-10T10:19:12.944 INFO:tasks.workunit.client.0.vm02.stdout:5/142: dread - d1/f26 zero size 2026-03-10T10:19:12.945 INFO:tasks.workunit.client.0.vm02.stdout:5/143: read d1/db/f22 [2407639,71210] 0 2026-03-10T10:19:12.950 INFO:tasks.workunit.client.1.vm05.stdout:1/46: unlink d4/fe 0 2026-03-10T10:19:12.952 INFO:tasks.workunit.client.1.vm05.stdout:6/45: fsync dd/fe 0 2026-03-10T10:19:12.952 INFO:tasks.workunit.client.0.vm02.stdout:0/131: chown d9 4180265 1 2026-03-10T10:19:12.952 INFO:tasks.workunit.client.0.vm02.stdout:0/132: stat f2 0 2026-03-10T10:19:12.955 INFO:tasks.workunit.client.0.vm02.stdout:9/113: dwrite da/d10/f20 [0,4194304] 0 2026-03-10T10:19:12.979 INFO:tasks.workunit.client.0.vm02.stdout:0/133: dwrite d9/f1b [0,4194304] 0 2026-03-10T10:19:12.992 INFO:tasks.workunit.client.0.vm02.stdout:0/134: dwrite d9/d18/f1e [0,4194304] 0 2026-03-10T10:19:13.008 INFO:tasks.workunit.client.1.vm05.stdout:4/44: truncate f0 1206849 0 2026-03-10T10:19:13.009 INFO:tasks.workunit.client.0.vm02.stdout:5/144: creat d1/db/f2f x:0 0 0 2026-03-10T10:19:13.010 INFO:tasks.workunit.client.0.vm02.stdout:8/130: getdents d1/d1c/d23/d25 0 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.1.vm05.stdout:2/35: write f1 [1575400,80766] 0 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.1.vm05.stdout:2/36: chown f1 35677 1 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.1.vm05.stdout:2/37: readlink l4 0 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:0/135: fdatasync d9/f17 0 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:5/145: mkdir d1/db/d11/d13/d30 0 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:5/146: dread - d1/db/d11/d13/d28/f2c zero size 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:5/147: chown d1/lf 7260379 1 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:5/148: read 
d1/db/f15 [8933,93961] 0 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:8/131: write d1/d1c/f1d [8778207,17721] 0 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:2/141: chown d0/d10/l23 8365190 1 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:3/82: truncate d1/d8/fb 3917926 0 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:3/83: dwrite d1/d8/f1a [0,4194304] 0 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:0/136: mkdir d9/d18/d1a/d22 0 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:3/84: dread d1/d8/f1a [0,4194304] 0 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:0/137: dread f2 [0,4194304] 0 2026-03-10T10:19:13.041 INFO:tasks.workunit.client.0.vm02.stdout:0/138: dread f2 [0,4194304] 0 2026-03-10T10:19:13.042 INFO:tasks.workunit.client.1.vm05.stdout:9/55: truncate d0/fa 500037 0 2026-03-10T10:19:13.043 INFO:tasks.workunit.client.0.vm02.stdout:0/139: dwrite f2 [0,4194304] 0 2026-03-10T10:19:13.044 INFO:tasks.workunit.client.0.vm02.stdout:0/140: write d9/d18/f1e [431358,9138] 0 2026-03-10T10:19:13.047 INFO:tasks.workunit.client.0.vm02.stdout:7/105: link d1/dc/d10/l14 d1/l22 0 2026-03-10T10:19:13.048 INFO:tasks.workunit.client.0.vm02.stdout:8/132: rename d1/d1c/d23/d25/f26 to d1/d2/f27 0 2026-03-10T10:19:13.048 INFO:tasks.workunit.client.0.vm02.stdout:5/149: creat d1/db/d11/d13/d28/f31 x:0 0 0 2026-03-10T10:19:13.051 INFO:tasks.workunit.client.0.vm02.stdout:7/106: write d1/dc/ff [4835425,27895] 0 2026-03-10T10:19:13.055 INFO:tasks.workunit.client.0.vm02.stdout:5/150: chown d1/lc 60756665 1 2026-03-10T10:19:13.062 INFO:tasks.workunit.client.0.vm02.stdout:8/133: unlink c0 0 2026-03-10T10:19:13.073 INFO:tasks.workunit.client.0.vm02.stdout:3/85: truncate d1/fe 696914 0 2026-03-10T10:19:13.073 INFO:tasks.workunit.client.0.vm02.stdout:3/86: chown d1/d8 5923 1 2026-03-10T10:19:13.073 INFO:tasks.workunit.client.0.vm02.stdout:3/87: chown d1/d6 1150 1 
2026-03-10T10:19:13.073 INFO:tasks.workunit.client.0.vm02.stdout:3/88: stat d1/f3 0 2026-03-10T10:19:13.073 INFO:tasks.workunit.client.0.vm02.stdout:8/134: creat d1/d2/f28 x:0 0 0 2026-03-10T10:19:13.073 INFO:tasks.workunit.client.0.vm02.stdout:8/135: write d1/f12 [665861,7162] 0 2026-03-10T10:19:13.073 INFO:tasks.workunit.client.0.vm02.stdout:2/142: creat d0/f2c x:0 0 0 2026-03-10T10:19:13.073 INFO:tasks.workunit.client.0.vm02.stdout:8/136: dwrite d1/d1c/f14 [0,4194304] 0 2026-03-10T10:19:13.074 INFO:tasks.workunit.client.0.vm02.stdout:8/137: fsync d1/f16 0 2026-03-10T10:19:13.080 INFO:tasks.workunit.client.0.vm02.stdout:3/89: creat d1/f1c x:0 0 0 2026-03-10T10:19:13.080 INFO:tasks.workunit.client.0.vm02.stdout:3/90: dread - d1/f1c zero size 2026-03-10T10:19:13.082 INFO:tasks.workunit.client.0.vm02.stdout:0/141: creat d9/f23 x:0 0 0 2026-03-10T10:19:13.086 INFO:tasks.workunit.client.0.vm02.stdout:5/151: rmdir d1/db/d11/d13/d30 0 2026-03-10T10:19:13.086 INFO:tasks.workunit.client.0.vm02.stdout:5/152: readlink d1/lf 0 2026-03-10T10:19:13.090 INFO:tasks.workunit.client.0.vm02.stdout:2/143: creat d0/f2d x:0 0 0 2026-03-10T10:19:13.094 INFO:tasks.workunit.client.0.vm02.stdout:5/153: link d1/f2a d1/f32 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:5/154: write d1/f3 [5553936,12334] 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:0/142: mkdir d9/d18/d1a/d22/d24 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:2/144: symlink d0/d1a/l2e 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:2/145: dread - d0/d1a/f20 zero size 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:2/146: rename d0/d1a/l22 to d0/d1a/l2f 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:0/143: mkdir d9/d18/d1a/d22/d24/d25 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:5/155: rename d1/f2a to d1/db/d11/f33 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:0/144: 
creat d9/d18/d1a/d22/d24/f26 x:0 0 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:0/145: mknod d9/d18/d1a/c27 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:5/156: symlink d1/db/d11/l34 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:5/157: write d1/db/d11/d1a/f27 [462742,107481] 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:2/147: creat d0/f30 x:0 0 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:5/158: creat d1/db/d11/d13/d28/f35 x:0 0 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:5/159: chown d1/f32 119179236 1 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:2/148: creat d0/d1a/f31 x:0 0 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:2/149: fdatasync d0/d1a/f26 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:2/150: dread d0/d10/f1f [0,4194304] 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:0/146: creat d9/f28 x:0 0 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:5/160: mknod d1/db/d11/d16/d29/c36 0 2026-03-10T10:19:13.125 INFO:tasks.workunit.client.0.vm02.stdout:0/147: chown c8 10839 1 2026-03-10T10:19:13.126 INFO:tasks.workunit.client.0.vm02.stdout:0/148: read - d9/d18/d1a/f1f zero size 2026-03-10T10:19:13.126 INFO:tasks.workunit.client.0.vm02.stdout:2/151: mknod d0/c32 0 2026-03-10T10:19:13.126 INFO:tasks.workunit.client.0.vm02.stdout:5/161: write d1/db/d11/d13/f25 [614022,104914] 0 2026-03-10T10:19:13.126 INFO:tasks.workunit.client.0.vm02.stdout:0/149: write d9/f1b [1742573,110315] 0 2026-03-10T10:19:13.126 INFO:tasks.workunit.client.0.vm02.stdout:0/150: stat d9/f23 0 2026-03-10T10:19:13.126 INFO:tasks.workunit.client.0.vm02.stdout:0/151: chown d9/d18/d1a/f1f 18736475 1 2026-03-10T10:19:13.128 INFO:tasks.workunit.client.0.vm02.stdout:0/152: dread d9/d18/f1e [0,4194304] 0 2026-03-10T10:19:13.141 INFO:tasks.workunit.client.0.vm02.stdout:2/152: creat 
d0/d1a/f33 x:0 0 0 2026-03-10T10:19:13.141 INFO:tasks.workunit.client.0.vm02.stdout:5/162: mkdir d1/db/d11/d13/d28/d37 0 2026-03-10T10:19:13.141 INFO:tasks.workunit.client.0.vm02.stdout:0/153: mknod d9/d18/d1a/c29 0 2026-03-10T10:19:13.141 INFO:tasks.workunit.client.0.vm02.stdout:0/154: read - d9/f28 zero size 2026-03-10T10:19:13.141 INFO:tasks.workunit.client.0.vm02.stdout:0/155: dread d9/f1b [0,4194304] 0 2026-03-10T10:19:13.141 INFO:tasks.workunit.client.0.vm02.stdout:0/156: truncate f2 4739108 0 2026-03-10T10:19:13.141 INFO:tasks.workunit.client.0.vm02.stdout:0/157: chown d9/d18/d1a 106320533 1 2026-03-10T10:19:13.141 INFO:tasks.workunit.client.0.vm02.stdout:5/163: mknod d1/db/d11/d13/d28/c38 0 2026-03-10T10:19:13.141 INFO:tasks.workunit.client.0.vm02.stdout:5/164: dwrite d1/db/d11/d13/d28/f2e [0,4194304] 0 2026-03-10T10:19:13.141 INFO:tasks.workunit.client.0.vm02.stdout:2/153: creat d0/d1a/d24/f34 x:0 0 0 2026-03-10T10:19:13.145 INFO:tasks.workunit.client.0.vm02.stdout:0/158: chown d9/d18/d1a/l21 7096965 1 2026-03-10T10:19:13.146 INFO:tasks.workunit.client.0.vm02.stdout:2/154: creat d0/d1a/f35 x:0 0 0 2026-03-10T10:19:13.156 INFO:tasks.workunit.client.0.vm02.stdout:0/159: write d9/d18/d1a/d22/d24/f26 [377711,8049] 0 2026-03-10T10:19:13.157 INFO:tasks.workunit.client.0.vm02.stdout:0/160: rename d9/f23 to d9/d18/f2a 0 2026-03-10T10:19:13.159 INFO:tasks.workunit.client.0.vm02.stdout:0/161: write f2 [2659583,117263] 0 2026-03-10T10:19:13.167 INFO:tasks.workunit.client.0.vm02.stdout:2/155: creat d0/f36 x:0 0 0 2026-03-10T10:19:13.170 INFO:tasks.workunit.client.0.vm02.stdout:2/156: dwrite d0/fe [0,4194304] 0 2026-03-10T10:19:13.180 INFO:tasks.workunit.client.0.vm02.stdout:0/162: fdatasync d9/d18/f2a 0 2026-03-10T10:19:13.180 INFO:tasks.workunit.client.0.vm02.stdout:0/163: fdatasync d9/d18/f2a 0 2026-03-10T10:19:13.183 INFO:tasks.workunit.client.0.vm02.stdout:0/164: symlink d9/l2b 0 2026-03-10T10:19:13.183 INFO:tasks.workunit.client.0.vm02.stdout:0/165: dread - 
d9/f28 zero size 2026-03-10T10:19:13.195 INFO:tasks.workunit.client.0.vm02.stdout:2/157: link d0/l11 d0/d10/l37 0 2026-03-10T10:19:13.199 INFO:tasks.workunit.client.0.vm02.stdout:0/166: creat d9/d18/d1a/d22/d24/d25/f2c x:0 0 0 2026-03-10T10:19:13.199 INFO:tasks.workunit.client.0.vm02.stdout:0/167: rename d9/d18/d1a to d9/d18/d1a/d2d 22 2026-03-10T10:19:13.206 INFO:tasks.workunit.client.0.vm02.stdout:0/168: rename d9/d18/d1a/d22/d24/d25/f2c to d9/d18/f2e 0 2026-03-10T10:19:13.211 INFO:tasks.workunit.client.0.vm02.stdout:0/169: creat d9/d18/d1a/d22/d24/f2f x:0 0 0 2026-03-10T10:19:13.211 INFO:tasks.workunit.client.0.vm02.stdout:0/170: write d9/d18/f2a [152261,99345] 0 2026-03-10T10:19:13.216 INFO:tasks.workunit.client.0.vm02.stdout:2/158: creat d0/f38 x:0 0 0 2026-03-10T10:19:13.219 INFO:tasks.workunit.client.0.vm02.stdout:0/171: mkdir d9/d18/d1a/d30 0 2026-03-10T10:19:13.219 INFO:tasks.workunit.client.0.vm02.stdout:0/172: stat d9/d18/d1a/l21 0 2026-03-10T10:19:13.219 INFO:tasks.workunit.client.0.vm02.stdout:0/173: chown d9 8040364 1 2026-03-10T10:19:13.227 INFO:tasks.workunit.client.0.vm02.stdout:2/159: symlink d0/d10/l39 0 2026-03-10T10:19:13.266 INFO:tasks.workunit.client.1.vm05.stdout:4/45: creat d1/d3/f8 x:0 0 0 2026-03-10T10:19:13.282 INFO:tasks.workunit.client.1.vm05.stdout:2/38: symlink l8 0 2026-03-10T10:19:13.283 INFO:tasks.workunit.client.1.vm05.stdout:2/39: chown l4 1305633 1 2026-03-10T10:19:13.283 INFO:tasks.workunit.client.1.vm05.stdout:9/56: creat d0/df/d11/f12 x:0 0 0 2026-03-10T10:19:13.283 INFO:tasks.workunit.client.1.vm05.stdout:9/57: dwrite d0/d1/f5 [0,4194304] 0 2026-03-10T10:19:13.338 INFO:tasks.workunit.client.0.vm02.stdout:2/160: sync 2026-03-10T10:19:13.341 INFO:tasks.workunit.client.0.vm02.stdout:2/161: symlink d0/d1a/d24/l3a 0 2026-03-10T10:19:13.341 INFO:tasks.workunit.client.0.vm02.stdout:2/162: chown d0/d10 451938227 1 2026-03-10T10:19:13.347 INFO:tasks.workunit.client.0.vm02.stdout:2/163: mknod d0/d1a/d24/c3b 0 2026-03-10T10:19:13.350 
INFO:tasks.workunit.client.1.vm05.stdout:0/57: truncate d1/d7/f4 977361 0 2026-03-10T10:19:13.351 INFO:tasks.workunit.client.1.vm05.stdout:0/58: fdatasync d1/d2/d9/fd 0 2026-03-10T10:19:13.356 INFO:tasks.workunit.client.0.vm02.stdout:2/164: dwrite d0/f8 [0,4194304] 0 2026-03-10T10:19:13.359 INFO:tasks.workunit.client.0.vm02.stdout:2/165: chown d0/l2 13498 1 2026-03-10T10:19:13.360 INFO:tasks.workunit.client.1.vm05.stdout:8/42: truncate f2 30838 0 2026-03-10T10:19:13.364 INFO:tasks.workunit.client.0.vm02.stdout:1/75: rename d4/da/d14 to d4/da/d1a 0 2026-03-10T10:19:13.364 INFO:tasks.workunit.client.0.vm02.stdout:1/76: stat d4/ff 0 2026-03-10T10:19:13.365 INFO:tasks.workunit.client.1.vm05.stdout:8/43: dread f6 [0,4194304] 0 2026-03-10T10:19:13.367 INFO:tasks.workunit.client.0.vm02.stdout:2/166: mkdir d0/d10/d3c 0 2026-03-10T10:19:13.410 INFO:tasks.workunit.client.1.vm05.stdout:8/44: fsync d7/f8 0 2026-03-10T10:19:13.410 INFO:tasks.workunit.client.1.vm05.stdout:8/45: chown d7 38 1 2026-03-10T10:19:13.410 INFO:tasks.workunit.client.1.vm05.stdout:8/46: creat d7/f9 x:0 0 0 2026-03-10T10:19:13.410 INFO:tasks.workunit.client.1.vm05.stdout:8/47: unlink l4 0 2026-03-10T10:19:13.410 INFO:tasks.workunit.client.1.vm05.stdout:8/48: dread - d7/f9 zero size 2026-03-10T10:19:13.410 INFO:tasks.workunit.client.1.vm05.stdout:8/49: dread f6 [0,4194304] 0 2026-03-10T10:19:13.410 INFO:tasks.workunit.client.1.vm05.stdout:3/42: truncate f1 4545397 0 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:1/77: rmdir d4/da 39 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:2/167: mknod d0/d1a/c3d 0 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:2/168: write d0/f38 [827003,41893] 0 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:2/169: dread - d0/f30 zero size 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:9/114: dread da/f15 [0,4194304] 0 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:1/78: 
mkdir d4/d1b 0 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:1/79: write d4/f8 [2080671,109278] 0 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:9/115: creat da/d10/f23 x:0 0 0 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:9/116: stat da/d10/f19 0 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:9/117: readlink da/d10/l22 0 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:1/80: chown d4/da/d1a/c16 1900350 1 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:1/81: fsync d4/da/f12 0 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:6/101: dwrite d0/f2 [4194304,4194304] 0 2026-03-10T10:19:13.411 INFO:tasks.workunit.client.0.vm02.stdout:6/102: rename d0/d8/l1a to d0/d8/l22 0 2026-03-10T10:19:13.413 INFO:tasks.workunit.client.0.vm02.stdout:2/170: link d0/d10/l23 d0/d1a/l3e 0 2026-03-10T10:19:13.414 INFO:tasks.workunit.client.0.vm02.stdout:6/103: dread d0/f2 [0,4194304] 0 2026-03-10T10:19:13.415 INFO:tasks.workunit.client.0.vm02.stdout:1/82: link d4/da/f11 d4/da/d1a/f1c 0 2026-03-10T10:19:13.416 INFO:tasks.workunit.client.0.vm02.stdout:1/83: dread - d4/da/f13 zero size 2026-03-10T10:19:13.416 INFO:tasks.workunit.client.0.vm02.stdout:1/84: fdatasync d4/da/f12 0 2026-03-10T10:19:13.417 INFO:tasks.workunit.client.0.vm02.stdout:1/85: fsync d4/f18 0 2026-03-10T10:19:13.417 INFO:tasks.workunit.client.0.vm02.stdout:9/118: link da/d10/l22 da/l24 0 2026-03-10T10:19:13.419 INFO:tasks.workunit.client.1.vm05.stdout:5/88: truncate da/f15 1985525 0 2026-03-10T10:19:13.423 INFO:tasks.workunit.client.0.vm02.stdout:2/171: link d0/d1a/c3d d0/d1a/d24/c3f 0 2026-03-10T10:19:13.465 INFO:tasks.workunit.client.1.vm05.stdout:7/102: fsync d5/d17/f19 0 2026-03-10T10:19:13.465 INFO:tasks.workunit.client.1.vm05.stdout:7/103: chown d5/d17/f18 50179 1 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.1.vm05.stdout:7/104: dwrite d5/fa [0,4194304] 0 2026-03-10T10:19:13.466 
INFO:tasks.workunit.client.1.vm05.stdout:7/105: rmdir d5/d17 39 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.1.vm05.stdout:7/106: readlink d5/l11 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.1.vm05.stdout:7/107: creat d5/dd/f1a x:0 0 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.1.vm05.stdout:7/108: symlink d5/l1b 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.1.vm05.stdout:7/109: write d5/d17/f19 [574480,97992] 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:1/86: creat d4/f1d x:0 0 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:2/172: unlink d0/l11 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:5/165: dread d1/db/d11/d1a/f27 [0,4194304] 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:1/87: dwrite d4/ff [0,4194304] 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:5/166: creat d1/db/d11/d13/f39 x:0 0 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:5/167: rmdir d1/db/d11/d16/d29 39 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:2/173: rmdir d0/d10/d3c 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:2/174: write d0/f30 [1996,124251] 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:5/168: link d1/c7 d1/db/d11/d1a/c3a 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:5/169: read d1/db/fd [212984,124639] 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:5/170: dwrite d1/db/d11/d13/d28/f2e [0,4194304] 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:5/171: dread - d1/db/d11/d13/d28/f35 zero size 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.0.vm02.stdout:5/172: write d1/db/f22 [636286,71260] 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.1.vm05.stdout:7/110: write d5/d17/f19 [584983,66124] 0 2026-03-10T10:19:13.466 INFO:tasks.workunit.client.1.vm05.stdout:7/111: stat d5/dd 0 2026-03-10T10:19:13.467 
INFO:tasks.workunit.client.1.vm05.stdout:7/112: write d5/ff [931162,37685] 0 2026-03-10T10:19:13.468 INFO:tasks.workunit.client.0.vm02.stdout:5/173: dread d1/db/d11/d13/d28/f2e [0,4194304] 0 2026-03-10T10:19:13.482 INFO:tasks.workunit.client.0.vm02.stdout:5/174: dwrite d1/db/d11/d13/d28/f31 [0,4194304] 0 2026-03-10T10:19:13.484 INFO:tasks.workunit.client.0.vm02.stdout:5/175: dread - d1/db/d11/d16/f19 zero size 2026-03-10T10:19:13.485 INFO:tasks.workunit.client.0.vm02.stdout:5/176: chown d1/db/fd 2 1 2026-03-10T10:19:13.486 INFO:tasks.workunit.client.0.vm02.stdout:5/177: rename d1/db/d11/d13 to d1/db/d11/d13/d3b 22 2026-03-10T10:19:13.487 INFO:tasks.workunit.client.0.vm02.stdout:5/178: rmdir d1/db/d11/d16 39 2026-03-10T10:19:13.487 INFO:tasks.workunit.client.0.vm02.stdout:5/179: chown d1/db/f15 3005295 1 2026-03-10T10:19:13.488 INFO:tasks.workunit.client.0.vm02.stdout:5/180: chown d1/db/d11/f33 1990305 1 2026-03-10T10:19:13.488 INFO:tasks.workunit.client.0.vm02.stdout:5/181: truncate d1/db/f2f 815326 0 2026-03-10T10:19:13.491 INFO:tasks.workunit.client.0.vm02.stdout:5/182: link d1/f10 d1/db/d11/d13/d28/d37/f3c 0 2026-03-10T10:19:13.493 INFO:tasks.workunit.client.0.vm02.stdout:5/183: mkdir d1/db/d11/d13/d28/d37/d3d 0 2026-03-10T10:19:13.493 INFO:tasks.workunit.client.0.vm02.stdout:5/184: unlink d1/db/f22 0 2026-03-10T10:19:13.494 INFO:tasks.workunit.client.1.vm05.stdout:2/40: sync 2026-03-10T10:19:13.494 INFO:tasks.workunit.client.0.vm02.stdout:5/185: write d1/db/d11/d13/d28/f31 [1836476,74660] 0 2026-03-10T10:19:13.495 INFO:tasks.workunit.client.1.vm05.stdout:2/41: readlink l8 0 2026-03-10T10:19:13.506 INFO:tasks.workunit.client.1.vm05.stdout:2/42: symlink l9 0 2026-03-10T10:19:13.507 INFO:tasks.workunit.client.0.vm02.stdout:5/186: creat d1/db/d11/f3e x:0 0 0 2026-03-10T10:19:13.509 INFO:tasks.workunit.client.0.vm02.stdout:5/187: symlink d1/db/d11/l3f 0 2026-03-10T10:19:13.510 INFO:tasks.workunit.client.0.vm02.stdout:5/188: dread - d1/f32 zero size 
2026-03-10T10:19:13.512 INFO:tasks.workunit.client.0.vm02.stdout:5/189: dread d1/f3 [12582912,4194304] 0 2026-03-10T10:19:13.515 INFO:tasks.workunit.client.1.vm05.stdout:2/43: dwrite f7 [0,4194304] 0 2026-03-10T10:19:13.517 INFO:tasks.workunit.client.1.vm05.stdout:2/44: chown l6 1215459094 1 2026-03-10T10:19:13.517 INFO:tasks.workunit.client.0.vm02.stdout:5/190: dwrite d1/db/f2f [0,4194304] 0 2026-03-10T10:19:13.525 INFO:tasks.workunit.client.0.vm02.stdout:4/162: dread d1/d2/d37/f14 [0,4194304] 0 2026-03-10T10:19:13.531 INFO:tasks.workunit.client.1.vm05.stdout:2/45: dwrite f7 [4194304,4194304] 0 2026-03-10T10:19:13.533 INFO:tasks.workunit.client.1.vm05.stdout:2/46: write f7 [7466697,126769] 0 2026-03-10T10:19:13.535 INFO:tasks.workunit.client.0.vm02.stdout:8/138: fdatasync d1/f12 0 2026-03-10T10:19:13.535 INFO:tasks.workunit.client.0.vm02.stdout:5/191: write d1/db/d11/d13/f21 [1151084,33028] 0 2026-03-10T10:19:13.537 INFO:tasks.workunit.client.1.vm05.stdout:2/47: mknod ca 0 2026-03-10T10:19:13.537 INFO:tasks.workunit.client.0.vm02.stdout:8/139: creat d1/d2/f29 x:0 0 0 2026-03-10T10:19:13.537 INFO:tasks.workunit.client.0.vm02.stdout:8/140: dread - d1/f1b zero size 2026-03-10T10:19:13.539 INFO:tasks.workunit.client.0.vm02.stdout:8/141: unlink d1/l22 0 2026-03-10T10:19:13.558 INFO:tasks.workunit.client.1.vm05.stdout:2/48: dwrite f1 [0,4194304] 0 2026-03-10T10:19:13.558 INFO:tasks.workunit.client.0.vm02.stdout:8/142: creat d1/d1c/f2a x:0 0 0 2026-03-10T10:19:13.558 INFO:tasks.workunit.client.0.vm02.stdout:8/143: write d1/d1c/f1d [2008395,50353] 0 2026-03-10T10:19:13.558 INFO:tasks.workunit.client.0.vm02.stdout:8/144: creat d1/d1c/d23/d25/f2b x:0 0 0 2026-03-10T10:19:13.558 INFO:tasks.workunit.client.0.vm02.stdout:8/145: dwrite d1/d1c/f1d [8388608,4194304] 0 2026-03-10T10:19:13.558 INFO:tasks.workunit.client.0.vm02.stdout:8/146: dread - d1/f16 zero size 2026-03-10T10:19:13.558 INFO:tasks.workunit.client.0.vm02.stdout:8/147: symlink d1/d1c/l2c 0 2026-03-10T10:19:13.558 
INFO:tasks.workunit.client.0.vm02.stdout:8/148: dwrite d1/d1c/d23/d25/f2b [0,4194304] 0 2026-03-10T10:19:13.559 INFO:tasks.workunit.client.0.vm02.stdout:8/149: fdatasync d1/f21 0 2026-03-10T10:19:13.566 INFO:tasks.workunit.client.1.vm05.stdout:2/49: dwrite f7 [0,4194304] 0 2026-03-10T10:19:13.566 INFO:tasks.workunit.client.0.vm02.stdout:8/150: creat d1/d1c/d23/f2d x:0 0 0 2026-03-10T10:19:13.568 INFO:tasks.workunit.client.0.vm02.stdout:8/151: unlink d1/d1c/f1d 0 2026-03-10T10:19:13.569 INFO:tasks.workunit.client.1.vm05.stdout:2/50: write f1 [141307,54647] 0 2026-03-10T10:19:13.569 INFO:tasks.workunit.client.1.vm05.stdout:2/51: chown l4 83085 1 2026-03-10T10:19:13.570 INFO:tasks.workunit.client.0.vm02.stdout:8/152: link d1/d1c/l2c d1/d2/l2e 0 2026-03-10T10:19:13.571 INFO:tasks.workunit.client.0.vm02.stdout:8/153: fdatasync d1/f1b 0 2026-03-10T10:19:13.576 INFO:tasks.workunit.client.0.vm02.stdout:8/154: dwrite d1/d1c/f20 [0,4194304] 0 2026-03-10T10:19:13.583 INFO:tasks.workunit.client.0.vm02.stdout:8/155: mknod d1/d1c/d23/d25/c2f 0 2026-03-10T10:19:13.583 INFO:tasks.workunit.client.0.vm02.stdout:8/156: mknod d1/d1c/d23/d25/c30 0 2026-03-10T10:19:13.583 INFO:tasks.workunit.client.0.vm02.stdout:8/157: creat d1/d1c/d24/f31 x:0 0 0 2026-03-10T10:19:13.586 INFO:tasks.workunit.client.0.vm02.stdout:8/158: symlink d1/d1c/d23/d25/l32 0 2026-03-10T10:19:13.595 INFO:tasks.workunit.client.0.vm02.stdout:8/159: dwrite d1/d1c/f2a [0,4194304] 0 2026-03-10T10:19:13.595 INFO:tasks.workunit.client.0.vm02.stdout:8/160: dread - d1/d1c/d24/f31 zero size 2026-03-10T10:19:13.597 INFO:tasks.workunit.client.0.vm02.stdout:4/163: sync 2026-03-10T10:19:13.599 INFO:tasks.workunit.client.0.vm02.stdout:5/192: sync 2026-03-10T10:19:13.599 INFO:tasks.workunit.client.0.vm02.stdout:8/161: creat d1/d1c/f33 x:0 0 0 2026-03-10T10:19:13.604 INFO:tasks.workunit.client.0.vm02.stdout:8/162: unlink d1/d1c/cc 0 2026-03-10T10:19:13.605 INFO:tasks.workunit.client.0.vm02.stdout:5/193: dwrite d1/db/d11/d13/f39 
[0,4194304] 0 2026-03-10T10:19:13.616 INFO:tasks.workunit.client.0.vm02.stdout:8/163: creat d1/d1c/f34 x:0 0 0 2026-03-10T10:19:13.619 INFO:tasks.workunit.client.0.vm02.stdout:8/164: mkdir d1/d1c/d24/d35 0 2026-03-10T10:19:13.622 INFO:tasks.workunit.client.0.vm02.stdout:8/165: creat d1/d2/f36 x:0 0 0 2026-03-10T10:19:13.622 INFO:tasks.workunit.client.0.vm02.stdout:8/166: chown d1/d2/f36 10 1 2026-03-10T10:19:13.624 INFO:tasks.workunit.client.0.vm02.stdout:5/194: mkdir d1/db/d11/d16/d29/d40 0 2026-03-10T10:19:13.626 INFO:tasks.workunit.client.0.vm02.stdout:8/167: mknod d1/d1c/c37 0 2026-03-10T10:19:13.626 INFO:tasks.workunit.client.0.vm02.stdout:8/168: write d1/d1c/f33 [820380,109288] 0 2026-03-10T10:19:13.630 INFO:tasks.workunit.client.0.vm02.stdout:8/169: fdatasync d1/d1c/f1e 0 2026-03-10T10:19:13.631 INFO:tasks.workunit.client.0.vm02.stdout:5/195: chown d1/c7 1 1 2026-03-10T10:19:13.632 INFO:tasks.workunit.client.0.vm02.stdout:5/196: write d1/db/f2f [62955,121859] 0 2026-03-10T10:19:13.635 INFO:tasks.workunit.client.0.vm02.stdout:5/197: getdents d1/db/d11/d16/d29/d40 0 2026-03-10T10:19:13.639 INFO:tasks.workunit.client.0.vm02.stdout:5/198: mkdir d1/db/d11/d13/d28/d37/d41 0 2026-03-10T10:19:13.643 INFO:tasks.workunit.client.0.vm02.stdout:5/199: rename d1/db/d11/d13/f39 to d1/db/d11/d16/d29/d40/f42 0 2026-03-10T10:19:13.648 INFO:tasks.workunit.client.0.vm02.stdout:5/200: symlink d1/db/d11/d13/d28/d37/d41/l43 0 2026-03-10T10:19:13.694 INFO:tasks.workunit.client.1.vm05.stdout:2/52: sync 2026-03-10T10:19:13.709 INFO:tasks.workunit.client.0.vm02.stdout:8/170: sync 2026-03-10T10:19:13.709 INFO:tasks.workunit.client.0.vm02.stdout:5/201: sync 2026-03-10T10:19:13.711 INFO:tasks.workunit.client.0.vm02.stdout:8/171: dread d1/f12 [0,4194304] 0 2026-03-10T10:19:13.717 INFO:tasks.workunit.client.0.vm02.stdout:8/172: read - d1/d2/f28 zero size 2026-03-10T10:19:13.717 INFO:tasks.workunit.client.0.vm02.stdout:5/202: mknod d1/db/d11/d13/c44 0 2026-03-10T10:19:13.717 
INFO:tasks.workunit.client.0.vm02.stdout:8/173: symlink d1/d2/l38 0 2026-03-10T10:19:13.717 INFO:tasks.workunit.client.0.vm02.stdout:8/174: truncate d1/d2/f29 219061 0 2026-03-10T10:19:13.717 INFO:tasks.workunit.client.0.vm02.stdout:8/175: fdatasync d1/f1b 0 2026-03-10T10:19:13.726 INFO:tasks.workunit.client.0.vm02.stdout:5/203: mknod d1/db/d11/d13/c45 0 2026-03-10T10:19:13.726 INFO:tasks.workunit.client.0.vm02.stdout:5/204: stat d1/f12 0 2026-03-10T10:19:13.727 INFO:tasks.workunit.client.0.vm02.stdout:5/205: readlink d1/lf 0 2026-03-10T10:19:13.727 INFO:tasks.workunit.client.0.vm02.stdout:5/206: read d1/db/d11/d16/d29/d40/f42 [1518999,68485] 0 2026-03-10T10:19:13.728 INFO:tasks.workunit.client.0.vm02.stdout:8/176: mknod d1/c39 0 2026-03-10T10:19:13.730 INFO:tasks.workunit.client.0.vm02.stdout:5/207: mknod d1/c46 0 2026-03-10T10:19:13.739 INFO:tasks.workunit.client.0.vm02.stdout:8/177: mknod d1/d1c/c3a 0 2026-03-10T10:19:13.739 INFO:tasks.workunit.client.0.vm02.stdout:8/178: dwrite d1/d1c/d24/f31 [0,4194304] 0 2026-03-10T10:19:13.741 INFO:tasks.workunit.client.0.vm02.stdout:5/208: creat d1/db/d11/f47 x:0 0 0 2026-03-10T10:19:13.746 INFO:tasks.workunit.client.0.vm02.stdout:5/209: dwrite d1/db/d11/f47 [0,4194304] 0 2026-03-10T10:19:13.769 INFO:tasks.workunit.client.0.vm02.stdout:5/210: dwrite d1/db/f15 [0,4194304] 0 2026-03-10T10:19:13.779 INFO:tasks.workunit.client.0.vm02.stdout:5/211: dread d1/db/f15 [0,4194304] 0 2026-03-10T10:19:13.781 INFO:tasks.workunit.client.1.vm05.stdout:1/47: dwrite f0 [0,4194304] 0 2026-03-10T10:19:13.798 INFO:tasks.workunit.client.0.vm02.stdout:5/212: dread d1/db/fd [0,4194304] 0 2026-03-10T10:19:13.800 INFO:tasks.workunit.client.0.vm02.stdout:5/213: chown d1/db/d11/d1a/c3a 473 1 2026-03-10T10:19:13.800 INFO:tasks.workunit.client.0.vm02.stdout:5/214: readlink d1/l8 0 2026-03-10T10:19:13.806 INFO:tasks.workunit.client.0.vm02.stdout:5/215: unlink d1/db/d11/d1a/c3a 0 2026-03-10T10:19:13.811 INFO:tasks.workunit.client.0.vm02.stdout:5/216: 
dwrite d1/db/d11/d13/d28/f2e [0,4194304] 0 2026-03-10T10:19:13.828 INFO:tasks.workunit.client.0.vm02.stdout:5/217: mkdir d1/db/d11/d16/d48 0 2026-03-10T10:19:13.828 INFO:tasks.workunit.client.0.vm02.stdout:5/218: creat d1/db/d11/d13/d28/d37/d3d/f49 x:0 0 0 2026-03-10T10:19:13.938 INFO:tasks.workunit.client.1.vm05.stdout:7/113: fdatasync d5/d17/f19 0 2026-03-10T10:19:13.947 INFO:tasks.workunit.client.1.vm05.stdout:7/114: dread d5/ff [0,4194304] 0 2026-03-10T10:19:13.949 INFO:tasks.workunit.client.1.vm05.stdout:7/115: mknod d5/d17/c1c 0 2026-03-10T10:19:13.952 INFO:tasks.workunit.client.1.vm05.stdout:7/116: mkdir d5/d1d 0 2026-03-10T10:19:13.966 INFO:tasks.workunit.client.0.vm02.stdout:7/107: dwrite d1/dc/d16/f1e [0,4194304] 0 2026-03-10T10:19:13.975 INFO:tasks.workunit.client.0.vm02.stdout:3/91: dread f0 [0,4194304] 0 2026-03-10T10:19:13.978 INFO:tasks.workunit.client.0.vm02.stdout:3/92: symlink d1/l1d 0 2026-03-10T10:19:13.979 INFO:tasks.workunit.client.0.vm02.stdout:7/108: mknod d1/c23 0 2026-03-10T10:19:13.980 INFO:tasks.workunit.client.0.vm02.stdout:7/109: write d1/dc/ff [3963220,13768] 0 2026-03-10T10:19:13.980 INFO:tasks.workunit.client.0.vm02.stdout:7/110: chown d1/f15 25694 1 2026-03-10T10:19:13.984 INFO:tasks.workunit.client.0.vm02.stdout:3/93: symlink d1/d6/l1e 0 2026-03-10T10:19:13.984 INFO:tasks.workunit.client.0.vm02.stdout:7/111: dwrite d1/dc/d10/f13 [0,4194304] 0 2026-03-10T10:19:13.985 INFO:tasks.workunit.client.0.vm02.stdout:3/94: fdatasync d1/f1c 0 2026-03-10T10:19:13.991 INFO:tasks.workunit.client.0.vm02.stdout:3/95: mknod d1/d8/c1f 0 2026-03-10T10:19:13.993 INFO:tasks.workunit.client.0.vm02.stdout:7/112: dread d1/dc/d16/f1e [0,4194304] 0 2026-03-10T10:19:13.995 INFO:tasks.workunit.client.0.vm02.stdout:3/96: mkdir d1/d20 0 2026-03-10T10:19:13.995 INFO:tasks.workunit.client.0.vm02.stdout:3/97: write d1/f1c [504372,10020] 0 2026-03-10T10:19:13.996 INFO:tasks.workunit.client.0.vm02.stdout:7/113: creat d1/dc/d10/f24 x:0 0 0 2026-03-10T10:19:13.997 
INFO:tasks.workunit.client.0.vm02.stdout:3/98: mkdir d1/d8/d21 0 2026-03-10T10:19:13.999 INFO:tasks.workunit.client.0.vm02.stdout:7/114: creat d1/dc/f25 x:0 0 0 2026-03-10T10:19:14.009 INFO:tasks.workunit.client.0.vm02.stdout:7/115: creat d1/dc/f26 x:0 0 0 2026-03-10T10:19:14.009 INFO:tasks.workunit.client.0.vm02.stdout:7/116: fdatasync d1/f15 0 2026-03-10T10:19:14.009 INFO:tasks.workunit.client.0.vm02.stdout:7/117: link d1/dc/f3 d1/dc/d10/f27 0 2026-03-10T10:19:14.009 INFO:tasks.workunit.client.0.vm02.stdout:7/118: dwrite d1/dc/d10/f24 [0,4194304] 0 2026-03-10T10:19:14.012 INFO:tasks.workunit.client.0.vm02.stdout:7/119: mkdir d1/dc/d16/d28 0 2026-03-10T10:19:14.019 INFO:tasks.workunit.client.0.vm02.stdout:0/174: fsync d9/d18/d1a/d22/d24/f26 0 2026-03-10T10:19:14.024 INFO:tasks.workunit.client.0.vm02.stdout:0/175: dwrite d9/d18/d1a/d22/d24/f2f [0,4194304] 0 2026-03-10T10:19:14.025 INFO:tasks.workunit.client.0.vm02.stdout:0/176: write d9/f17 [3515775,61903] 0 2026-03-10T10:19:14.025 INFO:tasks.workunit.client.0.vm02.stdout:0/177: truncate d9/f17 4344500 0 2026-03-10T10:19:14.046 INFO:tasks.workunit.client.1.vm05.stdout:6/46: truncate fb 1270027 0 2026-03-10T10:19:14.047 INFO:tasks.workunit.client.1.vm05.stdout:6/47: mkdir dd/df 0 2026-03-10T10:19:14.047 INFO:tasks.workunit.client.1.vm05.stdout:6/48: readlink l8 0 2026-03-10T10:19:14.063 INFO:tasks.workunit.client.0.vm02.stdout:0/178: sync 2026-03-10T10:19:14.064 INFO:tasks.workunit.client.0.vm02.stdout:0/179: fdatasync d9/d18/f1e 0 2026-03-10T10:19:14.065 INFO:tasks.workunit.client.0.vm02.stdout:0/180: mkdir d9/d18/d1a/d31 0 2026-03-10T10:19:14.085 INFO:tasks.workunit.client.1.vm05.stdout:9/58: truncate d0/f7 259721 0 2026-03-10T10:19:14.086 INFO:tasks.workunit.client.1.vm05.stdout:9/59: write d0/d2/f8 [8310,4607] 0 2026-03-10T10:19:14.091 INFO:tasks.workunit.client.1.vm05.stdout:9/60: rename d0/d2 to d0/d1/d13 0 2026-03-10T10:19:14.097 INFO:tasks.workunit.client.1.vm05.stdout:9/61: rename d0 to d0/d1/d13/d14 22 
2026-03-10T10:19:14.097 INFO:tasks.workunit.client.1.vm05.stdout:9/62: write d0/d1/f9 [1642834,51788] 0 2026-03-10T10:19:14.099 INFO:tasks.workunit.client.1.vm05.stdout:9/63: write d0/df/d11/f12 [929581,39388] 0 2026-03-10T10:19:14.101 INFO:tasks.workunit.client.1.vm05.stdout:9/64: rename d0/d1/d13/cd to d0/d1/c15 0 2026-03-10T10:19:14.104 INFO:tasks.workunit.client.1.vm05.stdout:9/65: write d0/d1/f5 [2049902,81687] 0 2026-03-10T10:19:14.105 INFO:tasks.workunit.client.1.vm05.stdout:0/59: truncate d1/d2/fc 834622 0 2026-03-10T10:19:14.110 INFO:tasks.workunit.client.1.vm05.stdout:0/60: creat d1/d7/f16 x:0 0 0 2026-03-10T10:19:14.118 INFO:tasks.workunit.client.1.vm05.stdout:0/61: readlink d1/d7/l5 0 2026-03-10T10:19:14.118 INFO:tasks.workunit.client.1.vm05.stdout:3/43: dwrite f1 [0,4194304] 0 2026-03-10T10:19:14.119 INFO:tasks.workunit.client.0.vm02.stdout:6/104: truncate d0/f2 3635611 0 2026-03-10T10:19:14.119 INFO:tasks.workunit.client.0.vm02.stdout:6/105: chown d0/d8/l19 1 1 2026-03-10T10:19:14.119 INFO:tasks.workunit.client.0.vm02.stdout:2/175: getdents d0/d1a/d24 0 2026-03-10T10:19:14.119 INFO:tasks.workunit.client.0.vm02.stdout:2/176: dread - d0/d10/f19 zero size 2026-03-10T10:19:14.119 INFO:tasks.workunit.client.0.vm02.stdout:9/119: write da/ff [3241017,31736] 0 2026-03-10T10:19:14.119 INFO:tasks.workunit.client.0.vm02.stdout:9/120: fdatasync f7 0 2026-03-10T10:19:14.119 INFO:tasks.workunit.client.0.vm02.stdout:2/177: write d0/d1a/f25 [3990508,125297] 0 2026-03-10T10:19:14.122 INFO:tasks.workunit.client.0.vm02.stdout:2/178: dwrite d0/f38 [0,4194304] 0 2026-03-10T10:19:14.123 INFO:tasks.workunit.client.0.vm02.stdout:2/179: truncate d0/d10/f19 805181 0 2026-03-10T10:19:14.123 INFO:tasks.workunit.client.1.vm05.stdout:0/62: mkdir d1/d7/db/d13/d17 0 2026-03-10T10:19:14.124 INFO:tasks.workunit.client.1.vm05.stdout:0/63: stat d1/d7/db/d12 0 2026-03-10T10:19:14.125 INFO:tasks.workunit.client.0.vm02.stdout:2/180: chown d0/l1c 6445 1 2026-03-10T10:19:14.128 
INFO:tasks.workunit.client.1.vm05.stdout:0/64: fdatasync d1/d2/d9/fd 0 2026-03-10T10:19:14.134 INFO:tasks.workunit.client.1.vm05.stdout:0/65: truncate d1/f11 72378 0 2026-03-10T10:19:14.134 INFO:tasks.workunit.client.1.vm05.stdout:5/89: dwrite da/f15 [0,4194304] 0 2026-03-10T10:19:14.137 INFO:tasks.workunit.client.0.vm02.stdout:2/181: mknod d0/c40 0 2026-03-10T10:19:14.138 INFO:tasks.workunit.client.0.vm02.stdout:2/182: mknod d0/c41 0 2026-03-10T10:19:14.142 INFO:tasks.workunit.client.1.vm05.stdout:5/90: dwrite f5 [0,4194304] 0 2026-03-10T10:19:14.143 INFO:tasks.workunit.client.0.vm02.stdout:2/183: fdatasync d0/d10/f19 0 2026-03-10T10:19:14.143 INFO:tasks.workunit.client.0.vm02.stdout:2/184: mknod d0/d1a/c42 0 2026-03-10T10:19:14.143 INFO:tasks.workunit.client.0.vm02.stdout:2/185: dwrite d0/d1a/f31 [0,4194304] 0 2026-03-10T10:19:14.147 INFO:tasks.workunit.client.1.vm05.stdout:0/66: dwrite d1/f11 [0,4194304] 0 2026-03-10T10:19:14.164 INFO:tasks.workunit.client.1.vm05.stdout:5/91: creat da/db/f1d x:0 0 0 2026-03-10T10:19:14.170 INFO:tasks.workunit.client.1.vm05.stdout:5/92: dread - da/db/d17/f1c zero size 2026-03-10T10:19:14.170 INFO:tasks.workunit.client.1.vm05.stdout:5/93: stat f9 0 2026-03-10T10:19:14.170 INFO:tasks.workunit.client.1.vm05.stdout:5/94: stat da/c11 0 2026-03-10T10:19:14.181 INFO:tasks.workunit.client.1.vm05.stdout:5/95: creat da/db/f1e x:0 0 0 2026-03-10T10:19:14.183 INFO:tasks.workunit.client.1.vm05.stdout:5/96: rmdir da/db/d17 39 2026-03-10T10:19:14.185 INFO:tasks.workunit.client.1.vm05.stdout:5/97: dread f9 [0,4194304] 0 2026-03-10T10:19:14.186 INFO:tasks.workunit.client.1.vm05.stdout:5/98: dread - da/db/f1e zero size 2026-03-10T10:19:14.188 INFO:tasks.workunit.client.1.vm05.stdout:5/99: chown l0 6591 1 2026-03-10T10:19:14.189 INFO:tasks.workunit.client.1.vm05.stdout:5/100: stat da/db/de/c1b 0 2026-03-10T10:19:14.192 INFO:tasks.workunit.client.1.vm05.stdout:5/101: mknod da/db/de/c1f 0 2026-03-10T10:19:14.193 
INFO:tasks.workunit.client.1.vm05.stdout:5/102: creat da/f20 x:0 0 0 2026-03-10T10:19:14.239 INFO:tasks.workunit.client.0.vm02.stdout:1/88: dwrite d4/fe [0,4194304] 0 2026-03-10T10:19:14.239 INFO:tasks.workunit.client.0.vm02.stdout:9/121: dread da/ff [0,4194304] 0 2026-03-10T10:19:14.242 INFO:tasks.workunit.client.0.vm02.stdout:1/89: rmdir d4 39 2026-03-10T10:19:14.245 INFO:tasks.workunit.client.0.vm02.stdout:1/90: chown d4/da/d1a/c16 10367 1 2026-03-10T10:19:14.247 INFO:tasks.workunit.client.0.vm02.stdout:1/91: symlink d4/da/d1a/l1e 0 2026-03-10T10:19:14.258 INFO:tasks.workunit.client.0.vm02.stdout:9/122: rename da/fb to da/f25 0 2026-03-10T10:19:14.258 INFO:tasks.workunit.client.0.vm02.stdout:9/123: chown l2 806 1 2026-03-10T10:19:14.258 INFO:tasks.workunit.client.0.vm02.stdout:1/92: mknod d4/c1f 0 2026-03-10T10:19:14.258 INFO:tasks.workunit.client.0.vm02.stdout:1/93: write d4/da/d1a/f19 [2332001,22204] 0 2026-03-10T10:19:14.259 INFO:tasks.workunit.client.0.vm02.stdout:1/94: link d4/da/c10 d4/c20 0 2026-03-10T10:19:14.260 INFO:tasks.workunit.client.0.vm02.stdout:1/95: read d4/da/f12 [376145,16495] 0 2026-03-10T10:19:14.264 INFO:tasks.workunit.client.0.vm02.stdout:1/96: dwrite d4/da/fc [0,4194304] 0 2026-03-10T10:19:14.268 INFO:tasks.workunit.client.0.vm02.stdout:1/97: dwrite f3 [0,4194304] 0 2026-03-10T10:19:14.283 INFO:tasks.workunit.client.0.vm02.stdout:1/98: creat d4/f21 x:0 0 0 2026-03-10T10:19:14.283 INFO:tasks.workunit.client.0.vm02.stdout:1/99: stat d4 0 2026-03-10T10:19:14.283 INFO:tasks.workunit.client.0.vm02.stdout:1/100: stat d4/da/f12 0 2026-03-10T10:19:14.356 INFO:tasks.workunit.client.1.vm05.stdout:2/53: fsync f7 0 2026-03-10T10:19:14.363 INFO:tasks.workunit.client.1.vm05.stdout:2/54: dwrite f1 [0,4194304] 0 2026-03-10T10:19:14.370 INFO:tasks.workunit.client.1.vm05.stdout:2/55: dread f1 [0,4194304] 0 2026-03-10T10:19:14.377 INFO:tasks.workunit.client.1.vm05.stdout:5/103: sync 2026-03-10T10:19:14.377 INFO:tasks.workunit.client.1.vm05.stdout:2/56: 
mkdir db 0 2026-03-10T10:19:14.386 INFO:tasks.workunit.client.1.vm05.stdout:2/57: dwrite f7 [4194304,4194304] 0 2026-03-10T10:19:14.389 INFO:tasks.workunit.client.1.vm05.stdout:5/104: dwrite da/db/d17/f1c [0,4194304] 0 2026-03-10T10:19:14.393 INFO:tasks.workunit.client.1.vm05.stdout:2/58: write f7 [9095778,43575] 0 2026-03-10T10:19:14.399 INFO:tasks.workunit.client.1.vm05.stdout:5/105: mknod da/db/c21 0 2026-03-10T10:19:14.406 INFO:tasks.workunit.client.1.vm05.stdout:2/59: link l6 db/lc 0 2026-03-10T10:19:14.407 INFO:tasks.workunit.client.1.vm05.stdout:5/106: dread - da/f20 zero size 2026-03-10T10:19:14.407 INFO:tasks.workunit.client.1.vm05.stdout:2/60: mknod db/cd 0 2026-03-10T10:19:14.407 INFO:tasks.workunit.client.1.vm05.stdout:2/61: chown ca 2333 1 2026-03-10T10:19:14.408 INFO:tasks.workunit.client.1.vm05.stdout:2/62: rmdir db 39 2026-03-10T10:19:14.438 INFO:tasks.workunit.client.0.vm02.stdout:4/164: dwrite d1/d2/d37/f14 [0,4194304] 0 2026-03-10T10:19:14.439 INFO:tasks.workunit.client.0.vm02.stdout:4/165: write d1/d2/f31 [693949,33088] 0 2026-03-10T10:19:14.440 INFO:tasks.workunit.client.0.vm02.stdout:4/166: chown d1/d32/l36 39 1 2026-03-10T10:19:14.447 INFO:tasks.workunit.client.0.vm02.stdout:4/167: symlink d1/d32/l39 0 2026-03-10T10:19:14.447 INFO:tasks.workunit.client.0.vm02.stdout:4/168: chown d1 198028 1 2026-03-10T10:19:14.450 INFO:tasks.workunit.client.0.vm02.stdout:4/169: dwrite d1/d2/f34 [0,4194304] 0 2026-03-10T10:19:14.453 INFO:tasks.workunit.client.0.vm02.stdout:5/219: rename d1/db/d11/d16/d29/d40/f42 to d1/db/d11/f4a 0 2026-03-10T10:19:14.456 INFO:tasks.workunit.client.0.vm02.stdout:4/170: mkdir d1/d10/d3a 0 2026-03-10T10:19:14.457 INFO:tasks.workunit.client.0.vm02.stdout:8/179: truncate d1/d1c/d23/d25/f2b 1006622 0 2026-03-10T10:19:14.459 INFO:tasks.workunit.client.0.vm02.stdout:4/171: rename d1/d2/c1f to d1/d32/c3b 0 2026-03-10T10:19:14.462 INFO:tasks.workunit.client.0.vm02.stdout:8/180: creat d1/d1c/d23/f3b x:0 0 0 2026-03-10T10:19:14.463 
INFO:tasks.workunit.client.1.vm05.stdout:1/48: truncate f3 3388426 0 2026-03-10T10:19:14.470 INFO:tasks.workunit.client.1.vm05.stdout:1/49: symlink d4/l10 0 2026-03-10T10:19:14.473 INFO:tasks.workunit.client.1.vm05.stdout:7/117: truncate d5/ff 1447293 0 2026-03-10T10:19:14.473 INFO:tasks.workunit.client.0.vm02.stdout:4/172: rmdir d1/d10/d3a 0 2026-03-10T10:19:14.473 INFO:tasks.workunit.client.0.vm02.stdout:4/173: dread - d1/d10/db/f24 zero size 2026-03-10T10:19:14.473 INFO:tasks.workunit.client.0.vm02.stdout:4/174: read d1/d2/d37/f14 [766882,123839] 0 2026-03-10T10:19:14.476 INFO:tasks.workunit.client.0.vm02.stdout:1/101: fdatasync f3 0 2026-03-10T10:19:14.478 INFO:tasks.workunit.client.0.vm02.stdout:4/175: creat d1/d2/d37/f3c x:0 0 0 2026-03-10T10:19:14.480 INFO:tasks.workunit.client.0.vm02.stdout:8/181: getdents d1/d2 0 2026-03-10T10:19:14.480 INFO:tasks.workunit.client.1.vm05.stdout:4/46: dwrite f0 [0,4194304] 0 2026-03-10T10:19:14.481 INFO:tasks.workunit.client.0.vm02.stdout:3/99: getdents d1/d6 0 2026-03-10T10:19:14.485 INFO:tasks.workunit.client.0.vm02.stdout:1/102: mkdir d4/da/d1a/d22 0 2026-03-10T10:19:14.497 INFO:tasks.workunit.client.0.vm02.stdout:4/176: unlink d1/d10/l2b 0 2026-03-10T10:19:14.497 INFO:tasks.workunit.client.0.vm02.stdout:7/120: truncate d1/dc/f3 126649 0 2026-03-10T10:19:14.497 INFO:tasks.workunit.client.0.vm02.stdout:3/100: rename d1/d8/f9 to d1/d20/f22 0 2026-03-10T10:19:14.497 INFO:tasks.workunit.client.1.vm05.stdout:9/66: fsync d0/fa 0 2026-03-10T10:19:14.497 INFO:tasks.workunit.client.1.vm05.stdout:9/67: chown d0/d1 80391763 1 2026-03-10T10:19:14.497 INFO:tasks.workunit.client.1.vm05.stdout:9/68: chown d0/d1/fb 3724 1 2026-03-10T10:19:14.497 INFO:tasks.workunit.client.1.vm05.stdout:9/69: dread d0/d1/f5 [0,4194304] 0 2026-03-10T10:19:14.497 INFO:tasks.workunit.client.1.vm05.stdout:4/47: mkdir d1/d3/d9 0 2026-03-10T10:19:14.498 INFO:tasks.workunit.client.0.vm02.stdout:5/220: sync 2026-03-10T10:19:14.519 
INFO:tasks.workunit.client.1.vm05.stdout:9/70: mkdir d0/d1/d16 0 2026-03-10T10:19:14.519 INFO:tasks.workunit.client.0.vm02.stdout:7/121: mknod d1/dc/c29 0 2026-03-10T10:19:14.523 INFO:tasks.workunit.client.0.vm02.stdout:7/122: dread d1/dc/d10/f24 [0,4194304] 0 2026-03-10T10:19:14.523 INFO:tasks.workunit.client.1.vm05.stdout:4/48: mknod d1/ca 0 2026-03-10T10:19:14.523 INFO:tasks.workunit.client.1.vm05.stdout:6/49: dread fb [0,4194304] 0 2026-03-10T10:19:14.524 INFO:tasks.workunit.client.1.vm05.stdout:9/71: write d0/fa [193789,88278] 0 2026-03-10T10:19:14.524 INFO:tasks.workunit.client.1.vm05.stdout:4/49: stat d1 0 2026-03-10T10:19:14.525 INFO:tasks.workunit.client.1.vm05.stdout:4/50: chown d1 11063 1 2026-03-10T10:19:14.525 INFO:tasks.workunit.client.1.vm05.stdout:4/51: chown d1 117968201 1 2026-03-10T10:19:14.527 INFO:tasks.workunit.client.0.vm02.stdout:7/123: dwrite d1/dc/f25 [0,4194304] 0 2026-03-10T10:19:14.531 INFO:tasks.workunit.client.0.vm02.stdout:7/124: dwrite d1/f17 [0,4194304] 0 2026-03-10T10:19:14.533 INFO:tasks.workunit.client.0.vm02.stdout:7/125: chown d1/f5 43 1 2026-03-10T10:19:14.533 INFO:tasks.workunit.client.0.vm02.stdout:7/126: write d1/dc/d10/f24 [2067012,95202] 0 2026-03-10T10:19:14.535 INFO:tasks.workunit.client.0.vm02.stdout:0/181: truncate f2 1444843 0 2026-03-10T10:19:14.541 INFO:tasks.workunit.client.1.vm05.stdout:4/52: dwrite d1/f6 [0,4194304] 0 2026-03-10T10:19:14.541 INFO:tasks.workunit.client.0.vm02.stdout:0/182: fsync d9/f28 0 2026-03-10T10:19:14.549 INFO:tasks.workunit.client.0.vm02.stdout:3/101: mknod d1/d6/c23 0 2026-03-10T10:19:14.554 INFO:tasks.workunit.client.0.vm02.stdout:1/103: creat d4/da/d1a/d22/f23 x:0 0 0 2026-03-10T10:19:14.557 INFO:tasks.workunit.client.1.vm05.stdout:8/50: write f2 [477110,31427] 0 2026-03-10T10:19:14.557 INFO:tasks.workunit.client.1.vm05.stdout:9/72: stat d0/d1/d13/l6 0 2026-03-10T10:19:14.560 INFO:tasks.workunit.client.1.vm05.stdout:4/53: unlink d1/d3/f8 0 2026-03-10T10:19:14.561 
INFO:tasks.workunit.client.1.vm05.stdout:8/51: mknod d7/ca 0 2026-03-10T10:19:14.563 INFO:tasks.workunit.client.1.vm05.stdout:9/73: symlink d0/d1/d13/l17 0 2026-03-10T10:19:14.575 INFO:tasks.workunit.client.1.vm05.stdout:4/54: rename d1/f6 to d1/fb 0 2026-03-10T10:19:14.575 INFO:tasks.workunit.client.0.vm02.stdout:6/106: write d0/f2 [3622180,67436] 0 2026-03-10T10:19:14.575 INFO:tasks.workunit.client.0.vm02.stdout:6/107: stat d0/f21 0 2026-03-10T10:19:14.575 INFO:tasks.workunit.client.0.vm02.stdout:6/108: fsync d0/f1c 0 2026-03-10T10:19:14.575 INFO:tasks.workunit.client.0.vm02.stdout:0/183: mknod d9/c32 0 2026-03-10T10:19:14.576 INFO:tasks.workunit.client.0.vm02.stdout:0/184: dread d9/d18/d1a/d22/d24/f2f [0,4194304] 0 2026-03-10T10:19:14.576 INFO:tasks.workunit.client.0.vm02.stdout:0/185: fsync d9/f28 0 2026-03-10T10:19:14.577 INFO:tasks.workunit.client.1.vm05.stdout:8/52: creat d7/fb x:0 0 0 2026-03-10T10:19:14.578 INFO:tasks.workunit.client.0.vm02.stdout:3/102: symlink d1/d8/d21/l24 0 2026-03-10T10:19:14.584 INFO:tasks.workunit.client.0.vm02.stdout:3/103: chown d1/l7 1161258 1 2026-03-10T10:19:14.586 INFO:tasks.workunit.client.1.vm05.stdout:9/74: unlink d0/d1/f5 0 2026-03-10T10:19:14.586 INFO:tasks.workunit.client.1.vm05.stdout:6/50: getdents dd 0 2026-03-10T10:19:14.594 INFO:tasks.workunit.client.1.vm05.stdout:4/55: mkdir d1/d3/d9/dc 0 2026-03-10T10:19:14.594 INFO:tasks.workunit.client.1.vm05.stdout:4/56: truncate d1/d3/f5 133742 0 2026-03-10T10:19:14.598 INFO:tasks.workunit.client.1.vm05.stdout:4/57: dread f0 [0,4194304] 0 2026-03-10T10:19:14.606 INFO:tasks.workunit.client.1.vm05.stdout:6/51: mknod dd/df/c10 0 2026-03-10T10:19:14.606 INFO:tasks.workunit.client.1.vm05.stdout:9/75: dwrite d0/d1/fb [0,4194304] 0 2026-03-10T10:19:14.613 INFO:tasks.workunit.client.1.vm05.stdout:6/52: mknod dd/c11 0 2026-03-10T10:19:14.615 INFO:tasks.workunit.client.1.vm05.stdout:4/58: dwrite d1/fb [0,4194304] 0 2026-03-10T10:19:14.616 INFO:tasks.workunit.client.1.vm05.stdout:9/76: 
creat d0/d1/d16/f18 x:0 0 0 2026-03-10T10:19:14.616 INFO:tasks.workunit.client.1.vm05.stdout:4/59: stat d1 0 2026-03-10T10:19:14.617 INFO:tasks.workunit.client.1.vm05.stdout:6/53: mkdir dd/df/d12 0 2026-03-10T10:19:14.629 INFO:tasks.workunit.client.1.vm05.stdout:4/60: fsync f0 0 2026-03-10T10:19:14.630 INFO:tasks.workunit.client.1.vm05.stdout:6/54: getdents dd/df/d12 0 2026-03-10T10:19:14.636 INFO:tasks.workunit.client.1.vm05.stdout:4/61: creat d1/d3/d9/fd x:0 0 0 2026-03-10T10:19:14.645 INFO:tasks.workunit.client.1.vm05.stdout:4/62: symlink d1/d3/le 0 2026-03-10T10:19:14.675 INFO:tasks.workunit.client.1.vm05.stdout:8/53: sync 2026-03-10T10:19:14.675 INFO:tasks.workunit.client.1.vm05.stdout:8/54: dread - d7/f9 zero size 2026-03-10T10:19:14.677 INFO:tasks.workunit.client.1.vm05.stdout:8/55: creat d7/fc x:0 0 0 2026-03-10T10:19:14.691 INFO:tasks.workunit.client.1.vm05.stdout:3/44: write f9 [245911,88643] 0 2026-03-10T10:19:14.692 INFO:tasks.workunit.client.0.vm02.stdout:2/186: getdents d0 0 2026-03-10T10:19:14.695 INFO:tasks.workunit.client.1.vm05.stdout:3/45: creat fb x:0 0 0 2026-03-10T10:19:14.696 INFO:tasks.workunit.client.1.vm05.stdout:0/67: write d1/f11 [5022275,67288] 0 2026-03-10T10:19:14.699 INFO:tasks.workunit.client.1.vm05.stdout:5/107: getdents da 0 2026-03-10T10:19:14.700 INFO:tasks.workunit.client.1.vm05.stdout:3/46: mknod cc 0 2026-03-10T10:19:14.703 INFO:tasks.workunit.client.0.vm02.stdout:9/124: truncate da/f25 5736574 0 2026-03-10T10:19:14.705 INFO:tasks.workunit.client.0.vm02.stdout:9/125: readlink da/lc 0 2026-03-10T10:19:14.708 INFO:tasks.workunit.client.1.vm05.stdout:3/47: mkdir dd 0 2026-03-10T10:19:14.708 INFO:tasks.workunit.client.1.vm05.stdout:0/68: link d1/d2/d9/fd d1/d7/db/d13/d17/f18 0 2026-03-10T10:19:14.709 INFO:tasks.workunit.client.1.vm05.stdout:2/63: truncate f1 2938275 0 2026-03-10T10:19:14.711 INFO:tasks.workunit.client.1.vm05.stdout:5/108: dwrite da/db/f1e [0,4194304] 0 2026-03-10T10:19:14.715 
INFO:tasks.workunit.client.0.vm02.stdout:8/182: rmdir d1/d1c/d23/d25 39 2026-03-10T10:19:14.718 INFO:tasks.workunit.client.0.vm02.stdout:4/177: unlink d1/d32/c3b 0 2026-03-10T10:19:14.721 INFO:tasks.workunit.client.0.vm02.stdout:8/183: dwrite d1/f10 [0,4194304] 0 2026-03-10T10:19:14.721 INFO:tasks.workunit.client.1.vm05.stdout:3/48: sync 2026-03-10T10:19:14.724 INFO:tasks.workunit.client.0.vm02.stdout:8/184: write d1/f1b [689006,51346] 0 2026-03-10T10:19:14.727 INFO:tasks.workunit.client.1.vm05.stdout:0/69: dwrite d1/d7/f16 [0,4194304] 0 2026-03-10T10:19:14.729 INFO:tasks.workunit.client.1.vm05.stdout:0/70: chown d1/d7/f16 3342 1 2026-03-10T10:19:14.736 INFO:tasks.workunit.client.1.vm05.stdout:1/50: dread f3 [0,4194304] 0 2026-03-10T10:19:14.741 INFO:tasks.workunit.client.0.vm02.stdout:8/185: mknod d1/d1c/d23/d25/c3c 0 2026-03-10T10:19:14.748 INFO:tasks.workunit.client.1.vm05.stdout:0/71: dwrite d1/d7/f16 [0,4194304] 0 2026-03-10T10:19:14.748 INFO:tasks.workunit.client.1.vm05.stdout:5/109: symlink da/db/d17/l22 0 2026-03-10T10:19:14.752 INFO:tasks.workunit.client.0.vm02.stdout:8/186: fsync d1/d2/f27 0 2026-03-10T10:19:14.753 INFO:tasks.workunit.client.0.vm02.stdout:8/187: truncate d1/d1c/d23/f3b 228175 0 2026-03-10T10:19:14.753 INFO:tasks.workunit.client.0.vm02.stdout:8/188: fsync d1/d1c/d24/f31 0 2026-03-10T10:19:14.757 INFO:tasks.workunit.client.0.vm02.stdout:8/189: dread d1/d1c/f33 [0,4194304] 0 2026-03-10T10:19:14.757 INFO:tasks.workunit.client.0.vm02.stdout:8/190: fsync d1/f12 0 2026-03-10T10:19:14.760 INFO:tasks.workunit.client.0.vm02.stdout:8/191: read d1/f1b [244370,17115] 0 2026-03-10T10:19:14.776 INFO:tasks.workunit.client.1.vm05.stdout:5/110: mkdir da/d23 0 2026-03-10T10:19:14.778 INFO:tasks.workunit.client.1.vm05.stdout:3/49: creat dd/fe x:0 0 0 2026-03-10T10:19:14.785 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:14 vm02.local ceph-mon[50200]: pgmap v146: 65 pgs: 65 active+clean; 445 MiB data, 2.5 GiB used, 117 GiB / 120 GiB avail; 2.8 MiB/s 
rd, 38 MiB/s wr, 366 op/s 2026-03-10T10:19:14.785 INFO:tasks.workunit.client.1.vm05.stdout:3/50: chown l7 7596501 1 2026-03-10T10:19:14.785 INFO:tasks.workunit.client.1.vm05.stdout:2/64: link db/lc db/le 0 2026-03-10T10:19:14.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:14 vm05.local ceph-mon[59051]: pgmap v146: 65 pgs: 65 active+clean; 445 MiB data, 2.5 GiB used, 117 GiB / 120 GiB avail; 2.8 MiB/s rd, 38 MiB/s wr, 366 op/s 2026-03-10T10:19:14.790 INFO:tasks.workunit.client.1.vm05.stdout:1/51: getdents d4 0 2026-03-10T10:19:14.792 INFO:tasks.workunit.client.1.vm05.stdout:5/111: creat da/db/f24 x:0 0 0 2026-03-10T10:19:14.792 INFO:tasks.workunit.client.1.vm05.stdout:5/112: write da/f15 [3495028,87353] 0 2026-03-10T10:19:14.795 INFO:tasks.workunit.client.1.vm05.stdout:5/113: write da/f15 [2594674,125870] 0 2026-03-10T10:19:14.795 INFO:tasks.workunit.client.1.vm05.stdout:2/65: rename l9 to db/lf 0 2026-03-10T10:19:14.796 INFO:tasks.workunit.client.1.vm05.stdout:2/66: chown l8 83238330 1 2026-03-10T10:19:14.800 INFO:tasks.workunit.client.1.vm05.stdout:1/52: rename f3 to d4/df/f11 0 2026-03-10T10:19:14.803 INFO:tasks.workunit.client.1.vm05.stdout:1/53: write f0 [475553,52623] 0 2026-03-10T10:19:14.808 INFO:tasks.workunit.client.1.vm05.stdout:2/67: link l6 db/l10 0 2026-03-10T10:19:14.809 INFO:tasks.workunit.client.0.vm02.stdout:5/221: rename d1/lc to d1/db/d11/d1a/l4b 0 2026-03-10T10:19:14.811 INFO:tasks.workunit.client.0.vm02.stdout:1/104: rename d4/da/f11 to d4/d1b/f24 0 2026-03-10T10:19:14.812 INFO:tasks.workunit.client.1.vm05.stdout:2/68: creat db/f11 x:0 0 0 2026-03-10T10:19:14.813 INFO:tasks.workunit.client.0.vm02.stdout:1/105: rmdir d4/da/d1a 39 2026-03-10T10:19:14.814 INFO:tasks.workunit.client.1.vm05.stdout:2/69: mkdir db/d12 0 2026-03-10T10:19:14.823 INFO:tasks.workunit.client.1.vm05.stdout:2/70: symlink db/d12/l13 0 2026-03-10T10:19:14.823 INFO:tasks.workunit.client.0.vm02.stdout:7/127: rename d1/d1b/l1d to d1/dc/d16/d28/l2a 0 
2026-03-10T10:19:14.823 INFO:tasks.workunit.client.0.vm02.stdout:7/128: chown d1/dc/l9 323 1 2026-03-10T10:19:14.823 INFO:tasks.workunit.client.0.vm02.stdout:0/186: rename d9/c16 to d9/d18/d1a/d22/d24/d25/c33 0 2026-03-10T10:19:14.825 INFO:tasks.workunit.client.0.vm02.stdout:1/106: creat d4/da/f25 x:0 0 0 2026-03-10T10:19:14.825 INFO:tasks.workunit.client.1.vm05.stdout:2/71: creat db/f14 x:0 0 0 2026-03-10T10:19:14.826 INFO:tasks.workunit.client.1.vm05.stdout:1/54: sync 2026-03-10T10:19:14.830 INFO:tasks.workunit.client.0.vm02.stdout:1/107: write d4/da/d1a/f19 [1708598,128606] 0 2026-03-10T10:19:14.830 INFO:tasks.workunit.client.0.vm02.stdout:1/108: fsync d4/fe 0 2026-03-10T10:19:14.830 INFO:tasks.workunit.client.1.vm05.stdout:0/72: write d1/d2/fc [1383932,6533] 0 2026-03-10T10:19:14.833 INFO:tasks.workunit.client.1.vm05.stdout:1/55: creat d4/df/f12 x:0 0 0 2026-03-10T10:19:14.837 INFO:tasks.workunit.client.1.vm05.stdout:1/56: write f0 [4226974,98857] 0 2026-03-10T10:19:14.840 INFO:tasks.workunit.client.1.vm05.stdout:6/55: getdents dd 0 2026-03-10T10:19:14.842 INFO:tasks.workunit.client.1.vm05.stdout:0/73: unlink d1/d2/c6 0 2026-03-10T10:19:14.845 INFO:tasks.workunit.client.0.vm02.stdout:0/187: rmdir d9/d18/d1a/d31 0 2026-03-10T10:19:14.845 INFO:tasks.workunit.client.0.vm02.stdout:0/188: readlink d9/l2b 0 2026-03-10T10:19:14.847 INFO:tasks.workunit.client.1.vm05.stdout:1/57: sync 2026-03-10T10:19:14.847 INFO:tasks.workunit.client.1.vm05.stdout:1/58: rename d4/df to d4/df/d13 22 2026-03-10T10:19:14.851 INFO:tasks.workunit.client.1.vm05.stdout:4/63: rmdir d1/d3/d9 39 2026-03-10T10:19:14.854 INFO:tasks.workunit.client.1.vm05.stdout:8/56: getdents d7 0 2026-03-10T10:19:14.858 INFO:tasks.workunit.client.0.vm02.stdout:2/187: rename d0/d10/l23 to d0/l43 0 2026-03-10T10:19:14.858 INFO:tasks.workunit.client.1.vm05.stdout:6/56: symlink dd/df/d12/l13 0 2026-03-10T10:19:14.869 INFO:tasks.workunit.client.1.vm05.stdout:0/74: mknod d1/d7/db/d13/d15/c19 0 2026-03-10T10:19:14.870 
INFO:tasks.workunit.client.1.vm05.stdout:0/75: stat d1/d2/d9/fd 0 2026-03-10T10:19:14.870 INFO:tasks.workunit.client.1.vm05.stdout:0/76: readlink d1/d7/l5 0 2026-03-10T10:19:14.871 INFO:tasks.workunit.client.0.vm02.stdout:2/188: write d0/d1a/f26 [4837414,76005] 0 2026-03-10T10:19:14.873 INFO:tasks.workunit.client.1.vm05.stdout:8/57: creat d7/fd x:0 0 0 2026-03-10T10:19:14.876 INFO:tasks.workunit.client.1.vm05.stdout:1/59: unlink d4/l8 0 2026-03-10T10:19:14.880 INFO:tasks.workunit.client.0.vm02.stdout:2/189: write d0/f36 [109891,115662] 0 2026-03-10T10:19:14.884 INFO:tasks.workunit.client.1.vm05.stdout:2/72: dread f1 [0,4194304] 0 2026-03-10T10:19:14.884 INFO:tasks.workunit.client.1.vm05.stdout:2/73: dread - db/f11 zero size 2026-03-10T10:19:14.885 INFO:tasks.workunit.client.1.vm05.stdout:6/57: creat dd/f14 x:0 0 0 2026-03-10T10:19:14.886 INFO:tasks.workunit.client.1.vm05.stdout:6/58: rename dd/df/d12 to dd/df/d12/d15 22 2026-03-10T10:19:14.886 INFO:tasks.workunit.client.1.vm05.stdout:6/59: chown f2 1 1 2026-03-10T10:19:14.888 INFO:tasks.workunit.client.0.vm02.stdout:8/192: getdents d1/d1c/d23/d25 0 2026-03-10T10:19:14.888 INFO:tasks.workunit.client.1.vm05.stdout:4/64: truncate f0 1183964 0 2026-03-10T10:19:14.892 INFO:tasks.workunit.client.0.vm02.stdout:2/190: creat d0/f44 x:0 0 0 2026-03-10T10:19:14.892 INFO:tasks.workunit.client.0.vm02.stdout:2/191: stat d0/d1a/d24 0 2026-03-10T10:19:14.892 INFO:tasks.workunit.client.1.vm05.stdout:2/74: dread f1 [0,4194304] 0 2026-03-10T10:19:14.893 INFO:tasks.workunit.client.1.vm05.stdout:8/58: mknod d7/ce 0 2026-03-10T10:19:14.894 INFO:tasks.workunit.client.0.vm02.stdout:8/193: creat d1/d1c/d23/d25/f3d x:0 0 0 2026-03-10T10:19:14.895 INFO:tasks.workunit.client.0.vm02.stdout:8/194: stat d1/d1c/d23/d25/f3d 0 2026-03-10T10:19:14.897 INFO:tasks.workunit.client.1.vm05.stdout:1/60: write d4/df/f11 [2178079,34356] 0 2026-03-10T10:19:14.900 INFO:tasks.workunit.client.1.vm05.stdout:6/60: mknod dd/df/c16 0 2026-03-10T10:19:14.900 
INFO:tasks.workunit.client.1.vm05.stdout:4/65: mknod d1/cf 0 2026-03-10T10:19:14.901 INFO:tasks.workunit.client.0.vm02.stdout:2/192: mknod d0/c45 0 2026-03-10T10:19:14.901 INFO:tasks.workunit.client.1.vm05.stdout:4/66: readlink d1/d3/le 0 2026-03-10T10:19:14.901 INFO:tasks.workunit.client.0.vm02.stdout:2/193: chown d0/d1a/f35 744 1 2026-03-10T10:19:14.905 INFO:tasks.workunit.client.0.vm02.stdout:8/195: truncate d1/f12 1838444 0 2026-03-10T10:19:14.907 INFO:tasks.workunit.client.1.vm05.stdout:8/59: dwrite d7/fb [0,4194304] 0 2026-03-10T10:19:14.907 INFO:tasks.workunit.client.1.vm05.stdout:2/75: creat db/f15 x:0 0 0 2026-03-10T10:19:14.909 INFO:tasks.workunit.client.1.vm05.stdout:2/76: truncate f1 2950746 0 2026-03-10T10:19:14.911 INFO:tasks.workunit.client.0.vm02.stdout:2/194: dwrite d0/f2c [0,4194304] 0 2026-03-10T10:19:14.918 INFO:tasks.workunit.client.0.vm02.stdout:2/195: dwrite d0/d1a/f33 [0,4194304] 0 2026-03-10T10:19:14.932 INFO:tasks.workunit.client.1.vm05.stdout:5/114: getdents da 0 2026-03-10T10:19:14.932 INFO:tasks.workunit.client.1.vm05.stdout:5/115: fdatasync da/db/f24 0 2026-03-10T10:19:14.937 INFO:tasks.workunit.client.1.vm05.stdout:3/51: getdents dd 0 2026-03-10T10:19:14.940 INFO:tasks.workunit.client.1.vm05.stdout:6/61: link dd/c11 dd/df/c17 0 2026-03-10T10:19:14.941 INFO:tasks.workunit.client.1.vm05.stdout:3/52: symlink dd/lf 0 2026-03-10T10:19:14.945 INFO:tasks.workunit.client.1.vm05.stdout:3/53: dread f1 [0,4194304] 0 2026-03-10T10:19:14.950 INFO:tasks.workunit.client.1.vm05.stdout:6/62: dwrite dd/f14 [0,4194304] 0 2026-03-10T10:19:14.951 INFO:tasks.workunit.client.1.vm05.stdout:3/54: symlink dd/l10 0 2026-03-10T10:19:14.955 INFO:tasks.workunit.client.1.vm05.stdout:7/118: truncate d5/ff 55986 0 2026-03-10T10:19:14.955 INFO:tasks.workunit.client.1.vm05.stdout:3/55: write f3 [657779,5588] 0 2026-03-10T10:19:14.964 INFO:tasks.workunit.client.1.vm05.stdout:3/56: creat dd/f11 x:0 0 0 2026-03-10T10:19:14.964 
INFO:tasks.workunit.client.1.vm05.stdout:7/119: fdatasync d5/ff 0 2026-03-10T10:19:14.965 INFO:tasks.workunit.client.1.vm05.stdout:6/63: creat dd/df/f18 x:0 0 0 2026-03-10T10:19:14.965 INFO:tasks.workunit.client.1.vm05.stdout:7/120: readlink d5/l11 0 2026-03-10T10:19:14.968 INFO:tasks.workunit.client.1.vm05.stdout:3/57: creat dd/f12 x:0 0 0 2026-03-10T10:19:14.968 INFO:tasks.workunit.client.1.vm05.stdout:3/58: write f3 [489183,28202] 0 2026-03-10T10:19:14.969 INFO:tasks.workunit.client.1.vm05.stdout:6/64: symlink dd/df/l19 0 2026-03-10T10:19:14.970 INFO:tasks.workunit.client.1.vm05.stdout:6/65: dread fb [0,4194304] 0 2026-03-10T10:19:14.971 INFO:tasks.workunit.client.1.vm05.stdout:3/59: rename l8 to dd/l13 0 2026-03-10T10:19:14.972 INFO:tasks.workunit.client.1.vm05.stdout:6/66: symlink dd/df/d12/l1a 0 2026-03-10T10:19:14.976 INFO:tasks.workunit.client.1.vm05.stdout:3/60: dwrite f9 [0,4194304] 0 2026-03-10T10:19:14.977 INFO:tasks.workunit.client.1.vm05.stdout:3/61: chown dd/f11 1 1 2026-03-10T10:19:14.978 INFO:tasks.workunit.client.1.vm05.stdout:6/67: dread f3 [0,4194304] 0 2026-03-10T10:19:14.978 INFO:tasks.workunit.client.1.vm05.stdout:3/62: rename cc to dd/c14 0 2026-03-10T10:19:14.979 INFO:tasks.workunit.client.1.vm05.stdout:3/63: truncate f6 278480 0 2026-03-10T10:19:14.980 INFO:tasks.workunit.client.1.vm05.stdout:6/68: write dd/df/f18 [382551,9531] 0 2026-03-10T10:19:14.980 INFO:tasks.workunit.client.1.vm05.stdout:6/69: readlink dd/df/d12/l13 0 2026-03-10T10:19:14.981 INFO:tasks.workunit.client.1.vm05.stdout:6/70: truncate dd/df/f18 742010 0 2026-03-10T10:19:14.981 INFO:tasks.workunit.client.1.vm05.stdout:6/71: chown dd/df/f18 5 1 2026-03-10T10:19:14.987 INFO:tasks.workunit.client.1.vm05.stdout:6/72: mkdir dd/d1b 0 2026-03-10T10:19:14.992 INFO:tasks.workunit.client.1.vm05.stdout:5/116: sync 2026-03-10T10:19:14.992 INFO:tasks.workunit.client.1.vm05.stdout:3/64: dwrite f9 [0,4194304] 0 2026-03-10T10:19:14.993 INFO:tasks.workunit.client.1.vm05.stdout:3/65: stat 
dd/l10 0 2026-03-10T10:19:14.993 INFO:tasks.workunit.client.1.vm05.stdout:5/117: chown da/db/c21 169591 1 2026-03-10T10:19:14.994 INFO:tasks.workunit.client.1.vm05.stdout:3/66: write fb [970537,68624] 0 2026-03-10T10:19:14.995 INFO:tasks.workunit.client.1.vm05.stdout:5/118: fdatasync da/db/fd 0 2026-03-10T10:19:14.999 INFO:tasks.workunit.client.1.vm05.stdout:3/67: mkdir dd/d15 0 2026-03-10T10:19:15.001 INFO:tasks.workunit.client.1.vm05.stdout:3/68: write fb [229840,5703] 0 2026-03-10T10:19:15.004 INFO:tasks.workunit.client.1.vm05.stdout:3/69: mknod dd/d15/c16 0 2026-03-10T10:19:15.004 INFO:tasks.workunit.client.1.vm05.stdout:6/73: dwrite dd/fe [4194304,4194304] 0 2026-03-10T10:19:15.012 INFO:tasks.workunit.client.1.vm05.stdout:3/70: symlink dd/l17 0 2026-03-10T10:19:15.012 INFO:tasks.workunit.client.1.vm05.stdout:5/119: link da/db/de/c1f da/db/d17/c25 0 2026-03-10T10:19:15.016 INFO:tasks.workunit.client.1.vm05.stdout:3/71: creat dd/d15/f18 x:0 0 0 2026-03-10T10:19:15.022 INFO:tasks.workunit.client.1.vm05.stdout:5/120: dwrite da/db/fd [0,4194304] 0 2026-03-10T10:19:15.026 INFO:tasks.workunit.client.1.vm05.stdout:5/121: write da/db/f1e [2911725,32804] 0 2026-03-10T10:19:15.029 INFO:tasks.workunit.client.1.vm05.stdout:5/122: mkdir da/db/d26 0 2026-03-10T10:19:15.030 INFO:tasks.workunit.client.1.vm05.stdout:3/72: dwrite f6 [0,4194304] 0 2026-03-10T10:19:15.038 INFO:tasks.workunit.client.1.vm05.stdout:3/73: symlink dd/l19 0 2026-03-10T10:19:15.041 INFO:tasks.workunit.client.1.vm05.stdout:3/74: write f2 [719981,98272] 0 2026-03-10T10:19:15.044 INFO:tasks.workunit.client.1.vm05.stdout:6/74: sync 2026-03-10T10:19:15.046 INFO:tasks.workunit.client.1.vm05.stdout:5/123: dwrite da/f10 [0,4194304] 0 2026-03-10T10:19:15.049 INFO:tasks.workunit.client.1.vm05.stdout:6/75: chown lc 6911 1 2026-03-10T10:19:15.049 INFO:tasks.workunit.client.1.vm05.stdout:6/76: read f3 [5931766,2705] 0 2026-03-10T10:19:15.058 INFO:tasks.workunit.client.1.vm05.stdout:3/75: mknod dd/d15/c1a 0 
2026-03-10T10:19:15.063 INFO:tasks.workunit.client.1.vm05.stdout:3/76: readlink dd/lf 0
2026-03-10T10:19:15.064 INFO:tasks.workunit.client.1.vm05.stdout:3/77: creat dd/d15/f1b x:0 0 0
2026-03-10T10:19:15.064 INFO:tasks.workunit.client.1.vm05.stdout:3/78: stat dd/d15/c16 0
2026-03-10T10:19:15.064 INFO:tasks.workunit.client.1.vm05.stdout:3/79: creat dd/d15/f1c x:0 0 0
2026-03-10T10:19:15.115 INFO:tasks.workunit.client.0.vm02.stdout:5/222: truncate d1/db/d11/f4a 4096404 0
2026-03-10T10:19:15.117 INFO:tasks.workunit.client.0.vm02.stdout:6/109: rmdir d0/d8 39
2026-03-10T10:19:15.118 INFO:tasks.workunit.client.0.vm02.stdout:7/129: write d1/dc/f3 [540514,76303] 0
2026-03-10T10:19:15.120 INFO:tasks.workunit.client.0.vm02.stdout:2/196: getdents d0/d10 0
2026-03-10T10:19:15.124 INFO:tasks.workunit.client.0.vm02.stdout:2/197: dwrite d0/f2c [0,4194304] 0
2026-03-10T10:19:15.125 INFO:tasks.workunit.client.0.vm02.stdout:6/110: readlink d0/d8/l1d 0
2026-03-10T10:19:15.125 INFO:tasks.workunit.client.0.vm02.stdout:2/198: fdatasync d0/d1a/f25 0
2026-03-10T10:19:15.128 INFO:tasks.workunit.client.0.vm02.stdout:2/199: dread d0/d1a/f25 [0,4194304] 0
2026-03-10T10:19:15.131 INFO:tasks.workunit.client.0.vm02.stdout:7/130: readlink d1/dc/d10/l1a 0
2026-03-10T10:19:15.131 INFO:tasks.workunit.client.0.vm02.stdout:9/126: rename f7 to da/d10/f26 0
2026-03-10T10:19:15.136 INFO:tasks.workunit.client.0.vm02.stdout:2/200: dwrite d0/f36 [0,4194304] 0
2026-03-10T10:19:15.141 INFO:tasks.workunit.client.0.vm02.stdout:6/111: mknod d0/d8/d9/c23 0
2026-03-10T10:19:15.143 INFO:tasks.workunit.client.1.vm05.stdout:0/77: getdents d1/d7/db/d13/d15 0
2026-03-10T10:19:15.144 INFO:tasks.workunit.client.0.vm02.stdout:6/112: dread d0/d8/d9/f14 [0,4194304] 0
2026-03-10T10:19:15.145 INFO:tasks.workunit.client.0.vm02.stdout:6/113: write d0/d8/d9/f13 [1949668,87069] 0
2026-03-10T10:19:15.145 INFO:tasks.workunit.client.0.vm02.stdout:6/114: chown d0/d8/d9/f14 20 1
2026-03-10T10:19:15.151 INFO:tasks.workunit.client.0.vm02.stdout:1/109: creat d4/f26 x:0 0 0
2026-03-10T10:19:15.153 INFO:tasks.workunit.client.0.vm02.stdout:4/178: rename d1/d2/d37/l21 to d1/d2/d1a/l3d 0
2026-03-10T10:19:15.154 INFO:tasks.workunit.client.0.vm02.stdout:4/179: write d1/d2/d37/f2e [1036629,44370] 0
2026-03-10T10:19:15.156 INFO:tasks.workunit.client.1.vm05.stdout:0/78: mknod d1/c1a 0
2026-03-10T10:19:15.158 INFO:tasks.workunit.client.0.vm02.stdout:2/201: chown d0/l2 861997685 1
2026-03-10T10:19:15.159 INFO:tasks.workunit.client.0.vm02.stdout:2/202: write d0/d1a/f20 [502170,93806] 0
2026-03-10T10:19:15.161 INFO:tasks.workunit.client.0.vm02.stdout:2/203: dread d0/f36 [0,4194304] 0
2026-03-10T10:19:15.161 INFO:tasks.workunit.client.0.vm02.stdout:2/204: fsync d0/d1a/f25 0
2026-03-10T10:19:15.162 INFO:tasks.workunit.client.1.vm05.stdout:8/60: getdents d7 0
2026-03-10T10:19:15.162 INFO:tasks.workunit.client.1.vm05.stdout:8/61: readlink - no filename
2026-03-10T10:19:15.169 INFO:tasks.workunit.client.0.vm02.stdout:6/115: symlink d0/d7/l24 0
2026-03-10T10:19:15.174 INFO:tasks.workunit.client.0.vm02.stdout:3/104: rmdir d1/d8 39
2026-03-10T10:19:15.182 INFO:tasks.workunit.client.1.vm05.stdout:1/61: dwrite d4/df/f11 [0,4194304] 0
2026-03-10T10:19:15.182 INFO:tasks.workunit.client.1.vm05.stdout:8/62: dread f2 [0,4194304] 0
2026-03-10T10:19:15.182 INFO:tasks.workunit.client.0.vm02.stdout:9/127: truncate da/f14 125918 0
2026-03-10T10:19:15.182 INFO:tasks.workunit.client.0.vm02.stdout:1/110: mkdir d4/da/d27 0
2026-03-10T10:19:15.182 INFO:tasks.workunit.client.0.vm02.stdout:1/111: read - d4/da/f13 zero size
2026-03-10T10:19:15.182 INFO:tasks.workunit.client.0.vm02.stdout:1/112: chown d4/da/d1a 4368 1
2026-03-10T10:19:15.182 INFO:tasks.workunit.client.0.vm02.stdout:9/128: creat da/d10/f27 x:0 0 0
2026-03-10T10:19:15.185 INFO:tasks.workunit.client.0.vm02.stdout:2/205: link d0/f36 d0/d10/f46 0
2026-03-10T10:19:15.185 INFO:tasks.workunit.client.0.vm02.stdout:2/206: read - d0/d1a/d24/f34 zero size
2026-03-10T10:19:15.186 INFO:tasks.workunit.client.1.vm05.stdout:9/77: symlink d0/l19 0
2026-03-10T10:19:15.187 INFO:tasks.workunit.client.1.vm05.stdout:9/78: fsync d0/fa 0
2026-03-10T10:19:15.187 INFO:tasks.workunit.client.1.vm05.stdout:2/77: getdents db 0
2026-03-10T10:19:15.188 INFO:tasks.workunit.client.1.vm05.stdout:8/63: dwrite d7/f9 [0,4194304] 0
2026-03-10T10:19:15.188 INFO:tasks.workunit.client.1.vm05.stdout:2/78: chown ca 3561235 1
2026-03-10T10:19:15.190 INFO:tasks.workunit.client.1.vm05.stdout:2/79: dread - db/f15 zero size
2026-03-10T10:19:15.191 INFO:tasks.workunit.client.0.vm02.stdout:6/116: mknod d0/c25 0
2026-03-10T10:19:15.192 INFO:tasks.workunit.client.0.vm02.stdout:8/196: dwrite d1/d1c/f14 [4194304,4194304] 0
2026-03-10T10:19:15.193 INFO:tasks.workunit.client.0.vm02.stdout:6/117: fdatasync d0/f21 0
2026-03-10T10:19:15.199 INFO:tasks.workunit.client.1.vm05.stdout:8/64: read f6 [95078,128128] 0
2026-03-10T10:19:15.207 INFO:tasks.workunit.client.0.vm02.stdout:1/113: unlink d4/f1d 0
2026-03-10T10:19:15.207 INFO:tasks.workunit.client.1.vm05.stdout:8/65: write d7/fd [717449,63344] 0
2026-03-10T10:19:15.208 INFO:tasks.workunit.client.1.vm05.stdout:8/66: dwrite d7/fb [0,4194304] 0
2026-03-10T10:19:15.209 INFO:tasks.workunit.client.0.vm02.stdout:1/114: dwrite d4/fe [0,4194304] 0
2026-03-10T10:19:15.210 INFO:tasks.workunit.client.0.vm02.stdout:1/115: fsync d4/da/f13 0
2026-03-10T10:19:15.213 INFO:tasks.workunit.client.0.vm02.stdout:0/189: rename d9/d18/d1a/d30 to d9/d34 0
2026-03-10T10:19:15.218 INFO:tasks.workunit.client.1.vm05.stdout:9/79: symlink d0/d1/l1a 0
2026-03-10T10:19:15.226 INFO:tasks.workunit.client.0.vm02.stdout:6/118: creat d0/d7/f26 x:0 0 0
2026-03-10T10:19:15.226 INFO:tasks.workunit.client.0.vm02.stdout:6/119: truncate d0/f20 709013 0
2026-03-10T10:19:15.226 INFO:tasks.workunit.client.0.vm02.stdout:6/120: dwrite d0/f21 [0,4194304] 0
2026-03-10T10:19:15.226 INFO:tasks.workunit.client.1.vm05.stdout:8/67: mknod d7/cf 0
2026-03-10T10:19:15.226 INFO:tasks.workunit.client.1.vm05.stdout:9/80: mknod d0/d1/c1b 0
2026-03-10T10:19:15.226 INFO:tasks.workunit.client.1.vm05.stdout:8/68: symlink d7/l10 0
2026-03-10T10:19:15.229 INFO:tasks.workunit.client.0.vm02.stdout:3/105: creat d1/f25 x:0 0 0
2026-03-10T10:19:15.229 INFO:tasks.workunit.client.0.vm02.stdout:5/223: rename d1/db/d11/d13/d28/d37/d41 to d1/d4c 0
2026-03-10T10:19:15.229 INFO:tasks.workunit.client.0.vm02.stdout:3/106: stat d1/c11 0
2026-03-10T10:19:15.230 INFO:tasks.workunit.client.1.vm05.stdout:9/81: rename d0/d1/d13/l17 to d0/d1/d16/l1c 0
2026-03-10T10:19:15.232 INFO:tasks.workunit.client.0.vm02.stdout:6/121: dwrite d0/f2 [0,4194304] 0
2026-03-10T10:19:15.235 INFO:tasks.workunit.client.1.vm05.stdout:8/69: creat d7/f11 x:0 0 0
2026-03-10T10:19:15.237 INFO:tasks.workunit.client.0.vm02.stdout:0/190: dread d9/d18/d1a/d22/d24/f2f [0,4194304] 0
2026-03-10T10:19:15.245 INFO:tasks.workunit.client.0.vm02.stdout:8/197: mkdir d1/d1c/d23/d3e 0
2026-03-10T10:19:15.267 INFO:tasks.workunit.client.1.vm05.stdout:8/70: write f6 [5191,26871] 0
2026-03-10T10:19:15.267 INFO:tasks.workunit.client.1.vm05.stdout:8/71: read - d7/fc zero size
2026-03-10T10:19:15.267 INFO:tasks.workunit.client.1.vm05.stdout:8/72: creat d7/f12 x:0 0 0
2026-03-10T10:19:15.267 INFO:tasks.workunit.client.1.vm05.stdout:8/73: mknod d7/c13 0
2026-03-10T10:19:15.267 INFO:tasks.workunit.client.1.vm05.stdout:8/74: truncate d7/fd 1013756 0
2026-03-10T10:19:15.267 INFO:tasks.workunit.client.1.vm05.stdout:8/75: mkdir d7/d14 0
2026-03-10T10:19:15.268 INFO:tasks.workunit.client.0.vm02.stdout:3/107: stat d1/d8/f1a 0
2026-03-10T10:19:15.268 INFO:tasks.workunit.client.0.vm02.stdout:3/108: dwrite d1/d6/f1b [0,4194304] 0
2026-03-10T10:19:15.268 INFO:tasks.workunit.client.0.vm02.stdout:6/122: creat d0/d8/f27 x:0 0 0
2026-03-10T10:19:15.268 INFO:tasks.workunit.client.0.vm02.stdout:0/191: unlink d9/f1b 0
2026-03-10T10:19:15.268 INFO:tasks.workunit.client.0.vm02.stdout:3/109: creat d1/d6/f26 x:0 0 0
2026-03-10T10:19:15.268 INFO:tasks.workunit.client.0.vm02.stdout:5/224: rename d1/db/d11/d16/c2b to d1/db/d11/d16/d29/c4d 0
2026-03-10T10:19:15.268 INFO:tasks.workunit.client.0.vm02.stdout:5/225: dwrite d1/db/f1e [0,4194304] 0
2026-03-10T10:19:15.268 INFO:tasks.workunit.client.0.vm02.stdout:5/226: dread - d1/db/d11/d13/f1f zero size
2026-03-10T10:19:15.268 INFO:tasks.workunit.client.0.vm02.stdout:4/180: sync
2026-03-10T10:19:15.268 INFO:tasks.workunit.client.0.vm02.stdout:2/207: sync
2026-03-10T10:19:15.271 INFO:tasks.workunit.client.1.vm05.stdout:8/76: mkdir d7/d14/d15 0
2026-03-10T10:19:15.280 INFO:tasks.workunit.client.0.vm02.stdout:0/192: mknod d9/c35 0
2026-03-10T10:19:15.280 INFO:tasks.workunit.client.0.vm02.stdout:2/208: read d0/d10/f14 [1340149,106873] 0
2026-03-10T10:19:15.280 INFO:tasks.workunit.client.0.vm02.stdout:4/181: dwrite d1/d2/f4 [0,4194304] 0
2026-03-10T10:19:15.280 INFO:tasks.workunit.client.1.vm05.stdout:8/77: chown d7/f11 3091 1
2026-03-10T10:19:15.280 INFO:tasks.workunit.client.1.vm05.stdout:8/78: truncate f6 1116886 0
2026-03-10T10:19:15.280 INFO:tasks.workunit.client.1.vm05.stdout:8/79: truncate f2 716616 0
2026-03-10T10:19:15.280 INFO:tasks.workunit.client.1.vm05.stdout:8/80: rename d7/d14 to d7/d14/d15/d16 22
2026-03-10T10:19:15.282 INFO:tasks.workunit.client.0.vm02.stdout:2/209: dwrite d0/d10/f14 [0,4194304] 0
2026-03-10T10:19:15.282 INFO:tasks.workunit.client.0.vm02.stdout:2/210: chown d0/c45 27 1
2026-03-10T10:19:15.285 INFO:tasks.workunit.client.0.vm02.stdout:8/198: getdents d1/d1c 0
2026-03-10T10:19:15.286 INFO:tasks.workunit.client.1.vm05.stdout:2/80: sync
2026-03-10T10:19:15.287 INFO:tasks.workunit.client.0.vm02.stdout:8/199: fsync d1/d1c/d24/f31 0
2026-03-10T10:19:15.292 INFO:tasks.workunit.client.1.vm05.stdout:2/81: dwrite db/f15 [0,4194304] 0
2026-03-10T10:19:15.293 INFO:tasks.workunit.client.0.vm02.stdout:4/182: dread d1/d2/f31 [0,4194304] 0
2026-03-10T10:19:15.295 INFO:tasks.workunit.client.1.vm05.stdout:2/82: write f7 [3208889,78772] 0
2026-03-10T10:19:15.295 INFO:tasks.workunit.client.1.vm05.stdout:2/83: fdatasync db/f11 0
2026-03-10T10:19:15.295 INFO:tasks.workunit.client.0.vm02.stdout:2/211: creat d0/d1a/f47 x:0 0 0
2026-03-10T10:19:15.299 INFO:tasks.workunit.client.0.vm02.stdout:4/183: dwrite d1/f1d [8388608,4194304] 0
2026-03-10T10:19:15.301 INFO:tasks.workunit.client.1.vm05.stdout:2/84: dwrite db/f11 [0,4194304] 0
2026-03-10T10:19:15.302 INFO:tasks.workunit.client.0.vm02.stdout:3/110: link d1/d8/d21/l24 d1/l27 0
2026-03-10T10:19:15.306 INFO:tasks.workunit.client.0.vm02.stdout:8/200: dread d1/d2/f29 [0,4194304] 0
2026-03-10T10:19:15.312 INFO:tasks.workunit.client.0.vm02.stdout:2/212: creat d0/d1a/d24/f48 x:0 0 0
2026-03-10T10:19:15.317 INFO:tasks.workunit.client.1.vm05.stdout:2/85: dwrite f1 [0,4194304] 0
2026-03-10T10:19:15.330 INFO:tasks.workunit.client.0.vm02.stdout:4/184: unlink d1/d2/d37/f1b 0
2026-03-10T10:19:15.333 INFO:tasks.workunit.client.0.vm02.stdout:0/193: link d9/d18/d1a/c29 d9/d18/d1a/d22/c36 0
2026-03-10T10:19:15.334 INFO:tasks.workunit.client.1.vm05.stdout:2/86: mknod db/d12/c16 0
2026-03-10T10:19:15.334 INFO:tasks.workunit.client.1.vm05.stdout:2/87: chown db/f11 1706 1
2026-03-10T10:19:15.339 INFO:tasks.workunit.client.0.vm02.stdout:2/213: dwrite d0/d10/f19 [0,4194304] 0
2026-03-10T10:19:15.340 INFO:tasks.workunit.client.0.vm02.stdout:2/214: stat d0/d10/f1f 0
2026-03-10T10:19:15.343 INFO:tasks.workunit.client.0.vm02.stdout:2/215: dread d0/f36 [0,4194304] 0
2026-03-10T10:19:15.350 INFO:tasks.workunit.client.0.vm02.stdout:4/185: mkdir d1/d32/d3e 0
2026-03-10T10:19:15.359 INFO:tasks.workunit.client.0.vm02.stdout:0/194: symlink d9/d34/l37 0
2026-03-10T10:19:15.363 INFO:tasks.workunit.client.0.vm02.stdout:8/201: link d1/d1c/d23/f3b d1/d1c/f3f 0
2026-03-10T10:19:15.368 INFO:tasks.workunit.client.0.vm02.stdout:3/111: creat d1/f28 x:0 0 0
2026-03-10T10:19:15.373 INFO:tasks.workunit.client.1.vm05.stdout:2/88: getdents db 0
2026-03-10T10:19:15.373 INFO:tasks.workunit.client.0.vm02.stdout:2/216: mkdir d0/d1a/d49 0
2026-03-10T10:19:15.373 INFO:tasks.workunit.client.0.vm02.stdout:2/217: write d0/d1a/f33 [2141677,47941] 0
2026-03-10T10:19:15.374 INFO:tasks.workunit.client.0.vm02.stdout:2/218: truncate d0/d1a/d24/f48 756835 0
2026-03-10T10:19:15.376 INFO:tasks.workunit.client.0.vm02.stdout:4/186: creat d1/d2/f3f x:0 0 0
2026-03-10T10:19:15.377 INFO:tasks.workunit.client.0.vm02.stdout:4/187: read - d1/d10/db/f35 zero size
2026-03-10T10:19:15.378 INFO:tasks.workunit.client.0.vm02.stdout:4/188: write d1/d2/d37/f28 [1141518,100016] 0
2026-03-10T10:19:15.379 INFO:tasks.workunit.client.1.vm05.stdout:2/89: rmdir db 39
2026-03-10T10:19:15.383 INFO:tasks.workunit.client.0.vm02.stdout:3/112: creat d1/d8/d21/f29 x:0 0 0
2026-03-10T10:19:15.384 INFO:tasks.workunit.client.0.vm02.stdout:3/113: dread d1/d8/fb [0,4194304] 0
2026-03-10T10:19:15.388 INFO:tasks.workunit.client.0.vm02.stdout:2/219: mknod d0/d10/c4a 0
2026-03-10T10:19:15.389 INFO:tasks.workunit.client.1.vm05.stdout:2/90: unlink db/lf 0
2026-03-10T10:19:15.390 INFO:tasks.workunit.client.1.vm05.stdout:2/91: stat db/d12/l13 0
2026-03-10T10:19:15.391 INFO:tasks.workunit.client.1.vm05.stdout:2/92: write db/f11 [2996519,110619] 0
2026-03-10T10:19:15.396 INFO:tasks.workunit.client.1.vm05.stdout:2/93: mknod db/d12/c17 0
2026-03-10T10:19:15.398 INFO:tasks.workunit.client.1.vm05.stdout:2/94: read db/f11 [894320,82487] 0
2026-03-10T10:19:15.398 INFO:tasks.workunit.client.0.vm02.stdout:0/195: getdents d9/d18/d1a/d22/d24 0
2026-03-10T10:19:15.398 INFO:tasks.workunit.client.0.vm02.stdout:2/220: write d0/f1b [798964,7891] 0
2026-03-10T10:19:15.400 INFO:tasks.workunit.client.0.vm02.stdout:8/202: sync
2026-03-10T10:19:15.401 INFO:tasks.workunit.client.0.vm02.stdout:8/203: read - d1/d2/f36 zero size
2026-03-10T10:19:15.402 INFO:tasks.workunit.client.0.vm02.stdout:0/196: mknod d9/c38 0
2026-03-10T10:19:15.406 INFO:tasks.workunit.client.0.vm02.stdout:0/197: dwrite d9/d18/d1a/d22/d24/f26 [0,4194304] 0
2026-03-10T10:19:15.408 INFO:tasks.workunit.client.0.vm02.stdout:0/198: truncate d9/d18/d1a/f1f 320957 0
2026-03-10T10:19:15.410 INFO:tasks.workunit.client.0.vm02.stdout:2/221: creat d0/d10/f4b x:0 0 0
2026-03-10T10:19:15.411 INFO:tasks.workunit.client.0.vm02.stdout:0/199: symlink d9/d18/d1a/d22/l39 0
2026-03-10T10:19:15.412 INFO:tasks.workunit.client.0.vm02.stdout:0/200: write d9/f28 [179452,83492] 0
2026-03-10T10:19:15.412 INFO:tasks.workunit.client.0.vm02.stdout:0/201: chown d9/d18 2 1
2026-03-10T10:19:15.413 INFO:tasks.workunit.client.0.vm02.stdout:2/222: creat d0/d1a/f4c x:0 0 0
2026-03-10T10:19:15.415 INFO:tasks.workunit.client.0.vm02.stdout:2/223: readlink d0/d1a/l3e 0
2026-03-10T10:19:15.417 INFO:tasks.workunit.client.0.vm02.stdout:0/202: chown d9/d18/d1a/d22/c36 1502740083 1
2026-03-10T10:19:15.419 INFO:tasks.workunit.client.0.vm02.stdout:2/224: dread d0/f2c [0,4194304] 0
2026-03-10T10:19:15.420 INFO:tasks.workunit.client.0.vm02.stdout:0/203: creat d9/d18/d1a/d22/d24/d25/f3a x:0 0 0
2026-03-10T10:19:15.432 INFO:tasks.workunit.client.0.vm02.stdout:0/204: rename d9/c38 to d9/d34/c3b 0
2026-03-10T10:19:15.433 INFO:tasks.workunit.client.0.vm02.stdout:0/205: fsync d9/d18/f1e 0
2026-03-10T10:19:15.434 INFO:tasks.workunit.client.0.vm02.stdout:0/206: mkdir d9/d18/d1a/d3c 0
2026-03-10T10:19:15.435 INFO:tasks.workunit.client.0.vm02.stdout:0/207: stat d9/d18/d1a/l21 0
2026-03-10T10:19:15.436 INFO:tasks.workunit.client.0.vm02.stdout:2/225: rename d0/f38 to d0/f4d 0
2026-03-10T10:19:15.438 INFO:tasks.workunit.client.0.vm02.stdout:2/226: mknod d0/c4e 0
2026-03-10T10:19:15.438 INFO:tasks.workunit.client.0.vm02.stdout:2/227: dread - d0/d1a/f35 zero size
2026-03-10T10:19:15.439 INFO:tasks.workunit.client.0.vm02.stdout:2/228: creat d0/d1a/d49/f4f x:0 0 0
2026-03-10T10:19:15.440 INFO:tasks.workunit.client.0.vm02.stdout:2/229: creat d0/d1a/d49/f50 x:0 0 0
2026-03-10T10:19:15.441 INFO:tasks.workunit.client.0.vm02.stdout:2/230: truncate d0/d1a/f47 950862 0
2026-03-10T10:19:15.445 INFO:tasks.workunit.client.0.vm02.stdout:2/231: dwrite d0/f2c [4194304,4194304] 0
2026-03-10T10:19:15.456 INFO:tasks.workunit.client.0.vm02.stdout:2/232: dwrite d0/d1a/f20 [0,4194304] 0
2026-03-10T10:19:15.461 INFO:tasks.workunit.client.0.vm02.stdout:2/233: mkdir d0/d1a/d49/d51 0
2026-03-10T10:19:15.466 INFO:tasks.workunit.client.0.vm02.stdout:2/234: dread d0/fe [0,4194304] 0
2026-03-10T10:19:15.469 INFO:tasks.workunit.client.0.vm02.stdout:4/189: fsync d1/d2/d37/f2e 0
2026-03-10T10:19:15.472 INFO:tasks.workunit.client.0.vm02.stdout:2/235: chown d0/l2 43 1
2026-03-10T10:19:15.472 INFO:tasks.workunit.client.0.vm02.stdout:2/236: fdatasync d0/d1a/d24/f34 0
2026-03-10T10:19:15.472 INFO:tasks.workunit.client.0.vm02.stdout:4/190: chown d1/d10/f8 104622342 1
2026-03-10T10:19:15.473 INFO:tasks.workunit.client.0.vm02.stdout:2/237: creat d0/d1a/f52 x:0 0 0
2026-03-10T10:19:15.474 INFO:tasks.workunit.client.0.vm02.stdout:2/238: chown d0/d1a/d49/f50 1 1
2026-03-10T10:19:15.476 INFO:tasks.workunit.client.0.vm02.stdout:4/191: dread d1/f1d [8388608,4194304] 0
2026-03-10T10:19:15.479 INFO:tasks.workunit.client.0.vm02.stdout:4/192: dread d1/d2/f34 [0,4194304] 0
2026-03-10T10:19:15.480 INFO:tasks.workunit.client.0.vm02.stdout:4/193: symlink d1/d32/l40 0
2026-03-10T10:19:15.485 INFO:tasks.workunit.client.0.vm02.stdout:4/194: chown d1/d10/f30 24921 1
2026-03-10T10:19:15.488 INFO:tasks.workunit.client.0.vm02.stdout:4/195: unlink d1/d2/d37/f3c 0
2026-03-10T10:19:15.489 INFO:tasks.workunit.client.0.vm02.stdout:4/196: mkdir d1/d41 0
2026-03-10T10:19:15.491 INFO:tasks.workunit.client.0.vm02.stdout:4/197: creat d1/d32/d3e/f42 x:0 0 0
2026-03-10T10:19:15.491 INFO:tasks.workunit.client.0.vm02.stdout:4/198: write d1/d10/db/f15 [128375,72219] 0
2026-03-10T10:19:15.493 INFO:tasks.workunit.client.0.vm02.stdout:4/199: creat d1/d10/db/f43 x:0 0 0
2026-03-10T10:19:15.497 INFO:tasks.workunit.client.0.vm02.stdout:4/200: mkdir d1/d2/d44 0
2026-03-10T10:19:15.498 INFO:tasks.workunit.client.0.vm02.stdout:4/201: write d1/d10/db/f1e [2240163,127682] 0
2026-03-10T10:19:15.498 INFO:tasks.workunit.client.0.vm02.stdout:4/202: dread - d1/d10/db/f20 zero size
2026-03-10T10:19:15.499 INFO:tasks.workunit.client.0.vm02.stdout:4/203: stat d1/d10 0
2026-03-10T10:19:15.502 INFO:tasks.workunit.client.0.vm02.stdout:4/204: creat d1/d10/f45 x:0 0 0
2026-03-10T10:19:15.528 INFO:tasks.workunit.client.0.vm02.stdout:4/205: dread d1/d10/db/f16 [0,4194304] 0
2026-03-10T10:19:15.650 INFO:tasks.workunit.client.1.vm05.stdout:7/121: fsync d5/ff 0
2026-03-10T10:19:15.651 INFO:tasks.workunit.client.1.vm05.stdout:7/122: dread - d5/d17/f18 zero size
2026-03-10T10:19:15.653 INFO:tasks.workunit.client.1.vm05.stdout:7/123: creat d5/d17/f1e x:0 0 0
2026-03-10T10:19:15.654 INFO:tasks.workunit.client.1.vm05.stdout:7/124: read d5/ff [37399,106127] 0
2026-03-10T10:19:15.654 INFO:tasks.workunit.client.1.vm05.stdout:7/125: stat d5/f13 0
2026-03-10T10:19:15.655 INFO:tasks.workunit.client.1.vm05.stdout:7/126: creat d5/dd/f1f x:0 0 0
2026-03-10T10:19:15.657 INFO:tasks.workunit.client.1.vm05.stdout:3/80: link dd/l13 dd/d15/l1d 0
2026-03-10T10:19:15.657 INFO:tasks.workunit.client.1.vm05.stdout:7/127: mkdir d5/d1d/d20 0
2026-03-10T10:19:15.658 INFO:tasks.workunit.client.1.vm05.stdout:3/81: write f1 [2549966,70258] 0
2026-03-10T10:19:15.660 INFO:tasks.workunit.client.1.vm05.stdout:7/128: mknod d5/d1d/d20/c21 0
2026-03-10T10:19:15.660 INFO:tasks.workunit.client.1.vm05.stdout:7/129: stat d5/fa 0
2026-03-10T10:19:15.669 INFO:tasks.workunit.client.1.vm05.stdout:6/77: rmdir dd/df 39
2026-03-10T10:19:15.670 INFO:tasks.workunit.client.1.vm05.stdout:7/130: creat d5/f22 x:0 0 0
2026-03-10T10:19:15.670 INFO:tasks.workunit.client.1.vm05.stdout:6/78: fsync dd/fe 0
2026-03-10T10:19:15.673 INFO:tasks.workunit.client.1.vm05.stdout:3/82: dwrite dd/fe [0,4194304] 0
2026-03-10T10:19:15.680 INFO:tasks.workunit.client.1.vm05.stdout:7/131: creat d5/dd/f23 x:0 0 0
2026-03-10T10:19:15.684 INFO:tasks.workunit.client.1.vm05.stdout:6/79: dwrite f3 [0,4194304] 0
2026-03-10T10:19:15.691 INFO:tasks.workunit.client.1.vm05.stdout:7/132: symlink d5/l24 0
2026-03-10T10:19:15.695 INFO:tasks.workunit.client.1.vm05.stdout:7/133: chown d5/ff 47 1
2026-03-10T10:19:15.696 INFO:tasks.workunit.client.1.vm05.stdout:6/80: symlink dd/df/d12/l1c 0
2026-03-10T10:19:15.699 INFO:tasks.workunit.client.1.vm05.stdout:7/134: creat d5/f25 x:0 0 0
2026-03-10T10:19:15.702 INFO:tasks.workunit.client.1.vm05.stdout:6/81: creat dd/d1b/f1d x:0 0 0
2026-03-10T10:19:15.704 INFO:tasks.workunit.client.1.vm05.stdout:6/82: stat dd/d1b 0
2026-03-10T10:19:15.710 INFO:tasks.workunit.client.1.vm05.stdout:7/135: dwrite d5/fa [0,4194304] 0
2026-03-10T10:19:15.719 INFO:tasks.workunit.client.1.vm05.stdout:6/83: dwrite dd/df/f18 [0,4194304] 0
2026-03-10T10:19:15.722 INFO:tasks.workunit.client.1.vm05.stdout:6/84: dread fb [0,4194304] 0
2026-03-10T10:19:15.725 INFO:tasks.workunit.client.1.vm05.stdout:6/85: write f3 [726720,65209] 0
2026-03-10T10:19:15.725 INFO:tasks.workunit.client.1.vm05.stdout:6/86: fdatasync dd/f14 0
2026-03-10T10:19:15.728 INFO:tasks.workunit.client.1.vm05.stdout:6/87: stat dd/df/l19 0
2026-03-10T10:19:15.750 INFO:tasks.workunit.client.1.vm05.stdout:5/124: truncate f5 414936 0
2026-03-10T10:19:15.755 INFO:tasks.workunit.client.1.vm05.stdout:5/125: unlink da/f15 0
2026-03-10T10:19:15.764 INFO:tasks.workunit.client.1.vm05.stdout:5/126: mknod da/c27 0
2026-03-10T10:19:15.765 INFO:tasks.workunit.client.1.vm05.stdout:5/127: write da/db/d17/f1c [2446631,92169] 0
2026-03-10T10:19:15.765 INFO:tasks.workunit.client.1.vm05.stdout:0/79: creat d1/d7/db/d13/d17/f1b x:0 0 0
2026-03-10T10:19:15.765 INFO:tasks.workunit.client.1.vm05.stdout:0/80: dread - d1/d7/db/d13/d17/f18 zero size
2026-03-10T10:19:15.765 INFO:tasks.workunit.client.1.vm05.stdout:5/128: rmdir da/d23 0
2026-03-10T10:19:15.765 INFO:tasks.workunit.client.1.vm05.stdout:0/81: unlink d1/d2/d9/l14 0
2026-03-10T10:19:15.765 INFO:tasks.workunit.client.1.vm05.stdout:0/82: write d1/f11 [2593220,25858] 0
2026-03-10T10:19:15.769 INFO:tasks.workunit.client.1.vm05.stdout:5/129: dread da/db/d17/f1c [0,4194304] 0
2026-03-10T10:19:15.770 INFO:tasks.workunit.client.1.vm05.stdout:5/130: write f9 [3433383,30036] 0
2026-03-10T10:19:15.775 INFO:tasks.workunit.client.1.vm05.stdout:5/131: dread da/db/d17/f1c [0,4194304] 0
2026-03-10T10:19:15.776 INFO:tasks.workunit.client.1.vm05.stdout:5/132: truncate da/f20 606707 0
2026-03-10T10:19:15.777 INFO:tasks.workunit.client.1.vm05.stdout:0/83: read d1/d2/fc [987862,107499] 0
2026-03-10T10:19:15.778 INFO:tasks.workunit.client.0.vm02.stdout:4/206: getdents d1/d2/d1a 0
2026-03-10T10:19:15.814 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:15 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:19:15.814 INFO:tasks.workunit.client.0.vm02.stdout:7/131: truncate d1/f5 4880021 0
2026-03-10T10:19:15.814 INFO:tasks.workunit.client.0.vm02.stdout:4/207: getdents d1/d2/d44 0
2026-03-10T10:19:15.814 INFO:tasks.workunit.client.0.vm02.stdout:4/208: unlink d1/d10/l2d 0
2026-03-10T10:19:15.814 INFO:tasks.workunit.client.1.vm05.stdout:0/84: read d1/d2/fc [612635,76553] 0
2026-03-10T10:19:15.814 INFO:tasks.workunit.client.1.vm05.stdout:4/67: write f0 [327628,16561] 0
2026-03-10T10:19:15.814 INFO:tasks.workunit.client.1.vm05.stdout:0/85: symlink d1/d2/d9/l1c 0
2026-03-10T10:19:15.814 INFO:tasks.workunit.client.1.vm05.stdout:4/68: rmdir d1/d3/d9 39
2026-03-10T10:19:15.814 INFO:tasks.workunit.client.1.vm05.stdout:0/86: creat d1/d2/d9/f1d x:0 0 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:0/87: chown d1/cf 11536 1
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:1/62: symlink d4/df/l14 0
2026-03-10T10:19:15.815 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:15 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:4/69: creat d1/d3/f10 x:0 0 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:4/70: rename d1 to d1/d3/d11 22
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:0/88: dread d1/d7/f16 [0,4194304] 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:1/63: dwrite d4/df/f11 [0,4194304] 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:0/89: creat d1/d7/db/d12/f1e x:0 0 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:4/71: creat d1/d3/f12 x:0 0 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:1/64: rename d4/df/f12 to d4/dd/f15 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:0/90: symlink d1/d7/db/d13/l1f 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:4/72: creat d1/d3/d9/f13 x:0 0 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:4/73: stat d1/d3 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:1/65: rename f0 to d4/f16 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:4/74: mknod d1/d3/c14 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:4/75: write d1/d3/f10 [490354,130972] 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:4/76: fsync d1/d3/f5 0
2026-03-10T10:19:15.815 INFO:tasks.workunit.client.1.vm05.stdout:4/77: truncate d1/d3/f10 1177962 0
2026-03-10T10:19:15.822 INFO:tasks.workunit.client.1.vm05.stdout:4/78: dwrite f0 [0,4194304] 0
2026-03-10T10:19:15.832 INFO:tasks.workunit.client.1.vm05.stdout:4/79: mknod d1/d3/c15 0
2026-03-10T10:19:15.899 INFO:tasks.workunit.client.1.vm05.stdout:6/88: sync
2026-03-10T10:19:15.904 INFO:tasks.workunit.client.1.vm05.stdout:0/91: sync
2026-03-10T10:19:15.905 INFO:tasks.workunit.client.1.vm05.stdout:0/92: mkdir d1/d7/db/d12/d20 0
2026-03-10T10:19:15.906 INFO:tasks.workunit.client.1.vm05.stdout:0/93: chown d1/d2/d9/f1d 11297644 1
2026-03-10T10:19:15.914 INFO:tasks.workunit.client.1.vm05.stdout:5/133: fdatasync da/db/d17/f1c 0
2026-03-10T10:19:15.926 INFO:tasks.workunit.client.0.vm02.stdout:9/129: write da/f13 [670979,103098] 0
2026-03-10T10:19:15.926 INFO:tasks.workunit.client.0.vm02.stdout:8/204: getdents d1/d1c/d23 0
2026-03-10T10:19:15.929 INFO:tasks.workunit.client.0.vm02.stdout:1/116: truncate d4/da/fc 510227 0
2026-03-10T10:19:15.932 INFO:tasks.workunit.client.0.vm02.stdout:8/205: readlink d1/d2/l2e 0
2026-03-10T10:19:15.933 INFO:tasks.workunit.client.0.vm02.stdout:7/132: dread d1/dc/ff [4194304,4194304] 0
2026-03-10T10:19:15.934 INFO:tasks.workunit.client.0.vm02.stdout:7/133: chown d1/dc/c12 2359 1
2026-03-10T10:19:15.935 INFO:tasks.workunit.client.1.vm05.stdout:9/82: truncate d0/d1/d13/f4 86613 0
2026-03-10T10:19:15.936 INFO:tasks.workunit.client.1.vm05.stdout:9/83: readlink d0/d1/l1a 0
2026-03-10T10:19:15.938 INFO:tasks.workunit.client.0.vm02.stdout:1/117: creat d4/da/f28 x:0 0 0
2026-03-10T10:19:15.939 INFO:tasks.workunit.client.0.vm02.stdout:5/227: rmdir d1/db 39
2026-03-10T10:19:15.941 INFO:tasks.workunit.client.1.vm05.stdout:9/84: mknod d0/df/d11/c1d 0
2026-03-10T10:19:15.942 INFO:tasks.workunit.client.0.vm02.stdout:6/123: truncate d0/d7/f17 2910773 0
2026-03-10T10:19:15.943 INFO:tasks.workunit.client.0.vm02.stdout:1/118: mknod d4/d1b/c29 0
2026-03-10T10:19:15.945 INFO:tasks.workunit.client.1.vm05.stdout:9/85: chown d0/d1/d16 53897860 1
2026-03-10T10:19:15.945 INFO:tasks.workunit.client.0.vm02.stdout:1/119: symlink d4/d1b/l2a 0
2026-03-10T10:19:15.946 INFO:tasks.workunit.client.0.vm02.stdout:1/120: write d4/ff [1342680,29742] 0
2026-03-10T10:19:15.946 INFO:tasks.workunit.client.0.vm02.stdout:1/121: truncate d4/f18 596834 0
2026-03-10T10:19:15.953 INFO:tasks.workunit.client.0.vm02.stdout:4/209: read d1/d10/db/f15 [2633701,105531] 0
2026-03-10T10:19:15.954 INFO:tasks.workunit.client.0.vm02.stdout:4/210: dread - d1/d10/db/f20 zero size
2026-03-10T10:19:15.954 INFO:tasks.workunit.client.0.vm02.stdout:5/228: creat d1/db/d11/d13/f4e x:0 0 0
2026-03-10T10:19:15.954 INFO:tasks.workunit.client.0.vm02.stdout:5/229: dread d1/db/d11/f47 [0,4194304] 0
2026-03-10T10:19:15.954 INFO:tasks.workunit.client.0.vm02.stdout:5/230: dread d1/db/f1e [0,4194304] 0
2026-03-10T10:19:15.957 INFO:tasks.workunit.client.0.vm02.stdout:4/211: creat d1/d32/f46 x:0 0 0
2026-03-10T10:19:15.957 INFO:tasks.workunit.client.0.vm02.stdout:4/212: fsync d1/d10/db/f1e 0
2026-03-10T10:19:15.957 INFO:tasks.workunit.client.0.vm02.stdout:4/213: dread - d1/d10/db/f20 zero size
2026-03-10T10:19:15.959 INFO:tasks.workunit.client.0.vm02.stdout:4/214: dread d1/d2/d37/f14 [0,4194304] 0
2026-03-10T10:19:15.963 INFO:tasks.workunit.client.1.vm05.stdout:9/86: creat d0/f1e x:0 0 0
2026-03-10T10:19:15.964 INFO:tasks.workunit.client.1.vm05.stdout:9/87: chown d0/d1/d16/f18 7 1
2026-03-10T10:19:15.964 INFO:tasks.workunit.client.0.vm02.stdout:4/215: dread d1/d2/d37/f28 [0,4194304] 0
2026-03-10T10:19:15.964 INFO:tasks.workunit.client.0.vm02.stdout:5/231: fsync d1/db/d11/d13/d28/d37/f3c 0
2026-03-10T10:19:15.964 INFO:tasks.workunit.client.1.vm05.stdout:8/81: truncate d7/fd 87323 0
2026-03-10T10:19:15.965 INFO:tasks.workunit.client.0.vm02.stdout:5/232: fdatasync d1/db/d11/d13/d28/d37/d3d/f49 0
2026-03-10T10:19:15.967 INFO:tasks.workunit.client.0.vm02.stdout:5/233: chown d1/db/d11/d16/c1b 8175 1
2026-03-10T10:19:15.968 INFO:tasks.workunit.client.0.vm02.stdout:1/122: creat d4/da/d27/f2b x:0 0 0
2026-03-10T10:19:15.968 INFO:tasks.workunit.client.0.vm02.stdout:4/216: dwrite d1/d2/f3f [0,4194304] 0
2026-03-10T10:19:15.974 INFO:tasks.workunit.client.0.vm02.stdout:8/206: sync
2026-03-10T10:19:15.974 INFO:tasks.workunit.client.0.vm02.stdout:6/124: sync
2026-03-10T10:19:15.983 INFO:tasks.workunit.client.1.vm05.stdout:9/88: dwrite d0/d1/fb [0,4194304] 0
2026-03-10T10:19:15.984 INFO:tasks.workunit.client.0.vm02.stdout:6/125: dread d0/d8/d9/f13 [0,4194304] 0
2026-03-10T10:19:15.984 INFO:tasks.workunit.client.1.vm05.stdout:8/82: write d7/f8 [1589963,93796] 0
2026-03-10T10:19:15.990 INFO:tasks.workunit.client.0.vm02.stdout:6/126: dwrite d0/f20 [0,4194304] 0
2026-03-10T10:19:16.000 INFO:tasks.workunit.client.0.vm02.stdout:5/234: mkdir d1/db/d11/d16/d29/d40/d4f 0
2026-03-10T10:19:16.002 INFO:tasks.workunit.client.1.vm05.stdout:9/89: creat d0/d1/f1f x:0 0 0
2026-03-10T10:19:16.003 INFO:tasks.workunit.client.1.vm05.stdout:9/90: write d0/d1/f1f [598307,22671] 0
2026-03-10T10:19:16.004 INFO:tasks.workunit.client.0.vm02.stdout:4/217: mknod d1/d32/c47 0
2026-03-10T10:19:16.005 INFO:tasks.workunit.client.0.vm02.stdout:4/218: readlink d1/d10/db/l27 0
2026-03-10T10:19:16.006 INFO:tasks.workunit.client.0.vm02.stdout:3/114: getdents d1/d8/d21 0
2026-03-10T10:19:16.011 INFO:tasks.workunit.client.0.vm02.stdout:8/207: creat d1/f40 x:0 0 0
2026-03-10T10:19:16.011 INFO:tasks.workunit.client.0.vm02.stdout:3/115: dwrite d1/d6/f1b [0,4194304] 0
2026-03-10T10:19:16.012 INFO:tasks.workunit.client.0.vm02.stdout:3/116: dread - d1/f14 zero size
2026-03-10T10:19:16.012 INFO:tasks.workunit.client.1.vm05.stdout:2/95: getdents db 0
2026-03-10T10:19:16.012 INFO:tasks.workunit.client.0.vm02.stdout:3/117: chown d1/f3 408211 1
2026-03-10T10:19:16.017 INFO:tasks.workunit.client.0.vm02.stdout:0/208: rmdir d9/d18/d1a 39
2026-03-10T10:19:16.017 INFO:tasks.workunit.client.0.vm02.stdout:3/118: dwrite d1/f12 [0,4194304] 0
2026-03-10T10:19:16.021 INFO:tasks.workunit.client.1.vm05.stdout:2/96: dwrite db/f14 [0,4194304] 0
2026-03-10T10:19:16.022 INFO:tasks.workunit.client.0.vm02.stdout:5/235: rmdir d1/db/d11/d13/d28/d37 39
2026-03-10T10:19:16.024 INFO:tasks.workunit.client.0.vm02.stdout:5/236: chown d1/db/d11/d13/d28/f2c 2300 1
2026-03-10T10:19:16.034 INFO:tasks.workunit.client.0.vm02.stdout:2/239: truncate d0/d1a/f47 191302 0
2026-03-10T10:19:16.036 INFO:tasks.workunit.client.1.vm05.stdout:2/97: mknod db/c18 0
2026-03-10T10:19:16.037 INFO:tasks.workunit.client.0.vm02.stdout:8/208: symlink d1/d1c/l41 0
2026-03-10T10:19:16.037 INFO:tasks.workunit.client.1.vm05.stdout:2/98: write f7 [3990263,45079] 0
2026-03-10T10:19:16.041 INFO:tasks.workunit.client.0.vm02.stdout:3/119: creat d1/d8/d21/f2a x:0 0 0
2026-03-10T10:19:16.047 INFO:tasks.workunit.client.0.vm02.stdout:3/120: stat f0 0
2026-03-10T10:19:16.048 INFO:tasks.workunit.client.0.vm02.stdout:0/209: mkdir d9/d34/d3d 0
2026-03-10T10:19:16.048 INFO:tasks.workunit.client.1.vm05.stdout:7/136: fsync d5/d17/f1e 0
2026-03-10T10:19:16.048 INFO:tasks.workunit.client.1.vm05.stdout:7/137: fdatasync d5/d17/f18 0
2026-03-10T10:19:16.049 INFO:tasks.workunit.client.0.vm02.stdout:8/209: rmdir d1/d1c/d23/d25 39
2026-03-10T10:19:16.051 INFO:tasks.workunit.client.1.vm05.stdout:2/99: unlink db/lc 0
2026-03-10T10:19:16.051 INFO:tasks.workunit.client.1.vm05.stdout:3/83: write dd/fe [5021237,8508] 0
2026-03-10T10:19:16.052 INFO:tasks.workunit.client.1.vm05.stdout:6/89: getdents dd/df/d12 0
2026-03-10T10:19:16.052 INFO:tasks.workunit.client.1.vm05.stdout:2/100: write f7 [278830,92269] 0
2026-03-10T10:19:16.053 INFO:tasks.workunit.client.0.vm02.stdout:3/121: mknod d1/d20/c2b 0
2026-03-10T10:19:16.059 INFO:tasks.workunit.client.1.vm05.stdout:3/84: symlink dd/l1e 0
2026-03-10T10:19:16.062 INFO:tasks.workunit.client.0.vm02.stdout:2/240: truncate d0/d1a/f25 1644135 0
2026-03-10T10:19:16.062 INFO:tasks.workunit.client.1.vm05.stdout:3/85: chown f3 0 1
2026-03-10T10:19:16.063 INFO:tasks.workunit.client.0.vm02.stdout:2/241: dwrite d0/d1a/f52 [0,4194304] 0
2026-03-10T10:19:16.065 INFO:tasks.workunit.client.0.vm02.stdout:8/210: creat d1/d1c/f42 x:0 0 0
2026-03-10T10:19:16.075 INFO:tasks.workunit.client.1.vm05.stdout:6/90: link dd/df/f18 dd/df/f1e 0
2026-03-10T10:19:16.080 INFO:tasks.workunit.client.0.vm02.stdout:3/122: unlink d1/d6/l17 0
2026-03-10T10:19:16.082 INFO:tasks.workunit.client.0.vm02.stdout:0/210: mknod d9/d18/d1a/d22/d24/c3e 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.0.vm02.stdout:1/123: dread d4/f5 [0,4194304] 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.0.vm02.stdout:0/211: creat d9/d18/d1a/d22/f3f x:0 0 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.0.vm02.stdout:2/242: link d0/d1a/f31 d0/d1a/f53 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.0.vm02.stdout:8/211: stat d1/d1c/d23/d25/f2b 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.0.vm02.stdout:8/212: chown d1/d1c/d23/d25/f2b 151514191 1
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.0.vm02.stdout:8/213: chown d1/d1c/l2c 1 1
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.0.vm02.stdout:8/214: write d1/d2/f28 [766407,125990] 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.0.vm02.stdout:8/215: readlink d1/d1c/l2c 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.0.vm02.stdout:8/216: fsync d1/f16 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.0.vm02.stdout:8/217: chown d1/d1c/f20 2 1
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:3/86: mkdir dd/d15/d1f 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:3/87: truncate dd/f12 150082 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:3/88: stat dd/f12 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:2/101: link db/f11 db/f19 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:2/102: chown db/c18 30824 1
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:6/91: unlink c6 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:3/89: mkdir dd/d20 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:6/92: mknod dd/df/c1f 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:0/94: rmdir d1/d2 39
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:6/93: creat dd/df/d12/f20 x:0 0 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:2/103: link db/f14 db/d12/f1a 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:2/104: readlink db/d12/l13 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:1/66: getdents d4/df 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:3/90: symlink dd/d20/l21 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:6/94: mknod dd/df/d12/c21 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:2/105: symlink db/l1b 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:2/106: read db/f14 [3416530,26309] 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:2/107: stat db/f19 0
2026-03-10T10:19:16.128 INFO:tasks.workunit.client.1.vm05.stdout:4/80: rmdir d1 39
2026-03-10T10:19:16.133 INFO:tasks.workunit.client.0.vm02.stdout:9/130: truncate da/d10/f20 348927 0
2026-03-10T10:19:16.134 INFO:tasks.workunit.client.0.vm02.stdout:9/131: write da/f13 [207009,123113] 0
2026-03-10T10:19:16.147 INFO:tasks.workunit.client.1.vm05.stdout:8/83: fdatasync d7/f8 0
2026-03-10T10:19:16.147 INFO:tasks.workunit.client.1.vm05.stdout:6/95: creat dd/df/f22 x:0 0 0
2026-03-10T10:19:16.147 INFO:tasks.workunit.client.1.vm05.stdout:6/96: write dd/df/f22 [998189,32898] 0
2026-03-10T10:19:16.147 INFO:tasks.workunit.client.0.vm02.stdout:7/134: truncate d1/dc/d10/f13 3347392 0
2026-03-10T10:19:16.147 INFO:tasks.workunit.client.0.vm02.stdout:8/218: truncate d1/d1c/f33 1476367 0
2026-03-10T10:19:16.147 INFO:tasks.workunit.client.0.vm02.stdout:6/127: rmdir d0/d7 39
2026-03-10T10:19:16.147 INFO:tasks.workunit.client.0.vm02.stdout:6/128: chown d0/l5 3 1
2026-03-10T10:19:16.148 INFO:tasks.workunit.client.1.vm05.stdout:4/81: truncate d1/d3/f12 379026 0
2026-03-10T10:19:16.149 INFO:tasks.workunit.client.0.vm02.stdout:8/219: mkdir d1/d1c/d43 0
2026-03-10T10:19:16.149 INFO:tasks.workunit.client.1.vm05.stdout:2/108: dwrite db/f11 [4194304,4194304] 0
2026-03-10T10:19:16.150 INFO:tasks.workunit.client.1.vm05.stdout:8/84: symlink d7/l17 0
2026-03-10T10:19:16.150 INFO:tasks.workunit.client.1.vm05.stdout:8/85: chown d7/l17 3432 1
2026-03-10T10:19:16.152 INFO:tasks.workunit.client.1.vm05.stdout:3/91: symlink dd/d15/d1f/l22 0
2026-03-10T10:19:16.152 INFO:tasks.workunit.client.1.vm05.stdout:3/92: write f1 [4808881,102799] 0
2026-03-10T10:19:16.156 INFO:tasks.workunit.client.1.vm05.stdout:7/138: sync
2026-03-10T10:19:16.162 INFO:tasks.workunit.client.1.vm05.stdout:7/139: dread - d5/dd/f23 zero size
2026-03-10T10:19:16.162 INFO:tasks.workunit.client.1.vm05.stdout:4/82: mknod d1/d3/d9/c16 0
2026-03-10T10:19:16.164 INFO:tasks.workunit.client.0.vm02.stdout:0/212: sync
2026-03-10T10:19:16.167 INFO:tasks.workunit.client.0.vm02.stdout:8/220: rename d1/f19 to d1/d1c/d24/d35/f44 0
2026-03-10T10:19:16.171 INFO:tasks.workunit.client.1.vm05.stdout:2/109: mkdir db/d1c 0
2026-03-10T10:19:16.173 INFO:tasks.workunit.client.0.vm02.stdout:0/213: creat d9/d18/d1a/d22/d24/f40 x:0 0 0
2026-03-10T10:19:16.173 INFO:tasks.workunit.client.0.vm02.stdout:0/214: chown d9/d18 711948 1
2026-03-10T10:19:16.174 INFO:tasks.workunit.client.0.vm02.stdout:0/215: truncate d9/d18/d1a/f1f 728000 0
2026-03-10T10:19:16.174 INFO:tasks.workunit.client.1.vm05.stdout:8/86: rename d7/l10 to d7/l18 0
2026-03-10T10:19:16.175 INFO:tasks.workunit.client.0.vm02.stdout:0/216: write d9/d18/d1a/d22/f3f [462682,5490] 0
2026-03-10T10:19:16.177 INFO:tasks.workunit.client.1.vm05.stdout:3/93: creat dd/d15/f23 x:0 0 0
2026-03-10T10:19:16.181 INFO:tasks.workunit.client.0.vm02.stdout:5/237: dread d1/db/d11/d13/f1c [0,4194304] 0
2026-03-10T10:19:16.182 INFO:tasks.workunit.client.0.vm02.stdout:8/221: rmdir d1/d1c/d24/d35 39
2026-03-10T10:19:16.182 INFO:tasks.workunit.client.0.vm02.stdout:8/222: readlink d1/d1c/l2c 0
2026-03-10T10:19:16.183 INFO:tasks.workunit.client.0.vm02.stdout:9/132: getdents da/d10 0
2026-03-10T10:19:16.185 INFO:tasks.workunit.client.1.vm05.stdout:2/110: creat db/d12/f1d x:0 0 0
2026-03-10T10:19:16.185 INFO:tasks.workunit.client.0.vm02.stdout:7/135: getdents d1/dc/d16 0
2026-03-10T10:19:16.186 INFO:tasks.workunit.client.1.vm05.stdout:3/94: mkdir dd/d15/d24 0
2026-03-10T10:19:16.187 INFO:tasks.workunit.client.1.vm05.stdout:3/95: readlink dd/d20/l21 0
2026-03-10T10:19:16.189 INFO:tasks.workunit.client.0.vm02.stdout:5/238: dread d1/db/f1e [0,4194304] 0
2026-03-10T10:19:16.190 INFO:tasks.workunit.client.0.vm02.stdout:5/239: write d1/db/d11/d13/d28/f35 [775871,96756] 0
2026-03-10T10:19:16.194 INFO:tasks.workunit.client.0.vm02.stdout:8/223: rename d1/f10 to d1/d1c/d43/f45 0
2026-03-10T10:19:16.194 INFO:tasks.workunit.client.0.vm02.stdout:5/240: dwrite d1/db/d11/f3e [0,4194304] 0
2026-03-10T10:19:16.194 INFO:tasks.workunit.client.0.vm02.stdout:8/224: readlink d1/d1c/l2c 0
2026-03-10T10:19:16.198 INFO:tasks.workunit.client.0.vm02.stdout:8/225: dread d1/d1c/d43/f45 [0,4194304] 0
2026-03-10T10:19:16.200 INFO:tasks.workunit.client.1.vm05.stdout:4/83: link d1/d3/f5 d1/f17 0
2026-03-10T10:19:16.201 INFO:tasks.workunit.client.1.vm05.stdout:4/84: stat d1/d3/c15 0
2026-03-10T10:19:16.202 INFO:tasks.workunit.client.0.vm02.stdout:0/217: creat d9/d34/d3d/f41 x:0 0 0
2026-03-10T10:19:16.211 INFO:tasks.workunit.client.0.vm02.stdout:9/133: dread da/f1e [0,4194304] 0
2026-03-10T10:19:16.211 INFO:tasks.workunit.client.0.vm02.stdout:5/241: rmdir d1/db/d11 39
2026-03-10T10:19:16.211 INFO:tasks.workunit.client.0.vm02.stdout:8/226: read d1/d1c/d23/f3b [127134,64233] 0
2026-03-10T10:19:16.211 INFO:tasks.workunit.client.0.vm02.stdout:8/227: dread - d1/d1c/f42 zero size
2026-03-10T10:19:16.211 INFO:tasks.workunit.client.0.vm02.stdout:8/228: write d1/f40 [792233,52218] 0
2026-03-10T10:19:16.211 INFO:tasks.workunit.client.0.vm02.stdout:8/229: write d1/f16 [209756,99537] 0
2026-03-10T10:19:16.211 INFO:tasks.workunit.client.1.vm05.stdout:8/87: creat d7/d14/d15/f19 x:0 0 0
2026-03-10T10:19:16.211 INFO:tasks.workunit.client.1.vm05.stdout:8/88: chown d7 61679 1
2026-03-10T10:19:16.211 INFO:tasks.workunit.client.1.vm05.stdout:3/96: mknod dd/d15/c25 0 2026-03-10T10:19:16.211 INFO:tasks.workunit.client.1.vm05.stdout:2/111: rename l6 to db/d12/l1e 0 2026-03-10T10:19:16.211 INFO:tasks.workunit.client.1.vm05.stdout:8/89: symlink d7/l1a 0 2026-03-10T10:19:16.211 INFO:tasks.workunit.client.1.vm05.stdout:7/140: getdents d5/d1d/d20 0 2026-03-10T10:19:16.211 INFO:tasks.workunit.client.1.vm05.stdout:7/141: chown d5/l11 1 1 2026-03-10T10:19:16.211 INFO:tasks.workunit.client.0.vm02.stdout:7/136: sync 2026-03-10T10:19:16.212 INFO:tasks.workunit.client.0.vm02.stdout:7/137: chown d1/dc 22 1 2026-03-10T10:19:16.215 INFO:tasks.workunit.client.0.vm02.stdout:8/230: dwrite d1/f21 [0,4194304] 0 2026-03-10T10:19:16.219 INFO:tasks.workunit.client.0.vm02.stdout:8/231: write d1/d1c/f20 [1536754,18882] 0 2026-03-10T10:19:16.219 INFO:tasks.workunit.client.0.vm02.stdout:8/232: write d1/f21 [1587836,13970] 0 2026-03-10T10:19:16.223 INFO:tasks.workunit.client.0.vm02.stdout:7/138: read d1/f15 [126210,75419] 0 2026-03-10T10:19:16.230 INFO:tasks.workunit.client.0.vm02.stdout:8/233: sync 2026-03-10T10:19:16.232 INFO:tasks.workunit.client.0.vm02.stdout:5/242: rmdir d1/db 39 2026-03-10T10:19:16.233 INFO:tasks.workunit.client.1.vm05.stdout:2/112: fdatasync db/f14 0 2026-03-10T10:19:16.235 INFO:tasks.workunit.client.0.vm02.stdout:0/218: mkdir d9/d42 0 2026-03-10T10:19:16.236 INFO:tasks.workunit.client.1.vm05.stdout:8/90: symlink d7/l1b 0 2026-03-10T10:19:16.237 INFO:tasks.workunit.client.1.vm05.stdout:8/91: fdatasync d7/f11 0 2026-03-10T10:19:16.240 INFO:tasks.workunit.client.0.vm02.stdout:7/139: creat d1/dc/d16/d28/f2b x:0 0 0 2026-03-10T10:19:16.247 INFO:tasks.workunit.client.0.vm02.stdout:4/219: dwrite d1/d2/f3f [4194304,4194304] 0 2026-03-10T10:19:16.247 INFO:tasks.workunit.client.0.vm02.stdout:0/219: sync 2026-03-10T10:19:16.248 INFO:tasks.workunit.client.0.vm02.stdout:4/220: chown d1/d10/l38 10965187 1 2026-03-10T10:19:16.251 
INFO:tasks.workunit.client.1.vm05.stdout:4/85: link d1/d3/l7 d1/d3/d9/l18 0 2026-03-10T10:19:16.256 INFO:tasks.workunit.client.1.vm05.stdout:4/86: write d1/d3/d9/fd [481431,128925] 0 2026-03-10T10:19:16.260 INFO:tasks.workunit.client.1.vm05.stdout:4/87: truncate d1/f17 974755 0 2026-03-10T10:19:16.261 INFO:tasks.workunit.client.0.vm02.stdout:8/234: chown d1/d1c/d24/d35/f44 1047058 1 2026-03-10T10:19:16.264 INFO:tasks.workunit.client.1.vm05.stdout:8/92: rename d7/fc to d7/f1c 0 2026-03-10T10:19:16.266 INFO:tasks.workunit.client.0.vm02.stdout:7/140: mkdir d1/dc/d16/d28/d2c 0 2026-03-10T10:19:16.268 INFO:tasks.workunit.client.1.vm05.stdout:7/142: truncate d5/fe 700555 0 2026-03-10T10:19:16.268 INFO:tasks.workunit.client.0.vm02.stdout:9/134: creat da/f28 x:0 0 0 2026-03-10T10:19:16.269 INFO:tasks.workunit.client.1.vm05.stdout:2/113: creat db/d1c/f1f x:0 0 0 2026-03-10T10:19:16.269 INFO:tasks.workunit.client.0.vm02.stdout:0/220: rmdir d9/d34 39 2026-03-10T10:19:16.270 INFO:tasks.workunit.client.0.vm02.stdout:0/221: write d9/d18/d1a/d22/d24/f40 [191639,110767] 0 2026-03-10T10:19:16.274 INFO:tasks.workunit.client.0.vm02.stdout:0/222: dwrite d9/d18/d1a/d22/d24/d25/f3a [0,4194304] 0 2026-03-10T10:19:16.280 INFO:tasks.workunit.client.0.vm02.stdout:4/221: creat d1/d2/d37/f48 x:0 0 0 2026-03-10T10:19:16.286 INFO:tasks.workunit.client.1.vm05.stdout:9/91: write d0/f7 [825367,57934] 0 2026-03-10T10:19:16.286 INFO:tasks.workunit.client.1.vm05.stdout:9/92: chown d0/d1/fb 5919976 1 2026-03-10T10:19:16.287 INFO:tasks.workunit.client.0.vm02.stdout:4/222: chown d1/d2/d37/l17 7755 1 2026-03-10T10:19:16.287 INFO:tasks.workunit.client.0.vm02.stdout:4/223: read - d1/d10/db/f20 zero size 2026-03-10T10:19:16.287 INFO:tasks.workunit.client.0.vm02.stdout:7/141: mkdir d1/dc/d16/d28/d2d 0 2026-03-10T10:19:16.288 INFO:tasks.workunit.client.0.vm02.stdout:9/135: readlink da/l16 0 2026-03-10T10:19:16.290 INFO:tasks.workunit.client.1.vm05.stdout:2/114: mknod db/d1c/c20 0 2026-03-10T10:19:16.290 
INFO:tasks.workunit.client.0.vm02.stdout:4/224: unlink d1/d32/l39 0 2026-03-10T10:19:16.291 INFO:tasks.workunit.client.0.vm02.stdout:5/243: chown d1/db/d11/d13/d28/d37 493 1 2026-03-10T10:19:16.292 INFO:tasks.workunit.client.0.vm02.stdout:7/142: creat d1/dc/f2e x:0 0 0 2026-03-10T10:19:16.292 INFO:tasks.workunit.client.1.vm05.stdout:9/93: dwrite d0/d1/f1f [0,4194304] 0 2026-03-10T10:19:16.293 INFO:tasks.workunit.client.1.vm05.stdout:7/143: mkdir d5/d26 0 2026-03-10T10:19:16.294 INFO:tasks.workunit.client.0.vm02.stdout:9/136: creat da/d10/f29 x:0 0 0 2026-03-10T10:19:16.296 INFO:tasks.workunit.client.0.vm02.stdout:0/223: unlink d9/l13 0 2026-03-10T10:19:16.297 INFO:tasks.workunit.client.0.vm02.stdout:4/225: mkdir d1/d2/d1a/d49 0 2026-03-10T10:19:16.298 INFO:tasks.workunit.client.0.vm02.stdout:4/226: dread - d1/d10/f45 zero size 2026-03-10T10:19:16.298 INFO:tasks.workunit.client.0.vm02.stdout:4/227: stat d1/f1d 0 2026-03-10T10:19:16.298 INFO:tasks.workunit.client.0.vm02.stdout:4/228: fsync d1/d32/f46 0 2026-03-10T10:19:16.299 INFO:tasks.workunit.client.0.vm02.stdout:5/244: stat d1/c20 0 2026-03-10T10:19:16.301 INFO:tasks.workunit.client.0.vm02.stdout:9/137: mknod da/d10/c2a 0 2026-03-10T10:19:16.304 INFO:tasks.workunit.client.1.vm05.stdout:2/115: dread f7 [0,4194304] 0 2026-03-10T10:19:16.309 INFO:tasks.workunit.client.1.vm05.stdout:7/144: dwrite d5/f22 [0,4194304] 0 2026-03-10T10:19:16.311 INFO:tasks.workunit.client.1.vm05.stdout:4/88: sync 2026-03-10T10:19:16.312 INFO:tasks.workunit.client.0.vm02.stdout:6/129: truncate d0/d8/d9/f13 1795457 0 2026-03-10T10:19:16.315 INFO:tasks.workunit.client.0.vm02.stdout:7/143: dread d1/fd [0,4194304] 0 2026-03-10T10:19:16.315 INFO:tasks.workunit.client.1.vm05.stdout:2/116: dread db/d12/f1a [0,4194304] 0 2026-03-10T10:19:16.315 INFO:tasks.workunit.client.1.vm05.stdout:2/117: stat db 0 2026-03-10T10:19:16.316 INFO:tasks.workunit.client.1.vm05.stdout:5/134: truncate f5 590064 0 2026-03-10T10:19:16.320 
INFO:tasks.workunit.client.1.vm05.stdout:2/118: truncate db/f15 4442913 0 2026-03-10T10:19:16.323 INFO:tasks.workunit.client.0.vm02.stdout:5/245: symlink d1/db/d11/l50 0 2026-03-10T10:19:16.324 INFO:tasks.workunit.client.1.vm05.stdout:2/119: fsync db/f11 0 2026-03-10T10:19:16.324 INFO:tasks.workunit.client.1.vm05.stdout:2/120: dread - db/d1c/f1f zero size 2026-03-10T10:19:16.328 INFO:tasks.workunit.client.1.vm05.stdout:8/93: getdents d7/d14 0 2026-03-10T10:19:16.331 INFO:tasks.workunit.client.1.vm05.stdout:4/89: creat d1/f19 x:0 0 0 2026-03-10T10:19:16.331 INFO:tasks.workunit.client.1.vm05.stdout:7/145: unlink d5/fa 0 2026-03-10T10:19:16.331 INFO:tasks.workunit.client.0.vm02.stdout:7/144: creat d1/dc/d16/d28/d2d/f2f x:0 0 0 2026-03-10T10:19:16.331 INFO:tasks.workunit.client.0.vm02.stdout:6/130: dwrite d0/d7/f26 [0,4194304] 0 2026-03-10T10:19:16.334 INFO:tasks.workunit.client.0.vm02.stdout:5/246: symlink d1/l51 0 2026-03-10T10:19:16.348 INFO:tasks.workunit.client.1.vm05.stdout:5/135: dwrite f9 [0,4194304] 0 2026-03-10T10:19:16.348 INFO:tasks.workunit.client.0.vm02.stdout:5/247: stat d1/db/d11/l34 0 2026-03-10T10:19:16.348 INFO:tasks.workunit.client.0.vm02.stdout:1/124: truncate d4/f8 3272129 0 2026-03-10T10:19:16.350 INFO:tasks.workunit.client.1.vm05.stdout:8/94: sync 2026-03-10T10:19:16.350 INFO:tasks.workunit.client.0.vm02.stdout:3/123: dwrite d1/fe [0,4194304] 0 2026-03-10T10:19:16.354 INFO:tasks.workunit.client.1.vm05.stdout:0/95: truncate d1/f11 971271 0 2026-03-10T10:19:16.355 INFO:tasks.workunit.client.1.vm05.stdout:2/121: dwrite db/f14 [0,4194304] 0 2026-03-10T10:19:16.368 INFO:tasks.workunit.client.1.vm05.stdout:2/122: dwrite db/d12/f1a [0,4194304] 0 2026-03-10T10:19:16.369 INFO:tasks.workunit.client.1.vm05.stdout:4/90: creat d1/d3/d9/f1a x:0 0 0 2026-03-10T10:19:16.370 INFO:tasks.workunit.client.0.vm02.stdout:5/248: rename d1/f26 to d1/db/d11/d16/d29/f52 0 2026-03-10T10:19:16.380 INFO:tasks.workunit.client.1.vm05.stdout:9/94: unlink d0/d1/d13/f4 0 
2026-03-10T10:19:16.384 INFO:tasks.workunit.client.1.vm05.stdout:9/95: fsync d0/d1/d13/f8 0 2026-03-10T10:19:16.385 INFO:tasks.workunit.client.0.vm02.stdout:2/243: truncate d0/d1a/f52 345577 0 2026-03-10T10:19:16.385 INFO:tasks.workunit.client.0.vm02.stdout:7/145: symlink d1/dc/d16/d28/d2c/l30 0 2026-03-10T10:19:16.390 INFO:tasks.workunit.client.1.vm05.stdout:1/67: truncate d4/df/f11 1166400 0 2026-03-10T10:19:16.390 INFO:tasks.workunit.client.1.vm05.stdout:6/97: write dd/df/f18 [5186214,77326] 0 2026-03-10T10:19:16.391 INFO:tasks.workunit.client.1.vm05.stdout:6/98: fsync dd/f14 0 2026-03-10T10:19:16.392 INFO:tasks.workunit.client.1.vm05.stdout:6/99: dread fb [0,4194304] 0 2026-03-10T10:19:16.394 INFO:tasks.workunit.client.1.vm05.stdout:4/91: creat d1/d3/d9/f1b x:0 0 0 2026-03-10T10:19:16.404 INFO:tasks.workunit.client.0.vm02.stdout:2/244: fsync d0/d10/f46 0 2026-03-10T10:19:16.404 INFO:tasks.workunit.client.0.vm02.stdout:5/249: dread d1/f3 [0,4194304] 0 2026-03-10T10:19:16.404 INFO:tasks.workunit.client.0.vm02.stdout:6/131: creat d0/f28 x:0 0 0 2026-03-10T10:19:16.404 INFO:tasks.workunit.client.0.vm02.stdout:6/132: stat d0/f20 0 2026-03-10T10:19:16.404 INFO:tasks.workunit.client.0.vm02.stdout:7/146: write d1/fd [905599,114580] 0 2026-03-10T10:19:16.404 INFO:tasks.workunit.client.1.vm05.stdout:4/92: write d1/d3/d9/f1b [441376,24126] 0 2026-03-10T10:19:16.404 INFO:tasks.workunit.client.1.vm05.stdout:5/136: sync 2026-03-10T10:19:16.404 INFO:tasks.workunit.client.1.vm05.stdout:9/96: dwrite d0/fa [0,4194304] 0 2026-03-10T10:19:16.404 INFO:tasks.workunit.client.1.vm05.stdout:4/93: read - d1/d3/d9/f1a zero size 2026-03-10T10:19:16.404 INFO:tasks.workunit.client.1.vm05.stdout:5/137: dread - da/db/f24 zero size 2026-03-10T10:19:16.404 INFO:tasks.workunit.client.1.vm05.stdout:4/94: fsync d1/d3/d9/f13 0 2026-03-10T10:19:16.404 INFO:tasks.workunit.client.1.vm05.stdout:9/97: dwrite d0/d1/fb [0,4194304] 0 2026-03-10T10:19:16.408 INFO:tasks.workunit.client.0.vm02.stdout:1/125: 
sync 2026-03-10T10:19:16.408 INFO:tasks.workunit.client.0.vm02.stdout:3/124: sync 2026-03-10T10:19:16.408 INFO:tasks.workunit.client.1.vm05.stdout:0/96: rename d1/d7/db/d13/d17/f18 to d1/d2/f21 0 2026-03-10T10:19:16.409 INFO:tasks.workunit.client.1.vm05.stdout:9/98: rename d0/d1 to d0/d1/d13/d20 22 2026-03-10T10:19:16.409 INFO:tasks.workunit.client.1.vm05.stdout:9/99: readlink d0/l19 0 2026-03-10T10:19:16.410 INFO:tasks.workunit.client.0.vm02.stdout:5/250: creat d1/db/d11/d16/d29/d40/f53 x:0 0 0 2026-03-10T10:19:16.411 INFO:tasks.workunit.client.0.vm02.stdout:6/133: mkdir d0/d8/d29 0 2026-03-10T10:19:16.415 INFO:tasks.workunit.client.0.vm02.stdout:7/147: symlink d1/dc/d16/d28/l31 0 2026-03-10T10:19:16.417 INFO:tasks.workunit.client.0.vm02.stdout:6/134: dwrite d0/f21 [0,4194304] 0 2026-03-10T10:19:16.418 INFO:tasks.workunit.client.0.vm02.stdout:6/135: read d0/d8/d9/f14 [1650023,15501] 0 2026-03-10T10:19:16.421 INFO:tasks.workunit.client.1.vm05.stdout:6/100: mknod dd/df/d12/c23 0 2026-03-10T10:19:16.421 INFO:tasks.workunit.client.1.vm05.stdout:1/68: write d4/dd/f15 [431181,98998] 0 2026-03-10T10:19:16.422 INFO:tasks.workunit.client.1.vm05.stdout:6/101: write dd/d1b/f1d [952191,131005] 0 2026-03-10T10:19:16.422 INFO:tasks.workunit.client.1.vm05.stdout:6/102: chown dd/df/f22 2695 1 2026-03-10T10:19:16.425 INFO:tasks.workunit.client.1.vm05.stdout:6/103: dread dd/fe [4194304,4194304] 0 2026-03-10T10:19:16.430 INFO:tasks.workunit.client.1.vm05.stdout:6/104: write f2 [7993805,55720] 0 2026-03-10T10:19:16.431 INFO:tasks.workunit.client.1.vm05.stdout:5/138: unlink da/c11 0 2026-03-10T10:19:16.443 INFO:tasks.workunit.client.0.vm02.stdout:8/235: rmdir d1 39 2026-03-10T10:19:16.452 INFO:tasks.workunit.client.1.vm05.stdout:1/69: chown d4/l6 54310 1 2026-03-10T10:19:16.454 INFO:tasks.workunit.client.0.vm02.stdout:6/136: write d0/d8/d9/f14 [4253751,110411] 0 2026-03-10T10:19:16.457 INFO:tasks.workunit.client.0.vm02.stdout:3/125: symlink d1/l2c 0 2026-03-10T10:19:16.458 
INFO:tasks.workunit.client.1.vm05.stdout:0/97: dwrite d1/d2/fc [0,4194304] 0 2026-03-10T10:19:16.462 INFO:tasks.workunit.client.1.vm05.stdout:0/98: write d1/d7/db/d13/d17/f1b [678271,64876] 0 2026-03-10T10:19:16.462 INFO:tasks.workunit.client.1.vm05.stdout:0/99: readlink d1/d2/d9/l1c 0 2026-03-10T10:19:16.472 INFO:tasks.workunit.client.0.vm02.stdout:8/236: creat d1/d1c/d43/f46 x:0 0 0 2026-03-10T10:19:16.472 INFO:tasks.workunit.client.1.vm05.stdout:9/100: dread d0/df/d11/f12 [0,4194304] 0 2026-03-10T10:19:16.476 INFO:tasks.workunit.client.1.vm05.stdout:5/139: mkdir da/db/d28 0 2026-03-10T10:19:16.477 INFO:tasks.workunit.client.0.vm02.stdout:7/148: creat d1/f32 x:0 0 0 2026-03-10T10:19:16.483 INFO:tasks.workunit.client.0.vm02.stdout:7/149: mkdir d1/d33 0 2026-03-10T10:19:16.484 INFO:tasks.workunit.client.0.vm02.stdout:7/150: write d1/dc/f26 [1041137,7915] 0 2026-03-10T10:19:16.485 INFO:tasks.workunit.client.0.vm02.stdout:6/137: getdents d0/d8/d9 0 2026-03-10T10:19:16.486 INFO:tasks.workunit.client.1.vm05.stdout:5/140: fdatasync da/f20 0 2026-03-10T10:19:16.486 INFO:tasks.workunit.client.1.vm05.stdout:9/101: dread d0/d1/f1f [0,4194304] 0 2026-03-10T10:19:16.486 INFO:tasks.workunit.client.1.vm05.stdout:9/102: read d0/f7 [97689,60328] 0 2026-03-10T10:19:16.486 INFO:tasks.workunit.client.1.vm05.stdout:9/103: dwrite d0/d1/d16/f18 [0,4194304] 0 2026-03-10T10:19:16.486 INFO:tasks.workunit.client.1.vm05.stdout:4/95: link d1/cf d1/d3/c1c 0 2026-03-10T10:19:16.486 INFO:tasks.workunit.client.1.vm05.stdout:1/70: getdents d4/df 0 2026-03-10T10:19:16.490 INFO:tasks.workunit.client.0.vm02.stdout:7/151: dwrite d1/dc/f3 [0,4194304] 0 2026-03-10T10:19:16.491 INFO:tasks.workunit.client.0.vm02.stdout:7/152: chown d1/dc/d10/f27 1 1 2026-03-10T10:19:16.502 INFO:tasks.workunit.client.1.vm05.stdout:0/100: dwrite d1/d7/f16 [0,4194304] 0 2026-03-10T10:19:16.505 INFO:tasks.workunit.client.1.vm05.stdout:0/101: write d1/d2/fc [893336,93708] 0 2026-03-10T10:19:16.511 
INFO:tasks.workunit.client.1.vm05.stdout:4/96: unlink d1/ca 0 2026-03-10T10:19:16.518 INFO:tasks.workunit.client.0.vm02.stdout:6/138: creat d0/d8/f2a x:0 0 0 2026-03-10T10:19:16.523 INFO:tasks.workunit.client.0.vm02.stdout:7/153: creat d1/f34 x:0 0 0 2026-03-10T10:19:16.523 INFO:tasks.workunit.client.0.vm02.stdout:7/154: truncate d1/dc/f2e 763051 0 2026-03-10T10:19:16.530 INFO:tasks.workunit.client.1.vm05.stdout:0/102: unlink f0 0 2026-03-10T10:19:16.530 INFO:tasks.workunit.client.1.vm05.stdout:3/97: truncate dd/fe 4979635 0 2026-03-10T10:19:16.530 INFO:tasks.workunit.client.1.vm05.stdout:5/141: creat da/db/f29 x:0 0 0 2026-03-10T10:19:16.531 INFO:tasks.workunit.client.1.vm05.stdout:9/104: mkdir d0/d1/d13/de/d21 0 2026-03-10T10:19:16.534 INFO:tasks.workunit.client.1.vm05.stdout:4/97: mknod d1/d3/c1d 0 2026-03-10T10:19:16.536 INFO:tasks.workunit.client.1.vm05.stdout:5/142: creat da/db/d17/f2a x:0 0 0 2026-03-10T10:19:16.536 INFO:tasks.workunit.client.1.vm05.stdout:0/103: symlink d1/d7/l22 0 2026-03-10T10:19:16.540 INFO:tasks.workunit.client.1.vm05.stdout:5/143: read f9 [2586220,24539] 0 2026-03-10T10:19:16.540 INFO:tasks.workunit.client.1.vm05.stdout:5/144: stat da/db/d17/l22 0 2026-03-10T10:19:16.540 INFO:tasks.workunit.client.1.vm05.stdout:5/145: symlink da/db/de/l2b 0 2026-03-10T10:19:16.542 INFO:tasks.workunit.client.1.vm05.stdout:3/98: dwrite dd/d15/f1c [0,4194304] 0 2026-03-10T10:19:16.542 INFO:tasks.workunit.client.1.vm05.stdout:0/104: symlink d1/d7/db/d13/d17/l23 0 2026-03-10T10:19:16.544 INFO:tasks.workunit.client.1.vm05.stdout:5/146: write da/db/f29 [170006,111063] 0 2026-03-10T10:19:16.551 INFO:tasks.workunit.client.1.vm05.stdout:3/99: rmdir dd/d15 39 2026-03-10T10:19:16.554 INFO:tasks.workunit.client.1.vm05.stdout:5/147: creat da/db/de/f2c x:0 0 0 2026-03-10T10:19:16.559 INFO:tasks.workunit.client.1.vm05.stdout:3/100: dread f1 [0,4194304] 0 2026-03-10T10:19:16.564 INFO:tasks.workunit.client.1.vm05.stdout:3/101: creat dd/d20/f26 x:0 0 0 
2026-03-10T10:19:16.564 INFO:tasks.workunit.client.1.vm05.stdout:3/102: write f9 [4466260,103154] 0 2026-03-10T10:19:16.581 INFO:tasks.workunit.client.1.vm05.stdout:3/103: dread f3 [0,4194304] 0 2026-03-10T10:19:16.587 INFO:tasks.workunit.client.1.vm05.stdout:3/104: chown dd/l13 506 1 2026-03-10T10:19:16.587 INFO:tasks.workunit.client.1.vm05.stdout:7/146: fsync d5/fe 0 2026-03-10T10:19:16.588 INFO:tasks.workunit.client.1.vm05.stdout:7/147: dread - d5/f25 zero size 2026-03-10T10:19:16.588 INFO:tasks.workunit.client.1.vm05.stdout:7/148: stat d5/l9 0 2026-03-10T10:19:16.590 INFO:tasks.workunit.client.1.vm05.stdout:3/105: symlink dd/d15/l27 0 2026-03-10T10:19:16.590 INFO:tasks.workunit.client.1.vm05.stdout:7/149: fdatasync d5/dd/f12 0 2026-03-10T10:19:16.601 INFO:tasks.workunit.client.1.vm05.stdout:3/106: symlink dd/d15/d24/l28 0 2026-03-10T10:19:16.601 INFO:tasks.workunit.client.1.vm05.stdout:7/150: read - d5/dd/f1a zero size 2026-03-10T10:19:16.601 INFO:tasks.workunit.client.1.vm05.stdout:3/107: write fb [1147415,94755] 0 2026-03-10T10:19:16.601 INFO:tasks.workunit.client.1.vm05.stdout:3/108: dread - dd/d15/f18 zero size 2026-03-10T10:19:16.601 INFO:tasks.workunit.client.1.vm05.stdout:3/109: dread dd/f12 [0,4194304] 0 2026-03-10T10:19:16.601 INFO:tasks.workunit.client.1.vm05.stdout:3/110: write dd/d15/f23 [845580,115909] 0 2026-03-10T10:19:16.606 INFO:tasks.workunit.client.1.vm05.stdout:7/151: symlink d5/dd/l27 0 2026-03-10T10:19:16.608 INFO:tasks.workunit.client.1.vm05.stdout:7/152: dread - d5/dd/f1a zero size 2026-03-10T10:19:16.610 INFO:tasks.workunit.client.1.vm05.stdout:7/153: rename d5/d17/f18 to d5/dd/f28 0 2026-03-10T10:19:16.612 INFO:tasks.workunit.client.1.vm05.stdout:7/154: mkdir d5/d1d/d29 0 2026-03-10T10:19:16.612 INFO:tasks.workunit.client.1.vm05.stdout:7/155: dread - d5/f25 zero size 2026-03-10T10:19:16.639 INFO:tasks.workunit.client.1.vm05.stdout:0/105: dread d1/d7/f4 [0,4194304] 0 2026-03-10T10:19:16.640 
INFO:tasks.workunit.client.1.vm05.stdout:0/106: write d1/d2/fc [461509,124613] 0 2026-03-10T10:19:16.641 INFO:tasks.workunit.client.1.vm05.stdout:3/111: sync 2026-03-10T10:19:16.646 INFO:tasks.workunit.client.1.vm05.stdout:3/112: creat dd/d15/f29 x:0 0 0 2026-03-10T10:19:16.647 INFO:tasks.workunit.client.1.vm05.stdout:0/107: creat d1/d7/f24 x:0 0 0 2026-03-10T10:19:16.648 INFO:tasks.workunit.client.1.vm05.stdout:3/113: unlink dd/f11 0 2026-03-10T10:19:16.649 INFO:tasks.workunit.client.1.vm05.stdout:0/108: mknod d1/d7/db/d12/c25 0 2026-03-10T10:19:16.649 INFO:tasks.workunit.client.1.vm05.stdout:0/109: chown d1 264885 1 2026-03-10T10:19:16.654 INFO:tasks.workunit.client.1.vm05.stdout:9/105: fsync d0/fa 0 2026-03-10T10:19:16.658 INFO:tasks.workunit.client.1.vm05.stdout:3/114: dwrite dd/d15/f29 [0,4194304] 0 2026-03-10T10:19:16.662 INFO:tasks.workunit.client.0.vm02.stdout:0/224: dwrite d9/d18/d1a/d22/d24/f2f [0,4194304] 0 2026-03-10T10:19:16.669 INFO:tasks.workunit.client.1.vm05.stdout:3/115: link dd/d15/f18 dd/d15/d1f/f2a 0 2026-03-10T10:19:16.674 INFO:tasks.workunit.client.1.vm05.stdout:9/106: link d0/fa d0/d1/d13/f22 0 2026-03-10T10:19:16.674 INFO:tasks.workunit.client.1.vm05.stdout:3/116: creat dd/d15/d1f/f2b x:0 0 0 2026-03-10T10:19:16.683 INFO:tasks.workunit.client.1.vm05.stdout:3/117: write dd/f12 [1160273,89051] 0 2026-03-10T10:19:16.688 INFO:tasks.workunit.client.1.vm05.stdout:3/118: mkdir dd/d15/d24/d2c 0 2026-03-10T10:19:16.690 INFO:tasks.workunit.client.1.vm05.stdout:3/119: creat dd/d15/d24/d2c/f2d x:0 0 0 2026-03-10T10:19:16.694 INFO:tasks.workunit.client.1.vm05.stdout:3/120: symlink dd/d15/l2e 0 2026-03-10T10:19:16.700 INFO:tasks.workunit.client.1.vm05.stdout:3/121: readlink l0 0 2026-03-10T10:19:16.702 INFO:tasks.workunit.client.1.vm05.stdout:3/122: creat dd/d15/d24/f2f x:0 0 0 2026-03-10T10:19:16.710 INFO:tasks.workunit.client.1.vm05.stdout:3/123: dwrite f2 [0,4194304] 0 2026-03-10T10:19:16.715 INFO:tasks.workunit.client.1.vm05.stdout:9/107: sync 
2026-03-10T10:19:16.726 INFO:tasks.workunit.client.1.vm05.stdout:3/124: symlink dd/d15/d1f/l30 0 2026-03-10T10:19:16.726 INFO:tasks.workunit.client.1.vm05.stdout:3/125: dread - dd/d15/d1f/f2b zero size 2026-03-10T10:19:16.726 INFO:tasks.workunit.client.1.vm05.stdout:3/126: fsync f6 0 2026-03-10T10:19:16.726 INFO:tasks.workunit.client.1.vm05.stdout:3/127: read dd/f12 [257245,19073] 0 2026-03-10T10:19:16.726 INFO:tasks.workunit.client.1.vm05.stdout:9/108: creat d0/d1/f23 x:0 0 0 2026-03-10T10:19:16.726 INFO:tasks.workunit.client.1.vm05.stdout:3/128: chown dd/d15/c1a 9819 1 2026-03-10T10:19:16.726 INFO:tasks.workunit.client.1.vm05.stdout:9/109: dread - d0/f1e zero size 2026-03-10T10:19:16.726 INFO:tasks.workunit.client.1.vm05.stdout:3/129: dread - dd/d15/d24/d2c/f2d zero size 2026-03-10T10:19:16.729 INFO:tasks.workunit.client.0.vm02.stdout:7/155: fsync d1/dc/f26 0 2026-03-10T10:19:16.734 INFO:tasks.workunit.client.0.vm02.stdout:4/229: dwrite d1/d10/db/f16 [0,4194304] 0 2026-03-10T10:19:16.735 INFO:tasks.workunit.client.1.vm05.stdout:3/130: dwrite dd/d15/f1c [0,4194304] 0 2026-03-10T10:19:16.739 INFO:tasks.workunit.client.1.vm05.stdout:9/110: dwrite d0/d1/fb [4194304,4194304] 0 2026-03-10T10:19:16.740 INFO:tasks.workunit.client.1.vm05.stdout:3/131: truncate dd/d15/f18 415703 0 2026-03-10T10:19:16.740 INFO:tasks.workunit.client.1.vm05.stdout:3/132: write f9 [2819817,52307] 0 2026-03-10T10:19:16.743 INFO:tasks.workunit.client.0.vm02.stdout:4/230: rmdir d1/d10/db 39 2026-03-10T10:19:16.755 INFO:tasks.workunit.client.1.vm05.stdout:3/133: mknod dd/d15/d24/d2c/c31 0 2026-03-10T10:19:16.755 INFO:tasks.workunit.client.1.vm05.stdout:9/111: dwrite d0/d1/f9 [0,4194304] 0 2026-03-10T10:19:16.755 INFO:tasks.workunit.client.0.vm02.stdout:4/231: chown d1/d2/f3f 1886 1 2026-03-10T10:19:16.755 INFO:tasks.workunit.client.0.vm02.stdout:4/232: rename d1/d10/db/l33 to d1/d2/d1a/d49/l4a 0 2026-03-10T10:19:16.759 INFO:tasks.workunit.client.0.vm02.stdout:3/126: dread d1/f1c [0,4194304] 0 
2026-03-10T10:19:16.760 INFO:tasks.workunit.client.1.vm05.stdout:3/134: rename dd/f12 to dd/d15/d24/d2c/f32 0 2026-03-10T10:19:16.760 INFO:tasks.workunit.client.0.vm02.stdout:3/127: read d1/d8/f1a [1049217,122415] 0 2026-03-10T10:19:16.763 INFO:tasks.workunit.client.0.vm02.stdout:3/128: write f0 [4417131,19790] 0 2026-03-10T10:19:16.765 INFO:tasks.workunit.client.1.vm05.stdout:3/135: write dd/d15/d24/d2c/f32 [136224,84366] 0 2026-03-10T10:19:16.766 INFO:tasks.workunit.client.1.vm05.stdout:3/136: dread f2 [0,4194304] 0 2026-03-10T10:19:16.769 INFO:tasks.workunit.client.1.vm05.stdout:3/137: dread dd/d15/d24/d2c/f32 [0,4194304] 0 2026-03-10T10:19:16.769 INFO:tasks.workunit.client.1.vm05.stdout:3/138: chown l7 249511676 1 2026-03-10T10:19:16.769 INFO:tasks.workunit.client.1.vm05.stdout:3/139: write fb [1902144,117653] 0 2026-03-10T10:19:16.770 INFO:tasks.workunit.client.0.vm02.stdout:3/129: symlink d1/d8/l2d 0 2026-03-10T10:19:16.770 INFO:tasks.workunit.client.1.vm05.stdout:3/140: write f6 [657399,10623] 0 2026-03-10T10:19:16.770 INFO:tasks.workunit.client.1.vm05.stdout:9/112: getdents d0/d1/d16 0 2026-03-10T10:19:16.775 INFO:tasks.workunit.client.0.vm02.stdout:3/130: unlink d1/d6/f26 0 2026-03-10T10:19:16.777 INFO:tasks.workunit.client.0.vm02.stdout:3/131: dread d1/f12 [0,4194304] 0 2026-03-10T10:19:16.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:16 vm02.local ceph-mon[50200]: pgmap v147: 65 pgs: 65 active+clean; 485 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 2.2 MiB/s rd, 41 MiB/s wr, 254 op/s 2026-03-10T10:19:16.781 INFO:tasks.workunit.client.0.vm02.stdout:3/132: creat d1/d8/f2e x:0 0 0 2026-03-10T10:19:16.783 INFO:tasks.workunit.client.1.vm05.stdout:3/141: creat dd/d15/d24/f33 x:0 0 0 2026-03-10T10:19:16.783 INFO:tasks.workunit.client.1.vm05.stdout:3/142: stat dd/d20/l21 0 2026-03-10T10:19:16.785 INFO:tasks.workunit.client.0.vm02.stdout:3/133: rename d1/f19 to d1/d8/d21/f2f 0 2026-03-10T10:19:16.791 
INFO:tasks.workunit.client.0.vm02.stdout:3/134: dwrite d1/f5 [0,4194304] 0 2026-03-10T10:19:16.792 INFO:tasks.workunit.client.0.vm02.stdout:3/135: chown d1/d6/l1e 1927444 1 2026-03-10T10:19:16.794 INFO:tasks.workunit.client.1.vm05.stdout:3/143: mknod dd/c34 0 2026-03-10T10:19:16.795 INFO:tasks.workunit.client.1.vm05.stdout:3/144: write fa [278371,52572] 0 2026-03-10T10:19:16.795 INFO:tasks.workunit.client.0.vm02.stdout:3/136: rmdir d1/d20 39 2026-03-10T10:19:16.801 INFO:tasks.workunit.client.0.vm02.stdout:4/233: read d1/d2/d37/f2e [875080,20428] 0 2026-03-10T10:19:16.805 INFO:tasks.workunit.client.1.vm05.stdout:3/145: link f6 dd/d20/f35 0 2026-03-10T10:19:16.809 INFO:tasks.workunit.client.0.vm02.stdout:3/137: rename d1/l7 to d1/d20/l30 0 2026-03-10T10:19:16.809 INFO:tasks.workunit.client.1.vm05.stdout:3/146: stat dd/d15/d24/d2c/f2d 0 2026-03-10T10:19:16.809 INFO:tasks.workunit.client.1.vm05.stdout:3/147: write dd/d15/d1f/f2b [1028106,96160] 0 2026-03-10T10:19:16.809 INFO:tasks.workunit.client.1.vm05.stdout:3/148: dread f1 [0,4194304] 0 2026-03-10T10:19:16.811 INFO:tasks.workunit.client.1.vm05.stdout:3/149: read dd/d20/f35 [289605,11131] 0 2026-03-10T10:19:16.812 INFO:tasks.workunit.client.0.vm02.stdout:4/234: creat d1/d2/d1a/f4b x:0 0 0 2026-03-10T10:19:16.812 INFO:tasks.workunit.client.1.vm05.stdout:3/150: stat dd/d15/c1a 0 2026-03-10T10:19:16.812 INFO:tasks.workunit.client.1.vm05.stdout:3/151: write fb [1522995,76279] 0 2026-03-10T10:19:16.813 INFO:tasks.workunit.client.0.vm02.stdout:4/235: unlink d1/d2/l13 0 2026-03-10T10:19:16.814 INFO:tasks.workunit.client.0.vm02.stdout:4/236: readlink d1/d10/l38 0 2026-03-10T10:19:16.883 INFO:tasks.workunit.client.0.vm02.stdout:9/138: truncate da/f1b 3571973 0 2026-03-10T10:19:16.890 INFO:tasks.workunit.client.0.vm02.stdout:5/251: getdents d1/db/d11/d16/d29 0 2026-03-10T10:19:16.892 INFO:tasks.workunit.client.0.vm02.stdout:5/252: dread d1/db/d11/f3e [0,4194304] 0 2026-03-10T10:19:16.892 
INFO:tasks.workunit.client.0.vm02.stdout:4/237: sync 2026-03-10T10:19:16.896 INFO:tasks.workunit.client.0.vm02.stdout:4/238: dwrite d1/d10/db/f35 [0,4194304] 0 2026-03-10T10:19:16.913 INFO:tasks.workunit.client.1.vm05.stdout:8/95: truncate f6 379486 0 2026-03-10T10:19:16.918 INFO:tasks.workunit.client.1.vm05.stdout:1/71: read d4/df/f11 [1165398,65888] 0 2026-03-10T10:19:16.918 INFO:tasks.workunit.client.0.vm02.stdout:4/239: getdents d1/d32/d3e 0 2026-03-10T10:19:16.918 INFO:tasks.workunit.client.1.vm05.stdout:2/123: truncate db/f15 3544183 0 2026-03-10T10:19:16.919 INFO:tasks.workunit.client.0.vm02.stdout:4/240: read d1/d2/f31 [639257,97386] 0 2026-03-10T10:19:16.920 INFO:tasks.workunit.client.0.vm02.stdout:4/241: write d1/d2/d37/f48 [338,40316] 0 2026-03-10T10:19:16.920 INFO:tasks.workunit.client.0.vm02.stdout:4/242: read d1/d2/f3f [5959757,49352] 0 2026-03-10T10:19:16.921 INFO:tasks.workunit.client.1.vm05.stdout:1/72: rmdir d4/dd 39 2026-03-10T10:19:16.923 INFO:tasks.workunit.client.1.vm05.stdout:2/124: symlink db/d1c/l21 0 2026-03-10T10:19:16.923 INFO:tasks.workunit.client.0.vm02.stdout:2/245: truncate d0/d1a/f20 3971899 0 2026-03-10T10:19:16.925 INFO:tasks.workunit.client.0.vm02.stdout:1/126: write d4/d1b/f24 [87613,81709] 0 2026-03-10T10:19:16.926 INFO:tasks.workunit.client.1.vm05.stdout:2/125: symlink db/d1c/l22 0 2026-03-10T10:19:16.940 INFO:tasks.workunit.client.1.vm05.stdout:1/73: mknod d4/dd/c17 0 2026-03-10T10:19:16.941 INFO:tasks.workunit.client.0.vm02.stdout:2/246: creat d0/d1a/d49/f54 x:0 0 0 2026-03-10T10:19:16.941 INFO:tasks.workunit.client.0.vm02.stdout:1/127: dwrite d4/da/f13 [0,4194304] 0 2026-03-10T10:19:16.941 INFO:tasks.workunit.client.0.vm02.stdout:2/247: dread d0/d10/f46 [0,4194304] 0 2026-03-10T10:19:16.941 INFO:tasks.workunit.client.0.vm02.stdout:8/237: truncate d1/d1c/d24/d35/f44 1862623 0 2026-03-10T10:19:16.941 INFO:tasks.workunit.client.0.vm02.stdout:1/128: getdents d4/da/d1a/d22 0 2026-03-10T10:19:16.942 
INFO:tasks.workunit.client.0.vm02.stdout:1/129: truncate d4/da/f28 616565 0 2026-03-10T10:19:16.944 INFO:tasks.workunit.client.0.vm02.stdout:7/156: read d1/f5 [2940173,114467] 0 2026-03-10T10:19:16.944 INFO:tasks.workunit.client.1.vm05.stdout:1/74: dread d4/f16 [4194304,4194304] 0 2026-03-10T10:19:16.945 INFO:tasks.workunit.client.0.vm02.stdout:4/243: sync 2026-03-10T10:19:16.945 INFO:tasks.workunit.client.1.vm05.stdout:6/105: write fb [2190987,110377] 0 2026-03-10T10:19:16.948 INFO:tasks.workunit.client.0.vm02.stdout:7/157: dread d1/dc/f3 [0,4194304] 0 2026-03-10T10:19:16.951 INFO:tasks.workunit.client.1.vm05.stdout:2/126: dwrite f7 [0,4194304] 0 2026-03-10T10:19:16.952 INFO:tasks.workunit.client.0.vm02.stdout:7/158: chown d1/dc/f3 184569628 1 2026-03-10T10:19:16.952 INFO:tasks.workunit.client.0.vm02.stdout:2/248: rename d0/d10/l13 to d0/d10/l55 0 2026-03-10T10:19:16.952 INFO:tasks.workunit.client.0.vm02.stdout:2/249: write d0/d1a/d49/f4f [376814,87132] 0 2026-03-10T10:19:16.952 INFO:tasks.workunit.client.1.vm05.stdout:2/127: write f7 [6703035,104683] 0 2026-03-10T10:19:16.958 INFO:tasks.workunit.client.1.vm05.stdout:1/75: dread d4/f16 [0,4194304] 0 2026-03-10T10:19:16.959 INFO:tasks.workunit.client.0.vm02.stdout:7/159: mknod d1/dc/d16/d28/d2c/c35 0 2026-03-10T10:19:16.960 INFO:tasks.workunit.client.0.vm02.stdout:1/130: truncate d4/da/fc 1267508 0 2026-03-10T10:19:16.962 INFO:tasks.workunit.client.1.vm05.stdout:6/106: mkdir dd/df/d12/d24 0 2026-03-10T10:19:16.963 INFO:tasks.workunit.client.0.vm02.stdout:2/250: dread d0/d1a/f25 [0,4194304] 0 2026-03-10T10:19:16.964 INFO:tasks.workunit.client.0.vm02.stdout:7/160: dread d1/dc/f2e [0,4194304] 0 2026-03-10T10:19:16.966 INFO:tasks.workunit.client.1.vm05.stdout:1/76: chown d4/df/l14 3 1 2026-03-10T10:19:16.974 INFO:tasks.workunit.client.0.vm02.stdout:2/251: dwrite d0/d1a/f52 [0,4194304] 0 2026-03-10T10:19:16.975 INFO:tasks.workunit.client.0.vm02.stdout:7/161: mkdir d1/dc/d16/d28/d2d/d36 0 2026-03-10T10:19:16.985 
INFO:tasks.workunit.client.0.vm02.stdout:2/252: rmdir d0/d1a/d24 39 2026-03-10T10:19:16.986 INFO:tasks.workunit.client.1.vm05.stdout:5/148: rename da/db/d17 to da/db/d2d 0 2026-03-10T10:19:16.986 INFO:tasks.workunit.client.1.vm05.stdout:6/107: link lc dd/df/l25 0 2026-03-10T10:19:16.988 INFO:tasks.workunit.client.0.vm02.stdout:2/253: rename d0/c32 to d0/d10/c56 0 2026-03-10T10:19:16.989 INFO:tasks.workunit.client.1.vm05.stdout:6/108: stat dd/df/l19 0 2026-03-10T10:19:16.992 INFO:tasks.workunit.client.1.vm05.stdout:6/109: mknod dd/df/d12/c26 0 2026-03-10T10:19:16.992 INFO:tasks.workunit.client.0.vm02.stdout:2/254: dwrite d0/d1a/d24/f34 [0,4194304] 0 2026-03-10T10:19:16.997 INFO:tasks.workunit.client.1.vm05.stdout:6/110: mkdir dd/d27 0 2026-03-10T10:19:17.004 INFO:tasks.workunit.client.1.vm05.stdout:5/149: dread f9 [0,4194304] 0 2026-03-10T10:19:17.004 INFO:tasks.workunit.client.1.vm05.stdout:6/111: dread fb [0,4194304] 0 2026-03-10T10:19:17.011 INFO:tasks.workunit.client.1.vm05.stdout:6/112: mkdir dd/df/d12/d24/d28 0 2026-03-10T10:19:17.011 INFO:tasks.workunit.client.1.vm05.stdout:6/113: truncate dd/d1b/f1d 1726442 0 2026-03-10T10:19:17.011 INFO:tasks.workunit.client.1.vm05.stdout:4/98: truncate d1/d3/f12 279242 0 2026-03-10T10:19:17.014 INFO:tasks.workunit.client.1.vm05.stdout:4/99: mknod d1/d3/d9/c1e 0 2026-03-10T10:19:17.018 INFO:tasks.workunit.client.1.vm05.stdout:7/156: getdents d5/dd 0 2026-03-10T10:19:17.022 INFO:tasks.workunit.client.1.vm05.stdout:0/110: write d1/d2/d9/fd [392580,61073] 0 2026-03-10T10:19:17.027 INFO:tasks.workunit.client.1.vm05.stdout:6/114: creat dd/f29 x:0 0 0 2026-03-10T10:19:17.032 INFO:tasks.workunit.client.1.vm05.stdout:0/111: symlink d1/d7/db/l26 0 2026-03-10T10:19:17.036 INFO:tasks.workunit.client.1.vm05.stdout:9/113: read d0/d1/d13/f8 [7612,117896] 0 2026-03-10T10:19:17.036 INFO:tasks.workunit.client.1.vm05.stdout:6/115: mkdir dd/d27/d2a 0 2026-03-10T10:19:17.040 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:16 vm05.local 
ceph-mon[59051]: pgmap v147: 65 pgs: 65 active+clean; 485 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 2.2 MiB/s rd, 41 MiB/s wr, 254 op/s 2026-03-10T10:19:17.040 INFO:tasks.workunit.client.1.vm05.stdout:6/116: dread fb [0,4194304] 0 2026-03-10T10:19:17.040 INFO:tasks.workunit.client.1.vm05.stdout:9/114: creat d0/df/d11/f24 x:0 0 0 2026-03-10T10:19:17.040 INFO:tasks.workunit.client.1.vm05.stdout:0/112: creat d1/d7/f27 x:0 0 0 2026-03-10T10:19:17.041 INFO:tasks.workunit.client.1.vm05.stdout:3/152: getdents dd/d20 0 2026-03-10T10:19:17.041 INFO:tasks.workunit.client.1.vm05.stdout:9/115: stat d0/d1/c1b 0 2026-03-10T10:19:17.042 INFO:tasks.workunit.client.0.vm02.stdout:3/138: getdents d1 0 2026-03-10T10:19:17.042 INFO:tasks.workunit.client.1.vm05.stdout:0/113: chown d1/d7/f4 76 1 2026-03-10T10:19:17.042 INFO:tasks.workunit.client.0.vm02.stdout:3/139: write d1/d8/d21/f2a [1003264,124129] 0 2026-03-10T10:19:17.043 INFO:tasks.workunit.client.0.vm02.stdout:3/140: chown d1/fe 7238079 1 2026-03-10T10:19:17.045 INFO:tasks.workunit.client.1.vm05.stdout:9/116: write d0/d1/d16/f18 [1189937,55726] 0 2026-03-10T10:19:17.047 INFO:tasks.workunit.client.0.vm02.stdout:3/141: creat d1/f31 x:0 0 0 2026-03-10T10:19:17.048 INFO:tasks.workunit.client.1.vm05.stdout:0/114: mknod d1/d7/db/d13/d17/c28 0 2026-03-10T10:19:17.048 INFO:tasks.workunit.client.1.vm05.stdout:3/153: creat dd/d15/d1f/f36 x:0 0 0 2026-03-10T10:19:17.049 INFO:tasks.workunit.client.1.vm05.stdout:0/115: stat d1/d7/db/d12/c25 0 2026-03-10T10:19:17.049 INFO:tasks.workunit.client.1.vm05.stdout:3/154: readlink dd/d15/d24/l28 0 2026-03-10T10:19:17.050 INFO:tasks.workunit.client.0.vm02.stdout:7/162: dread d1/f5 [0,4194304] 0 2026-03-10T10:19:17.050 INFO:tasks.workunit.client.0.vm02.stdout:7/163: chown d1/dc/d16/d28/d2d 13746 1 2026-03-10T10:19:17.054 INFO:tasks.workunit.client.1.vm05.stdout:6/117: link c9 dd/d27/c2b 0 2026-03-10T10:19:17.054 INFO:tasks.workunit.client.0.vm02.stdout:7/164: write d1/dc/d10/f24 [4412371,76525] 0 
2026-03-10T10:19:17.056 INFO:tasks.workunit.client.0.vm02.stdout:7/165: dread d1/dc/f3 [0,4194304] 0 2026-03-10T10:19:17.057 INFO:tasks.workunit.client.0.vm02.stdout:7/166: read d1/dc/f25 [963082,48406] 0 2026-03-10T10:19:17.057 INFO:tasks.workunit.client.0.vm02.stdout:7/167: write d1/f34 [42237,129022] 0 2026-03-10T10:19:17.059 INFO:tasks.workunit.client.1.vm05.stdout:9/117: truncate d0/d1/f23 276249 0 2026-03-10T10:19:17.062 INFO:tasks.workunit.client.0.vm02.stdout:3/142: creat d1/d6/f32 x:0 0 0 2026-03-10T10:19:17.062 INFO:tasks.workunit.client.1.vm05.stdout:0/116: symlink d1/d7/db/l29 0 2026-03-10T10:19:17.062 INFO:tasks.workunit.client.1.vm05.stdout:9/118: creat d0/df/f25 x:0 0 0 2026-03-10T10:19:17.063 INFO:tasks.workunit.client.0.vm02.stdout:7/168: mknod d1/dc/d16/d28/d2c/c37 0 2026-03-10T10:19:17.064 INFO:tasks.workunit.client.0.vm02.stdout:3/143: write d1/f28 [983258,126141] 0 2026-03-10T10:19:17.064 INFO:tasks.workunit.client.0.vm02.stdout:3/144: chown d1 76 1 2026-03-10T10:19:17.065 INFO:tasks.workunit.client.0.vm02.stdout:3/145: truncate d1/f31 763775 0 2026-03-10T10:19:17.069 INFO:tasks.workunit.client.0.vm02.stdout:7/169: dwrite d1/dc/ff [0,4194304] 0 2026-03-10T10:19:17.071 INFO:tasks.workunit.client.1.vm05.stdout:0/117: rmdir d1/d7/db/d13 39 2026-03-10T10:19:17.072 INFO:tasks.workunit.client.1.vm05.stdout:6/118: dwrite dd/f14 [0,4194304] 0 2026-03-10T10:19:17.072 INFO:tasks.workunit.client.0.vm02.stdout:3/146: creat d1/f33 x:0 0 0 2026-03-10T10:19:17.074 INFO:tasks.workunit.client.0.vm02.stdout:7/170: mkdir d1/dc/d10/d38 0 2026-03-10T10:19:17.076 INFO:tasks.workunit.client.0.vm02.stdout:7/171: creat d1/dc/d16/d28/f39 x:0 0 0 2026-03-10T10:19:17.078 INFO:tasks.workunit.client.0.vm02.stdout:3/147: creat d1/d8/f34 x:0 0 0 2026-03-10T10:19:17.080 INFO:tasks.workunit.client.0.vm02.stdout:7/172: symlink d1/dc/d10/d38/l3a 0 2026-03-10T10:19:17.083 INFO:tasks.workunit.client.0.vm02.stdout:7/173: unlink d1/dc/d10/l14 0 2026-03-10T10:19:17.084 
INFO:tasks.workunit.client.0.vm02.stdout:7/174: stat d1/dc/d10/l11 0 2026-03-10T10:19:17.084 INFO:tasks.workunit.client.0.vm02.stdout:7/175: write d1/dc/d10/f24 [3130996,83760] 0 2026-03-10T10:19:17.085 INFO:tasks.workunit.client.0.vm02.stdout:7/176: dread - d1/dc/d16/d28/d2d/f2f zero size 2026-03-10T10:19:17.089 INFO:tasks.workunit.client.1.vm05.stdout:6/119: mknod dd/df/d12/d24/c2c 0 2026-03-10T10:19:17.089 INFO:tasks.workunit.client.0.vm02.stdout:7/177: dwrite d1/fd [0,4194304] 0 2026-03-10T10:19:17.107 INFO:tasks.workunit.client.1.vm05.stdout:9/119: dread d0/d1/f23 [0,4194304] 0 2026-03-10T10:19:17.107 INFO:tasks.workunit.client.1.vm05.stdout:0/118: creat d1/d7/f2a x:0 0 0 2026-03-10T10:19:17.110 INFO:tasks.workunit.client.0.vm02.stdout:3/148: link d1/fe d1/d8/d21/f35 0 2026-03-10T10:19:17.110 INFO:tasks.workunit.client.0.vm02.stdout:7/178: mknod d1/dc/c3b 0 2026-03-10T10:19:17.116 INFO:tasks.workunit.client.1.vm05.stdout:9/120: write d0/d1/f23 [1267236,46047] 0 2026-03-10T10:19:17.118 INFO:tasks.workunit.client.1.vm05.stdout:6/120: dwrite fb [0,4194304] 0 2026-03-10T10:19:17.121 INFO:tasks.workunit.client.0.vm02.stdout:7/179: unlink d1/dc/d16/d28/f39 0 2026-03-10T10:19:17.122 INFO:tasks.workunit.client.0.vm02.stdout:7/180: truncate d1/dc/f26 1123617 0 2026-03-10T10:19:17.130 INFO:tasks.workunit.client.1.vm05.stdout:0/119: symlink d1/d7/db/d13/d17/l2b 0 2026-03-10T10:19:17.130 INFO:tasks.workunit.client.1.vm05.stdout:9/121: rmdir d0/d1/d16 39 2026-03-10T10:19:17.136 INFO:tasks.workunit.client.0.vm02.stdout:7/181: mknod d1/c3c 0 2026-03-10T10:19:17.141 INFO:tasks.workunit.client.1.vm05.stdout:0/120: symlink d1/d7/db/d13/d15/l2c 0 2026-03-10T10:19:17.146 INFO:tasks.workunit.client.0.vm02.stdout:3/149: dread d1/f3 [0,4194304] 0 2026-03-10T10:19:17.146 INFO:tasks.workunit.client.0.vm02.stdout:3/150: write d1/d8/fb [1681233,6596] 0 2026-03-10T10:19:17.150 INFO:tasks.workunit.client.1.vm05.stdout:9/122: mkdir d0/d1/d13/d26 0 2026-03-10T10:19:17.153 
INFO:tasks.workunit.client.0.vm02.stdout:7/182: link d1/dc/f2e d1/dc/d16/d28/d2d/f3d 0 2026-03-10T10:19:17.158 INFO:tasks.workunit.client.0.vm02.stdout:7/183: dwrite d1/dc/f26 [0,4194304] 0 2026-03-10T10:19:17.163 INFO:tasks.workunit.client.1.vm05.stdout:9/123: rename d0/d1/f23 to d0/d1/d13/f27 0 2026-03-10T10:19:17.165 INFO:tasks.workunit.client.0.vm02.stdout:3/151: creat d1/d6/f36 x:0 0 0 2026-03-10T10:19:17.165 INFO:tasks.workunit.client.0.vm02.stdout:3/152: dread - d1/d8/f34 zero size 2026-03-10T10:19:17.169 INFO:tasks.workunit.client.0.vm02.stdout:7/184: mknod d1/dc/c3e 0 2026-03-10T10:19:17.172 INFO:tasks.workunit.client.0.vm02.stdout:7/185: dread d1/dc/f3 [0,4194304] 0 2026-03-10T10:19:17.175 INFO:tasks.workunit.client.0.vm02.stdout:7/186: read - d1/dc/d16/f1f zero size 2026-03-10T10:19:17.176 INFO:tasks.workunit.client.0.vm02.stdout:7/187: write d1/dc/d10/f24 [3743793,115667] 0 2026-03-10T10:19:17.185 INFO:tasks.workunit.client.1.vm05.stdout:9/124: getdents d0/d1/d13/de 0 2026-03-10T10:19:17.187 INFO:tasks.workunit.client.0.vm02.stdout:3/153: rename d1/l15 to d1/l37 0 2026-03-10T10:19:17.191 INFO:tasks.workunit.client.0.vm02.stdout:7/188: symlink d1/dc/d16/d28/d2d/d36/l3f 0 2026-03-10T10:19:17.199 INFO:tasks.workunit.client.1.vm05.stdout:9/125: write d0/d1/f1f [3473071,110980] 0 2026-03-10T10:19:17.199 INFO:tasks.workunit.client.0.vm02.stdout:7/189: mknod d1/d1b/c40 0 2026-03-10T10:19:17.199 INFO:tasks.workunit.client.0.vm02.stdout:3/154: rename f0 to d1/d20/f38 0 2026-03-10T10:19:17.199 INFO:tasks.workunit.client.0.vm02.stdout:1/131: write d4/f8 [3314984,10236] 0 2026-03-10T10:19:17.199 INFO:tasks.workunit.client.0.vm02.stdout:7/190: dwrite d1/dc/d16/d28/f2b [0,4194304] 0 2026-03-10T10:19:17.200 INFO:tasks.workunit.client.0.vm02.stdout:1/132: dread - d4/da/d1a/d22/f23 zero size 2026-03-10T10:19:17.205 INFO:tasks.workunit.client.0.vm02.stdout:1/133: dwrite f3 [4194304,4194304] 0 2026-03-10T10:19:17.209 INFO:tasks.workunit.client.0.vm02.stdout:3/155: dread - 
d1/f14 zero size 2026-03-10T10:19:17.221 INFO:tasks.workunit.client.0.vm02.stdout:7/191: rename d1/dc/c3b to d1/dc/d16/d28/d2d/c41 0 2026-03-10T10:19:17.226 INFO:tasks.workunit.client.0.vm02.stdout:3/156: creat d1/d6/f39 x:0 0 0 2026-03-10T10:19:17.230 INFO:tasks.workunit.client.0.vm02.stdout:7/192: dwrite d1/dc/f2e [0,4194304] 0 2026-03-10T10:19:17.230 INFO:tasks.workunit.client.0.vm02.stdout:7/193: write d1/dc/ff [2766311,93341] 0 2026-03-10T10:19:17.233 INFO:tasks.workunit.client.0.vm02.stdout:7/194: chown d1/d1b 0 1 2026-03-10T10:19:17.240 INFO:tasks.workunit.client.0.vm02.stdout:1/134: getdents d4/d1b 0 2026-03-10T10:19:17.262 INFO:tasks.workunit.client.0.vm02.stdout:7/195: unlink d1/dc/d16/d28/f2b 0 2026-03-10T10:19:17.263 INFO:tasks.workunit.client.0.vm02.stdout:7/196: stat d1/dc/d10 0 2026-03-10T10:19:17.263 INFO:tasks.workunit.client.0.vm02.stdout:7/197: stat d1/dc/d16/d28/l2a 0 2026-03-10T10:19:17.263 INFO:tasks.workunit.client.0.vm02.stdout:7/198: stat d1/l22 0 2026-03-10T10:19:17.263 INFO:tasks.workunit.client.0.vm02.stdout:7/199: write d1/f32 [236264,58285] 0 2026-03-10T10:19:17.635 INFO:tasks.workunit.client.0.vm02.stdout:3/157: sync 2026-03-10T10:19:17.635 INFO:tasks.workunit.client.0.vm02.stdout:1/135: sync 2026-03-10T10:19:17.636 INFO:tasks.workunit.client.1.vm05.stdout:9/126: sync 2026-03-10T10:19:17.637 INFO:tasks.workunit.client.0.vm02.stdout:3/158: chown d1/f12 38 1 2026-03-10T10:19:17.637 INFO:tasks.workunit.client.0.vm02.stdout:3/159: chown d1/d6/f39 15 1 2026-03-10T10:19:17.638 INFO:tasks.workunit.client.0.vm02.stdout:1/136: mkdir d4/d2c 0 2026-03-10T10:19:17.638 INFO:tasks.workunit.client.0.vm02.stdout:1/137: stat d4/c9 0 2026-03-10T10:19:17.641 INFO:tasks.workunit.client.0.vm02.stdout:3/160: dwrite d1/d8/d21/f29 [0,4194304] 0 2026-03-10T10:19:17.644 INFO:tasks.workunit.client.0.vm02.stdout:1/138: mknod d4/da/c2d 0 2026-03-10T10:19:17.648 INFO:tasks.workunit.client.0.vm02.stdout:3/161: dwrite d1/d20/f22 [0,4194304] 0 2026-03-10T10:19:17.649 
INFO:tasks.workunit.client.0.vm02.stdout:1/139: symlink d4/d1b/l2e 0 2026-03-10T10:19:17.650 INFO:tasks.workunit.client.0.vm02.stdout:3/162: readlink d1/l1d 0 2026-03-10T10:19:17.650 INFO:tasks.workunit.client.0.vm02.stdout:3/163: fdatasync d1/f28 0 2026-03-10T10:19:17.653 INFO:tasks.workunit.client.0.vm02.stdout:3/164: readlink d1/l2c 0 2026-03-10T10:19:17.655 INFO:tasks.workunit.client.0.vm02.stdout:3/165: dread d1/f12 [0,4194304] 0 2026-03-10T10:19:17.656 INFO:tasks.workunit.client.0.vm02.stdout:3/166: creat d1/d6/f3a x:0 0 0 2026-03-10T10:19:17.656 INFO:tasks.workunit.client.0.vm02.stdout:3/167: dread - d1/f33 zero size 2026-03-10T10:19:17.657 INFO:tasks.workunit.client.0.vm02.stdout:3/168: creat d1/d8/f3b x:0 0 0 2026-03-10T10:19:17.658 INFO:tasks.workunit.client.0.vm02.stdout:3/169: write d1/d6/f39 [723987,107714] 0 2026-03-10T10:19:17.659 INFO:tasks.workunit.client.0.vm02.stdout:3/170: dread d1/f31 [0,4194304] 0 2026-03-10T10:19:17.666 INFO:tasks.workunit.client.0.vm02.stdout:3/171: dwrite d1/f1c [0,4194304] 0 2026-03-10T10:19:17.670 INFO:tasks.workunit.client.0.vm02.stdout:3/172: creat d1/d8/d21/f3c x:0 0 0 2026-03-10T10:19:17.722 INFO:tasks.workunit.client.0.vm02.stdout:5/253: write d1/f32 [623311,97739] 0 2026-03-10T10:19:17.726 INFO:tasks.workunit.client.0.vm02.stdout:5/254: chown d1/db/d11/d1a/l4b 1512 1 2026-03-10T10:19:17.730 INFO:tasks.workunit.client.0.vm02.stdout:6/139: dwrite d0/d7/f17 [0,4194304] 0 2026-03-10T10:19:17.732 INFO:tasks.workunit.client.0.vm02.stdout:6/140: chown d0/f21 6 1 2026-03-10T10:19:17.745 INFO:tasks.workunit.client.0.vm02.stdout:1/140: unlink d4/da/fc 0 2026-03-10T10:19:17.745 INFO:tasks.workunit.client.0.vm02.stdout:8/238: write d1/d1c/f3f [766623,23602] 0 2026-03-10T10:19:17.746 INFO:tasks.workunit.client.0.vm02.stdout:1/141: write d4/da/d1a/f1c [717452,120094] 0 2026-03-10T10:19:17.747 INFO:tasks.workunit.client.0.vm02.stdout:1/142: chown d4/d1b/l2e 25802528 1 2026-03-10T10:19:17.747 
INFO:tasks.workunit.client.0.vm02.stdout:1/143: truncate d4/f21 181364 0 2026-03-10T10:19:17.748 INFO:tasks.workunit.client.0.vm02.stdout:1/144: write d4/f18 [894006,65546] 0 2026-03-10T10:19:17.758 INFO:tasks.workunit.client.0.vm02.stdout:5/255: rmdir d1/db/d11/d13 39 2026-03-10T10:19:17.762 INFO:tasks.workunit.client.0.vm02.stdout:4/244: truncate d1/f1d 8841186 0 2026-03-10T10:19:17.763 INFO:tasks.workunit.client.0.vm02.stdout:4/245: dread - d1/d10/db/f43 zero size 2026-03-10T10:19:17.764 INFO:tasks.workunit.client.0.vm02.stdout:6/141: readlink d0/d8/l22 0 2026-03-10T10:19:17.767 INFO:tasks.workunit.client.1.vm05.stdout:1/77: truncate d4/f16 1939182 0 2026-03-10T10:19:17.768 INFO:tasks.workunit.client.0.vm02.stdout:8/239: rename d1/d1c/lf to d1/d2/l47 0 2026-03-10T10:19:17.769 INFO:tasks.workunit.client.0.vm02.stdout:8/240: chown d1/d1c/f42 15032 1 2026-03-10T10:19:17.773 INFO:tasks.workunit.client.0.vm02.stdout:8/241: dwrite d1/d1c/f42 [0,4194304] 0 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.1.vm05.stdout:1/78: dread d4/dd/f15 [0,4194304] 0 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.1.vm05.stdout:1/79: dread d4/dd/f15 [0,4194304] 0 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.1.vm05.stdout:1/80: fsync d4/dd/f15 0 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.1.vm05.stdout:1/81: chown d4/df 13 1 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.1.vm05.stdout:5/150: dwrite da/f10 [4194304,4194304] 0 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.1.vm05.stdout:1/82: creat d4/f18 x:0 0 0 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.1.vm05.stdout:4/100: dread d1/d3/f12 [0,4194304] 0 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.1.vm05.stdout:7/157: write d5/f13 [1527131,20576] 0 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.1.vm05.stdout:5/151: creat da/f2e x:0 0 0 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.1.vm05.stdout:7/158: fdatasync d5/ff 0 2026-03-10T10:19:17.823 
INFO:tasks.workunit.client.1.vm05.stdout:5/152: chown da/db/de/l19 0 1 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.0.vm02.stdout:1/145: symlink d4/d1b/l2f 0 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.0.vm02.stdout:1/146: readlink d4/d1b/l2e 0 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.0.vm02.stdout:1/147: chown d4/d1b 9870 1 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.0.vm02.stdout:1/148: write d4/d1b/f24 [1110780,11518] 0 2026-03-10T10:19:17.823 INFO:tasks.workunit.client.0.vm02.stdout:5/256: stat d1/db/d11/d13/d28/f2c 0 2026-03-10T10:19:17.824 INFO:tasks.workunit.client.0.vm02.stdout:1/149: stat d4/da/d1a/d22 0 2026-03-10T10:19:17.824 INFO:tasks.workunit.client.0.vm02.stdout:4/246: creat d1/d2/d1a/f4c x:0 0 0 2026-03-10T10:19:17.824 INFO:tasks.workunit.client.0.vm02.stdout:2/255: rmdir d0 39 2026-03-10T10:19:17.824 INFO:tasks.workunit.client.0.vm02.stdout:4/247: dwrite d1/d10/db/f16 [0,4194304] 0 2026-03-10T10:19:17.824 INFO:tasks.workunit.client.0.vm02.stdout:6/142: creat d0/d8/d9/f2b x:0 0 0 2026-03-10T10:19:17.824 INFO:tasks.workunit.client.0.vm02.stdout:5/257: symlink d1/db/d11/d16/d29/d40/l54 0 2026-03-10T10:19:17.824 INFO:tasks.workunit.client.0.vm02.stdout:0/225: truncate d9/d18/d1a/d22/f3f 243910 0 2026-03-10T10:19:17.824 INFO:tasks.workunit.client.0.vm02.stdout:2/256: write d0/d10/f4b [279741,70519] 0 2026-03-10T10:19:17.824 INFO:tasks.workunit.client.0.vm02.stdout:4/248: creat d1/d2/d1a/f4d x:0 0 0 2026-03-10T10:19:17.824 INFO:tasks.workunit.client.0.vm02.stdout:4/249: write d1/d32/f46 [601133,115297] 0 2026-03-10T10:19:17.824 INFO:tasks.workunit.client.0.vm02.stdout:4/250: dread - d1/d10/db/f43 zero size 2026-03-10T10:19:17.824 INFO:tasks.workunit.client.1.vm05.stdout:7/159: write d5/f22 [3136788,80322] 0 2026-03-10T10:19:17.829 INFO:tasks.workunit.client.1.vm05.stdout:4/101: creat d1/d3/d9/dc/f1f x:0 0 0 2026-03-10T10:19:17.830 INFO:tasks.workunit.client.0.vm02.stdout:4/251: symlink d1/l4e 0 2026-03-10T10:19:17.832 
INFO:tasks.workunit.client.0.vm02.stdout:5/258: rename d1/db/d11/l3f to d1/db/d11/d16/l55 0 2026-03-10T10:19:17.834 INFO:tasks.workunit.client.1.vm05.stdout:4/102: mknod d1/c20 0 2026-03-10T10:19:17.834 INFO:tasks.workunit.client.1.vm05.stdout:3/155: dwrite dd/d15/d1f/f2a [0,4194304] 0 2026-03-10T10:19:17.835 INFO:tasks.workunit.client.1.vm05.stdout:5/153: dwrite da/db/d2d/f1c [0,4194304] 0 2026-03-10T10:19:17.840 INFO:tasks.workunit.client.0.vm02.stdout:4/252: link d1/l4e d1/d32/l4f 0 2026-03-10T10:19:17.847 INFO:tasks.workunit.client.0.vm02.stdout:4/253: write d1/d2/d1a/f4b [29434,109199] 0 2026-03-10T10:19:17.847 INFO:tasks.workunit.client.1.vm05.stdout:4/103: creat d1/d3/d9/dc/f21 x:0 0 0 2026-03-10T10:19:17.848 INFO:tasks.workunit.client.0.vm02.stdout:2/257: rename d0/d1a/l2e to d0/l57 0 2026-03-10T10:19:17.849 INFO:tasks.workunit.client.1.vm05.stdout:3/156: mknod dd/d15/d1f/c37 0 2026-03-10T10:19:17.850 INFO:tasks.workunit.client.0.vm02.stdout:5/259: creat d1/db/f56 x:0 0 0 2026-03-10T10:19:17.851 INFO:tasks.workunit.client.0.vm02.stdout:5/260: write d1/db/d11/d16/f19 [321876,55221] 0 2026-03-10T10:19:17.851 INFO:tasks.workunit.client.1.vm05.stdout:6/121: truncate f3 7368520 0 2026-03-10T10:19:17.855 INFO:tasks.workunit.client.0.vm02.stdout:5/261: dwrite d1/db/d11/d13/d28/d37/d3d/f49 [0,4194304] 0 2026-03-10T10:19:17.858 INFO:tasks.workunit.client.0.vm02.stdout:0/226: read d9/d18/f2a [195722,54840] 0 2026-03-10T10:19:17.865 INFO:tasks.workunit.client.0.vm02.stdout:1/150: dread f3 [0,4194304] 0 2026-03-10T10:19:17.910 INFO:tasks.workunit.client.0.vm02.stdout:1/151: creat d4/da/d27/f30 x:0 0 0 2026-03-10T10:19:17.910 INFO:tasks.workunit.client.0.vm02.stdout:1/152: dwrite d4/da/f12 [0,4194304] 0 2026-03-10T10:19:17.910 INFO:tasks.workunit.client.0.vm02.stdout:2/258: unlink d0/l29 0 2026-03-10T10:19:17.910 INFO:tasks.workunit.client.0.vm02.stdout:0/227: mkdir d9/d18/d1a/d43 0 2026-03-10T10:19:17.910 INFO:tasks.workunit.client.0.vm02.stdout:2/259: dwrite d0/f36 
[0,4194304] 0 2026-03-10T10:19:17.910 INFO:tasks.workunit.client.0.vm02.stdout:1/153: mknod d4/da/c31 0 2026-03-10T10:19:17.910 INFO:tasks.workunit.client.0.vm02.stdout:0/228: symlink d9/d18/l44 0 2026-03-10T10:19:17.910 INFO:tasks.workunit.client.0.vm02.stdout:2/260: mknod d0/c58 0 2026-03-10T10:19:17.910 INFO:tasks.workunit.client.0.vm02.stdout:1/154: write d4/f18 [394703,59422] 0 2026-03-10T10:19:17.910 INFO:tasks.workunit.client.0.vm02.stdout:1/155: chown d4/c1f 18 1 2026-03-10T10:19:17.910 INFO:tasks.workunit.client.0.vm02.stdout:2/261: dread d0/d10/f46 [0,4194304] 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.0.vm02.stdout:2/262: dwrite d0/d10/f4b [0,4194304] 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.0.vm02.stdout:2/263: fdatasync d0/d1a/d24/f34 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.0.vm02.stdout:1/156: creat d4/da/d1a/d22/f32 x:0 0 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.0.vm02.stdout:1/157: read - d4/da/d1a/d22/f32 zero size 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:5/154: rename da/db/de/l18 to da/db/d28/l2f 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:3/157: creat dd/d15/d24/d2c/f38 x:0 0 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:5/155: creat da/db/d2d/f30 x:0 0 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:5/156: fdatasync da/db/d2d/f2a 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:5/157: dread - da/db/f1d zero size 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:6/122: getdents dd/d27/d2a 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:6/123: write fb [3824883,122281] 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:6/124: chown dd/fe 21504549 1 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:5/158: creat da/db/d2d/f31 x:0 0 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:3/158: mkdir dd/d39 0 
2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:5/159: mkdir da/db/d28/d32 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:4/104: rename d1/cf to d1/d3/c22 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:6/125: mknod dd/df/d12/d24/d28/c2d 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:4/105: write d1/d3/d9/fd [281299,11432] 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:6/126: readlink l7 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:3/159: dwrite dd/d15/d1f/f2b [0,4194304] 0 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:3/160: chown dd/d15/c25 13 1 2026-03-10T10:19:17.911 INFO:tasks.workunit.client.1.vm05.stdout:3/161: dread f9 [0,4194304] 0 2026-03-10T10:19:17.914 INFO:tasks.workunit.client.0.vm02.stdout:2/264: mknod d0/d10/c59 0 2026-03-10T10:19:17.921 INFO:tasks.workunit.client.0.vm02.stdout:1/158: rename d4/c20 to d4/c33 0 2026-03-10T10:19:17.921 INFO:tasks.workunit.client.0.vm02.stdout:1/159: truncate d4/da/f25 341274 0 2026-03-10T10:19:17.924 INFO:tasks.workunit.client.0.vm02.stdout:2/265: link d0/d10/c59 d0/d10/c5a 0 2026-03-10T10:19:17.931 INFO:tasks.workunit.client.0.vm02.stdout:2/266: unlink d0/d1a/f35 0 2026-03-10T10:19:17.932 INFO:tasks.workunit.client.1.vm05.stdout:3/162: mkdir dd/d15/d24/d2c/d3a 0 2026-03-10T10:19:17.932 INFO:tasks.workunit.client.1.vm05.stdout:6/127: symlink dd/l2e 0 2026-03-10T10:19:17.932 INFO:tasks.workunit.client.1.vm05.stdout:6/128: write dd/f14 [3111681,103644] 0 2026-03-10T10:19:17.932 INFO:tasks.workunit.client.0.vm02.stdout:2/267: mknod d0/d1a/d24/c5b 0 2026-03-10T10:19:17.933 INFO:tasks.workunit.client.0.vm02.stdout:2/268: truncate d0/f36 4381811 0 2026-03-10T10:19:17.939 INFO:tasks.workunit.client.0.vm02.stdout:2/269: dwrite d0/d1a/f4c [0,4194304] 0 2026-03-10T10:19:17.945 INFO:tasks.workunit.client.1.vm05.stdout:6/129: creat dd/d27/f2f x:0 0 0 2026-03-10T10:19:17.963 
INFO:tasks.workunit.client.0.vm02.stdout:2/270: mknod d0/d1a/d49/c5c 0 2026-03-10T10:19:17.963 INFO:tasks.workunit.client.0.vm02.stdout:2/271: stat d0/d10/l55 0 2026-03-10T10:19:17.963 INFO:tasks.workunit.client.0.vm02.stdout:2/272: dread d0/d10/f4b [0,4194304] 0 2026-03-10T10:19:17.963 INFO:tasks.workunit.client.0.vm02.stdout:2/273: stat d0/d1a/f33 0 2026-03-10T10:19:17.963 INFO:tasks.workunit.client.0.vm02.stdout:2/274: readlink d0/la 0 2026-03-10T10:19:17.963 INFO:tasks.workunit.client.0.vm02.stdout:2/275: mknod d0/d1a/d49/c5d 0 2026-03-10T10:19:17.963 INFO:tasks.workunit.client.0.vm02.stdout:2/276: dwrite d0/d1a/d49/f4f [0,4194304] 0 2026-03-10T10:19:17.963 INFO:tasks.workunit.client.0.vm02.stdout:2/277: dread d0/d1a/f52 [0,4194304] 0 2026-03-10T10:19:17.965 INFO:tasks.workunit.client.0.vm02.stdout:2/278: dwrite d0/f44 [0,4194304] 0 2026-03-10T10:19:17.968 INFO:tasks.workunit.client.0.vm02.stdout:2/279: chown d0/d10/l39 10067 1 2026-03-10T10:19:18.024 INFO:tasks.workunit.client.0.vm02.stdout:1/160: sync 2026-03-10T10:19:18.026 INFO:tasks.workunit.client.0.vm02.stdout:1/161: dread d4/ff [4194304,4194304] 0 2026-03-10T10:19:18.027 INFO:tasks.workunit.client.0.vm02.stdout:1/162: fsync d4/da/d1a/f19 0 2026-03-10T10:19:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:17 vm02.local ceph-mon[50200]: pgmap v148: 65 pgs: 65 active+clean; 699 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 4.3 MiB/s rd, 83 MiB/s wr, 304 op/s 2026-03-10T10:19:18.029 INFO:tasks.workunit.client.0.vm02.stdout:1/163: creat d4/d1b/f34 x:0 0 0 2026-03-10T10:19:18.033 INFO:tasks.workunit.client.0.vm02.stdout:1/164: link d4/f5 d4/da/d27/f35 0 2026-03-10T10:19:18.033 INFO:tasks.workunit.client.0.vm02.stdout:1/165: write d4/f26 [190999,33360] 0 2026-03-10T10:19:18.034 INFO:tasks.workunit.client.0.vm02.stdout:1/166: getdents d4/d2c 0 2026-03-10T10:19:18.037 INFO:tasks.workunit.client.0.vm02.stdout:1/167: read d4/f5 [2291143,126679] 0 2026-03-10T10:19:18.037 
INFO:tasks.workunit.client.0.vm02.stdout:1/168: dread - d4/da/d27/f30 zero size 2026-03-10T10:19:18.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:17 vm05.local ceph-mon[59051]: pgmap v148: 65 pgs: 65 active+clean; 699 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 4.3 MiB/s rd, 83 MiB/s wr, 304 op/s 2026-03-10T10:19:18.039 INFO:tasks.workunit.client.0.vm02.stdout:1/169: mknod d4/da/d1a/c36 0 2026-03-10T10:19:18.041 INFO:tasks.workunit.client.0.vm02.stdout:1/170: unlink d4/da/d27/f2b 0 2026-03-10T10:19:18.042 INFO:tasks.workunit.client.0.vm02.stdout:1/171: write d4/da/f12 [281205,122216] 0 2026-03-10T10:19:18.045 INFO:tasks.workunit.client.0.vm02.stdout:1/172: mknod d4/d2c/c37 0 2026-03-10T10:19:18.049 INFO:tasks.workunit.client.0.vm02.stdout:1/173: dwrite d4/da/f13 [0,4194304] 0 2026-03-10T10:19:18.055 INFO:tasks.workunit.client.0.vm02.stdout:1/174: mkdir d4/da/d27/d38 0 2026-03-10T10:19:18.056 INFO:tasks.workunit.client.0.vm02.stdout:1/175: rmdir d4/da/d1a 39 2026-03-10T10:19:18.057 INFO:tasks.workunit.client.0.vm02.stdout:1/176: truncate d4/f18 1464814 0 2026-03-10T10:19:18.058 INFO:tasks.workunit.client.0.vm02.stdout:1/177: stat d4/da/d1a/f19 0 2026-03-10T10:19:18.059 INFO:tasks.workunit.client.0.vm02.stdout:1/178: symlink d4/d1b/l39 0 2026-03-10T10:19:18.059 INFO:tasks.workunit.client.0.vm02.stdout:1/179: chown d4/f26 1767 1 2026-03-10T10:19:18.061 INFO:tasks.workunit.client.0.vm02.stdout:1/180: creat d4/f3a x:0 0 0 2026-03-10T10:19:18.063 INFO:tasks.workunit.client.0.vm02.stdout:1/181: dread d4/ff [0,4194304] 0 2026-03-10T10:19:18.064 INFO:tasks.workunit.client.0.vm02.stdout:1/182: creat d4/da/d27/d38/f3b x:0 0 0 2026-03-10T10:19:18.065 INFO:tasks.workunit.client.0.vm02.stdout:1/183: mkdir d4/da/d27/d38/d3c 0 2026-03-10T10:19:18.066 INFO:tasks.workunit.client.0.vm02.stdout:1/184: truncate d4/da/d1a/d22/f23 64540 0 2026-03-10T10:19:18.069 INFO:tasks.workunit.client.0.vm02.stdout:1/185: dwrite d4/da/d1a/f1c [0,4194304] 0 2026-03-10T10:19:18.070 
INFO:tasks.workunit.client.0.vm02.stdout:1/186: read - d4/da/d27/f30 zero size 2026-03-10T10:19:18.071 INFO:tasks.workunit.client.0.vm02.stdout:1/187: chown d4/fe 0 1 2026-03-10T10:19:18.075 INFO:tasks.workunit.client.0.vm02.stdout:1/188: link f3 d4/da/d1a/f3d 0 2026-03-10T10:19:18.119 INFO:tasks.workunit.client.0.vm02.stdout:1/189: rename d4/da/lb to d4/da/d27/l3e 0 2026-03-10T10:19:18.119 INFO:tasks.workunit.client.0.vm02.stdout:1/190: write d4/f8 [4113912,87030] 0 2026-03-10T10:19:18.119 INFO:tasks.workunit.client.0.vm02.stdout:1/191: read d4/da/d1a/f19 [115627,70979] 0 2026-03-10T10:19:18.119 INFO:tasks.workunit.client.0.vm02.stdout:1/192: rename f3 to d4/da/d27/d38/f3f 0 2026-03-10T10:19:18.119 INFO:tasks.workunit.client.0.vm02.stdout:1/193: stat d4/da/f12 0 2026-03-10T10:19:18.119 INFO:tasks.workunit.client.0.vm02.stdout:1/194: creat d4/da/d1a/f40 x:0 0 0 2026-03-10T10:19:18.119 INFO:tasks.workunit.client.0.vm02.stdout:1/195: dread d4/f5 [0,4194304] 0 2026-03-10T10:19:18.119 INFO:tasks.workunit.client.0.vm02.stdout:1/196: symlink d4/l41 0 2026-03-10T10:19:18.260 INFO:tasks.workunit.client.1.vm05.stdout:0/121: rmdir d1 39 2026-03-10T10:19:18.262 INFO:tasks.workunit.client.1.vm05.stdout:4/106: sync 2026-03-10T10:19:18.272 INFO:tasks.workunit.client.0.vm02.stdout:7/200: rmdir d1 39 2026-03-10T10:19:18.273 INFO:tasks.workunit.client.1.vm05.stdout:0/122: creat d1/d7/db/d12/f2d x:0 0 0 2026-03-10T10:19:18.273 INFO:tasks.workunit.client.1.vm05.stdout:0/123: dread - d1/d2/d9/f1d zero size 2026-03-10T10:19:18.273 INFO:tasks.workunit.client.1.vm05.stdout:4/107: dread d1/d3/f10 [0,4194304] 0 2026-03-10T10:19:18.273 INFO:tasks.workunit.client.1.vm05.stdout:4/108: symlink d1/d3/d9/dc/l23 0 2026-03-10T10:19:18.273 INFO:tasks.workunit.client.1.vm05.stdout:9/127: dwrite d0/d1/d13/f22 [0,4194304] 0 2026-03-10T10:19:18.273 INFO:tasks.workunit.client.1.vm05.stdout:4/109: chown d1/d3/d9/dc/f1f 37309886 1 2026-03-10T10:19:18.277 INFO:tasks.workunit.client.0.vm02.stdout:7/201: 
dwrite d1/f15 [0,4194304] 0 2026-03-10T10:19:18.279 INFO:tasks.workunit.client.1.vm05.stdout:0/124: creat d1/d7/db/d12/d20/f2e x:0 0 0 2026-03-10T10:19:18.279 INFO:tasks.workunit.client.1.vm05.stdout:4/110: write d1/d3/d9/fd [1410029,15260] 0 2026-03-10T10:19:18.279 INFO:tasks.workunit.client.1.vm05.stdout:4/111: fdatasync d1/d3/d9/dc/f21 0 2026-03-10T10:19:18.279 INFO:tasks.workunit.client.1.vm05.stdout:4/112: chown d1/f17 3882 1 2026-03-10T10:19:18.285 INFO:tasks.workunit.client.1.vm05.stdout:4/113: dread - d1/d3/d9/dc/f1f zero size 2026-03-10T10:19:18.293 INFO:tasks.workunit.client.0.vm02.stdout:3/173: rmdir d1/d6 39 2026-03-10T10:19:18.300 INFO:tasks.workunit.client.0.vm02.stdout:3/174: write d1/f25 [668092,114090] 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.0.vm02.stdout:3/175: dread d1/f12 [0,4194304] 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.0.vm02.stdout:9/139: write da/d10/f20 [810095,44493] 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.0.vm02.stdout:3/176: rename d1/d8/f3b to d1/d8/f3d 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.0.vm02.stdout:9/140: rename da/f1e to da/d10/f2b 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.0.vm02.stdout:9/141: write da/f28 [490240,55343] 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.0.vm02.stdout:3/177: mknod d1/d8/d21/c3e 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.0.vm02.stdout:3/178: creat d1/d8/f3f x:0 0 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.0.vm02.stdout:9/142: dread da/f15 [0,4194304] 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.0.vm02.stdout:9/143: mkdir da/d10/d2c 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.0.vm02.stdout:3/179: dread d1/f5 [0,4194304] 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:0/125: rmdir d1/d2/d9 39 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:8/96: write f6 [105970,90941] 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:4/114: mknod 
d1/d3/d9/dc/c24 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:4/115: chown d1/fb 5 1 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:0/126: mkdir d1/d7/db/d13/d2f 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:0/127: fdatasync d1/d7/db/d12/f2d 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:0/128: stat d1/d7/db/d12/f2d 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:0/129: dread - d1/d7/db/d12/d20/f2e zero size 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:8/97: creat d7/d14/d15/f1d x:0 0 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:4/116: creat d1/d3/d9/dc/f25 x:0 0 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:9/128: creat d0/f28 x:0 0 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:0/130: fsync d1/d2/d9/f1d 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:0/131: write d1/d2/d9/fd [884538,44805] 0 2026-03-10T10:19:18.342 INFO:tasks.workunit.client.1.vm05.stdout:0/132: chown d1/d2/d9/l1c 1448474 1 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:9/129: dread d0/d1/f1f [0,4194304] 0 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:0/133: mknod d1/d7/db/d13/d15/c30 0 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:8/98: link d7/f12 d7/f1e 0 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:8/99: fdatasync d7/fb 0 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:9/130: rmdir d0/d1/d13 39 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:0/134: dread d1/d7/f4 [0,4194304] 0 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:8/100: truncate f6 846416 0 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:8/101: readlink d7/l1b 0 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:4/117: creat d1/d3/f26 x:0 0 0 2026-03-10T10:19:18.343 
INFO:tasks.workunit.client.1.vm05.stdout:8/102: rmdir d7/d14 39 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:4/118: mknod d1/d3/c27 0 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:4/119: dread - d1/d3/f26 zero size 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:9/131: link d0/d1/l10 d0/d1/d13/de/d21/l29 0 2026-03-10T10:19:18.343 INFO:tasks.workunit.client.1.vm05.stdout:9/132: stat d0/d1/f1f 0 2026-03-10T10:19:18.344 INFO:tasks.workunit.client.1.vm05.stdout:4/120: rename d1/d3/d9/dc/l23 to d1/d3/d9/dc/l28 0 2026-03-10T10:19:18.345 INFO:tasks.workunit.client.1.vm05.stdout:9/133: write d0/d1/d16/f18 [148576,80189] 0 2026-03-10T10:19:18.345 INFO:tasks.workunit.client.1.vm05.stdout:4/121: symlink d1/d3/d9/l29 0 2026-03-10T10:19:18.346 INFO:tasks.workunit.client.1.vm05.stdout:9/134: dread - d0/f28 zero size 2026-03-10T10:19:18.350 INFO:tasks.workunit.client.1.vm05.stdout:4/122: dwrite d1/d3/d9/dc/f1f [0,4194304] 0 2026-03-10T10:19:18.361 INFO:tasks.workunit.client.1.vm05.stdout:4/123: creat d1/d3/d9/dc/f2a x:0 0 0 2026-03-10T10:19:18.361 INFO:tasks.workunit.client.1.vm05.stdout:4/124: dwrite d1/d3/d9/dc/f2a [0,4194304] 0 2026-03-10T10:19:18.368 INFO:tasks.workunit.client.1.vm05.stdout:4/125: symlink d1/d3/d9/l2b 0 2026-03-10T10:19:18.444 INFO:tasks.workunit.client.1.vm05.stdout:7/160: fsync d5/f22 0 2026-03-10T10:19:18.451 INFO:tasks.workunit.client.0.vm02.stdout:9/144: dread da/d10/f26 [4194304,4194304] 0 2026-03-10T10:19:18.453 INFO:tasks.workunit.client.1.vm05.stdout:7/161: dwrite d5/dd/f1f [0,4194304] 0 2026-03-10T10:19:18.453 INFO:tasks.workunit.client.0.vm02.stdout:9/145: rename da/l16 to da/l2d 0 2026-03-10T10:19:18.454 INFO:tasks.workunit.client.0.vm02.stdout:9/146: chown da/d10/c1a 1 1 2026-03-10T10:19:18.455 INFO:tasks.workunit.client.0.vm02.stdout:9/147: truncate da/d10/f26 1820736 0 2026-03-10T10:19:18.464 INFO:tasks.workunit.client.0.vm02.stdout:9/148: symlink da/d10/d2c/l2e 0 2026-03-10T10:19:18.468 
INFO:tasks.workunit.client.0.vm02.stdout:9/149: dwrite da/f28 [0,4194304] 0 2026-03-10T10:19:18.470 INFO:tasks.workunit.client.0.vm02.stdout:9/150: chown da/f13 387400 1 2026-03-10T10:19:18.471 INFO:tasks.workunit.client.0.vm02.stdout:9/151: truncate da/d10/f29 854448 0 2026-03-10T10:19:18.503 INFO:tasks.workunit.client.0.vm02.stdout:6/143: rmdir d0/d8/d9 39 2026-03-10T10:19:18.504 INFO:tasks.workunit.client.0.vm02.stdout:6/144: mknod d0/d8/d9/c2c 0 2026-03-10T10:19:18.505 INFO:tasks.workunit.client.0.vm02.stdout:6/145: symlink d0/d7/l2d 0 2026-03-10T10:19:18.506 INFO:tasks.workunit.client.0.vm02.stdout:6/146: rename d0/l5 to d0/d7/l2e 0 2026-03-10T10:19:18.512 INFO:tasks.workunit.client.0.vm02.stdout:6/147: dwrite d0/d8/f2a [0,4194304] 0 2026-03-10T10:19:18.518 INFO:tasks.workunit.client.0.vm02.stdout:6/148: fsync d0/d8/d9/f13 0 2026-03-10T10:19:18.521 INFO:tasks.workunit.client.0.vm02.stdout:6/149: dwrite d0/f1c [0,4194304] 0 2026-03-10T10:19:18.524 INFO:tasks.workunit.client.0.vm02.stdout:6/150: mkdir d0/d8/d29/d2f 0 2026-03-10T10:19:18.524 INFO:tasks.workunit.client.0.vm02.stdout:6/151: chown d0/d8/f2a 233948 1 2026-03-10T10:19:18.526 INFO:tasks.workunit.client.0.vm02.stdout:6/152: creat d0/d8/d9/f30 x:0 0 0 2026-03-10T10:19:18.530 INFO:tasks.workunit.client.0.vm02.stdout:6/153: dwrite d0/d8/f27 [0,4194304] 0 2026-03-10T10:19:18.531 INFO:tasks.workunit.client.0.vm02.stdout:6/154: write d0/d8/d9/f30 [842868,120014] 0 2026-03-10T10:19:18.533 INFO:tasks.workunit.client.0.vm02.stdout:6/155: mkdir d0/d8/d9/d31 0 2026-03-10T10:19:18.533 INFO:tasks.workunit.client.0.vm02.stdout:6/156: chown d0/d8/l22 3353553 1 2026-03-10T10:19:18.831 INFO:tasks.workunit.client.0.vm02.stdout:8/242: dread d1/d1c/d24/d35/f44 [0,4194304] 0 2026-03-10T10:19:18.831 INFO:tasks.workunit.client.0.vm02.stdout:8/243: chown d1/d1c/c1f 21854 1 2026-03-10T10:19:18.832 INFO:tasks.workunit.client.0.vm02.stdout:8/244: truncate d1/f16 876970 0 2026-03-10T10:19:18.833 
INFO:tasks.workunit.client.0.vm02.stdout:8/245: fsync d1/d1c/d24/d35/f44 0 2026-03-10T10:19:18.836 INFO:tasks.workunit.client.0.vm02.stdout:8/246: dwrite d1/f40 [0,4194304] 0 2026-03-10T10:19:18.838 INFO:tasks.workunit.client.0.vm02.stdout:8/247: read d1/d1c/d24/f31 [3541397,95524] 0 2026-03-10T10:19:18.838 INFO:tasks.workunit.client.0.vm02.stdout:8/248: chown d1 1 1 2026-03-10T10:19:18.839 INFO:tasks.workunit.client.0.vm02.stdout:8/249: mknod d1/d2/c48 0 2026-03-10T10:19:18.840 INFO:tasks.workunit.client.0.vm02.stdout:8/250: mknod d1/c49 0 2026-03-10T10:19:18.841 INFO:tasks.workunit.client.0.vm02.stdout:8/251: symlink d1/d1c/l4a 0 2026-03-10T10:19:18.842 INFO:tasks.workunit.client.0.vm02.stdout:8/252: link d1/d1c/d43/f45 d1/d1c/d43/f4b 0 2026-03-10T10:19:18.846 INFO:tasks.workunit.client.0.vm02.stdout:8/253: chown d1/f40 274681550 1 2026-03-10T10:19:18.846 INFO:tasks.workunit.client.0.vm02.stdout:8/254: chown d1/d1c/f2a 5 1 2026-03-10T10:19:18.846 INFO:tasks.workunit.client.0.vm02.stdout:8/255: creat d1/d1c/d23/d25/f4c x:0 0 0 2026-03-10T10:19:18.854 INFO:tasks.workunit.client.1.vm05.stdout:8/103: fdatasync f6 0 2026-03-10T10:19:18.856 INFO:tasks.workunit.client.1.vm05.stdout:8/104: write f6 [1386493,76186] 0 2026-03-10T10:19:18.858 INFO:tasks.workunit.client.1.vm05.stdout:8/105: dread f2 [0,4194304] 0 2026-03-10T10:19:18.860 INFO:tasks.workunit.client.1.vm05.stdout:8/106: unlink d7/cf 0 2026-03-10T10:19:18.862 INFO:tasks.workunit.client.1.vm05.stdout:8/107: unlink d7/d14/d15/f19 0 2026-03-10T10:19:18.863 INFO:tasks.workunit.client.1.vm05.stdout:8/108: fdatasync d7/fd 0 2026-03-10T10:19:18.863 INFO:tasks.workunit.client.1.vm05.stdout:8/109: chown d7/f9 1 1 2026-03-10T10:19:18.864 INFO:tasks.workunit.client.1.vm05.stdout:8/110: getdents d7/d14 0 2026-03-10T10:19:18.870 INFO:tasks.workunit.client.1.vm05.stdout:8/111: dwrite d7/f1c [0,4194304] 0 2026-03-10T10:19:18.873 INFO:tasks.workunit.client.1.vm05.stdout:8/112: fdatasync d7/fb 0 2026-03-10T10:19:18.875 
INFO:tasks.workunit.client.1.vm05.stdout:8/113: rename d7/d14/d15/f1d to d7/d14/d15/f1f 0 2026-03-10T10:19:18.877 INFO:tasks.workunit.client.1.vm05.stdout:8/114: dread d7/f1c [0,4194304] 0 2026-03-10T10:19:18.878 INFO:tasks.workunit.client.1.vm05.stdout:8/115: write d7/f9 [3832952,60876] 0 2026-03-10T10:19:18.910 INFO:tasks.workunit.client.0.vm02.stdout:8/256: sync 2026-03-10T10:19:18.911 INFO:tasks.workunit.client.0.vm02.stdout:8/257: rename d1/d2/l38 to d1/l4d 0 2026-03-10T10:19:18.926 INFO:tasks.workunit.client.0.vm02.stdout:5/262: fdatasync d1/db/d11/d13/d28/d37/d3d/f49 0 2026-03-10T10:19:18.930 INFO:tasks.workunit.client.0.vm02.stdout:5/263: dwrite d1/db/d11/d13/d28/f2e [0,4194304] 0 2026-03-10T10:19:18.935 INFO:tasks.workunit.client.0.vm02.stdout:5/264: write d1/db/d11/d1a/f27 [1317959,122384] 0 2026-03-10T10:19:18.936 INFO:tasks.workunit.client.0.vm02.stdout:5/265: write d1/db/d11/d1a/f27 [2017569,120854] 0 2026-03-10T10:19:18.937 INFO:tasks.workunit.client.0.vm02.stdout:5/266: chown d1/f10 2371248 1 2026-03-10T10:19:18.944 INFO:tasks.workunit.client.0.vm02.stdout:5/267: creat d1/db/d11/d16/d29/d40/d4f/f57 x:0 0 0 2026-03-10T10:19:18.948 INFO:tasks.workunit.client.0.vm02.stdout:5/268: symlink d1/d4c/l58 0 2026-03-10T10:19:18.958 INFO:tasks.workunit.client.0.vm02.stdout:5/269: creat d1/db/d11/d16/d29/d40/f59 x:0 0 0 2026-03-10T10:19:18.959 INFO:tasks.workunit.client.0.vm02.stdout:5/270: readlink d1/l8 0 2026-03-10T10:19:18.959 INFO:tasks.workunit.client.0.vm02.stdout:5/271: symlink d1/db/d11/d13/d28/d37/l5a 0 2026-03-10T10:19:18.963 INFO:tasks.workunit.client.0.vm02.stdout:5/272: sync 2026-03-10T10:19:18.983 INFO:tasks.workunit.client.0.vm02.stdout:5/273: dread d1/f3 [12582912,4194304] 0 2026-03-10T10:19:18.985 INFO:tasks.workunit.client.0.vm02.stdout:5/274: creat d1/db/d11/d16/d48/f5b x:0 0 0 2026-03-10T10:19:18.986 INFO:tasks.workunit.client.0.vm02.stdout:5/275: write d1/db/d11/d13/f1c [1184645,18727] 0 2026-03-10T10:19:18.987 
INFO:tasks.workunit.client.0.vm02.stdout:5/276: dread - d1/db/d11/d16/d29/d40/f53 zero size 2026-03-10T10:19:19.022 INFO:tasks.workunit.client.0.vm02.stdout:5/277: readlink d1/db/d11/d16/l55 0 2026-03-10T10:19:19.025 INFO:tasks.workunit.client.0.vm02.stdout:5/278: getdents d1/db/d11/d1a 0 2026-03-10T10:19:19.027 INFO:tasks.workunit.client.0.vm02.stdout:5/279: creat d1/db/d11/d16/d29/d40/d4f/f5c x:0 0 0 2026-03-10T10:19:19.031 INFO:tasks.workunit.client.0.vm02.stdout:5/280: rename d1/c9 to d1/db/d11/d16/c5d 0 2026-03-10T10:19:19.031 INFO:tasks.workunit.client.0.vm02.stdout:5/281: readlink d1/l51 0 2026-03-10T10:19:19.038 INFO:tasks.workunit.client.0.vm02.stdout:5/282: getdents d1/db 0 2026-03-10T10:19:19.039 INFO:tasks.workunit.client.0.vm02.stdout:1/197: dread d4/f26 [0,4194304] 0 2026-03-10T10:19:19.040 INFO:tasks.workunit.client.0.vm02.stdout:1/198: stat d4/da/d1a/c16 0 2026-03-10T10:19:19.040 INFO:tasks.workunit.client.0.vm02.stdout:1/199: stat d4/da/d27/d38/d3c 0 2026-03-10T10:19:19.049 INFO:tasks.workunit.client.0.vm02.stdout:4/254: truncate d1/d10/db/f16 7075880 0 2026-03-10T10:19:19.050 INFO:tasks.workunit.client.0.vm02.stdout:4/255: mknod d1/d2/c50 0 2026-03-10T10:19:19.051 INFO:tasks.workunit.client.0.vm02.stdout:4/256: chown d1/d32/c47 315644 1 2026-03-10T10:19:19.051 INFO:tasks.workunit.client.0.vm02.stdout:4/257: chown d1/d2/d37 831 1 2026-03-10T10:19:19.054 INFO:tasks.workunit.client.0.vm02.stdout:5/283: rename d1/db/d11/d1a/l4b to d1/l5e 0 2026-03-10T10:19:19.057 INFO:tasks.workunit.client.0.vm02.stdout:4/258: dwrite d1/d32/f46 [0,4194304] 0 2026-03-10T10:19:19.057 INFO:tasks.workunit.client.0.vm02.stdout:4/259: fdatasync d1/d2/d37/f2e 0 2026-03-10T10:19:19.058 INFO:tasks.workunit.client.0.vm02.stdout:4/260: truncate d1/d2/d37/f48 707144 0 2026-03-10T10:19:19.059 INFO:tasks.workunit.client.1.vm05.stdout:5/160: rmdir da/db/d2d 39 2026-03-10T10:19:19.063 INFO:tasks.workunit.client.0.vm02.stdout:4/261: dwrite d1/d2/d1a/f4c [0,4194304] 0 
2026-03-10T10:19:19.067 INFO:tasks.workunit.client.1.vm05.stdout:5/161: dwrite da/f10 [8388608,4194304] 0 2026-03-10T10:19:19.069 INFO:tasks.workunit.client.1.vm05.stdout:5/162: write da/db/de/f2c [346566,30385] 0 2026-03-10T10:19:19.073 INFO:tasks.workunit.client.1.vm05.stdout:5/163: dread da/f10 [0,4194304] 0 2026-03-10T10:19:19.074 INFO:tasks.workunit.client.1.vm05.stdout:5/164: stat da/db/de/f2c 0 2026-03-10T10:19:19.084 INFO:tasks.workunit.client.0.vm02.stdout:4/262: readlink d1/d2/d1a/d49/l4a 0 2026-03-10T10:19:19.086 INFO:tasks.workunit.client.1.vm05.stdout:5/165: truncate da/db/d2d/f30 526498 0 2026-03-10T10:19:19.089 INFO:tasks.workunit.client.0.vm02.stdout:4/263: mknod d1/d10/c51 0 2026-03-10T10:19:19.090 INFO:tasks.workunit.client.0.vm02.stdout:4/264: chown d1/d10/db/f15 197 1 2026-03-10T10:19:19.091 INFO:tasks.workunit.client.1.vm05.stdout:5/166: dwrite da/db/fd [0,4194304] 0 2026-03-10T10:19:19.093 INFO:tasks.workunit.client.0.vm02.stdout:4/265: mkdir d1/d52 0 2026-03-10T10:19:19.093 INFO:tasks.workunit.client.1.vm05.stdout:5/167: readlink da/db/de/l2b 0 2026-03-10T10:19:19.094 INFO:tasks.workunit.client.1.vm05.stdout:5/168: write da/db/d2d/f30 [610774,19856] 0 2026-03-10T10:19:19.095 INFO:tasks.workunit.client.1.vm05.stdout:5/169: write da/db/d2d/f30 [821733,22410] 0 2026-03-10T10:19:19.097 INFO:tasks.workunit.client.0.vm02.stdout:4/266: mkdir d1/d52/d53 0 2026-03-10T10:19:19.099 INFO:tasks.workunit.client.1.vm05.stdout:5/170: creat da/db/de/f33 x:0 0 0 2026-03-10T10:19:19.100 INFO:tasks.workunit.client.1.vm05.stdout:5/171: truncate da/db/f1d 595665 0 2026-03-10T10:19:19.101 INFO:tasks.workunit.client.0.vm02.stdout:4/267: mknod d1/d52/d53/c54 0 2026-03-10T10:19:19.101 INFO:tasks.workunit.client.1.vm05.stdout:5/172: write da/db/fd [981817,3115] 0 2026-03-10T10:19:19.103 INFO:tasks.workunit.client.1.vm05.stdout:5/173: chown c6 62526668 1 2026-03-10T10:19:19.104 INFO:tasks.workunit.client.1.vm05.stdout:5/174: truncate da/f20 1029616 0 
2026-03-10T10:19:19.104 INFO:tasks.workunit.client.0.vm02.stdout:5/284: dread d1/db/d11/d13/d28/d37/f3c [0,4194304] 0 2026-03-10T10:19:19.104 INFO:tasks.workunit.client.0.vm02.stdout:2/280: getdents d0/d1a/d24 0 2026-03-10T10:19:19.105 INFO:tasks.workunit.client.0.vm02.stdout:2/281: write d0/d1a/f4c [491066,115346] 0 2026-03-10T10:19:19.107 INFO:tasks.workunit.client.0.vm02.stdout:4/268: dwrite d1/d10/f45 [0,4194304] 0 2026-03-10T10:19:19.119 INFO:tasks.workunit.client.0.vm02.stdout:5/285: mkdir d1/db/d11/d16/d29/d40/d4f/d5f 0 2026-03-10T10:19:19.121 INFO:tasks.workunit.client.0.vm02.stdout:4/269: mkdir d1/d2/d55 0 2026-03-10T10:19:19.122 INFO:tasks.workunit.client.0.vm02.stdout:5/286: write d1/db/fd [1489246,93815] 0 2026-03-10T10:19:19.123 INFO:tasks.workunit.client.0.vm02.stdout:4/270: write d1/d10/db/f20 [588587,6860] 0 2026-03-10T10:19:19.124 INFO:tasks.workunit.client.0.vm02.stdout:5/287: truncate d1/db/d11/d16/d29/d40/f59 36057 0 2026-03-10T10:19:19.128 INFO:tasks.workunit.client.0.vm02.stdout:5/288: creat d1/db/d11/d16/d29/d40/d4f/f60 x:0 0 0 2026-03-10T10:19:19.135 INFO:tasks.workunit.client.1.vm05.stdout:7/162: dread d5/fe [0,4194304] 0 2026-03-10T10:19:19.135 INFO:tasks.workunit.client.0.vm02.stdout:5/289: readlink d1/db/d11/l50 0 2026-03-10T10:19:19.135 INFO:tasks.workunit.client.0.vm02.stdout:4/271: mknod d1/d2/d55/c56 0 2026-03-10T10:19:19.135 INFO:tasks.workunit.client.0.vm02.stdout:4/272: chown d1/d2/d44 190952978 1 2026-03-10T10:19:19.135 INFO:tasks.workunit.client.0.vm02.stdout:1/200: rename d4/da/d27/l3e to d4/da/d27/d38/l42 0 2026-03-10T10:19:19.136 INFO:tasks.workunit.client.0.vm02.stdout:0/229: rename d9/f17 to d9/d18/d1a/d43/f45 0 2026-03-10T10:19:19.143 INFO:tasks.workunit.client.1.vm05.stdout:7/163: link l4 d5/d1d/d29/l2a 0 2026-03-10T10:19:19.147 INFO:tasks.workunit.client.0.vm02.stdout:0/230: truncate d9/d18/d1a/f1f 1490671 0 2026-03-10T10:19:19.153 INFO:tasks.workunit.client.0.vm02.stdout:0/231: mkdir d9/d18/d1a/d46 0 
2026-03-10T10:19:19.155 INFO:tasks.workunit.client.0.vm02.stdout:7/202: dwrite d1/dc/d10/f27 [4194304,4194304] 0 2026-03-10T10:19:19.160 INFO:tasks.workunit.client.0.vm02.stdout:5/290: sync 2026-03-10T10:19:19.160 INFO:tasks.workunit.client.0.vm02.stdout:4/273: sync 2026-03-10T10:19:19.161 INFO:tasks.workunit.client.0.vm02.stdout:4/274: rename d1 to d1/d32/d3e/d57 22 2026-03-10T10:19:19.161 INFO:tasks.workunit.client.0.vm02.stdout:0/232: dwrite d9/d34/d3d/f41 [0,4194304] 0 2026-03-10T10:19:19.163 INFO:tasks.workunit.client.0.vm02.stdout:5/291: dread - d1/db/d11/d13/f1f zero size 2026-03-10T10:19:19.167 INFO:tasks.workunit.client.0.vm02.stdout:5/292: read d1/db/f1e [960943,93867] 0 2026-03-10T10:19:19.174 INFO:tasks.workunit.client.0.vm02.stdout:3/180: fsync d1/d8/f3f 0 2026-03-10T10:19:19.176 INFO:tasks.workunit.client.0.vm02.stdout:0/233: mknod d9/d34/d3d/c47 0 2026-03-10T10:19:19.178 INFO:tasks.workunit.client.0.vm02.stdout:0/234: dread d9/d34/d3d/f41 [0,4194304] 0 2026-03-10T10:19:19.181 INFO:tasks.workunit.client.0.vm02.stdout:5/293: mkdir d1/db/d11/d13/d28/d37/d3d/d61 0 2026-03-10T10:19:19.181 INFO:tasks.workunit.client.0.vm02.stdout:5/294: stat d1/db/d11/d16/f19 0 2026-03-10T10:19:19.183 INFO:tasks.workunit.client.0.vm02.stdout:2/282: unlink d0/d1a/f20 0 2026-03-10T10:19:19.183 INFO:tasks.workunit.client.1.vm05.stdout:0/135: rename d1/d7/db to d1/d2/d9/d31 0 2026-03-10T10:19:19.185 INFO:tasks.workunit.client.0.vm02.stdout:3/181: write d1/d6/f32 [155786,40098] 0 2026-03-10T10:19:19.188 INFO:tasks.workunit.client.1.vm05.stdout:4/126: getdents d1/d3 0 2026-03-10T10:19:19.192 INFO:tasks.workunit.client.0.vm02.stdout:4/275: creat d1/d41/f58 x:0 0 0 2026-03-10T10:19:19.192 INFO:tasks.workunit.client.1.vm05.stdout:4/127: mknod d1/d3/d9/c2c 0 2026-03-10T10:19:19.199 INFO:tasks.workunit.client.0.vm02.stdout:5/295: mkdir d1/db/d11/d62 0 2026-03-10T10:19:19.200 INFO:tasks.workunit.client.1.vm05.stdout:0/136: dwrite d1/d2/d9/f1d [0,4194304] 0 2026-03-10T10:19:19.200 
INFO:tasks.workunit.client.1.vm05.stdout:9/135: rename d0/d1/f1f to d0/f2a 0 2026-03-10T10:19:19.201 INFO:tasks.workunit.client.1.vm05.stdout:0/137: readlink d1/d2/d9/d31/l26 0 2026-03-10T10:19:19.202 INFO:tasks.workunit.client.0.vm02.stdout:5/296: dread d1/db/d11/d13/d28/d37/f3c [0,4194304] 0 2026-03-10T10:19:19.202 INFO:tasks.workunit.client.0.vm02.stdout:7/203: link d1/dc/f25 d1/dc/d16/d28/d2d/f42 0 2026-03-10T10:19:19.205 INFO:tasks.workunit.client.1.vm05.stdout:0/138: chown d1/d2/d9/d31 235 1 2026-03-10T10:19:19.205 INFO:tasks.workunit.client.0.vm02.stdout:7/204: readlink d1/dc/d16/d28/l2a 0 2026-03-10T10:19:19.205 INFO:tasks.workunit.client.0.vm02.stdout:7/205: write d1/dc/d10/f24 [4498262,26245] 0 2026-03-10T10:19:19.206 INFO:tasks.workunit.client.0.vm02.stdout:4/276: creat d1/d2/d44/f59 x:0 0 0 2026-03-10T10:19:19.208 INFO:tasks.workunit.client.0.vm02.stdout:2/283: rmdir d0/d1a/d49/d51 0 2026-03-10T10:19:19.209 INFO:tasks.workunit.client.0.vm02.stdout:2/284: write d0/d1a/d49/f50 [866377,104023] 0 2026-03-10T10:19:19.210 INFO:tasks.workunit.client.0.vm02.stdout:2/285: chown d0/d10/l39 314218552 1 2026-03-10T10:19:19.217 INFO:tasks.workunit.client.1.vm05.stdout:9/136: dread d0/d1/fb [4194304,4194304] 0 2026-03-10T10:19:19.218 INFO:tasks.workunit.client.0.vm02.stdout:7/206: readlink d1/l1c 0 2026-03-10T10:19:19.218 INFO:tasks.workunit.client.0.vm02.stdout:2/286: chown d0/d10/l37 1531934065 1 2026-03-10T10:19:19.224 INFO:tasks.workunit.client.0.vm02.stdout:7/207: chown d1/f5 57 1 2026-03-10T10:19:19.224 INFO:tasks.workunit.client.1.vm05.stdout:5/175: rename da/db/de/c1f to da/db/d2d/c34 0 2026-03-10T10:19:19.224 INFO:tasks.workunit.client.1.vm05.stdout:9/137: rmdir d0/d1/d13/de/d21 39 2026-03-10T10:19:19.225 INFO:tasks.workunit.client.0.vm02.stdout:4/277: sync 2026-03-10T10:19:19.226 INFO:tasks.workunit.client.0.vm02.stdout:4/278: chown d1/d10/db/f24 578 1 2026-03-10T10:19:19.228 INFO:tasks.workunit.client.0.vm02.stdout:5/297: link d1/c2 
d1/db/d11/d13/d28/d37/d3d/c63 0 2026-03-10T10:19:19.229 INFO:tasks.workunit.client.0.vm02.stdout:5/298: chown d1/db/d11/d16/d29/c36 4819 1 2026-03-10T10:19:19.231 INFO:tasks.workunit.client.0.vm02.stdout:4/279: creat d1/d52/f5a x:0 0 0 2026-03-10T10:19:19.235 INFO:tasks.workunit.client.0.vm02.stdout:5/299: write d1/f12 [849153,48693] 0 2026-03-10T10:19:19.240 INFO:tasks.workunit.client.0.vm02.stdout:5/300: mknod d1/db/d11/c64 0 2026-03-10T10:19:19.240 INFO:tasks.workunit.client.0.vm02.stdout:5/301: dread d1/db/d11/d13/d28/d37/d3d/f49 [0,4194304] 0 2026-03-10T10:19:19.240 INFO:tasks.workunit.client.0.vm02.stdout:5/302: chown d1/db 14 1 2026-03-10T10:19:19.243 INFO:tasks.workunit.client.0.vm02.stdout:5/303: creat d1/db/d11/d62/f65 x:0 0 0 2026-03-10T10:19:19.244 INFO:tasks.workunit.client.0.vm02.stdout:5/304: read d1/db/d11/d13/d28/d37/d3d/f49 [2743083,116601] 0 2026-03-10T10:19:19.246 INFO:tasks.workunit.client.0.vm02.stdout:5/305: creat d1/db/d11/d16/d29/d40/f66 x:0 0 0 2026-03-10T10:19:19.246 INFO:tasks.workunit.client.0.vm02.stdout:5/306: stat d1/db/d11/l50 0 2026-03-10T10:19:19.248 INFO:tasks.workunit.client.0.vm02.stdout:5/307: rmdir d1/db/d11 39 2026-03-10T10:19:19.252 INFO:tasks.workunit.client.0.vm02.stdout:5/308: write d1/db/d11/d13/d28/f31 [4938156,19244] 0 2026-03-10T10:19:19.261 INFO:tasks.workunit.client.0.vm02.stdout:5/309: sync 2026-03-10T10:19:19.264 INFO:tasks.workunit.client.0.vm02.stdout:5/310: mkdir d1/db/d11/d62/d67 0 2026-03-10T10:19:19.265 INFO:tasks.workunit.client.0.vm02.stdout:5/311: write d1/db/d11/d13/d28/f2e [2615842,39210] 0 2026-03-10T10:19:19.266 INFO:tasks.workunit.client.0.vm02.stdout:5/312: write d1/db/d11/d16/d29/d40/d4f/f57 [673325,88808] 0 2026-03-10T10:19:19.267 INFO:tasks.workunit.client.0.vm02.stdout:5/313: dread - d1/db/d11/d16/d29/d40/f66 zero size 2026-03-10T10:19:19.267 INFO:tasks.workunit.client.0.vm02.stdout:5/314: read - d1/db/d11/d16/d48/f5b zero size 2026-03-10T10:19:19.268 
INFO:tasks.workunit.client.0.vm02.stdout:5/315: dread - d1/db/d11/d62/f65 zero size 2026-03-10T10:19:19.270 INFO:tasks.workunit.client.1.vm05.stdout:3/163: write dd/fe [2998119,116596] 0 2026-03-10T10:19:19.273 INFO:tasks.workunit.client.1.vm05.stdout:1/83: write d4/f16 [2030472,127702] 0 2026-03-10T10:19:19.273 INFO:tasks.workunit.client.1.vm05.stdout:3/164: chown dd/d20/f26 9079 1 2026-03-10T10:19:19.279 INFO:tasks.workunit.client.1.vm05.stdout:2/128: truncate db/f15 3043986 0 2026-03-10T10:19:19.283 INFO:tasks.workunit.client.0.vm02.stdout:5/316: dwrite d1/db/fd [0,4194304] 0 2026-03-10T10:19:19.289 INFO:tasks.workunit.client.1.vm05.stdout:1/84: unlink d4/f16 0 2026-03-10T10:19:19.292 INFO:tasks.workunit.client.1.vm05.stdout:3/165: mkdir dd/d15/d24/d2c/d3b 0 2026-03-10T10:19:19.292 INFO:tasks.workunit.client.1.vm05.stdout:3/166: stat f6 0 2026-03-10T10:19:19.293 INFO:tasks.workunit.client.1.vm05.stdout:1/85: dread d4/df/f11 [0,4194304] 0 2026-03-10T10:19:19.300 INFO:tasks.workunit.client.1.vm05.stdout:3/167: write dd/d20/f35 [1566671,106378] 0 2026-03-10T10:19:19.308 INFO:tasks.workunit.client.1.vm05.stdout:1/86: unlink d4/dd/c17 0 2026-03-10T10:19:19.319 INFO:tasks.workunit.client.0.vm02.stdout:6/157: truncate d0/f2 1742331 0 2026-03-10T10:19:19.320 INFO:tasks.workunit.client.1.vm05.stdout:1/87: dwrite d4/f18 [0,4194304] 0 2026-03-10T10:19:19.325 INFO:tasks.workunit.client.0.vm02.stdout:6/158: dwrite d0/f21 [4194304,4194304] 0 2026-03-10T10:19:19.327 INFO:tasks.workunit.client.1.vm05.stdout:1/88: fsync d4/df/f11 0 2026-03-10T10:19:19.340 INFO:tasks.workunit.client.0.vm02.stdout:6/159: mkdir d0/d8/d9/d31/d32 0 2026-03-10T10:19:19.341 INFO:tasks.workunit.client.1.vm05.stdout:8/116: getdents d7/d14/d15 0 2026-03-10T10:19:19.347 INFO:tasks.workunit.client.1.vm05.stdout:8/117: fsync d7/d14/d15/f1f 0 2026-03-10T10:19:19.389 INFO:tasks.workunit.client.0.vm02.stdout:8/258: dwrite d1/d1c/f33 [0,4194304] 0 2026-03-10T10:19:19.390 
INFO:tasks.workunit.client.0.vm02.stdout:6/160: link d0/f21 d0/d8/d29/d2f/f33 0 2026-03-10T10:19:19.390 INFO:tasks.workunit.client.0.vm02.stdout:8/259: dwrite d1/d1c/d23/f3b [0,4194304] 0 2026-03-10T10:19:19.390 INFO:tasks.workunit.client.0.vm02.stdout:8/260: rmdir d1/d1c/d24/d35 39 2026-03-10T10:19:19.390 INFO:tasks.workunit.client.0.vm02.stdout:6/161: getdents d0/d7 0 2026-03-10T10:19:19.390 INFO:tasks.workunit.client.0.vm02.stdout:6/162: rename d0/d8 to d0/d8/d9/d31/d34 22 2026-03-10T10:19:19.394 INFO:tasks.workunit.client.0.vm02.stdout:9/152: dread da/f25 [0,4194304] 0 2026-03-10T10:19:19.397 INFO:tasks.workunit.client.0.vm02.stdout:9/153: dwrite da/f15 [0,4194304] 0 2026-03-10T10:19:19.768 INFO:tasks.workunit.client.1.vm05.stdout:0/139: sync 2026-03-10T10:19:19.776 INFO:tasks.workunit.client.0.vm02.stdout:4/280: dwrite d1/d2/d1a/f4c [4194304,4194304] 0 2026-03-10T10:19:19.777 INFO:tasks.workunit.client.0.vm02.stdout:4/281: write d1/d10/db/f43 [151182,96645] 0 2026-03-10T10:19:19.777 INFO:tasks.workunit.client.0.vm02.stdout:4/282: chown d1/d52 77718 1 2026-03-10T10:19:19.781 INFO:tasks.workunit.client.0.vm02.stdout:4/283: fsync d1/d2/d37/f28 0 2026-03-10T10:19:19.781 INFO:tasks.workunit.client.0.vm02.stdout:4/284: chown d1/d32 1271381 1 2026-03-10T10:19:19.785 INFO:tasks.workunit.client.1.vm05.stdout:0/140: dread d1/d2/d9/fd [0,4194304] 0 2026-03-10T10:19:19.787 INFO:tasks.workunit.client.1.vm05.stdout:0/141: chown d1/d2/d9/d31/d12/f2d 1543 1 2026-03-10T10:19:19.788 INFO:tasks.workunit.client.1.vm05.stdout:6/130: dwrite f2 [4194304,4194304] 0 2026-03-10T10:19:19.789 INFO:tasks.workunit.client.1.vm05.stdout:0/142: write d1/d7/f24 [1000995,68558] 0 2026-03-10T10:19:19.793 INFO:tasks.workunit.client.1.vm05.stdout:0/143: stat d1/d2 0 2026-03-10T10:19:19.805 INFO:tasks.workunit.client.0.vm02.stdout:4/285: read d1/d2/d37/f14 [1597715,16548] 0 2026-03-10T10:19:19.809 INFO:tasks.workunit.client.0.vm02.stdout:4/286: getdents d1/d32/d3e 0 2026-03-10T10:19:19.815 
INFO:tasks.workunit.client.0.vm02.stdout:4/287: write d1/d2/d44/f59 [355277,5013] 0 2026-03-10T10:19:19.815 INFO:tasks.workunit.client.0.vm02.stdout:4/288: chown d1/d2/f34 117666508 1 2026-03-10T10:19:19.816 INFO:tasks.workunit.client.0.vm02.stdout:1/201: dwrite d4/ff [0,4194304] 0 2026-03-10T10:19:19.817 INFO:tasks.workunit.client.1.vm05.stdout:7/164: rmdir d5 39 2026-03-10T10:19:19.817 INFO:tasks.workunit.client.0.vm02.stdout:1/202: dread - d4/da/d27/f30 zero size 2026-03-10T10:19:19.830 INFO:tasks.workunit.client.0.vm02.stdout:1/203: readlink d4/da/d27/d38/l42 0 2026-03-10T10:19:19.830 INFO:tasks.workunit.client.0.vm02.stdout:1/204: readlink d4/da/d27/d38/l42 0 2026-03-10T10:19:19.831 INFO:tasks.workunit.client.0.vm02.stdout:1/205: creat d4/d2c/f43 x:0 0 0 2026-03-10T10:19:19.831 INFO:tasks.workunit.client.0.vm02.stdout:1/206: read d4/f26 [18435,78654] 0 2026-03-10T10:19:19.832 INFO:tasks.workunit.client.0.vm02.stdout:1/207: read d4/da/d1a/f19 [2341527,91572] 0 2026-03-10T10:19:19.833 INFO:tasks.workunit.client.0.vm02.stdout:1/208: creat d4/d1b/f44 x:0 0 0 2026-03-10T10:19:19.833 INFO:tasks.workunit.client.0.vm02.stdout:1/209: chown d4/f21 467 1 2026-03-10T10:19:19.834 INFO:tasks.workunit.client.0.vm02.stdout:1/210: dread d4/f21 [0,4194304] 0 2026-03-10T10:19:19.835 INFO:tasks.workunit.client.0.vm02.stdout:1/211: readlink d4/ld 0 2026-03-10T10:19:19.835 INFO:tasks.workunit.client.0.vm02.stdout:1/212: dread - d4/d1b/f34 zero size 2026-03-10T10:19:19.837 INFO:tasks.workunit.client.0.vm02.stdout:1/213: rename d4/d1b/f24 to d4/da/d1a/d22/f45 0 2026-03-10T10:19:19.838 INFO:tasks.workunit.client.0.vm02.stdout:1/214: creat d4/da/d27/f46 x:0 0 0 2026-03-10T10:19:19.839 INFO:tasks.workunit.client.0.vm02.stdout:1/215: mkdir d4/da/d1a/d47 0 2026-03-10T10:19:19.842 INFO:tasks.workunit.client.0.vm02.stdout:1/216: dwrite d4/d1b/f44 [0,4194304] 0 2026-03-10T10:19:19.861 INFO:tasks.workunit.client.0.vm02.stdout:3/182: write d1/f3 [2216592,40865] 0 2026-03-10T10:19:19.863 
INFO:tasks.workunit.client.0.vm02.stdout:3/183: creat d1/d20/f40 x:0 0 0 2026-03-10T10:19:19.866 INFO:tasks.workunit.client.0.vm02.stdout:3/184: dread d1/d6/f39 [0,4194304] 0 2026-03-10T10:19:19.866 INFO:tasks.workunit.client.1.vm05.stdout:4/128: dwrite d1/d3/f5 [0,4194304] 0 2026-03-10T10:19:19.867 INFO:tasks.workunit.client.0.vm02.stdout:3/185: creat d1/d20/f41 x:0 0 0 2026-03-10T10:19:19.868 INFO:tasks.workunit.client.0.vm02.stdout:3/186: stat d1/d8/d21/c3e 0 2026-03-10T10:19:19.868 INFO:tasks.workunit.client.1.vm05.stdout:4/129: creat d1/d3/d9/f2d x:0 0 0 2026-03-10T10:19:19.869 INFO:tasks.workunit.client.0.vm02.stdout:3/187: creat d1/d6/f42 x:0 0 0 2026-03-10T10:19:19.872 INFO:tasks.workunit.client.1.vm05.stdout:4/130: fdatasync d1/f19 0 2026-03-10T10:19:19.880 INFO:tasks.workunit.client.0.vm02.stdout:3/188: creat d1/d6/f43 x:0 0 0 2026-03-10T10:19:19.880 INFO:tasks.workunit.client.0.vm02.stdout:3/189: fsync d1/d8/f34 0 2026-03-10T10:19:19.881 INFO:tasks.workunit.client.0.vm02.stdout:1/217: sync 2026-03-10T10:19:19.881 INFO:tasks.workunit.client.1.vm05.stdout:4/131: creat d1/d3/d9/dc/f2e x:0 0 0 2026-03-10T10:19:19.882 INFO:tasks.workunit.client.0.vm02.stdout:1/218: stat d4/da/d1a/f3d 0 2026-03-10T10:19:19.883 INFO:tasks.workunit.client.1.vm05.stdout:5/176: rename da/db/d2d to da/db/d26/d35 0 2026-03-10T10:19:19.883 INFO:tasks.workunit.client.1.vm05.stdout:4/132: truncate d1/d3/d9/f1a 675253 0 2026-03-10T10:19:19.884 INFO:tasks.workunit.client.1.vm05.stdout:4/133: readlink d1/l2 0 2026-03-10T10:19:19.884 INFO:tasks.workunit.client.0.vm02.stdout:3/190: dwrite d1/f3 [0,4194304] 0 2026-03-10T10:19:19.886 INFO:tasks.workunit.client.0.vm02.stdout:3/191: stat d1/l2c 0 2026-03-10T10:19:19.886 INFO:tasks.workunit.client.0.vm02.stdout:7/208: write d1/dc/d10/f13 [1786105,81705] 0 2026-03-10T10:19:19.889 INFO:tasks.workunit.client.0.vm02.stdout:1/219: mknod d4/da/d27/d38/d3c/c48 0 2026-03-10T10:19:19.893 INFO:tasks.workunit.client.1.vm05.stdout:4/134: creat d1/d3/d9/f2f 
x:0 0 0 2026-03-10T10:19:19.894 INFO:tasks.workunit.client.0.vm02.stdout:1/220: creat d4/da/d1a/d22/f49 x:0 0 0 2026-03-10T10:19:19.897 INFO:tasks.workunit.client.1.vm05.stdout:5/177: getdents da/db/d28/d32 0 2026-03-10T10:19:19.898 INFO:tasks.workunit.client.0.vm02.stdout:7/209: sync 2026-03-10T10:19:19.898 INFO:tasks.workunit.client.1.vm05.stdout:9/138: rename d0/l19 to d0/d1/d13/de/l2b 0 2026-03-10T10:19:19.898 INFO:tasks.workunit.client.0.vm02.stdout:7/210: chown d1/dc/c3e 604763466 1 2026-03-10T10:19:19.898 INFO:tasks.workunit.client.0.vm02.stdout:1/221: mkdir d4/d4a 0 2026-03-10T10:19:19.899 INFO:tasks.workunit.client.1.vm05.stdout:5/178: stat da/db/de/c1b 0 2026-03-10T10:19:19.899 INFO:tasks.workunit.client.0.vm02.stdout:1/222: fdatasync d4/f3a 0 2026-03-10T10:19:19.899 INFO:tasks.workunit.client.1.vm05.stdout:5/179: stat c1 0 2026-03-10T10:19:19.901 INFO:tasks.workunit.client.0.vm02.stdout:7/211: dread d1/dc/f3 [4194304,4194304] 0 2026-03-10T10:19:19.905 INFO:tasks.workunit.client.1.vm05.stdout:5/180: mknod da/db/d26/c36 0 2026-03-10T10:19:19.906 INFO:tasks.workunit.client.0.vm02.stdout:1/223: dwrite d4/da/d27/d38/f3b [0,4194304] 0 2026-03-10T10:19:19.907 INFO:tasks.workunit.client.0.vm02.stdout:1/224: stat d4/da/l17 0 2026-03-10T10:19:19.912 INFO:tasks.workunit.client.1.vm05.stdout:3/168: rmdir dd/d15/d24 39 2026-03-10T10:19:19.913 INFO:tasks.workunit.client.0.vm02.stdout:1/225: dwrite d4/f3a [0,4194304] 0 2026-03-10T10:19:19.918 INFO:tasks.workunit.client.1.vm05.stdout:5/181: dread da/db/f1d [0,4194304] 0 2026-03-10T10:19:19.921 INFO:tasks.workunit.client.0.vm02.stdout:5/317: dwrite d1/f10 [0,4194304] 0 2026-03-10T10:19:19.928 INFO:tasks.workunit.client.0.vm02.stdout:5/318: creat d1/f68 x:0 0 0 2026-03-10T10:19:19.929 INFO:tasks.workunit.client.1.vm05.stdout:3/169: dwrite dd/d20/f26 [0,4194304] 0 2026-03-10T10:19:19.932 INFO:tasks.workunit.client.0.vm02.stdout:5/319: mknod d1/db/d11/d13/d28/d37/d3d/c69 0 2026-03-10T10:19:19.951 
INFO:tasks.workunit.client.1.vm05.stdout:3/170: rename dd/d15/f29 to dd/d15/d24/d2c/f3c 0 2026-03-10T10:19:19.951 INFO:tasks.workunit.client.0.vm02.stdout:5/320: dwrite d1/db/d11/d16/f19 [0,4194304] 0 2026-03-10T10:19:19.962 INFO:tasks.workunit.client.1.vm05.stdout:3/171: link dd/d15/d24/f2f dd/d15/d1f/f3d 0 2026-03-10T10:19:19.965 INFO:tasks.workunit.client.1.vm05.stdout:3/172: creat dd/d15/d24/d2c/f3e x:0 0 0 2026-03-10T10:19:19.969 INFO:tasks.workunit.client.1.vm05.stdout:3/173: dread dd/d15/f23 [0,4194304] 0 2026-03-10T10:19:19.984 INFO:tasks.workunit.client.1.vm05.stdout:3/174: creat dd/d15/d24/d2c/f3f x:0 0 0 2026-03-10T10:19:19.985 INFO:tasks.workunit.client.1.vm05.stdout:3/175: creat dd/d15/d24/d2c/d3b/f40 x:0 0 0 2026-03-10T10:19:19.985 INFO:tasks.workunit.client.1.vm05.stdout:3/176: dwrite dd/d20/f26 [0,4194304] 0 2026-03-10T10:19:20.003 INFO:tasks.workunit.client.1.vm05.stdout:3/177: creat dd/f41 x:0 0 0 2026-03-10T10:19:20.026 INFO:tasks.workunit.client.1.vm05.stdout:3/178: rename dd/d15/d24/f33 to dd/d15/d24/f42 0 2026-03-10T10:19:20.026 INFO:tasks.workunit.client.1.vm05.stdout:3/179: readlink dd/d15/l27 0 2026-03-10T10:19:20.026 INFO:tasks.workunit.client.1.vm05.stdout:3/180: mknod dd/d15/c43 0 2026-03-10T10:19:20.026 INFO:tasks.workunit.client.1.vm05.stdout:3/181: write dd/d15/d1f/f2b [3285699,66718] 0 2026-03-10T10:19:20.026 INFO:tasks.workunit.client.1.vm05.stdout:3/182: truncate fa 787882 0 2026-03-10T10:19:20.026 INFO:tasks.workunit.client.1.vm05.stdout:3/183: write dd/d15/d24/d2c/f3e [546078,49485] 0 2026-03-10T10:19:20.032 INFO:tasks.workunit.client.1.vm05.stdout:3/184: dwrite f9 [4194304,4194304] 0 2026-03-10T10:19:20.038 INFO:tasks.workunit.client.1.vm05.stdout:3/185: rename dd/d15/f18 to dd/d15/d24/f44 0 2026-03-10T10:19:20.040 INFO:tasks.workunit.client.1.vm05.stdout:3/186: truncate dd/d15/d24/d2c/f3c 5096967 0 2026-03-10T10:19:20.062 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:19 vm02.local ceph-mon[50200]: pgmap v149: 65 pgs: 
65 active+clean; 794 MiB data, 3.6 GiB used, 116 GiB / 120 GiB avail; 4.8 MiB/s rd, 98 MiB/s wr, 213 op/s 2026-03-10T10:19:20.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.060+0000 7ff5b8f27700 1 -- 192.168.123.102:0/1185736071 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5ac096850 msgr2=0x7ff5ac098c40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:20.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.060+0000 7ff5b8f27700 1 --2- 192.168.123.102:0/1185736071 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5ac096850 0x7ff5ac098c40 secure :-1 s=READY pgs=306 cs=0 l=1 rev1=1 crypto rx=0x7ff5a8009b00 tx=0x7ff5a8009e10 comp rx=0 tx=0).stop 2026-03-10T10:19:20.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.062+0000 7ff5b8f27700 1 -- 192.168.123.102:0/1185736071 shutdown_connections 2026-03-10T10:19:20.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.062+0000 7ff5b8f27700 1 --2- 192.168.123.102:0/1185736071 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5ac099180 0x7ff5ac09b570 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.062+0000 7ff5b8f27700 1 --2- 192.168.123.102:0/1185736071 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5ac096850 0x7ff5ac098c40 unknown :-1 s=CLOSED pgs=306 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.062+0000 7ff5b8f27700 1 -- 192.168.123.102:0/1185736071 >> 192.168.123.102:0/1185736071 conn(0x7ff5ac090240 msgr2=0x7ff5ac0926a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:20.062 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.062+0000 7ff5b8f27700 1 -- 192.168.123.102:0/1185736071 shutdown_connections 2026-03-10T10:19:20.062 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.062+0000 7ff5b8f27700 1 -- 192.168.123.102:0/1185736071 wait complete. 2026-03-10T10:19:20.063 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b8f27700 1 Processor -- start 2026-03-10T10:19:20.063 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b8f27700 1 -- start start 2026-03-10T10:19:20.063 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b8f27700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5ac096850 0x7ff5ac12d830 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:20.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b8f27700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5ac099180 0x7ff5ac12dd70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:20.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b8f27700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff5ac12e390 con 0x7ff5ac096850 2026-03-10T10:19:20.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b8f27700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff5ac12e4d0 con 0x7ff5ac099180 2026-03-10T10:19:20.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b37fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5ac096850 0x7ff5ac12d830 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:20.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b37fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5ac096850 0x7ff5ac12d830 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:52924/0 (socket says 192.168.123.102:52924) 2026-03-10T10:19:20.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b37fe700 1 -- 192.168.123.102:0/3553160412 learned_addr learned my addr 192.168.123.102:0/3553160412 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:19:20.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b2ffd700 1 --2- 192.168.123.102:0/3553160412 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5ac099180 0x7ff5ac12dd70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:20.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b37fe700 1 -- 192.168.123.102:0/3553160412 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5ac099180 msgr2=0x7ff5ac12dd70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:20.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b37fe700 1 --2- 192.168.123.102:0/3553160412 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5ac099180 0x7ff5ac12dd70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.063+0000 7ff5b37fe700 1 -- 192.168.123.102:0/3553160412 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff5a80097e0 con 0x7ff5ac096850 2026-03-10T10:19:20.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.064+0000 7ff5b37fe700 1 --2- 192.168.123.102:0/3553160412 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5ac096850 0x7ff5ac12d830 secure :-1 s=READY pgs=307 cs=0 l=1 rev1=1 crypto 
rx=0x7ff5a8005850 tx=0x7ff5a8004a40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:20.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.064+0000 7ff5b0ff9700 1 -- 192.168.123.102:0/3553160412 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff5a801d070 con 0x7ff5ac096850 2026-03-10T10:19:20.065 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.064+0000 7ff5b0ff9700 1 -- 192.168.123.102:0/3553160412 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff5a800bc50 con 0x7ff5ac096850 2026-03-10T10:19:20.065 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.064+0000 7ff5b0ff9700 1 -- 192.168.123.102:0/3553160412 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff5a800f780 con 0x7ff5ac096850 2026-03-10T10:19:20.065 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.064+0000 7ff5b8f27700 1 -- 192.168.123.102:0/3553160412 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff5ac132f20 con 0x7ff5ac096850 2026-03-10T10:19:20.065 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.064+0000 7ff5b8f27700 1 -- 192.168.123.102:0/3553160412 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff5ac133390 con 0x7ff5ac096850 2026-03-10T10:19:20.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.065+0000 7ff5b8f27700 1 -- 192.168.123.102:0/3553160412 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff5ac091dc0 con 0x7ff5ac096850 2026-03-10T10:19:20.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.067+0000 7ff5b0ff9700 1 -- 192.168.123.102:0/3553160412 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 
0x7ff5a8022470 con 0x7ff5ac096850 2026-03-10T10:19:20.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.067+0000 7ff5b0ff9700 1 --2- 192.168.123.102:0/3553160412 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff5a406c600 0x7ff5a406eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:20.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.067+0000 7ff5b0ff9700 1 -- 192.168.123.102:0/3553160412 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7ff5a808d920 con 0x7ff5ac096850 2026-03-10T10:19:20.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.068+0000 7ff5b2ffd700 1 --2- 192.168.123.102:0/3553160412 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff5a406c600 0x7ff5a406eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:20.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.068+0000 7ff5b2ffd700 1 --2- 192.168.123.102:0/3553160412 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff5a406c600 0x7ff5a406eac0 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7ff5a0005950 tx=0x7ff5a00058e0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:20.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.068+0000 7ff5b0ff9700 1 -- 192.168.123.102:0/3553160412 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7ff5a805bc50 con 0x7ff5ac096850 2026-03-10T10:19:20.128 INFO:tasks.workunit.client.1.vm05.stdout:4/135: sync 2026-03-10T10:19:20.131 INFO:tasks.workunit.client.0.vm02.stdout:6/163: unlink d0/f2 0 2026-03-10T10:19:20.136 INFO:tasks.workunit.client.1.vm05.stdout:4/136: dread - d1/d3/d9/f2d zero 
size 2026-03-10T10:19:20.144 INFO:tasks.workunit.client.1.vm05.stdout:4/137: dread d1/d3/f5 [0,4194304] 0 2026-03-10T10:19:20.144 INFO:tasks.workunit.client.1.vm05.stdout:1/89: write d4/dd/f15 [882956,1055] 0 2026-03-10T10:19:20.146 INFO:tasks.workunit.client.0.vm02.stdout:2/287: dwrite d0/f30 [0,4194304] 0 2026-03-10T10:19:20.149 INFO:tasks.workunit.client.1.vm05.stdout:3/187: sync 2026-03-10T10:19:20.149 INFO:tasks.workunit.client.1.vm05.stdout:1/90: truncate d4/dd/f15 1147854 0 2026-03-10T10:19:20.149 INFO:tasks.workunit.client.1.vm05.stdout:9/139: sync 2026-03-10T10:19:20.153 INFO:tasks.workunit.client.1.vm05.stdout:3/188: write f2 [2365260,95319] 0 2026-03-10T10:19:20.153 INFO:tasks.workunit.client.1.vm05.stdout:9/140: dread - d0/df/d11/f24 zero size 2026-03-10T10:19:20.167 INFO:tasks.workunit.client.1.vm05.stdout:8/118: truncate f2 249710 0 2026-03-10T10:19:20.170 INFO:tasks.workunit.client.0.vm02.stdout:8/261: dwrite d1/d1c/f34 [0,4194304] 0 2026-03-10T10:19:20.180 INFO:tasks.workunit.client.0.vm02.stdout:9/154: truncate da/f13 518626 0 2026-03-10T10:19:20.189 INFO:tasks.workunit.client.1.vm05.stdout:4/138: symlink d1/d3/l30 0 2026-03-10T10:19:20.190 INFO:tasks.workunit.client.0.vm02.stdout:2/288: mkdir d0/d1a/d49/d5e 0 2026-03-10T10:19:20.193 INFO:tasks.workunit.client.1.vm05.stdout:1/91: symlink d4/dd/l19 0 2026-03-10T10:19:20.195 INFO:tasks.workunit.client.0.vm02.stdout:9/155: mknod da/d10/d2c/c2f 0 2026-03-10T10:19:20.196 INFO:tasks.workunit.client.0.vm02.stdout:6/164: mknod d0/d8/d29/d2f/c35 0 2026-03-10T10:19:20.198 INFO:tasks.workunit.client.1.vm05.stdout:8/119: mknod d7/d14/c20 0 2026-03-10T10:19:20.203 INFO:tasks.workunit.client.1.vm05.stdout:8/120: read f6 [780353,69677] 0 2026-03-10T10:19:20.208 INFO:tasks.workunit.client.1.vm05.stdout:4/139: truncate d1/d3/d9/f2d 45515 0 2026-03-10T10:19:20.208 INFO:tasks.workunit.client.0.vm02.stdout:8/262: mknod d1/d1c/d24/c4e 0 2026-03-10T10:19:20.208 INFO:tasks.workunit.client.0.vm02.stdout:8/263: chown 
d1/d1c/c37 1 1 2026-03-10T10:19:20.209 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.208+0000 7ff5b8f27700 1 -- 192.168.123.102:0/3553160412 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7ff5ac002900 con 0x7ff5a406c600 2026-03-10T10:19:20.211 INFO:tasks.workunit.client.0.vm02.stdout:8/264: dread d1/d1c/d43/f4b [4194304,4194304] 0 2026-03-10T10:19:20.213 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.211+0000 7ff5b0ff9700 1 -- 192.168.123.102:0/3553160412 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7ff5ac002900 con 0x7ff5a406c600 2026-03-10T10:19:20.215 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.215+0000 7ff59e7fc700 1 -- 192.168.123.102:0/3553160412 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff5a406c600 msgr2=0x7ff5a406eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:20.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.215+0000 7ff59e7fc700 1 --2- 192.168.123.102:0/3553160412 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff5a406c600 0x7ff5a406eac0 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7ff5a0005950 tx=0x7ff5a00058e0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.215+0000 7ff59e7fc700 1 -- 192.168.123.102:0/3553160412 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5ac096850 msgr2=0x7ff5ac12d830 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:20.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.215+0000 7ff59e7fc700 1 --2- 192.168.123.102:0/3553160412 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5ac096850 0x7ff5ac12d830 secure :-1 s=READY pgs=307 cs=0 l=1 rev1=1 crypto rx=0x7ff5a8005850 tx=0x7ff5a8004a40 comp 
rx=0 tx=0).stop 2026-03-10T10:19:20.227 INFO:tasks.workunit.client.1.vm05.stdout:1/92: rename d4/df/l14 to d4/dd/l1a 0 2026-03-10T10:19:20.227 INFO:tasks.workunit.client.1.vm05.stdout:1/93: dread d4/df/f11 [0,4194304] 0 2026-03-10T10:19:20.227 INFO:tasks.workunit.client.1.vm05.stdout:1/94: read d4/df/f11 [504722,10190] 0 2026-03-10T10:19:20.227 INFO:tasks.workunit.client.1.vm05.stdout:1/95: fsync d4/f18 0 2026-03-10T10:19:20.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.215+0000 7ff59e7fc700 1 -- 192.168.123.102:0/3553160412 shutdown_connections 2026-03-10T10:19:20.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.215+0000 7ff59e7fc700 1 --2- 192.168.123.102:0/3553160412 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff5ac096850 0x7ff5ac12d830 unknown :-1 s=CLOSED pgs=307 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.215+0000 7ff59e7fc700 1 --2- 192.168.123.102:0/3553160412 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7ff5a406c600 0x7ff5a406eac0 unknown :-1 s=CLOSED pgs=142 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.215+0000 7ff59e7fc700 1 --2- 192.168.123.102:0/3553160412 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff5ac099180 0x7ff5ac12dd70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.215+0000 7ff59e7fc700 1 -- 192.168.123.102:0/3553160412 >> 192.168.123.102:0/3553160412 conn(0x7ff5ac090240 msgr2=0x7ff5ac094ed0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:20.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.215+0000 7ff59e7fc700 1 -- 192.168.123.102:0/3553160412 shutdown_connections 2026-03-10T10:19:20.227 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.215+0000 7ff59e7fc700 1 -- 192.168.123.102:0/3553160412 wait complete. 2026-03-10T10:19:20.227 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:19:20.235 INFO:tasks.workunit.client.1.vm05.stdout:4/140: readlink d1/d3/d9/l18 0 2026-03-10T10:19:20.237 INFO:tasks.workunit.client.0.vm02.stdout:6/165: creat d0/d8/d9/d31/d32/f36 x:0 0 0 2026-03-10T10:19:20.239 INFO:tasks.workunit.client.1.vm05.stdout:1/96: mknod d4/df/c1b 0 2026-03-10T10:19:20.241 INFO:tasks.workunit.client.0.vm02.stdout:8/265: creat d1/d1c/d24/d35/f4f x:0 0 0 2026-03-10T10:19:20.244 INFO:tasks.workunit.client.0.vm02.stdout:8/266: dwrite d1/d2/f36 [0,4194304] 0 2026-03-10T10:19:20.252 INFO:tasks.workunit.client.1.vm05.stdout:4/141: dread d1/d3/d9/f1b [0,4194304] 0 2026-03-10T10:19:20.254 INFO:tasks.workunit.client.1.vm05.stdout:4/142: dread - d1/d3/d9/dc/f2e zero size 2026-03-10T10:19:20.262 INFO:tasks.workunit.client.1.vm05.stdout:1/97: dwrite d4/dd/f15 [0,4194304] 0 2026-03-10T10:19:20.263 INFO:tasks.workunit.client.1.vm05.stdout:3/189: rmdir dd/d15/d24/d2c/d3a 0 2026-03-10T10:19:20.283 INFO:tasks.workunit.client.0.vm02.stdout:6/166: symlink d0/d8/d29/l37 0 2026-03-10T10:19:20.285 INFO:tasks.workunit.client.0.vm02.stdout:8/267: dread d1/d1c/f14 [0,4194304] 0 2026-03-10T10:19:20.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:19 vm05.local ceph-mon[59051]: pgmap v149: 65 pgs: 65 active+clean; 794 MiB data, 3.6 GiB used, 116 GiB / 120 GiB avail; 4.8 MiB/s rd, 98 MiB/s wr, 213 op/s 2026-03-10T10:19:20.297 INFO:tasks.workunit.client.0.vm02.stdout:8/268: dwrite d1/d1c/f20 [0,4194304] 0 2026-03-10T10:19:20.297 INFO:tasks.workunit.client.0.vm02.stdout:8/269: chown d1/d1c/c1a 7585718 1 2026-03-10T10:19:20.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 -- 192.168.123.102:0/1211232289 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b8c075a40 msgr2=0x7f4b8c077ed0 
secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 --2- 192.168.123.102:0/1211232289 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b8c075a40 0x7f4b8c077ed0 secure :-1 s=READY pgs=308 cs=0 l=1 rev1=1 crypto rx=0x7f4b8400d420 tx=0x7f4b8400d730 comp rx=0 tx=0).stop 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 -- 192.168.123.102:0/1211232289 shutdown_connections 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 --2- 192.168.123.102:0/1211232289 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b8c075a40 0x7f4b8c077ed0 unknown :-1 s=CLOSED pgs=308 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 --2- 192.168.123.102:0/1211232289 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b8c072b50 0x7f4b8c072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 -- 192.168.123.102:0/1211232289 >> 192.168.123.102:0/1211232289 conn(0x7f4b8c06dae0 msgr2=0x7f4b8c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 -- 192.168.123.102:0/1211232289 shutdown_connections 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 -- 192.168.123.102:0/1211232289 wait complete. 
2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 Processor -- start 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 -- start start 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b8c072b50 0x7f4b8c0830f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b8c083630 0x7f4b8c1b3180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4b8c083b40 con 0x7f4b8c083630 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.306+0000 7f4b92b8b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4b8c083cb0 con 0x7f4b8c072b50 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.307+0000 7f4b90927700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b8c072b50 0x7f4b8c0830f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.307+0000 7f4b90927700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b8c072b50 0x7f4b8c0830f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:38762/0 (socket says 192.168.123.102:38762) 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.307+0000 7f4b90927700 1 -- 192.168.123.102:0/3326396506 learned_addr learned my addr 192.168.123.102:0/3326396506 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.307+0000 7f4b90927700 1 -- 192.168.123.102:0/3326396506 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b8c083630 msgr2=0x7f4b8c1b3180 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.307+0000 7f4b90927700 1 --2- 192.168.123.102:0/3326396506 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b8c083630 0x7f4b8c1b3180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.307+0000 7f4b90927700 1 -- 192.168.123.102:0/3326396506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4b84009e30 con 0x7f4b8c072b50 2026-03-10T10:19:20.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.307+0000 7f4b90927700 1 --2- 192.168.123.102:0/3326396506 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b8c072b50 0x7f4b8c0830f0 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f4b7c009d00 tx=0x7f4b7c00e3b0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:20.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.308+0000 7f4b89ffb700 1 -- 192.168.123.102:0/3326396506 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4b7c00a4f0 con 0x7f4b8c072b50 2026-03-10T10:19:20.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.308+0000 7f4b92b8b700 1 -- 
192.168.123.102:0/3326396506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4b8c1b3720 con 0x7f4b8c072b50 2026-03-10T10:19:20.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.308+0000 7f4b92b8b700 1 -- 192.168.123.102:0/3326396506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4b8c1b3c70 con 0x7f4b8c072b50 2026-03-10T10:19:20.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.308+0000 7f4b89ffb700 1 -- 192.168.123.102:0/3326396506 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f4b7c010040 con 0x7f4b8c072b50 2026-03-10T10:19:20.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.308+0000 7f4b89ffb700 1 -- 192.168.123.102:0/3326396506 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4b7c0136a0 con 0x7f4b8c072b50 2026-03-10T10:19:20.309 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.308+0000 7f4b92b8b700 1 -- 192.168.123.102:0/3326396506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4b78005320 con 0x7f4b8c072b50 2026-03-10T10:19:20.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.309+0000 7f4b89ffb700 1 -- 192.168.123.102:0/3326396506 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f4b7c00ed00 con 0x7f4b8c072b50 2026-03-10T10:19:20.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.310+0000 7f4b89ffb700 1 --2- 192.168.123.102:0/3326396506 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4b7406c530 0x7f4b7406e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:20.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.310+0000 7f4b89ffb700 1 -- 192.168.123.102:0/3326396506 <== mon.1 
v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f4b7c08be20 con 0x7f4b8c072b50 2026-03-10T10:19:20.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.310+0000 7f4b8bfff700 1 --2- 192.168.123.102:0/3326396506 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4b7406c530 0x7f4b7406e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:20.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.310+0000 7f4b8bfff700 1 --2- 192.168.123.102:0/3326396506 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4b7406c530 0x7f4b7406e9f0 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7f4b8400d420 tx=0x7f4b84000f40 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:20.313 INFO:tasks.workunit.client.0.vm02.stdout:6/167: creat d0/d8/d29/d2f/f38 x:0 0 0 2026-03-10T10:19:20.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.313+0000 7f4b89ffb700 1 -- 192.168.123.102:0/3326396506 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f4b7c059ec0 con 0x7f4b8c072b50 2026-03-10T10:19:20.321 INFO:tasks.workunit.client.1.vm05.stdout:4/143: dread d1/d3/f12 [0,4194304] 0 2026-03-10T10:19:20.322 INFO:tasks.workunit.client.1.vm05.stdout:4/144: fdatasync d1/d3/d9/dc/f2a 0 2026-03-10T10:19:20.328 INFO:tasks.workunit.client.1.vm05.stdout:1/98: mkdir d4/df/d1c 0 2026-03-10T10:19:20.328 INFO:tasks.workunit.client.1.vm05.stdout:6/131: truncate fb 3507142 0 2026-03-10T10:19:20.329 INFO:tasks.workunit.client.0.vm02.stdout:6/168: dread d0/d8/d29/d2f/f33 [0,4194304] 0 2026-03-10T10:19:20.330 INFO:tasks.workunit.client.0.vm02.stdout:6/169: stat d0/d8/d9/f30 0 2026-03-10T10:19:20.335 INFO:tasks.workunit.client.1.vm05.stdout:1/99: 
dwrite d4/f18 [0,4194304] 0 2026-03-10T10:19:20.337 INFO:tasks.workunit.client.1.vm05.stdout:8/121: creat d7/f21 x:0 0 0 2026-03-10T10:19:20.338 INFO:tasks.workunit.client.1.vm05.stdout:8/122: read - d7/d14/d15/f1f zero size 2026-03-10T10:19:20.343 INFO:tasks.workunit.client.0.vm02.stdout:6/170: creat d0/d7/f39 x:0 0 0 2026-03-10T10:19:20.348 INFO:tasks.workunit.client.1.vm05.stdout:0/144: write d1/d7/f4 [1074525,39302] 0 2026-03-10T10:19:20.357 INFO:tasks.workunit.client.0.vm02.stdout:6/171: unlink d0/d7/l1e 0 2026-03-10T10:19:20.357 INFO:tasks.workunit.client.0.vm02.stdout:6/172: chown d0/f28 3668631 1 2026-03-10T10:19:20.363 INFO:tasks.workunit.client.1.vm05.stdout:7/165: rename d5/l24 to d5/l2b 0 2026-03-10T10:19:20.380 INFO:tasks.workunit.client.1.vm05.stdout:6/132: mkdir dd/d27/d30 0 2026-03-10T10:19:20.380 INFO:tasks.workunit.client.1.vm05.stdout:3/190: getdents dd/d20 0 2026-03-10T10:19:20.381 INFO:tasks.workunit.client.1.vm05.stdout:6/133: write dd/d27/f2f [347612,78721] 0 2026-03-10T10:19:20.381 INFO:tasks.workunit.client.0.vm02.stdout:8/270: creat d1/d1c/d23/d3e/f50 x:0 0 0 2026-03-10T10:19:20.384 INFO:tasks.workunit.client.1.vm05.stdout:1/100: mknod d4/df/c1d 0 2026-03-10T10:19:20.384 INFO:tasks.workunit.client.1.vm05.stdout:1/101: read d4/df/f11 [286495,40221] 0 2026-03-10T10:19:20.388 INFO:tasks.workunit.client.1.vm05.stdout:6/134: mknod dd/df/c31 0 2026-03-10T10:19:20.388 INFO:tasks.workunit.client.1.vm05.stdout:6/135: readlink l8 0 2026-03-10T10:19:20.390 INFO:tasks.workunit.client.0.vm02.stdout:6/173: mkdir d0/d3a 0 2026-03-10T10:19:20.394 INFO:tasks.workunit.client.0.vm02.stdout:3/192: dwrite d1/f31 [0,4194304] 0 2026-03-10T10:19:20.396 INFO:tasks.workunit.client.1.vm05.stdout:1/102: dwrite d4/dd/f15 [0,4194304] 0 2026-03-10T10:19:20.397 INFO:tasks.workunit.client.0.vm02.stdout:6/174: mknod d0/d8/d9/d31/c3b 0 2026-03-10T10:19:20.399 INFO:tasks.workunit.client.1.vm05.stdout:7/166: creat d5/d26/f2c x:0 0 0 2026-03-10T10:19:20.403 
INFO:tasks.workunit.client.1.vm05.stdout:2/129: dwrite db/f15 [0,4194304] 0 2026-03-10T10:19:20.407 INFO:tasks.workunit.client.1.vm05.stdout:3/191: creat dd/d39/f45 x:0 0 0 2026-03-10T10:19:20.408 INFO:tasks.workunit.client.1.vm05.stdout:3/192: chown dd/d15/d24/d2c/f2d 185330 1 2026-03-10T10:19:20.422 INFO:tasks.workunit.client.0.vm02.stdout:3/193: rmdir d1/d8/d21 39 2026-03-10T10:19:20.426 INFO:tasks.workunit.client.0.vm02.stdout:3/194: dwrite d1/f28 [0,4194304] 0 2026-03-10T10:19:20.438 INFO:tasks.workunit.client.0.vm02.stdout:3/195: dwrite d1/d6/f36 [0,4194304] 0 2026-03-10T10:19:20.443 INFO:tasks.workunit.client.1.vm05.stdout:7/167: mkdir d5/d1d/d20/d2d 0 2026-03-10T10:19:20.443 INFO:tasks.workunit.client.1.vm05.stdout:7/168: chown d5/d1d/d20 3204 1 2026-03-10T10:19:20.444 INFO:tasks.workunit.client.0.vm02.stdout:6/175: dwrite d0/f21 [4194304,4194304] 0 2026-03-10T10:19:20.449 INFO:tasks.workunit.client.1.vm05.stdout:7/169: dwrite d5/dd/f12 [0,4194304] 0 2026-03-10T10:19:20.451 INFO:tasks.workunit.client.1.vm05.stdout:2/130: rename db/f15 to db/f23 0 2026-03-10T10:19:20.452 INFO:tasks.workunit.client.1.vm05.stdout:7/170: dread - d5/d17/f1e zero size 2026-03-10T10:19:20.476 INFO:tasks.workunit.client.0.vm02.stdout:1/226: truncate d4/da/d1a/d22/f45 2552208 0 2026-03-10T10:19:20.476 INFO:tasks.workunit.client.0.vm02.stdout:4/289: write d1/d10/f6 [43192,83635] 0 2026-03-10T10:19:20.478 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.475+0000 7f4b92b8b700 1 -- 192.168.123.102:0/3326396506 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f4b78000bf0 con 0x7f4b7406c530 2026-03-10T10:19:20.479 INFO:tasks.workunit.client.0.vm02.stdout:7/212: dwrite d1/dc/d16/f1f [0,4194304] 0 2026-03-10T10:19:20.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.480+0000 7f4b89ffb700 1 -- 192.168.123.102:0/3326396506 <== mgr.14225 v2:192.168.123.102:6800/2 1 
==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7f4b78000bf0 con 0x7f4b7406c530 2026-03-10T10:19:20.484 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.483+0000 7f4b92b8b700 1 -- 192.168.123.102:0/3326396506 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4b7406c530 msgr2=0x7f4b7406e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:20.484 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.483+0000 7f4b92b8b700 1 --2- 192.168.123.102:0/3326396506 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4b7406c530 0x7f4b7406e9f0 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7f4b8400d420 tx=0x7f4b84000f40 comp rx=0 tx=0).stop 2026-03-10T10:19:20.484 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.483+0000 7f4b92b8b700 1 -- 192.168.123.102:0/3326396506 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b8c072b50 msgr2=0x7f4b8c0830f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:20.484 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.483+0000 7f4b92b8b700 1 --2- 192.168.123.102:0/3326396506 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b8c072b50 0x7f4b8c0830f0 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f4b7c009d00 tx=0x7f4b7c00e3b0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.484 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.483+0000 7f4b92b8b700 1 -- 192.168.123.102:0/3326396506 shutdown_connections 2026-03-10T10:19:20.484 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.483+0000 7f4b92b8b700 1 --2- 192.168.123.102:0/3326396506 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4b7406c530 0x7f4b7406e9f0 unknown :-1 s=CLOSED pgs=143 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.484 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.483+0000 7f4b92b8b700 1 --2- 
192.168.123.102:0/3326396506 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b8c072b50 0x7f4b8c0830f0 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.484 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.483+0000 7f4b92b8b700 1 --2- 192.168.123.102:0/3326396506 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b8c083630 0x7f4b8c1b3180 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:20.484 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.483+0000 7f4b92b8b700 1 -- 192.168.123.102:0/3326396506 >> 192.168.123.102:0/3326396506 conn(0x7f4b8c06dae0 msgr2=0x7f4b8c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:20.484 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.483+0000 7f4b92b8b700 1 -- 192.168.123.102:0/3326396506 shutdown_connections 2026-03-10T10:19:20.484 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.483+0000 7f4b92b8b700 1 -- 192.168.123.102:0/3326396506 wait complete. 
2026-03-10T10:19:20.491 INFO:tasks.workunit.client.1.vm05.stdout:0/145: getdents d1/d2/d9/d31 0
2026-03-10T10:19:20.500 INFO:tasks.workunit.client.1.vm05.stdout:3/193: dwrite dd/d15/d24/d2c/f3c [0,4194304] 0
2026-03-10T10:19:20.501 INFO:tasks.workunit.client.1.vm05.stdout:6/136: truncate dd/fe 4189384 0
2026-03-10T10:19:20.502 INFO:tasks.workunit.client.1.vm05.stdout:3/194: dread - dd/d39/f45 zero size
2026-03-10T10:19:20.503 INFO:tasks.workunit.client.1.vm05.stdout:6/137: chown dd/df/c31 20 1
2026-03-10T10:19:20.512 INFO:tasks.workunit.client.1.vm05.stdout:2/131: rename db/f11 to db/f24 0
2026-03-10T10:19:20.515 INFO:tasks.workunit.client.1.vm05.stdout:0/146: read d1/f11 [48428,65110] 0
2026-03-10T10:19:20.515 INFO:tasks.workunit.client.0.vm02.stdout:3/196: mkdir d1/d8/d44 0
2026-03-10T10:19:20.515 INFO:tasks.workunit.client.1.vm05.stdout:0/147: dread - d1/d2/d9/d31/d12/d20/f2e zero size
2026-03-10T10:19:20.518 INFO:tasks.workunit.client.1.vm05.stdout:3/195: symlink dd/d15/d1f/l46 0
2026-03-10T10:19:20.520 INFO:tasks.workunit.client.1.vm05.stdout:6/138: creat dd/df/f32 x:0 0 0
2026-03-10T10:19:20.523 INFO:tasks.workunit.client.0.vm02.stdout:3/197: chown d1/d8/d21/f3c 16 1
2026-03-10T10:19:20.527 INFO:tasks.workunit.client.1.vm05.stdout:6/139: write dd/df/f1e [5726430,123867] 0
2026-03-10T10:19:20.557 INFO:tasks.workunit.client.1.vm05.stdout:2/132: creat db/f25 x:0 0 0
2026-03-10T10:19:20.557 INFO:tasks.workunit.client.0.vm02.stdout:1/227: creat d4/f4b x:0 0 0
2026-03-10T10:19:20.558 INFO:tasks.workunit.client.1.vm05.stdout:0/148: creat d1/d2/d9/f32 x:0 0 0
2026-03-10T10:19:20.558 INFO:tasks.workunit.client.1.vm05.stdout:2/133: dwrite db/d12/f1a [0,4194304] 0
2026-03-10T10:19:20.558 INFO:tasks.workunit.client.1.vm05.stdout:2/134: chown db/l1b 275 1
2026-03-10T10:19:20.558 INFO:tasks.workunit.client.1.vm05.stdout:3/196: dwrite dd/d15/d24/d2c/f32 [0,4194304] 0
2026-03-10T10:19:20.558 INFO:tasks.workunit.client.1.vm05.stdout:6/140: dwrite dd/df/d12/f20 [0,4194304] 0
2026-03-10T10:19:20.560 INFO:tasks.workunit.client.0.vm02.stdout:4/290: getdents d1/d2/d1a/d49 0
2026-03-10T10:19:20.566 INFO:tasks.workunit.client.0.vm02.stdout:4/291: chown d1/d41 499491814 1
2026-03-10T10:19:20.566 INFO:tasks.workunit.client.0.vm02.stdout:4/292: read d1/d2/d1a/f4c [7777967,83758] 0
2026-03-10T10:19:20.566 INFO:tasks.workunit.client.0.vm02.stdout:4/293: chown d1/d52/d53 1 1
2026-03-10T10:19:20.566 INFO:tasks.workunit.client.0.vm02.stdout:4/294: chown d1/d52 3096360 1
2026-03-10T10:19:20.566 INFO:tasks.workunit.client.0.vm02.stdout:4/295: fsync d1/d10/db/f1e 0
2026-03-10T10:19:20.566 INFO:tasks.workunit.client.0.vm02.stdout:4/296: dwrite d1/d10/db/f20 [0,4194304] 0
2026-03-10T10:19:20.574 INFO:tasks.workunit.client.1.vm05.stdout:3/197: dread f9 [4194304,4194304] 0
2026-03-10T10:19:20.585 INFO:tasks.workunit.client.1.vm05.stdout:3/198: chown dd/d15/d24/d2c/f3f 33519575 1
2026-03-10T10:19:20.585 INFO:tasks.workunit.client.1.vm05.stdout:3/199: chown dd/d15/d24/d2c/f3f 15 1
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.576+0000 7f8528b24700 1 -- 192.168.123.102:0/3927789922 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8524072b50 msgr2=0x7f8524072f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.576+0000 7f8528b24700 1 --2- 192.168.123.102:0/3927789922 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8524072b50 0x7f8524072f70 secure :-1 s=READY pgs=309 cs=0 l=1 rev1=1 crypto rx=0x7f8514007780 tx=0x7f851400c050 comp rx=0 tx=0).stop
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.576+0000 7f8528b24700 1 -- 192.168.123.102:0/3927789922 shutdown_connections
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.576+0000 7f8528b24700 1 --2- 192.168.123.102:0/3927789922 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8524075a40 0x7f8524077ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.576+0000 7f8528b24700 1 --2- 192.168.123.102:0/3927789922 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8524072b50 0x7f8524072f70 unknown :-1 s=CLOSED pgs=309 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.576+0000 7f8528b24700 1 -- 192.168.123.102:0/3927789922 >> 192.168.123.102:0/3927789922 conn(0x7f852406dae0 msgr2=0x7f852406ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.576+0000 7f8528b24700 1 -- 192.168.123.102:0/3927789922 shutdown_connections
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.577+0000 7f8528b24700 1 -- 192.168.123.102:0/3927789922 wait complete.
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.577+0000 7f8528b24700 1 Processor -- start
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.577+0000 7f8528b24700 1 -- start start
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.577+0000 7f8528b24700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8524075a40 0x7f8524082f80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.577+0000 7f8528b24700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85240834c0 0x7f8524083940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.577+0000 7f8528b24700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f852412e720 con 0x7f8524075a40
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.577+0000 7f8528b24700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f852412e890 con 0x7f85240834c0
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.577+0000 7f8521d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85240834c0 0x7f8524083940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.577+0000 7f8521d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85240834c0 0x7f8524083940 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38772/0 (socket says 192.168.123.102:38772)
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.577+0000 7f8521d9b700 1 -- 192.168.123.102:0/1084035021 learned_addr learned my addr 192.168.123.102:0/1084035021 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.579+0000 7f852259c700 1 --2- 192.168.123.102:0/1084035021 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8524075a40 0x7f8524082f80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.579+0000 7f852259c700 1 -- 192.168.123.102:0/1084035021 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85240834c0 msgr2=0x7f8524083940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.579+0000 7f852259c700 1 --2- 192.168.123.102:0/1084035021 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85240834c0 0x7f8524083940 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.579+0000 7f852259c700 1 -- 192.168.123.102:0/1084035021 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8514007430 con 0x7f8524075a40
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.579+0000 7f852259c700 1 --2- 192.168.123.102:0/1084035021 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8524075a40 0x7f8524082f80 secure :-1 s=READY pgs=310 cs=0 l=1 rev1=1 crypto rx=0x7f851400cfd0 tx=0x7f8514008ae0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.580+0000 7f85137fe700 1 -- 192.168.123.102:0/1084035021 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8514022070 con 0x7f8524075a40
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.580+0000 7f8528b24700 1 -- 192.168.123.102:0/1084035021 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f852412eb10 con 0x7f8524075a40
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.580+0000 7f8528b24700 1 -- 192.168.123.102:0/1084035021 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f852412f000 con 0x7f8524075a40
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.582+0000 7f85137fe700 1 -- 192.168.123.102:0/1084035021 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8514007c40 con 0x7f8524075a40
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.582+0000 7f85137fe700 1 -- 192.168.123.102:0/1084035021 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f851400f040 con 0x7f8524075a40
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.582+0000 7f8528b24700 1 -- 192.168.123.102:0/1084035021 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8504005320 con 0x7f8524075a40
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.583+0000 7f85137fe700 1 -- 192.168.123.102:0/1084035021 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f851401a070 con 0x7f8524075a40
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.584+0000 7f85137fe700 1 --2- 192.168.123.102:0/1084035021 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f850c06c530 0x7f850c06e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.584+0000 7f85137fe700 1 -- 192.168.123.102:0/1084035021 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f851408c300 con 0x7f8524075a40
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.584+0000 7f8521d9b700 1 --2- 192.168.123.102:0/1084035021 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f850c06c530 0x7f850c06e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:19:20.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.585+0000 7f8521d9b700 1 --2- 192.168.123.102:0/1084035021 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f850c06c530 0x7f850c06e9f0 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7f851c00e3d0 tx=0x7f851c00b040 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:19:20.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.588+0000 7f85137fe700 1 -- 192.168.123.102:0/1084035021 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f851405a630 con 0x7f8524075a40
2026-03-10T10:19:20.600 INFO:tasks.workunit.client.0.vm02.stdout:1/228: creat d4/d1b/f4c x:0 0 0
2026-03-10T10:19:20.614 INFO:tasks.workunit.client.1.vm05.stdout:6/141: unlink dd/df/f18 0
2026-03-10T10:19:20.619 INFO:tasks.workunit.client.1.vm05.stdout:6/142: rmdir dd/df/d12/d24/d28 39
2026-03-10T10:19:20.620 INFO:tasks.workunit.client.1.vm05.stdout:6/143: fsync dd/f14 0
2026-03-10T10:19:20.625 INFO:tasks.workunit.client.1.vm05.stdout:6/144: mknod dd/df/d12/c33 0
2026-03-10T10:19:20.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.701+0000 7f8528b24700 1 -- 192.168.123.102:0/1084035021 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f8504000bf0 con 0x7f850c06c530
2026-03-10T10:19:20.708 INFO:tasks.workunit.client.0.vm02.stdout:4/297: dread d1/d2/f4 [0,4194304] 0
2026-03-10T10:19:20.713 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.712+0000 7f85137fe700 1 -- 192.168.123.102:0/1084035021 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3216 (secure 0 0 0) 0x7f8504000bf0 con 0x7f850c06c530
2026-03-10T10:19:20.713 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T10:19:20.713 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (3m) 2m ago 4m 22.5M - 0.25.0 c8568f914cd2 2b779430dfc4
2026-03-10T10:19:20.713 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (4m) 2m ago 4m 8154k - 18.2.1 5be31c24972a ff5c82740b39
2026-03-10T10:19:20.713 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (4m) 2m ago 4m 8166k - 18.2.1 5be31c24972a 456b3bd5efb4
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (4m) 2m ago 4m 7415k - 18.2.1 5be31c24972a 51802fb57170
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (4m) 2m ago 4m 7407k - 18.2.1 5be31c24972a f275982dc269
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (3m) 2m ago 4m 78.5M - 9.4.7 954c08fa6188 f310d22468b8
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (2m) 2m ago 2m 12.0M - 18.2.1 5be31c24972a e97c369450c8
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (2m) 2m ago 2m 14.3M - 18.2.1 5be31c24972a 56b76ae59bcb
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (2m) 2m ago 2m 12.2M - 18.2.1 5be31c24972a 02b882918ab0
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (2m) 2m ago 2m 16.6M - 18.2.1 5be31c24972a 0127a771956a
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:9283,8765,8443 running (5m) 2m ago 5m 502M - 18.2.1 5be31c24972a 8bea583521d3
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (3m) 2m ago 3m 450M - 18.2.1 5be31c24972a ff545ad0664a
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (5m) 2m ago 5m 52.7M 2048M 18.2.1 5be31c24972a ab92d831cc1d
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (3m) 2m ago 3m 45.0M 2048M 18.2.1 5be31c24972a cea7d23f93a6
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (4m) 2m ago 4m 16.0M - 1.5.0 0da6a335fe13 745b21ae6768
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (3m) 2m ago 3m 14.6M - 1.5.0 0da6a335fe13 2453c8484ba5
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (3m) 2m ago 3m 45.5M 4096M 18.2.1 5be31c24972a 9d7f135a3f3b
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (3m) 2m ago 3m 46.2M 4096M 18.2.1 5be31c24972a 1b0a42d8ac01
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (3m) 2m ago 3m 45.2M 4096M 18.2.1 5be31c24972a 567f579c058e
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (3m) 2m ago 3m 43.6M 4096M 18.2.1 5be31c24972a 80ac26035893
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (3m) 2m ago 3m 43.8M 4096M 18.2.1 5be31c24972a c8a0a41b6654
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (2m) 2m ago 2m 43.6M 4096M 18.2.1 5be31c24972a e9be055e12ba
2026-03-10T10:19:20.714 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (3m) 2m ago 4m 34.3M - 2.43.0 a07b618ecd1d a607fd039cb6
2026-03-10T10:19:20.716 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.715+0000 7f8528b24700 1 -- 192.168.123.102:0/1084035021 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f850c06c530 msgr2=0x7f850c06e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:19:20.716 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.715+0000 7f8528b24700 1 --2- 192.168.123.102:0/1084035021 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f850c06c530 0x7f850c06e9f0 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7f851c00e3d0 tx=0x7f851c00b040 comp rx=0 tx=0).stop
2026-03-10T10:19:20.716 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.715+0000 7f8528b24700 1 -- 192.168.123.102:0/1084035021 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8524075a40 msgr2=0x7f8524082f80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:19:20.716 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.715+0000 7f8528b24700 1 --2- 192.168.123.102:0/1084035021 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8524075a40 0x7f8524082f80 secure :-1 s=READY pgs=310 cs=0 l=1 rev1=1 crypto rx=0x7f851400cfd0 tx=0x7f8514008ae0 comp rx=0 tx=0).stop
2026-03-10T10:19:20.716 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.715+0000 7f8528b24700 1 -- 192.168.123.102:0/1084035021 shutdown_connections
2026-03-10T10:19:20.716 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.715+0000 7f8528b24700 1 --2- 192.168.123.102:0/1084035021 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8524075a40 0x7f8524082f80 unknown :-1 s=CLOSED pgs=310 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:20.716 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.715+0000 7f8528b24700 1 --2- 192.168.123.102:0/1084035021 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f850c06c530 0x7f850c06e9f0 unknown :-1 s=CLOSED pgs=144 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:20.716 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.715+0000 7f8528b24700 1 --2- 192.168.123.102:0/1084035021 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85240834c0 0x7f8524083940 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:20.716 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.715+0000 7f8528b24700 1 -- 192.168.123.102:0/1084035021 >> 192.168.123.102:0/1084035021 conn(0x7f852406dae0 msgr2=0x7f852406ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:19:20.716 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.716+0000 7f8528b24700 1 -- 192.168.123.102:0/1084035021 shutdown_connections
2026-03-10T10:19:20.716 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.716+0000 7f8528b24700 1 -- 192.168.123.102:0/1084035021 wait complete.
2026-03-10T10:19:20.718 INFO:tasks.workunit.client.0.vm02.stdout:4/298: link d1/d2/f34 d1/d52/d53/f5b 0
2026-03-10T10:19:20.718 INFO:tasks.workunit.client.0.vm02.stdout:4/299: chown d1/d52/f5a 252772349 1
2026-03-10T10:19:20.718 INFO:tasks.workunit.client.0.vm02.stdout:4/300: chown d1/d2/f4 45210572 1
2026-03-10T10:19:20.719 INFO:tasks.workunit.client.0.vm02.stdout:4/301: write d1/d32/f46 [1299442,103789] 0
2026-03-10T10:19:20.761 INFO:tasks.workunit.client.1.vm05.stdout:2/135: sync
2026-03-10T10:19:20.761 INFO:tasks.workunit.client.1.vm05.stdout:2/136: fsync f1 0
2026-03-10T10:19:20.761 INFO:tasks.workunit.client.1.vm05.stdout:3/200: sync
2026-03-10T10:19:20.768 INFO:tasks.workunit.client.1.vm05.stdout:2/137: dwrite f7 [4194304,4194304] 0
2026-03-10T10:19:20.770 INFO:tasks.workunit.client.1.vm05.stdout:2/138: readlink db/d1c/l21 0
2026-03-10T10:19:20.774 INFO:tasks.workunit.client.1.vm05.stdout:3/201: dwrite f3 [0,4194304] 0
2026-03-10T10:19:20.781 INFO:tasks.workunit.client.0.vm02.stdout:6/176: fdatasync d0/d8/d29/d2f/f33 0
2026-03-10T10:19:20.785 INFO:tasks.workunit.client.0.vm02.stdout:6/177: creat d0/f3c x:0 0 0
2026-03-10T10:19:20.789 INFO:tasks.workunit.client.0.vm02.stdout:6/178: dwrite d0/d7/f26 [0,4194304] 0
2026-03-10T10:19:20.816 INFO:tasks.workunit.client.0.vm02.stdout:6/179: dread d0/f20 [0,4194304] 0
2026-03-10T10:19:20.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.816+0000 7fa2cf59e700 1 -- 192.168.123.102:0/1940793812 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2d0072b20 msgr2=0x7fa2d0072f40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:19:20.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.816+0000 7fa2cf59e700 1 --2- 192.168.123.102:0/1940793812 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2d0072b20 0x7fa2d0072f40 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7fa2c0008790 tx=0x7fa2c0008aa0 comp rx=0 tx=0).stop
2026-03-10T10:19:20.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.816+0000 7fa2cf59e700 1 -- 192.168.123.102:0/1940793812 shutdown_connections
2026-03-10T10:19:20.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.816+0000 7fa2cf59e700 1 --2- 192.168.123.102:0/1940793812 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa2d0075a10 0x7fa2d0077ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:20.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.816+0000 7fa2cf59e700 1 --2- 192.168.123.102:0/1940793812 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2d0072b20 0x7fa2d0072f40 secure :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7fa2c0008790 tx=0x7fa2c0008aa0 comp rx=0 tx=0).stop
2026-03-10T10:19:20.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.816+0000 7fa2cf59e700 1 -- 192.168.123.102:0/1940793812 >> 192.168.123.102:0/1940793812 conn(0x7fa2d006daa0 msgr2=0x7fa2d006ff00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:19:20.817 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.816+0000 7fa2cf59e700 1 -- 192.168.123.102:0/1940793812 shutdown_connections
2026-03-10T10:19:20.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.817+0000 7fa2cf59e700 1 -- 192.168.123.102:0/1940793812 wait complete.
2026-03-10T10:19:20.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.817+0000 7fa2cf59e700 1 Processor -- start
2026-03-10T10:19:20.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.817+0000 7fa2cf59e700 1 -- start start
2026-03-10T10:19:20.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.817+0000 7fa2cf59e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa2d0075a10 0x7fa2d01b0ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:19:20.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.817+0000 7fa2cf59e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2d0083bf0 0x7fa2d01b3010 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:19:20.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.817+0000 7fa2cf59e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa2d01b3650 con 0x7fa2d0075a10
2026-03-10T10:19:20.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.817+0000 7fa2cf59e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa2d01b37c0 con 0x7fa2d0083bf0
2026-03-10T10:19:20.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.819+0000 7fa2cdd9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2d0083bf0 0x7fa2d01b3010 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:19:20.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.819+0000 7fa2cdd9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2d0083bf0 0x7fa2d01b3010 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38782/0 (socket says 192.168.123.102:38782)
2026-03-10T10:19:20.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.819+0000 7fa2cdd9b700 1 -- 192.168.123.102:0/780762303 learned_addr learned my addr 192.168.123.102:0/780762303 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:19:20.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.819+0000 7fa2ce59c700 1 --2- 192.168.123.102:0/780762303 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa2d0075a10 0x7fa2d01b0ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:19:20.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.819+0000 7fa2cdd9b700 1 -- 192.168.123.102:0/780762303 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa2d0075a10 msgr2=0x7fa2d01b0ac0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:19:20.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.819+0000 7fa2cdd9b700 1 --2- 192.168.123.102:0/780762303 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa2d0075a10 0x7fa2d01b0ac0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:20.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.819+0000 7fa2cdd9b700 1 -- 192.168.123.102:0/780762303 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa2c0008440 con 0x7fa2d0083bf0
2026-03-10T10:19:20.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.821+0000 7fa2cdd9b700 1 --2- 192.168.123.102:0/780762303 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2d0083bf0 0x7fa2d01b3010 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7fa2c80060b0 tx=0x7fa2c8008950 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:19:20.825 INFO:tasks.workunit.client.0.vm02.stdout:6/180: link d0/d8/d9/f13 d0/d8/d9/d31/f3d 0
2026-03-10T10:19:20.827 INFO:tasks.workunit.client.0.vm02.stdout:6/181: symlink d0/d8/d9/d31/l3e 0
2026-03-10T10:19:20.831 INFO:tasks.workunit.client.0.vm02.stdout:6/182: write d0/d7/f26 [3460475,113646] 0
2026-03-10T10:19:20.835 INFO:tasks.workunit.client.0.vm02.stdout:6/183: mknod d0/d8/c3f 0
2026-03-10T10:19:20.836 INFO:tasks.workunit.client.0.vm02.stdout:6/184: write d0/d8/f2a [3504686,24897] 0
2026-03-10T10:19:20.837 INFO:tasks.workunit.client.0.vm02.stdout:6/185: read d0/d8/d9/f30 [696015,43181] 0
2026-03-10T10:19:20.844 INFO:tasks.workunit.client.1.vm05.stdout:5/182: truncate da/db/f1d 437748 0
2026-03-10T10:19:20.846 INFO:tasks.workunit.client.0.vm02.stdout:5/321: dwrite d1/db/d11/d16/f19 [4194304,4194304] 0
2026-03-10T10:19:20.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.847+0000 7fa2bf7fe700 1 -- 192.168.123.102:0/780762303 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa2c800d730 con 0x7fa2d0083bf0
2026-03-10T10:19:20.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.847+0000 7fa2cf59e700 1 -- 192.168.123.102:0/780762303 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa2d01b3aa0 con 0x7fa2d0083bf0
2026-03-10T10:19:20.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.847+0000 7fa2cf59e700 1 -- 192.168.123.102:0/780762303 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa2d01b3fc0 con 0x7fa2d0083bf0
2026-03-10T10:19:20.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.848+0000 7fa2bf7fe700 1 -- 192.168.123.102:0/780762303 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa2c8004bb0 con 0x7fa2d0083bf0
2026-03-10T10:19:20.849 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.848+0000 7fa2bf7fe700 1 -- 192.168.123.102:0/780762303 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa2c80165d0 con 0x7fa2d0083bf0
2026-03-10T10:19:20.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.849+0000 7fa2cf59e700 1 -- 192.168.123.102:0/780762303 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa2d007d150 con 0x7fa2d0083bf0
2026-03-10T10:19:20.851 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.850+0000 7fa2bf7fe700 1 -- 192.168.123.102:0/780762303 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7fa2c8016840 con 0x7fa2d0083bf0
2026-03-10T10:19:20.851 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.851+0000 7fa2bf7fe700 1 --2- 192.168.123.102:0/780762303 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa2b806c530 0x7fa2b806e9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:19:20.851 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.851+0000 7fa2bf7fe700 1 -- 192.168.123.102:0/780762303 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7fa2c8025080 con 0x7fa2d0083bf0
2026-03-10T10:19:20.852 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.851+0000 7fa2ce59c700 1 --2- 192.168.123.102:0/780762303 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa2b806c530 0x7fa2b806e9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:19:20.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.852+0000 7fa2ce59c700 1 --2- 192.168.123.102:0/780762303 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa2b806c530 0x7fa2b806e9f0 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7fa2c000a040 tx=0x7fa2c000b340 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:19:20.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:20.852+0000 7fa2bf7fe700 1 -- 192.168.123.102:0/780762303 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7fa2c8055ea0 con 0x7fa2d0083bf0
2026-03-10T10:19:20.857 INFO:tasks.workunit.client.0.vm02.stdout:5/322: dwrite d1/f12 [0,4194304] 0
2026-03-10T10:19:20.864 INFO:tasks.workunit.client.0.vm02.stdout:5/323: dwrite d1/db/d11/d13/d28/d37/d3d/f49 [0,4194304] 0
2026-03-10T10:19:20.879 INFO:tasks.workunit.client.1.vm05.stdout:5/183: mknod da/db/d26/c37 0
2026-03-10T10:19:20.887 INFO:tasks.workunit.client.1.vm05.stdout:5/184: mkdir da/db/d26/d35/d38 0
2026-03-10T10:19:20.947 INFO:tasks.workunit.client.0.vm02.stdout:0/235: write d9/d18/d1a/d22/f3f [119540,49715] 0
2026-03-10T10:19:20.972 INFO:tasks.workunit.client.1.vm05.stdout:9/141: dwrite d0/f7 [0,4194304] 0
2026-03-10T10:19:20.974 INFO:tasks.workunit.client.0.vm02.stdout:9/156: dwrite da/d10/f19 [0,4194304] 0
2026-03-10T10:19:20.984 INFO:tasks.workunit.client.1.vm05.stdout:9/142: creat d0/df/d11/f2c x:0 0 0
2026-03-10T10:19:20.985 INFO:tasks.workunit.client.0.vm02.stdout:9/157: creat da/f30 x:0 0 0
2026-03-10T10:19:20.986 INFO:tasks.workunit.client.0.vm02.stdout:9/158: chown da/d10/f27 2 1
2026-03-10T10:19:20.988 INFO:tasks.workunit.client.1.vm05.stdout:9/143: rmdir d0/df 39
2026-03-10T10:19:20.998 INFO:tasks.workunit.client.1.vm05.stdout:9/144: creat d0/df/d11/f2d x:0 0 0
2026-03-10T10:19:21.001 INFO:tasks.workunit.client.1.vm05.stdout:9/145: rename d0/df/d11/f12 to d0/df/d11/f2e 0
2026-03-10T10:19:21.010 INFO:tasks.workunit.client.1.vm05.stdout:9/146: dwrite d0/f1e [0,4194304] 0
2026-03-10T10:19:21.019 INFO:tasks.workunit.client.0.vm02.stdout:1/229: link d4/da/c31 d4/da/c4d 0
2026-03-10T10:19:21.019 INFO:tasks.workunit.client.0.vm02.stdout:1/230: stat d4/da/d1a/d22/f32 0
2026-03-10T10:19:21.021 INFO:tasks.workunit.client.0.vm02.stdout:1/231: unlink d4/d1b/l2a 0
2026-03-10T10:19:21.039 INFO:tasks.workunit.client.0.vm02.stdout:0/236: rmdir d9/d42 0
2026-03-10T10:19:21.040 INFO:tasks.workunit.client.0.vm02.stdout:0/237: fdatasync f2 0
2026-03-10T10:19:21.043 INFO:tasks.workunit.client.0.vm02.stdout:2/289: rename d0/d1a/f26 to d0/d10/f5f 0
2026-03-10T10:19:21.046 INFO:tasks.workunit.client.1.vm05.stdout:4/145: rename d1/d3/d9 to d1/d31 0
2026-03-10T10:19:21.047 INFO:tasks.workunit.client.1.vm05.stdout:4/146: write d1/d31/dc/f2e [757096,20908] 0
2026-03-10T10:19:21.047 INFO:tasks.workunit.client.0.vm02.stdout:2/290: link d0/d10/f1f d0/d1a/d49/d5e/f60 0
2026-03-10T10:19:21.048 INFO:tasks.workunit.client.1.vm05.stdout:7/171: rename d5/l16 to d5/d1d/d29/l2e 0
2026-03-10T10:19:21.048 INFO:tasks.workunit.client.1.vm05.stdout:4/147: chown f0 942138 1
2026-03-10T10:19:21.048 INFO:tasks.workunit.client.0.vm02.stdout:2/291: write d0/d1a/d49/f54 [22818,113884] 0
2026-03-10T10:19:21.049 INFO:tasks.workunit.client.1.vm05.stdout:7/172: write d5/dd/f1a [770093,29900] 0
2026-03-10T10:19:21.051 INFO:tasks.workunit.client.1.vm05.stdout:0/149: rename d1/d7/f2a to d1/d2/d9/d31/d13/d2f/f33 0
2026-03-10T10:19:21.051 INFO:tasks.workunit.client.1.vm05.stdout:7/173: dread d5/fe [0,4194304] 0
2026-03-10T10:19:21.051 INFO:tasks.workunit.client.0.vm02.stdout:2/292: mknod d0/d1a/d24/c61 0
2026-03-10T10:19:21.054 INFO:tasks.workunit.client.1.vm05.stdout:7/174: chown d5/l9 60 1
2026-03-10T10:19:21.054 INFO:tasks.workunit.client.1.vm05.stdout:4/148: unlink d1/d31/l2b 0
2026-03-10T10:19:21.056 INFO:tasks.workunit.client.1.vm05.stdout:6/145: rename la to dd/df/d12/d24/d28/l34 0
2026-03-10T10:19:21.060 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.059+0000 7fa2cf59e700 1 -- 192.168.123.102:0/780762303 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fa2d004ea90 con 0x7fa2d0083bf0
2026-03-10T10:19:21.061 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.061+0000 7fa2bf7fe700 1 -- 192.168.123.102:0/780762303 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+558 (secure 0 0 0) 0x7fa2c80594c0 con 0x7fa2d0083bf0
2026-03-10T10:19:21.062 INFO:tasks.workunit.client.1.vm05.stdout:7/175: dwrite d5/d26/f2c [0,4194304] 0
2026-03-10T10:19:21.078 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T10:19:21.078 INFO:teuthology.orchestra.run.vm02.stdout: "mon": {
2026-03-10T10:19:21.078 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 2
2026-03-10T10:19:21.078 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T10:19:21.078 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": {
2026-03-10T10:19:21.078 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 2
2026-03-10T10:19:21.078 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T10:19:21.078 INFO:teuthology.orchestra.run.vm02.stdout: "osd": {
2026-03-10T10:19:21.078 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 6
2026-03-10T10:19:21.078 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T10:19:21.078 INFO:teuthology.orchestra.run.vm02.stdout: "mds": {
2026-03-10T10:19:21.078 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4
2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stdout: "overall": {
2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 14
2026-03-10T10:19:21.079
INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.064+0000 7fa2cf59e700 1 -- 192.168.123.102:0/780762303 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa2b806c530 msgr2=0x7fa2b806e9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.064+0000 7fa2cf59e700 1 --2- 192.168.123.102:0/780762303 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa2b806c530 0x7fa2b806e9f0 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7fa2c000a040 tx=0x7fa2c000b340 comp rx=0 tx=0).stop 2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.065+0000 7fa2cf59e700 1 -- 192.168.123.102:0/780762303 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2d0083bf0 msgr2=0x7fa2d01b3010 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.065+0000 7fa2cf59e700 1 --2- 192.168.123.102:0/780762303 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2d0083bf0 0x7fa2d01b3010 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7fa2c80060b0 tx=0x7fa2c8008950 comp rx=0 tx=0).stop 2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.065+0000 7fa2cf59e700 1 -- 192.168.123.102:0/780762303 shutdown_connections 2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.065+0000 7fa2cf59e700 1 --2- 192.168.123.102:0/780762303 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa2d0075a10 0x7fa2d01b0ac0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.065+0000 7fa2cf59e700 1 --2- 
192.168.123.102:0/780762303 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7fa2b806c530 0x7fa2b806e9f0 unknown :-1 s=CLOSED pgs=145 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.065+0000 7fa2cf59e700 1 --2- 192.168.123.102:0/780762303 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2d0083bf0 0x7fa2d01b3010 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.065+0000 7fa2cf59e700 1 -- 192.168.123.102:0/780762303 >> 192.168.123.102:0/780762303 conn(0x7fa2d006daa0 msgr2=0x7fa2d006ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.065+0000 7fa2cf59e700 1 -- 192.168.123.102:0/780762303 shutdown_connections 2026-03-10T10:19:21.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.066+0000 7fa2cf59e700 1 -- 192.168.123.102:0/780762303 wait complete. 
2026-03-10T10:19:21.080 INFO:tasks.workunit.client.1.vm05.stdout:0/150: getdents d1/d2/d9/d31/d13/d15 0 2026-03-10T10:19:21.080 INFO:tasks.workunit.client.1.vm05.stdout:8/123: truncate d7/f1c 1004297 0 2026-03-10T10:19:21.080 INFO:tasks.workunit.client.1.vm05.stdout:8/124: read - d7/f21 zero size 2026-03-10T10:19:21.080 INFO:tasks.workunit.client.1.vm05.stdout:1/103: dwrite d4/dd/f15 [4194304,4194304] 0 2026-03-10T10:19:21.080 INFO:tasks.workunit.client.1.vm05.stdout:2/139: rename f7 to db/f26 0 2026-03-10T10:19:21.080 INFO:tasks.workunit.client.1.vm05.stdout:7/176: creat d5/dd/f2f x:0 0 0 2026-03-10T10:19:21.080 INFO:tasks.workunit.client.1.vm05.stdout:1/104: symlink d4/dd/l1e 0 2026-03-10T10:19:21.080 INFO:tasks.workunit.client.1.vm05.stdout:3/202: rename dd/d15/c16 to dd/d15/d24/c47 0 2026-03-10T10:19:21.080 INFO:tasks.workunit.client.1.vm05.stdout:8/125: getdents d7/d14/d15 0 2026-03-10T10:19:21.080 INFO:tasks.workunit.client.1.vm05.stdout:8/126: truncate d7/f1e 188900 0 2026-03-10T10:19:21.080 INFO:tasks.workunit.client.1.vm05.stdout:1/105: creat d4/dd/f1f x:0 0 0 2026-03-10T10:19:21.080 INFO:tasks.workunit.client.1.vm05.stdout:3/203: creat dd/d15/d24/d2c/d3b/f48 x:0 0 0 2026-03-10T10:19:21.081 INFO:tasks.workunit.client.1.vm05.stdout:1/106: chown d4/dd/l1e 959689 1 2026-03-10T10:19:21.082 INFO:tasks.workunit.client.1.vm05.stdout:1/107: write d4/f18 [4192198,77399] 0 2026-03-10T10:19:21.084 INFO:tasks.workunit.client.1.vm05.stdout:2/140: dwrite db/f14 [0,4194304] 0 2026-03-10T10:19:21.086 INFO:tasks.workunit.client.1.vm05.stdout:6/146: dread dd/df/f22 [0,4194304] 0 2026-03-10T10:19:21.089 INFO:tasks.workunit.client.1.vm05.stdout:3/204: dread dd/d15/d24/d2c/f3c [0,4194304] 0 2026-03-10T10:19:21.090 INFO:tasks.workunit.client.1.vm05.stdout:3/205: readlink dd/lf 0 2026-03-10T10:19:21.103 INFO:tasks.workunit.client.1.vm05.stdout:2/141: rename db/d1c/c20 to db/d1c/c27 0 2026-03-10T10:19:21.109 INFO:tasks.workunit.client.1.vm05.stdout:3/206: dwrite f2 
[4194304,4194304] 0 2026-03-10T10:19:21.109 INFO:tasks.workunit.client.0.vm02.stdout:7/213: truncate d1/fd 1994289 0 2026-03-10T10:19:21.110 INFO:tasks.workunit.client.1.vm05.stdout:1/108: mkdir d4/d20 0 2026-03-10T10:19:21.111 INFO:tasks.workunit.client.0.vm02.stdout:7/214: truncate d1/f32 493048 0 2026-03-10T10:19:21.111 INFO:tasks.workunit.client.0.vm02.stdout:7/215: fsync d1/dc/d16/f1f 0 2026-03-10T10:19:21.112 INFO:tasks.workunit.client.1.vm05.stdout:1/109: dread d4/f18 [0,4194304] 0 2026-03-10T10:19:21.114 INFO:tasks.workunit.client.1.vm05.stdout:2/142: rmdir db/d12 39 2026-03-10T10:19:21.115 INFO:tasks.workunit.client.1.vm05.stdout:1/110: dwrite d4/dd/f15 [0,4194304] 0 2026-03-10T10:19:21.122 INFO:tasks.workunit.client.1.vm05.stdout:3/207: symlink dd/d15/d24/d2c/d3b/l49 0 2026-03-10T10:19:21.136 INFO:tasks.workunit.client.0.vm02.stdout:7/216: creat d1/d1b/f43 x:0 0 0 2026-03-10T10:19:21.136 INFO:tasks.workunit.client.0.vm02.stdout:7/217: stat d1/dc/d16/d28/d2d/f2f 0 2026-03-10T10:19:21.141 INFO:tasks.workunit.client.0.vm02.stdout:6/186: getdents d0 0 2026-03-10T10:19:21.142 INFO:tasks.workunit.client.0.vm02.stdout:7/218: mkdir d1/dc/d44 0 2026-03-10T10:19:21.151 INFO:tasks.workunit.client.1.vm05.stdout:1/111: creat d4/dd/f21 x:0 0 0 2026-03-10T10:19:21.152 INFO:tasks.workunit.client.0.vm02.stdout:6/187: creat d0/d3a/f40 x:0 0 0 2026-03-10T10:19:21.152 INFO:tasks.workunit.client.0.vm02.stdout:7/219: dwrite d1/f34 [0,4194304] 0 2026-03-10T10:19:21.169 INFO:tasks.workunit.client.1.vm05.stdout:2/143: mkdir db/d28 0 2026-03-10T10:19:21.174 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:20 vm02.local ceph-mon[50200]: from='client.14648 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:21.174 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:20 vm02.local ceph-mon[50200]: from='client.24427 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: 
dispatch 2026-03-10T10:19:21.174 INFO:tasks.workunit.client.0.vm02.stdout:7/220: symlink d1/dc/d16/d28/d2d/l45 0 2026-03-10T10:19:21.174 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.172+0000 7f43a29ce700 1 -- 192.168.123.102:0/1302619098 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f439c107d90 msgr2=0x7f439c10a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.175 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.172+0000 7f43a29ce700 1 --2- 192.168.123.102:0/1302619098 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f439c107d90 0x7f439c10a1c0 secure :-1 s=READY pgs=311 cs=0 l=1 rev1=1 crypto rx=0x7f4398009b00 tx=0x7f4398009e10 comp rx=0 tx=0).stop 2026-03-10T10:19:21.175 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.175+0000 7f43a29ce700 1 -- 192.168.123.102:0/1302619098 shutdown_connections 2026-03-10T10:19:21.175 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.175+0000 7f43a29ce700 1 --2- 192.168.123.102:0/1302619098 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f439c10a700 0x7f439c10cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.175 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.175+0000 7f43a29ce700 1 --2- 192.168.123.102:0/1302619098 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f439c107d90 0x7f439c10a1c0 unknown :-1 s=CLOSED pgs=311 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.175 INFO:tasks.workunit.client.0.vm02.stdout:6/188: symlink d0/d8/d9/l41 0 2026-03-10T10:19:21.175 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.175+0000 7f43a29ce700 1 -- 192.168.123.102:0/1302619098 >> 192.168.123.102:0/1302619098 conn(0x7f439c06daa0 msgr2=0x7f439c06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:21.176 INFO:tasks.workunit.client.0.vm02.stdout:6/189: chown d0/d8/d9/f30 10 1 
2026-03-10T10:19:21.176 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.176+0000 7f43a29ce700 1 -- 192.168.123.102:0/1302619098 shutdown_connections 2026-03-10T10:19:21.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.176+0000 7f43a29ce700 1 -- 192.168.123.102:0/1302619098 wait complete. 2026-03-10T10:19:21.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.176+0000 7f43a29ce700 1 Processor -- start 2026-03-10T10:19:21.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.177+0000 7f43a29ce700 1 -- start start 2026-03-10T10:19:21.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.177+0000 7f43a29ce700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f439c107d90 0x7f439c116cb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:21.177 INFO:tasks.workunit.client.1.vm05.stdout:3/208: symlink dd/l4a 0 2026-03-10T10:19:21.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.177+0000 7f43a29ce700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f439c10a700 0x7f439c1171f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:21.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.177+0000 7f43a29ce700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f439c117810 con 0x7f439c107d90 2026-03-10T10:19:21.178 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.177+0000 7f43a29ce700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f439c1b3320 con 0x7f439c10a700 2026-03-10T10:19:21.178 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.177+0000 7f43a11cb700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f439c10a700 0x7f439c1171f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:21.178 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.178+0000 7f43a11cb700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f439c10a700 0x7f439c1171f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38808/0 (socket says 192.168.123.102:38808) 2026-03-10T10:19:21.178 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.178+0000 7f43a11cb700 1 -- 192.168.123.102:0/3571735718 learned_addr learned my addr 192.168.123.102:0/3571735718 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:19:21.178 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.178+0000 7f43a19cc700 1 --2- 192.168.123.102:0/3571735718 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f439c107d90 0x7f439c116cb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:21.178 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.178+0000 7f43a11cb700 1 -- 192.168.123.102:0/3571735718 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f439c107d90 msgr2=0x7f439c116cb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.178+0000 7f43a11cb700 1 --2- 192.168.123.102:0/3571735718 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f439c107d90 0x7f439c116cb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.178+0000 7f43a11cb700 1 -- 192.168.123.102:0/3571735718 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f43980097e0 con 0x7f439c10a700 
2026-03-10T10:19:21.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.179+0000 7f43a11cb700 1 --2- 192.168.123.102:0/3571735718 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f439c10a700 0x7f439c1171f0 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f439400c390 tx=0x7f439400c6a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:21.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.179+0000 7f4392ffd700 1 -- 192.168.123.102:0/3571735718 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f439400e030 con 0x7f439c10a700 2026-03-10T10:19:21.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.179+0000 7f43a29ce700 1 -- 192.168.123.102:0/3571735718 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f439c1b3520 con 0x7f439c10a700 2026-03-10T10:19:21.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.179+0000 7f43a29ce700 1 -- 192.168.123.102:0/3571735718 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f439c1b3a20 con 0x7f439c10a700 2026-03-10T10:19:21.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.180+0000 7f4392ffd700 1 -- 192.168.123.102:0/3571735718 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f439400f040 con 0x7f439c10a700 2026-03-10T10:19:21.181 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.181+0000 7f4392ffd700 1 -- 192.168.123.102:0/3571735718 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4394014650 con 0x7f439c10a700 2026-03-10T10:19:21.182 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.181+0000 7f43a29ce700 1 -- 192.168.123.102:0/3571735718 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f4380005320 con 0x7f439c10a700 2026-03-10T10:19:21.182 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.182+0000 7f4392ffd700 1 -- 192.168.123.102:0/3571735718 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f4394009110 con 0x7f439c10a700 2026-03-10T10:19:21.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.183+0000 7f4392ffd700 1 --2- 192.168.123.102:0/3571735718 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f438806c330 0x7f438806e7f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:21.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.183+0000 7f43a19cc700 1 --2- 192.168.123.102:0/3571735718 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f438806c330 0x7f438806e7f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:21.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.183+0000 7f4392ffd700 1 -- 192.168.123.102:0/3571735718 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f439408bcb0 con 0x7f439c10a700 2026-03-10T10:19:21.184 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.183+0000 7f43a19cc700 1 --2- 192.168.123.102:0/3571735718 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f438806c330 0x7f438806e7f0 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7f439800a010 tx=0x7f4398009fa0 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:21.185 INFO:tasks.workunit.client.1.vm05.stdout:1/112: symlink d4/l22 0 2026-03-10T10:19:21.185 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.184+0000 7f4392ffd700 1 -- 192.168.123.102:0/3571735718 <== mon.1 v2:192.168.123.105:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f4394059d50 con 0x7f439c10a700 2026-03-10T10:19:21.185 INFO:tasks.workunit.client.1.vm05.stdout:1/113: dread - d4/dd/f1f zero size 2026-03-10T10:19:21.189 INFO:tasks.workunit.client.1.vm05.stdout:3/209: mknod dd/d15/d24/d2c/c4b 0 2026-03-10T10:19:21.198 INFO:tasks.workunit.client.0.vm02.stdout:6/190: link d0/d8/d9/c1b d0/d8/d29/d2f/c42 0 2026-03-10T10:19:21.199 INFO:tasks.workunit.client.0.vm02.stdout:6/191: creat d0/f43 x:0 0 0 2026-03-10T10:19:21.199 INFO:tasks.workunit.client.1.vm05.stdout:3/210: truncate dd/d15/d24/d2c/d3b/f48 370280 0 2026-03-10T10:19:21.199 INFO:tasks.workunit.client.1.vm05.stdout:2/144: link db/d12/f1d db/d12/f29 0 2026-03-10T10:19:21.199 INFO:tasks.workunit.client.1.vm05.stdout:3/211: rmdir dd/d20 39 2026-03-10T10:19:21.201 INFO:tasks.workunit.client.1.vm05.stdout:2/145: rename db/cd to db/d1c/c2a 0 2026-03-10T10:19:21.203 INFO:tasks.workunit.client.0.vm02.stdout:6/192: write d0/d8/f27 [1059326,35783] 0 2026-03-10T10:19:21.205 INFO:tasks.workunit.client.1.vm05.stdout:2/146: mknod db/d12/c2b 0 2026-03-10T10:19:21.206 INFO:tasks.workunit.client.1.vm05.stdout:2/147: dread - db/d1c/f1f zero size 2026-03-10T10:19:21.207 INFO:tasks.workunit.client.1.vm05.stdout:3/212: mkdir dd/d15/d4c 0 2026-03-10T10:19:21.209 INFO:tasks.workunit.client.0.vm02.stdout:6/193: dread - d0/d3a/f40 zero size 2026-03-10T10:19:21.212 INFO:tasks.workunit.client.1.vm05.stdout:2/148: mknod db/d1c/c2c 0 2026-03-10T10:19:21.212 INFO:tasks.workunit.client.1.vm05.stdout:3/213: dwrite dd/d15/d1f/f2b [0,4194304] 0 2026-03-10T10:19:21.214 INFO:tasks.workunit.client.1.vm05.stdout:3/214: stat dd/d15/d24/d2c/d3b/l49 0 2026-03-10T10:19:21.214 INFO:tasks.workunit.client.1.vm05.stdout:3/215: truncate dd/f41 60752 0 2026-03-10T10:19:21.231 INFO:tasks.workunit.client.1.vm05.stdout:9/147: sync 2026-03-10T10:19:21.238 INFO:tasks.workunit.client.1.vm05.stdout:9/148: fdatasync d0/d1/d16/f18 0 
2026-03-10T10:19:21.238 INFO:tasks.workunit.client.1.vm05.stdout:3/216: symlink dd/d15/d24/l4d 0 2026-03-10T10:19:21.241 INFO:tasks.workunit.client.1.vm05.stdout:2/149: mkdir db/d2d 0 2026-03-10T10:19:21.245 INFO:tasks.workunit.client.1.vm05.stdout:8/127: sync 2026-03-10T10:19:21.245 INFO:tasks.workunit.client.1.vm05.stdout:0/151: sync 2026-03-10T10:19:21.245 INFO:tasks.workunit.client.1.vm05.stdout:0/152: fsync d1/d7/f27 0 2026-03-10T10:19:21.263 INFO:tasks.workunit.client.1.vm05.stdout:8/128: unlink d7/ca 0 2026-03-10T10:19:21.263 INFO:tasks.workunit.client.1.vm05.stdout:0/153: rmdir d1/d2/d9/d31/d13/d17 39 2026-03-10T10:19:21.263 INFO:tasks.workunit.client.1.vm05.stdout:8/129: write d7/d14/d15/f1f [191749,2069] 0 2026-03-10T10:19:21.266 INFO:tasks.workunit.client.1.vm05.stdout:0/154: write d1/d7/f4 [1550166,43124] 0 2026-03-10T10:19:21.273 INFO:tasks.workunit.client.0.vm02.stdout:6/194: dread d0/d8/d9/f14 [0,4194304] 0 2026-03-10T10:19:21.273 INFO:tasks.workunit.client.1.vm05.stdout:0/155: dread d1/d7/f24 [0,4194304] 0 2026-03-10T10:19:21.276 INFO:tasks.workunit.client.0.vm02.stdout:6/195: write d0/f28 [424706,109845] 0 2026-03-10T10:19:21.280 INFO:tasks.workunit.client.1.vm05.stdout:9/149: creat d0/f2f x:0 0 0 2026-03-10T10:19:21.281 INFO:tasks.workunit.client.1.vm05.stdout:9/150: stat d0/d1/d13/de/d21 0 2026-03-10T10:19:21.282 INFO:tasks.workunit.client.1.vm05.stdout:0/156: dwrite d1/d7/f24 [0,4194304] 0 2026-03-10T10:19:21.283 INFO:tasks.workunit.client.1.vm05.stdout:9/151: read d0/fa [3190644,129823] 0 2026-03-10T10:19:21.284 INFO:tasks.workunit.client.1.vm05.stdout:2/150: creat db/f2e x:0 0 0 2026-03-10T10:19:21.284 INFO:tasks.workunit.client.1.vm05.stdout:0/157: truncate d1/d2/f21 1257580 0 2026-03-10T10:19:21.284 INFO:tasks.workunit.client.1.vm05.stdout:2/151: chown db/f24 50183 1 2026-03-10T10:19:21.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:20 vm05.local ceph-mon[59051]: from='client.14648 -' entity='client.admin' cmd=[{"prefix": "orch 
upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:21.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:20 vm05.local ceph-mon[59051]: from='client.24427 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:21.298 INFO:tasks.workunit.client.1.vm05.stdout:1/114: fdatasync d4/dd/f15 0 2026-03-10T10:19:21.300 INFO:tasks.workunit.client.1.vm05.stdout:2/152: dwrite db/f2e [0,4194304] 0 2026-03-10T10:19:21.305 INFO:tasks.workunit.client.0.vm02.stdout:0/238: mknod d9/d18/d1a/c48 0 2026-03-10T10:19:21.309 INFO:tasks.workunit.client.0.vm02.stdout:8/271: rename d1/d2/ca to d1/d1c/c51 0 2026-03-10T10:19:21.309 INFO:tasks.workunit.client.0.vm02.stdout:3/198: rename d1/d8 to d1/d8/d44/d45 22 2026-03-10T10:19:21.309 INFO:tasks.workunit.client.0.vm02.stdout:9/159: write da/f25 [1756141,60985] 0 2026-03-10T10:19:21.309 INFO:tasks.workunit.client.0.vm02.stdout:9/160: chown da 3 1 2026-03-10T10:19:21.309 INFO:tasks.workunit.client.1.vm05.stdout:5/185: dwrite f5 [0,4194304] 0 2026-03-10T10:19:21.309 INFO:tasks.workunit.client.1.vm05.stdout:5/186: stat da 0 2026-03-10T10:19:21.311 INFO:tasks.workunit.client.0.vm02.stdout:4/302: rename d1/d2/d37/f28 to d1/d2/d1a/d49/f5c 0 2026-03-10T10:19:21.328 INFO:tasks.workunit.client.1.vm05.stdout:4/149: getdents d1/d3 0 2026-03-10T10:19:21.334 INFO:tasks.workunit.client.0.vm02.stdout:2/293: write d0/d1a/f31 [2907143,124539] 0 2026-03-10T10:19:21.344 INFO:tasks.workunit.client.0.vm02.stdout:2/294: dread d0/f44 [0,4194304] 0 2026-03-10T10:19:21.347 INFO:tasks.workunit.client.0.vm02.stdout:9/161: creat da/d10/f31 x:0 0 0 2026-03-10T10:19:21.347 INFO:tasks.workunit.client.1.vm05.stdout:0/158: mknod d1/d2/d9/d31/d13/d2f/c34 0 2026-03-10T10:19:21.349 INFO:tasks.workunit.client.0.vm02.stdout:8/272: truncate d1/d1c/d43/f4b 1359760 0 2026-03-10T10:19:21.351 INFO:tasks.workunit.client.0.vm02.stdout:9/162: dwrite da/f15 [0,4194304] 0 2026-03-10T10:19:21.361 
INFO:tasks.workunit.client.0.vm02.stdout:9/163: dwrite da/d10/f19 [0,4194304] 0 2026-03-10T10:19:21.363 INFO:tasks.workunit.client.0.vm02.stdout:9/164: chown da/d10/d2c/c2f 153524 1 2026-03-10T10:19:21.367 INFO:tasks.workunit.client.1.vm05.stdout:1/115: rename d4/f18 to d4/df/d1c/f23 0 2026-03-10T10:19:21.374 INFO:tasks.workunit.client.0.vm02.stdout:4/303: symlink d1/d52/d53/l5d 0 2026-03-10T10:19:21.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.375+0000 7f43a29ce700 1 -- 192.168.123.102:0/3571735718 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f4380005cc0 con 0x7f439c10a700 2026-03-10T10:19:21.378 INFO:tasks.workunit.client.1.vm05.stdout:5/187: mknod da/db/d28/c39 0 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:e15 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:modified 
2026-03-10T10:17:02.433444+0000 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464} 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:19:21.379 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 
1 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:19:21.380 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.377+0000 7f4392ffd700 1 -- 192.168.123.102:0/3571735718 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1854 (secure 0 0 0) 0x7f4394059b70 con 0x7f439c10a700 2026-03-10T10:19:21.380 INFO:tasks.workunit.client.1.vm05.stdout:7/177: dwrite d5/d17/f19 [0,4194304] 0 2026-03-10T10:19:21.381 INFO:tasks.workunit.client.1.vm05.stdout:7/178: readlink d5/l9 0 2026-03-10T10:19:21.382 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.382+0000 7f43a29ce700 1 -- 192.168.123.102:0/3571735718 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f438806c330 
msgr2=0x7f438806e7f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.382 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.382+0000 7f43a29ce700 1 --2- 192.168.123.102:0/3571735718 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f438806c330 0x7f438806e7f0 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7f439800a010 tx=0x7f4398009fa0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.382 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.382+0000 7f43a29ce700 1 -- 192.168.123.102:0/3571735718 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f439c10a700 msgr2=0x7f439c1171f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.382 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.382+0000 7f43a29ce700 1 --2- 192.168.123.102:0/3571735718 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f439c10a700 0x7f439c1171f0 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f439400c390 tx=0x7f439400c6a0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.383+0000 7f43a29ce700 1 -- 192.168.123.102:0/3571735718 shutdown_connections 2026-03-10T10:19:21.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.383+0000 7f43a29ce700 1 --2- 192.168.123.102:0/3571735718 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f439c107d90 0x7f439c116cb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.383+0000 7f43a29ce700 1 --2- 192.168.123.102:0/3571735718 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f438806c330 0x7f438806e7f0 unknown :-1 s=CLOSED pgs=146 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.383+0000 7f43a29ce700 1 --2- 
192.168.123.102:0/3571735718 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f439c10a700 0x7f439c1171f0 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.383 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.383+0000 7f43a29ce700 1 -- 192.168.123.102:0/3571735718 >> 192.168.123.102:0/3571735718 conn(0x7f439c06daa0 msgr2=0x7f439c06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:21.384 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.384+0000 7f43a29ce700 1 -- 192.168.123.102:0/3571735718 shutdown_connections 2026-03-10T10:19:21.385 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.385+0000 7f43a29ce700 1 -- 192.168.123.102:0/3571735718 wait complete. 2026-03-10T10:19:21.385 INFO:tasks.workunit.client.0.vm02.stdout:2/295: creat d0/d1a/d24/f62 x:0 0 0 2026-03-10T10:19:21.388 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15 2026-03-10T10:19:21.388 INFO:tasks.workunit.client.1.vm05.stdout:3/217: rmdir dd/d15/d24/d2c/d3b 39 2026-03-10T10:19:21.389 INFO:tasks.workunit.client.1.vm05.stdout:6/147: truncate dd/df/f1e 672312 0 2026-03-10T10:19:21.393 INFO:tasks.workunit.client.0.vm02.stdout:7/221: truncate d1/dc/d10/f13 2756107 0 2026-03-10T10:19:21.394 INFO:tasks.workunit.client.0.vm02.stdout:7/222: write d1/f32 [363969,79746] 0 2026-03-10T10:19:21.397 INFO:tasks.workunit.client.1.vm05.stdout:9/152: unlink d0/fa 0 2026-03-10T10:19:21.397 INFO:tasks.workunit.client.1.vm05.stdout:7/179: dwrite d5/dd/f1a [0,4194304] 0 2026-03-10T10:19:21.399 INFO:tasks.workunit.client.0.vm02.stdout:8/273: creat d1/d1c/d43/f52 x:0 0 0 2026-03-10T10:19:21.409 INFO:tasks.workunit.client.1.vm05.stdout:2/153: rename l4 to db/d2d/l2f 0 2026-03-10T10:19:21.415 INFO:tasks.workunit.client.1.vm05.stdout:1/116: mknod d4/df/c24 0 2026-03-10T10:19:21.419 INFO:tasks.workunit.client.1.vm05.stdout:1/117: read - d4/dd/f21 zero size 2026-03-10T10:19:21.430 
INFO:tasks.workunit.client.0.vm02.stdout:6/196: truncate d0/f20 742132 0 2026-03-10T10:19:21.431 INFO:tasks.workunit.client.0.vm02.stdout:3/199: creat d1/d8/f46 x:0 0 0 2026-03-10T10:19:21.432 INFO:tasks.workunit.client.0.vm02.stdout:3/200: read d1/f5 [4404496,124206] 0 2026-03-10T10:19:21.434 INFO:tasks.workunit.client.1.vm05.stdout:8/130: truncate d7/d14/d15/f1f 136881 0 2026-03-10T10:19:21.455 INFO:tasks.workunit.client.0.vm02.stdout:0/239: write d9/d18/d1a/d22/d24/f26 [2995392,102548] 0 2026-03-10T10:19:21.456 INFO:tasks.workunit.client.1.vm05.stdout:9/153: dread d0/f2a [0,4194304] 0 2026-03-10T10:19:21.468 INFO:tasks.workunit.client.1.vm05.stdout:4/150: rename d1/d3/l30 to d1/d31/l32 0 2026-03-10T10:19:21.468 INFO:tasks.workunit.client.1.vm05.stdout:5/188: rename da/db to da/db/d28/d32/d3a 22 2026-03-10T10:19:21.469 INFO:tasks.workunit.client.1.vm05.stdout:1/118: rename d4 to d4/df/d25 22 2026-03-10T10:19:21.472 INFO:tasks.workunit.client.0.vm02.stdout:2/296: dread d0/fe [0,4194304] 0 2026-03-10T10:19:21.504 INFO:tasks.workunit.client.1.vm05.stdout:1/119: write d4/df/d1c/f23 [4104835,19366] 0 2026-03-10T10:19:21.504 INFO:tasks.workunit.client.1.vm05.stdout:1/120: dwrite d4/df/f11 [0,4194304] 0 2026-03-10T10:19:21.504 INFO:tasks.workunit.client.0.vm02.stdout:5/324: rename d1/db/d11/d13/d28/d37/d3d/d61 to d1/d6a 0 2026-03-10T10:19:21.504 INFO:tasks.workunit.client.0.vm02.stdout:0/240: fsync d9/d18/f2e 0 2026-03-10T10:19:21.504 INFO:tasks.workunit.client.0.vm02.stdout:2/297: read d0/f44 [1273127,56379] 0 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.484+0000 7f57c4da9700 1 -- 192.168.123.102:0/2441180130 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57c00ffe90 msgr2=0x7f57c0100310 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.489+0000 7f57c4da9700 1 --2- 192.168.123.102:0/2441180130 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57c00ffe90 0x7f57c0100310 secure :-1 s=READY pgs=312 cs=0 l=1 rev1=1 crypto rx=0x7f57a8009b00 tx=0x7f57a8009e10 comp rx=0 tx=0).stop 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.489+0000 7f57c4da9700 1 -- 192.168.123.102:0/2441180130 shutdown_connections 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.489+0000 7f57c4da9700 1 --2- 192.168.123.102:0/2441180130 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57c00ffe90 0x7f57c0100310 unknown :-1 s=CLOSED pgs=312 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.489+0000 7f57c4da9700 1 --2- 192.168.123.102:0/2441180130 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f57c00ff530 0x7f57c00ff950 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.489+0000 7f57c4da9700 1 -- 192.168.123.102:0/2441180130 >> 192.168.123.102:0/2441180130 conn(0x7f57c00fb130 msgr2=0x7f57c00fd590 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.489+0000 7f57c4da9700 1 -- 192.168.123.102:0/2441180130 shutdown_connections 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.489+0000 7f57c4da9700 1 -- 192.168.123.102:0/2441180130 wait complete. 
2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.490+0000 7f57c4da9700 1 Processor -- start 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.490+0000 7f57c4da9700 1 -- start start 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.491+0000 7f57c4da9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57c00ff530 0x7f57c0198600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.491+0000 7f57c4da9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f57c00ffe90 0x7f57c0198b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.491+0000 7f57be59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57c00ff530 0x7f57c0198600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.491+0000 7f57be59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57c00ff530 0x7f57c0198600 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:53008/0 (socket says 192.168.123.102:53008) 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.491+0000 7f57be59c700 1 -- 192.168.123.102:0/2953919262 learned_addr learned my addr 192.168.123.102:0/2953919262 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.491+0000 7f57c4da9700 1 -- 192.168.123.102:0/2953919262 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f57c0199160 con 0x7f57c00ff530 2026-03-10T10:19:21.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.491+0000 7f57c4da9700 1 -- 192.168.123.102:0/2953919262 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f57c01992a0 con 0x7f57c00ffe90 2026-03-10T10:19:21.505 INFO:tasks.workunit.client.0.vm02.stdout:1/232: rename d4/da/d27/f30 to d4/da/d27/d38/f4e 0 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.492+0000 7f57b7fff700 1 --2- 192.168.123.102:0/2953919262 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f57c00ffe90 0x7f57c0198b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.493+0000 7f57be59c700 1 -- 192.168.123.102:0/2953919262 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f57c00ffe90 msgr2=0x7f57c0198b40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.493+0000 7f57be59c700 1 --2- 192.168.123.102:0/2953919262 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f57c00ffe90 0x7f57c0198b40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.493+0000 7f57be59c700 1 -- 192.168.123.102:0/2953919262 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f57a80097e0 con 0x7f57c00ff530 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.493+0000 7f57be59c700 1 --2- 192.168.123.102:0/2953919262 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57c00ff530 
0x7f57c0198600 secure :-1 s=READY pgs=313 cs=0 l=1 rev1=1 crypto rx=0x7f57b000b700 tx=0x7f57b000bac0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.495+0000 7f57b77fe700 1 -- 192.168.123.102:0/2953919262 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f57b0010840 con 0x7f57c00ff530 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.495+0000 7f57b77fe700 1 -- 192.168.123.102:0/2953919262 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f57b0010e80 con 0x7f57c00ff530 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.496+0000 7f57b77fe700 1 -- 192.168.123.102:0/2953919262 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f57b000d590 con 0x7f57c00ff530 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.496+0000 7f57c4da9700 1 -- 192.168.123.102:0/2953919262 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f57c019dd50 con 0x7f57c00ff530 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.496+0000 7f57c4da9700 1 -- 192.168.123.102:0/2953919262 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f57c019e1c0 con 0x7f57c00ff530 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.497+0000 7f57b57fa700 1 -- 192.168.123.102:0/2953919262 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f57c004eac0 con 0x7f57c00ff530 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.498+0000 7f57b77fe700 1 -- 192.168.123.102:0/2953919262 <== mon.0 
v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f57b00109a0 con 0x7f57c00ff530 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.498+0000 7f57b77fe700 1 --2- 192.168.123.102:0/2953919262 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f57ac06c2e0 0x7f57ac06e7a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.499+0000 7f57b77fe700 1 -- 192.168.123.102:0/2953919262 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f57b008bae0 con 0x7f57c00ff530 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.501+0000 7f57b7fff700 1 --2- 192.168.123.102:0/2953919262 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f57ac06c2e0 0x7f57ac06e7a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.501+0000 7f57b7fff700 1 --2- 192.168.123.102:0/2953919262 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f57ac06c2e0 0x7f57ac06e7a0 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7f57a8009540 tx=0x7f57a8009f90 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:21.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.502+0000 7f57b77fe700 1 -- 192.168.123.102:0/2953919262 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f57b0059d90 con 0x7f57c00ff530 2026-03-10T10:19:21.513 INFO:tasks.workunit.client.1.vm05.stdout:2/154: creat db/d28/f30 x:0 0 0 2026-03-10T10:19:21.532 INFO:tasks.workunit.client.0.vm02.stdout:5/325: rename 
d1/db/d11/d13/d28/f2e to d1/db/d11/d16/d29/d40/d4f/d5f/f6b 0 2026-03-10T10:19:21.532 INFO:tasks.workunit.client.1.vm05.stdout:3/218: link dd/c34 dd/d15/d1f/c4e 0 2026-03-10T10:19:21.532 INFO:tasks.workunit.client.1.vm05.stdout:9/154: rename d0/d1/c15 to d0/df/c30 0 2026-03-10T10:19:21.532 INFO:tasks.workunit.client.1.vm05.stdout:1/121: symlink d4/dd/l26 0 2026-03-10T10:19:21.535 INFO:tasks.workunit.client.0.vm02.stdout:6/197: link d0/d8/l1f d0/d8/d9/d31/l44 0 2026-03-10T10:19:21.547 INFO:tasks.workunit.client.1.vm05.stdout:5/189: creat da/db/f3b x:0 0 0 2026-03-10T10:19:21.547 INFO:tasks.workunit.client.1.vm05.stdout:9/155: dwrite d0/d1/d13/f27 [0,4194304] 0 2026-03-10T10:19:21.547 INFO:tasks.workunit.client.1.vm05.stdout:9/156: read d0/d1/f9 [1535526,18683] 0 2026-03-10T10:19:21.547 INFO:tasks.workunit.client.1.vm05.stdout:5/190: dread da/db/d26/d35/f1c [0,4194304] 0 2026-03-10T10:19:21.547 INFO:tasks.workunit.client.1.vm05.stdout:3/219: rename dd/d15/d1f/c4e to dd/d15/d1f/c4f 0 2026-03-10T10:19:21.548 INFO:tasks.workunit.client.1.vm05.stdout:5/191: chown da/db/d26/d35/d38 910781 1 2026-03-10T10:19:21.548 INFO:tasks.workunit.client.0.vm02.stdout:8/274: sync 2026-03-10T10:19:21.549 INFO:tasks.workunit.client.0.vm02.stdout:8/275: write d1/d1c/f33 [2061181,128811] 0 2026-03-10T10:19:21.549 INFO:tasks.workunit.client.0.vm02.stdout:8/276: dread - d1/d1c/d43/f52 zero size 2026-03-10T10:19:21.560 INFO:tasks.workunit.client.0.vm02.stdout:5/326: mkdir d1/db/d11/d16/d29/d40/d6c 0 2026-03-10T10:19:21.566 INFO:tasks.workunit.client.0.vm02.stdout:6/198: unlink d0/d8/d9/f2b 0 2026-03-10T10:19:21.579 INFO:tasks.workunit.client.1.vm05.stdout:9/157: rmdir d0/d1/d13/de 39 2026-03-10T10:19:21.580 INFO:tasks.workunit.client.0.vm02.stdout:5/327: rename d1/d4c to d1/db/d11/d16/d29/d40/d4f/d5f/d6d 0 2026-03-10T10:19:21.580 INFO:tasks.workunit.client.1.vm05.stdout:7/180: sync 2026-03-10T10:19:21.581 INFO:tasks.workunit.client.1.vm05.stdout:3/220: rename dd/d15/d24/f44 to dd/d20/f50 0 
2026-03-10T10:19:21.582 INFO:tasks.workunit.client.1.vm05.stdout:7/181: chown d5/dd/l27 1681487 1 2026-03-10T10:19:21.592 INFO:tasks.workunit.client.0.vm02.stdout:2/298: getdents d0 0 2026-03-10T10:19:21.595 INFO:tasks.workunit.client.1.vm05.stdout:1/122: fsync d4/df/d1c/f23 0 2026-03-10T10:19:21.597 INFO:tasks.workunit.client.1.vm05.stdout:1/123: write d4/df/d1c/f23 [679456,44822] 0 2026-03-10T10:19:21.598 INFO:tasks.workunit.client.1.vm05.stdout:1/124: readlink d4/dd/l19 0 2026-03-10T10:19:21.607 INFO:tasks.workunit.client.1.vm05.stdout:9/158: creat d0/df/f31 x:0 0 0 2026-03-10T10:19:21.607 INFO:tasks.workunit.client.1.vm05.stdout:9/159: chown d0/f2a 78 1 2026-03-10T10:19:21.608 INFO:tasks.workunit.client.0.vm02.stdout:0/241: dread d9/d18/d1a/d22/d24/f26 [0,4194304] 0 2026-03-10T10:19:21.624 INFO:tasks.workunit.client.0.vm02.stdout:4/304: write d1/d10/db/f20 [4651321,47527] 0 2026-03-10T10:19:21.627 INFO:tasks.workunit.client.0.vm02.stdout:3/201: fsync d1/d8/f46 0 2026-03-10T10:19:21.628 INFO:tasks.workunit.client.0.vm02.stdout:3/202: chown d1/d6/l1e 7299454 1 2026-03-10T10:19:21.637 INFO:tasks.workunit.client.1.vm05.stdout:0/159: truncate d1/d7/f16 821425 0 2026-03-10T10:19:21.638 INFO:tasks.workunit.client.1.vm05.stdout:0/160: readlink d1/d2/d9/d31/l26 0 2026-03-10T10:19:21.639 INFO:tasks.workunit.client.0.vm02.stdout:9/165: write da/d10/f2b [1306291,128754] 0 2026-03-10T10:19:21.640 INFO:tasks.workunit.client.0.vm02.stdout:9/166: stat da/d10/l1c 0 2026-03-10T10:19:21.640 INFO:tasks.workunit.client.0.vm02.stdout:9/167: chown da/d10/l1c 13134 1 2026-03-10T10:19:21.646 INFO:tasks.workunit.client.0.vm02.stdout:7/223: write d1/dc/f25 [2489077,108080] 0 2026-03-10T10:19:21.647 INFO:tasks.workunit.client.0.vm02.stdout:7/224: fdatasync d1/dc/d10/f24 0 2026-03-10T10:19:21.651 INFO:tasks.workunit.client.0.vm02.stdout:7/225: dwrite d1/d1b/f43 [0,4194304] 0 2026-03-10T10:19:21.652 INFO:tasks.workunit.client.0.vm02.stdout:7/226: write d1/f15 [3568163,11067] 0 
2026-03-10T10:19:21.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.657+0000 7f57b57fa700 1 -- 192.168.123.102:0/2953919262 --> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f57c00611d0 con 0x7f57ac06c2e0 2026-03-10T10:19:21.662 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.662+0000 7f57b77fe700 1 -- 192.168.123.102:0/2953919262 <== mgr.14225 v2:192.168.123.102:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+442 (secure 0 0 0) 0x7f57c00611d0 con 0x7f57ac06c2e0 2026-03-10T10:19:21.662 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:19:21.662 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T10:19:21.662 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:19:21.662 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:19:21.662 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [], 2026-03-10T10:19:21.662 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "0/23 daemons upgraded", 2026-03-10T10:19:21.663 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc image on host vm05", 2026-03-10T10:19:21.663 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:19:21.663 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:19:21.664 INFO:tasks.workunit.client.0.vm02.stdout:5/328: rmdir d1/db/d11/d13/d28/d37/d3d 39 2026-03-10T10:19:21.665 INFO:tasks.workunit.client.1.vm05.stdout:2/155: rmdir db 39 2026-03-10T10:19:21.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.666+0000 7f57c4da9700 1 -- 192.168.123.102:0/2953919262 >> 
[v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f57ac06c2e0 msgr2=0x7f57ac06e7a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.666+0000 7f57c4da9700 1 --2- 192.168.123.102:0/2953919262 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f57ac06c2e0 0x7f57ac06e7a0 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7f57a8009540 tx=0x7f57a8009f90 comp rx=0 tx=0).stop 2026-03-10T10:19:21.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.666+0000 7f57c4da9700 1 -- 192.168.123.102:0/2953919262 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57c00ff530 msgr2=0x7f57c0198600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.666+0000 7f57c4da9700 1 --2- 192.168.123.102:0/2953919262 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57c00ff530 0x7f57c0198600 secure :-1 s=READY pgs=313 cs=0 l=1 rev1=1 crypto rx=0x7f57b000b700 tx=0x7f57b000bac0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.666+0000 7f57c4da9700 1 -- 192.168.123.102:0/2953919262 shutdown_connections 2026-03-10T10:19:21.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.666+0000 7f57c4da9700 1 --2- 192.168.123.102:0/2953919262 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f57c00ff530 0x7f57c0198600 unknown :-1 s=CLOSED pgs=313 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.666+0000 7f57c4da9700 1 --2- 192.168.123.102:0/2953919262 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f57ac06c2e0 0x7f57ac06e7a0 unknown :-1 s=CLOSED pgs=147 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.667 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.666+0000 7f57c4da9700 1 --2- 192.168.123.102:0/2953919262 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f57c00ffe90 0x7f57c0198b40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.666+0000 7f57c4da9700 1 -- 192.168.123.102:0/2953919262 >> 192.168.123.102:0/2953919262 conn(0x7f57c00fb130 msgr2=0x7f57c00fd420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:21.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.666+0000 7f57c4da9700 1 -- 192.168.123.102:0/2953919262 shutdown_connections 2026-03-10T10:19:21.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.666+0000 7f57c4da9700 1 -- 192.168.123.102:0/2953919262 wait complete. 2026-03-10T10:19:21.679 INFO:tasks.workunit.client.1.vm05.stdout:5/192: link da/db/de/c1b da/db/de/c3c 0 2026-03-10T10:19:21.683 INFO:tasks.workunit.client.0.vm02.stdout:0/242: mkdir d9/d18/d1a/d43/d49 0 2026-03-10T10:19:21.684 INFO:tasks.workunit.client.1.vm05.stdout:9/160: mknod d0/df/d11/c32 0 2026-03-10T10:19:21.685 INFO:tasks.workunit.client.0.vm02.stdout:1/233: unlink d4/da/d1a/d22/f45 0 2026-03-10T10:19:21.685 INFO:tasks.workunit.client.1.vm05.stdout:9/161: dread - d0/df/d11/f2d zero size 2026-03-10T10:19:21.688 INFO:tasks.workunit.client.0.vm02.stdout:7/227: fsync d1/d1b/f43 0 2026-03-10T10:19:21.690 INFO:tasks.workunit.client.0.vm02.stdout:7/228: write d1/dc/f2e [1567802,90979] 0 2026-03-10T10:19:21.695 INFO:tasks.workunit.client.0.vm02.stdout:4/305: unlink d1/d10/f6 0 2026-03-10T10:19:21.699 INFO:tasks.workunit.client.1.vm05.stdout:8/131: link d7/d14/d15/f1f d7/d14/f22 0 2026-03-10T10:19:21.699 INFO:tasks.workunit.client.1.vm05.stdout:9/162: dwrite d0/df/d11/f2d [0,4194304] 0 2026-03-10T10:19:21.703 INFO:tasks.workunit.client.1.vm05.stdout:9/163: dread - d0/df/f25 zero size 2026-03-10T10:19:21.709 
INFO:tasks.workunit.client.1.vm05.stdout:0/161: mknod d1/d2/d9/d31/d13/d15/c35 0 2026-03-10T10:19:21.710 INFO:tasks.workunit.client.1.vm05.stdout:0/162: chown d1/d2/d9/d31/d13/d2f 39387 1 2026-03-10T10:19:21.712 INFO:tasks.workunit.client.1.vm05.stdout:4/151: chown d1/d3/c1c 4112429 1 2026-03-10T10:19:21.715 INFO:tasks.workunit.client.0.vm02.stdout:2/299: dwrite d0/fe [4194304,4194304] 0 2026-03-10T10:19:21.724 INFO:tasks.workunit.client.0.vm02.stdout:9/168: creat da/d10/d2c/f32 x:0 0 0 2026-03-10T10:19:21.728 INFO:tasks.workunit.client.1.vm05.stdout:7/182: link d5/f22 d5/d1d/d20/d2d/f30 0 2026-03-10T10:19:21.731 INFO:tasks.workunit.client.1.vm05.stdout:7/183: write d5/d17/f19 [1372899,98657] 0 2026-03-10T10:19:21.732 INFO:tasks.workunit.client.1.vm05.stdout:5/193: symlink da/l3d 0 2026-03-10T10:19:21.734 INFO:tasks.workunit.client.1.vm05.stdout:1/125: symlink d4/l27 0 2026-03-10T10:19:21.735 INFO:tasks.workunit.client.1.vm05.stdout:1/126: dread - d4/dd/f1f zero size 2026-03-10T10:19:21.743 INFO:tasks.workunit.client.1.vm05.stdout:8/132: creat d7/d14/f23 x:0 0 0 2026-03-10T10:19:21.743 INFO:tasks.workunit.client.1.vm05.stdout:9/164: creat d0/df/d11/f33 x:0 0 0 2026-03-10T10:19:21.743 INFO:tasks.workunit.client.1.vm05.stdout:8/133: chown d7/f21 87553 1 2026-03-10T10:19:21.745 INFO:tasks.workunit.client.1.vm05.stdout:8/134: chown d7/d14/d15/f1f 3682 1 2026-03-10T10:19:21.753 INFO:tasks.workunit.client.0.vm02.stdout:2/300: fsync d0/fe 0 2026-03-10T10:19:21.755 INFO:tasks.workunit.client.1.vm05.stdout:0/163: chown d1/d2/d9/d31/d13/d17/f1b 4162372 1 2026-03-10T10:19:21.760 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.760+0000 7f4b63357700 1 -- 192.168.123.102:0/1230959279 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b5c072b50 msgr2=0x7f4b5c072f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.760+0000 7f4b63357700 1 --2- 
192.168.123.102:0/1230959279 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b5c072b50 0x7f4b5c072f70 secure :-1 s=READY pgs=314 cs=0 l=1 rev1=1 crypto rx=0x7f4b58007780 tx=0x7f4b5800c050 comp rx=0 tx=0).stop 2026-03-10T10:19:21.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.762+0000 7f4b63357700 1 -- 192.168.123.102:0/1230959279 shutdown_connections 2026-03-10T10:19:21.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.762+0000 7f4b63357700 1 --2- 192.168.123.102:0/1230959279 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b5c075a40 0x7f4b5c077ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.762+0000 7f4b63357700 1 --2- 192.168.123.102:0/1230959279 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b5c072b50 0x7f4b5c072f70 unknown :-1 s=CLOSED pgs=314 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.762+0000 7f4b63357700 1 -- 192.168.123.102:0/1230959279 >> 192.168.123.102:0/1230959279 conn(0x7f4b5c06dae0 msgr2=0x7f4b5c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:21.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.762+0000 7f4b63357700 1 -- 192.168.123.102:0/1230959279 shutdown_connections 2026-03-10T10:19:21.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.762+0000 7f4b63357700 1 -- 192.168.123.102:0/1230959279 wait complete. 
2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b63357700 1 Processor -- start 2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b63357700 1 -- start start 2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b63357700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b5c075a40 0x7f4b5c083190 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b63357700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b5c0836d0 0x7f4b5c1b3190 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b63357700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4b5c083c10 con 0x7f4b5c0836d0 2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b63357700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4b5c083d80 con 0x7f4b5c075a40 2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b608f2700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b5c0836d0 0x7f4b5c1b3190 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b608f2700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b5c0836d0 0x7f4b5c1b3190 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:53032/0 (socket says 192.168.123.102:53032) 2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b608f2700 1 -- 192.168.123.102:0/2376298377 learned_addr learned my addr 192.168.123.102:0/2376298377 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b610f3700 1 --2- 192.168.123.102:0/2376298377 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b5c075a40 0x7f4b5c083190 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b608f2700 1 -- 192.168.123.102:0/2376298377 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b5c075a40 msgr2=0x7f4b5c083190 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b608f2700 1 --2- 192.168.123.102:0/2376298377 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b5c075a40 0x7f4b5c083190 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.763+0000 7f4b608f2700 1 -- 192.168.123.102:0/2376298377 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4b58007430 con 0x7f4b5c0836d0 2026-03-10T10:19:21.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.764+0000 7f4b608f2700 1 --2- 192.168.123.102:0/2376298377 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b5c0836d0 0x7f4b5c1b3190 secure :-1 s=READY pgs=315 cs=0 l=1 rev1=1 crypto rx=0x7f4b540082a0 tx=0x7f4b54008660 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:19:21.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.764+0000 7f4b527fc700 1 -- 192.168.123.102:0/2376298377 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4b54009020 con 0x7f4b5c0836d0 2026-03-10T10:19:21.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.764+0000 7f4b63357700 1 -- 192.168.123.102:0/2376298377 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4b5c1b37f0 con 0x7f4b5c0836d0 2026-03-10T10:19:21.766 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.765+0000 7f4b63357700 1 -- 192.168.123.102:0/2376298377 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4b5c1b3cc0 con 0x7f4b5c0836d0 2026-03-10T10:19:21.766 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.766+0000 7f4b527fc700 1 -- 192.168.123.102:0/2376298377 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f4b54019850 con 0x7f4b5c0836d0 2026-03-10T10:19:21.766 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.766+0000 7f4b527fc700 1 -- 192.168.123.102:0/2376298377 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4b54009bb0 con 0x7f4b5c0836d0 2026-03-10T10:19:21.767 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.766+0000 7f4b527fc700 1 -- 192.168.123.102:0/2376298377 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 19) v1 ==== 90308+0+0 (secure 0 0 0) 0x7f4b54009d10 con 0x7f4b5c0836d0 2026-03-10T10:19:21.767 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.767+0000 7f4b527fc700 1 --2- 192.168.123.102:0/2376298377 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4b4806c600 0x7f4b4806eac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:21.768 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.767+0000 7f4b610f3700 1 --2- 192.168.123.102:0/2376298377 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4b4806c600 0x7f4b4806eac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:21.768 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.767+0000 7f4b527fc700 1 -- 192.168.123.102:0/2376298377 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5362+0+0 (secure 0 0 0) 0x7f4b54094870 con 0x7f4b5c0836d0 2026-03-10T10:19:21.768 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.768+0000 7f4b610f3700 1 --2- 192.168.123.102:0/2376298377 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4b4806c600 0x7f4b4806eac0 secure :-1 s=READY pgs=148 cs=0 l=1 rev1=1 crypto rx=0x7f4b58007750 tx=0x7f4b5800c490 comp rx=0 tx=0).ready entity=mgr.14225 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:21.768 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.768+0000 7f4b63357700 1 -- 192.168.123.102:0/2376298377 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4b40005320 con 0x7f4b5c0836d0 2026-03-10T10:19:21.770 INFO:tasks.workunit.client.1.vm05.stdout:4/152: creat d1/d31/dc/f33 x:0 0 0 2026-03-10T10:19:21.771 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.771+0000 7f4b527fc700 1 -- 192.168.123.102:0/2376298377 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+180422 (secure 0 0 0) 0x7f4b5405f110 con 0x7f4b5c0836d0 2026-03-10T10:19:21.774 INFO:tasks.workunit.client.1.vm05.stdout:2/156: creat db/d12/f31 x:0 0 0 2026-03-10T10:19:21.775 INFO:tasks.workunit.client.1.vm05.stdout:2/157: chown db/d1c/l22 17331 1 2026-03-10T10:19:21.780 
INFO:tasks.workunit.client.0.vm02.stdout:5/329: creat d1/db/d11/d16/d29/d40/d4f/f6e x:0 0 0 2026-03-10T10:19:21.783 INFO:tasks.workunit.client.1.vm05.stdout:7/184: unlink d5/d17/f1e 0 2026-03-10T10:19:21.783 INFO:tasks.workunit.client.0.vm02.stdout:6/199: rename d0/d8/f27 to d0/d8/f45 0 2026-03-10T10:19:21.786 INFO:tasks.workunit.client.1.vm05.stdout:7/185: dwrite d5/f25 [0,4194304] 0 2026-03-10T10:19:21.787 INFO:tasks.workunit.client.0.vm02.stdout:0/243: readlink d9/d18/d1a/l1d 0 2026-03-10T10:19:21.789 INFO:tasks.workunit.client.1.vm05.stdout:0/164: creat d1/d2/d9/d31/f36 x:0 0 0 2026-03-10T10:19:21.790 INFO:tasks.workunit.client.0.vm02.stdout:1/234: mknod d4/da/d27/d38/d3c/c4f 0 2026-03-10T10:19:21.790 INFO:tasks.workunit.client.1.vm05.stdout:7/186: dwrite d5/f13 [4194304,4194304] 0 2026-03-10T10:19:21.793 INFO:tasks.workunit.client.1.vm05.stdout:7/187: chown d5/d17 0 1 2026-03-10T10:19:21.794 INFO:tasks.workunit.client.1.vm05.stdout:6/148: creat dd/df/d12/f35 x:0 0 0 2026-03-10T10:19:21.797 INFO:tasks.workunit.client.1.vm05.stdout:4/153: creat d1/d31/f34 x:0 0 0 2026-03-10T10:19:21.799 INFO:tasks.workunit.client.0.vm02.stdout:7/229: mknod d1/dc/d16/d28/c46 0 2026-03-10T10:19:21.801 INFO:tasks.workunit.client.1.vm05.stdout:8/135: sync 2026-03-10T10:19:21.802 INFO:tasks.workunit.client.1.vm05.stdout:3/221: getdents dd/d15 0 2026-03-10T10:19:21.802 INFO:tasks.workunit.client.0.vm02.stdout:7/230: dwrite d1/dc/f26 [4194304,4194304] 0 2026-03-10T10:19:21.802 INFO:tasks.workunit.client.0.vm02.stdout:7/231: chown d1/dc/d16/f1f 1938 1 2026-03-10T10:19:21.803 INFO:tasks.workunit.client.1.vm05.stdout:3/222: dread - dd/d15/d24/f2f zero size 2026-03-10T10:19:21.805 INFO:tasks.workunit.client.0.vm02.stdout:4/306: mkdir d1/d41/d5e 0 2026-03-10T10:19:21.806 INFO:tasks.workunit.client.1.vm05.stdout:2/158: symlink db/d2d/l32 0 2026-03-10T10:19:21.826 INFO:tasks.workunit.client.0.vm02.stdout:8/277: write d1/d1c/d43/f45 [2174584,9634] 0 2026-03-10T10:19:21.841 
INFO:tasks.workunit.client.1.vm05.stdout:7/188: rename d5/dd/f1f to d5/d1d/f31 0 2026-03-10T10:19:21.842 INFO:tasks.workunit.client.1.vm05.stdout:7/189: fsync d5/dd/f1a 0 2026-03-10T10:19:21.845 INFO:tasks.workunit.client.0.vm02.stdout:2/301: rename d0/d10/f14 to d0/d1a/d49/d5e/f63 0 2026-03-10T10:19:21.845 INFO:tasks.workunit.client.0.vm02.stdout:2/302: readlink d0/l43 0 2026-03-10T10:19:21.847 INFO:tasks.workunit.client.1.vm05.stdout:6/149: unlink c4 0 2026-03-10T10:19:21.851 INFO:tasks.workunit.client.1.vm05.stdout:4/154: unlink d1/d31/l18 0 2026-03-10T10:19:21.853 INFO:tasks.workunit.client.0.vm02.stdout:5/330: mknod d1/db/d11/d16/d29/c6f 0 2026-03-10T10:19:21.855 INFO:tasks.workunit.client.1.vm05.stdout:8/136: mkdir d7/d14/d24 0 2026-03-10T10:19:21.860 INFO:tasks.workunit.client.0.vm02.stdout:6/200: truncate d0/d8/d9/f30 39472 0 2026-03-10T10:19:21.867 INFO:tasks.workunit.client.1.vm05.stdout:5/194: dwrite da/db/f1e [0,4194304] 0 2026-03-10T10:19:21.873 INFO:tasks.workunit.client.1.vm05.stdout:5/195: chown da/db/de/f2c 68171702 1 2026-03-10T10:19:21.883 INFO:tasks.workunit.client.0.vm02.stdout:3/203: link d1/d8/d21/f35 d1/d8/d21/f47 0 2026-03-10T10:19:21.883 INFO:tasks.workunit.client.1.vm05.stdout:9/165: rename d0/df/d11/f2e to d0/d1/d13/de/d21/f34 0 2026-03-10T10:19:21.884 INFO:tasks.workunit.client.1.vm05.stdout:9/166: write d0/f2f [766224,52722] 0 2026-03-10T10:19:21.885 INFO:tasks.workunit.client.0.vm02.stdout:8/278: creat d1/d1c/d43/f53 x:0 0 0 2026-03-10T10:19:21.889 INFO:tasks.workunit.client.1.vm05.stdout:7/190: creat d5/d1d/f32 x:0 0 0 2026-03-10T10:19:21.899 INFO:tasks.workunit.client.0.vm02.stdout:2/303: creat d0/d1a/d49/f64 x:0 0 0 2026-03-10T10:19:21.901 INFO:tasks.workunit.client.1.vm05.stdout:8/137: dread f2 [0,4194304] 0 2026-03-10T10:19:21.902 INFO:tasks.workunit.client.1.vm05.stdout:8/138: stat d7/l1a 0 2026-03-10T10:19:21.902 INFO:tasks.workunit.client.1.vm05.stdout:6/150: dwrite dd/df/d12/f20 [0,4194304] 0 2026-03-10T10:19:21.906 
INFO:tasks.workunit.client.1.vm05.stdout:6/151: write dd/df/d12/f35 [189282,17997] 0 2026-03-10T10:19:21.912 INFO:tasks.workunit.client.0.vm02.stdout:1/235: fsync d4/da/d1a/d22/f32 0 2026-03-10T10:19:21.916 INFO:tasks.workunit.client.1.vm05.stdout:1/127: getdents d4/dd 0 2026-03-10T10:19:21.918 INFO:tasks.workunit.client.0.vm02.stdout:7/232: symlink d1/dc/d44/l47 0 2026-03-10T10:19:21.919 INFO:tasks.workunit.client.1.vm05.stdout:1/128: fsync d4/dd/f21 0 2026-03-10T10:19:21.926 INFO:tasks.workunit.client.1.vm05.stdout:1/129: dread d4/df/f11 [0,4194304] 0 2026-03-10T10:19:21.926 INFO:tasks.workunit.client.1.vm05.stdout:7/191: fsync d5/f22 0 2026-03-10T10:19:21.930 INFO:tasks.workunit.client.0.vm02.stdout:9/169: write da/f14 [629942,121485] 0 2026-03-10T10:19:21.932 INFO:tasks.workunit.client.0.vm02.stdout:0/244: dwrite d9/d18/d1a/d22/d24/f40 [0,4194304] 0 2026-03-10T10:19:21.934 INFO:tasks.workunit.client.1.vm05.stdout:2/159: dwrite db/f26 [8388608,4194304] 0 2026-03-10T10:19:21.946 INFO:tasks.workunit.client.0.vm02.stdout:2/304: mkdir d0/d1a/d49/d5e/d65 0 2026-03-10T10:19:21.949 INFO:tasks.workunit.client.1.vm05.stdout:2/160: dread db/f14 [0,4194304] 0 2026-03-10T10:19:21.952 INFO:tasks.workunit.client.0.vm02.stdout:6/201: rmdir d0/d7 39 2026-03-10T10:19:21.965 INFO:tasks.workunit.client.1.vm05.stdout:0/165: creat d1/d2/d9/d31/d12/d20/f37 x:0 0 0 2026-03-10T10:19:21.966 INFO:tasks.workunit.client.0.vm02.stdout:3/204: dwrite d1/d6/f39 [0,4194304] 0 2026-03-10T10:19:21.968 INFO:tasks.workunit.client.1.vm05.stdout:4/155: dwrite d1/fb [4194304,4194304] 0 2026-03-10T10:19:21.969 INFO:tasks.workunit.client.1.vm05.stdout:5/196: dwrite da/db/d26/d35/f1c [0,4194304] 0 2026-03-10T10:19:21.972 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.972+0000 7f4b63357700 1 -- 192.168.123.102:0/2376298377 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f4b400059f0 con 0x7f4b5c0836d0 
2026-03-10T10:19:21.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.973+0000 7f4b527fc700 1 -- 192.168.123.102:0/2376298377 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f4b5401d020 con 0x7f4b5c0836d0 2026-03-10T10:19:21.981 INFO:tasks.workunit.client.1.vm05.stdout:5/197: write da/db/d26/d35/f30 [600587,45972] 0 2026-03-10T10:19:21.981 INFO:tasks.workunit.client.1.vm05.stdout:0/166: dwrite d1/d2/d9/d31/d12/f2d [0,4194304] 0 2026-03-10T10:19:21.981 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK 2026-03-10T10:19:21.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.979+0000 7f4b3ffff700 1 -- 192.168.123.102:0/2376298377 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4b4806c600 msgr2=0x7f4b4806eac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.979+0000 7f4b3ffff700 1 --2- 192.168.123.102:0/2376298377 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4b4806c600 0x7f4b4806eac0 secure :-1 s=READY pgs=148 cs=0 l=1 rev1=1 crypto rx=0x7f4b58007750 tx=0x7f4b5800c490 comp rx=0 tx=0).stop 2026-03-10T10:19:21.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.979+0000 7f4b3ffff700 1 -- 192.168.123.102:0/2376298377 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b5c0836d0 msgr2=0x7f4b5c1b3190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:21.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.979+0000 7f4b3ffff700 1 --2- 192.168.123.102:0/2376298377 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b5c0836d0 0x7f4b5c1b3190 secure :-1 s=READY pgs=315 cs=0 l=1 rev1=1 crypto rx=0x7f4b540082a0 tx=0x7f4b54008660 comp rx=0 tx=0).stop 2026-03-10T10:19:21.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.979+0000 
7f4b3ffff700 1 -- 192.168.123.102:0/2376298377 shutdown_connections 2026-03-10T10:19:21.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.979+0000 7f4b3ffff700 1 --2- 192.168.123.102:0/2376298377 >> [v2:192.168.123.102:6800/2,v1:192.168.123.102:6801/2] conn(0x7f4b4806c600 0x7f4b4806eac0 unknown :-1 s=CLOSED pgs=148 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.979+0000 7f4b3ffff700 1 --2- 192.168.123.102:0/2376298377 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4b5c075a40 0x7f4b5c083190 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.981 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.979+0000 7f4b3ffff700 1 --2- 192.168.123.102:0/2376298377 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4b5c0836d0 0x7f4b5c1b3190 unknown :-1 s=CLOSED pgs=315 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:21.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.979+0000 7f4b3ffff700 1 -- 192.168.123.102:0/2376298377 >> 192.168.123.102:0/2376298377 conn(0x7f4b5c06dae0 msgr2=0x7f4b5c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:21.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.980+0000 7f4b3ffff700 1 -- 192.168.123.102:0/2376298377 shutdown_connections 2026-03-10T10:19:21.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:21.980+0000 7f4b3ffff700 1 -- 192.168.123.102:0/2376298377 wait complete. 
2026-03-10T10:19:21.986 INFO:tasks.workunit.client.0.vm02.stdout:7/233: creat d1/dc/d16/f48 x:0 0 0 2026-03-10T10:19:21.989 INFO:tasks.workunit.client.0.vm02.stdout:7/234: dread d1/dc/d16/f1f [0,4194304] 0 2026-03-10T10:19:22.001 INFO:tasks.workunit.client.0.vm02.stdout:4/307: creat d1/d2/d55/f5f x:0 0 0 2026-03-10T10:19:22.003 INFO:tasks.workunit.client.1.vm05.stdout:3/223: link dd/d15/f1b dd/d39/f51 0 2026-03-10T10:19:22.005 INFO:tasks.workunit.client.0.vm02.stdout:8/279: symlink d1/d1c/d23/d25/l54 0 2026-03-10T10:19:22.005 INFO:tasks.workunit.client.0.vm02.stdout:8/280: chown d1/d1c/d23/d25 829955547 1 2026-03-10T10:19:22.017 INFO:tasks.workunit.client.0.vm02.stdout:0/245: mkdir d9/d18/d1a/d22/d4a 0 2026-03-10T10:19:22.019 INFO:tasks.workunit.client.0.vm02.stdout:5/331: rename d1/db/d11/d16/d29/d40/d4f/f5c to d1/db/d11/f70 0 2026-03-10T10:19:22.020 INFO:tasks.workunit.client.1.vm05.stdout:7/192: creat d5/d26/f33 x:0 0 0 2026-03-10T10:19:22.020 INFO:tasks.workunit.client.0.vm02.stdout:5/332: truncate d1/db/d11/d13/f1c 2209849 0 2026-03-10T10:19:22.022 INFO:tasks.workunit.client.1.vm05.stdout:7/193: dread d5/dd/f12 [0,4194304] 0 2026-03-10T10:19:22.022 INFO:tasks.workunit.client.1.vm05.stdout:2/161: readlink db/d2d/l2f 0 2026-03-10T10:19:22.032 INFO:tasks.workunit.client.1.vm05.stdout:2/162: dread - db/d1c/f1f zero size 2026-03-10T10:19:22.032 INFO:tasks.workunit.client.1.vm05.stdout:5/198: rmdir da/db/d26 39 2026-03-10T10:19:22.032 INFO:tasks.workunit.client.1.vm05.stdout:5/199: write da/f20 [704200,6458] 0 2026-03-10T10:19:22.032 INFO:tasks.workunit.client.1.vm05.stdout:6/152: mkdir dd/d36 0 2026-03-10T10:19:22.032 INFO:tasks.workunit.client.0.vm02.stdout:3/205: creat d1/d6/f48 x:0 0 0 2026-03-10T10:19:22.032 INFO:tasks.workunit.client.0.vm02.stdout:7/235: mkdir d1/d1b/d49 0 2026-03-10T10:19:22.032 INFO:tasks.workunit.client.0.vm02.stdout:4/308: write d1/d10/db/f35 [325629,68293] 0 2026-03-10T10:19:22.032 INFO:tasks.workunit.client.0.vm02.stdout:4/309: readlink 
d1/d10/db/l27 0 2026-03-10T10:19:22.032 INFO:tasks.workunit.client.0.vm02.stdout:4/310: write d1/d10/f45 [1547321,79339] 0 2026-03-10T10:19:22.032 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:21 vm02.local ceph-mon[50200]: from='client.14654 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:22.032 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:21 vm02.local ceph-mon[50200]: pgmap v150: 65 pgs: 65 active+clean; 836 MiB data, 3.8 GiB used, 116 GiB / 120 GiB avail; 5.0 MiB/s rd, 104 MiB/s wr, 171 op/s 2026-03-10T10:19:22.032 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:21 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/780762303' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:19:22.032 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:21 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/3571735718' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:19:22.032 INFO:tasks.workunit.client.1.vm05.stdout:0/167: dread - d1/d2/d9/d31/d13/d2f/f33 zero size 2026-03-10T10:19:22.034 INFO:tasks.workunit.client.0.vm02.stdout:0/246: unlink d9/d18/f2e 0 2026-03-10T10:19:22.036 INFO:tasks.workunit.client.0.vm02.stdout:6/202: mknod d0/d8/c46 0 2026-03-10T10:19:22.038 INFO:tasks.workunit.client.0.vm02.stdout:3/206: unlink d1/f31 0 2026-03-10T10:19:22.038 INFO:tasks.workunit.client.1.vm05.stdout:0/168: stat d1/d2/d9/d31/d13/d15/c19 0 2026-03-10T10:19:22.042 INFO:tasks.workunit.client.1.vm05.stdout:0/169: dread d1/d2/f21 [0,4194304] 0 2026-03-10T10:19:22.044 INFO:tasks.workunit.client.0.vm02.stdout:4/311: dwrite d1/d10/db/f16 [0,4194304] 0 2026-03-10T10:19:22.047 INFO:tasks.workunit.client.0.vm02.stdout:2/305: creat d0/d1a/f66 x:0 0 0 2026-03-10T10:19:22.057 INFO:tasks.workunit.client.1.vm05.stdout:1/130: mknod d4/d20/c28 0 2026-03-10T10:19:22.059 INFO:tasks.workunit.client.1.vm05.stdout:8/139: write f6 [504117,13987] 0 
2026-03-10T10:19:22.063 INFO:tasks.workunit.client.0.vm02.stdout:1/236: truncate d4/da/d27/d38/f3b 1249529 0 2026-03-10T10:19:22.064 INFO:tasks.workunit.client.0.vm02.stdout:9/170: truncate da/d10/f2b 1060517 0 2026-03-10T10:19:22.066 INFO:tasks.workunit.client.0.vm02.stdout:5/333: write d1/fe [1469952,66982] 0 2026-03-10T10:19:22.067 INFO:tasks.workunit.client.0.vm02.stdout:5/334: write d1/f68 [355623,66796] 0 2026-03-10T10:19:22.067 INFO:tasks.workunit.client.0.vm02.stdout:5/335: readlink d1/db/d11/l50 0 2026-03-10T10:19:22.072 INFO:tasks.workunit.client.1.vm05.stdout:7/194: truncate d5/d1d/d20/d2d/f30 617035 0 2026-03-10T10:19:22.073 INFO:tasks.workunit.client.0.vm02.stdout:4/312: chown d1/d52/d53 5 1 2026-03-10T10:19:22.073 INFO:tasks.workunit.client.1.vm05.stdout:2/163: rmdir db/d1c 39 2026-03-10T10:19:22.076 INFO:tasks.workunit.client.0.vm02.stdout:6/203: unlink d0/f3c 0 2026-03-10T10:19:22.077 INFO:tasks.workunit.client.1.vm05.stdout:5/200: stat da/db/c1a 0 2026-03-10T10:19:22.077 INFO:tasks.workunit.client.1.vm05.stdout:5/201: chown da/db 508671578 1 2026-03-10T10:19:22.078 INFO:tasks.workunit.client.1.vm05.stdout:6/153: chown dd/df/f1e 66783099 1 2026-03-10T10:19:22.078 INFO:tasks.workunit.client.1.vm05.stdout:6/154: stat dd/f14 0 2026-03-10T10:19:22.078 INFO:tasks.workunit.client.0.vm02.stdout:9/171: creat da/d10/f33 x:0 0 0 2026-03-10T10:19:22.080 INFO:tasks.workunit.client.0.vm02.stdout:5/336: mkdir d1/db/d11/d16/d29/d40/d4f/d5f/d6d/d71 0 2026-03-10T10:19:22.083 INFO:tasks.workunit.client.0.vm02.stdout:5/337: dwrite d1/db/d11/d16/d29/d40/d4f/f57 [0,4194304] 0 2026-03-10T10:19:22.092 INFO:tasks.workunit.client.0.vm02.stdout:6/204: dread - d0/d7/f39 zero size 2026-03-10T10:19:22.092 INFO:tasks.workunit.client.0.vm02.stdout:1/237: mknod d4/da/c50 0 2026-03-10T10:19:22.093 INFO:tasks.workunit.client.0.vm02.stdout:9/172: mkdir da/d10/d2c/d34 0 2026-03-10T10:19:22.093 INFO:tasks.workunit.client.0.vm02.stdout:0/247: getdents d9/d18 0 2026-03-10T10:19:22.095 
INFO:tasks.workunit.client.1.vm05.stdout:5/202: fdatasync da/db/d26/d35/f2a 0 2026-03-10T10:19:22.095 INFO:tasks.workunit.client.0.vm02.stdout:5/338: fsync d1/db/d11/d13/f4e 0 2026-03-10T10:19:22.096 INFO:tasks.workunit.client.0.vm02.stdout:1/238: readlink d4/da/d1a/l1e 0 2026-03-10T10:19:22.099 INFO:tasks.workunit.client.0.vm02.stdout:0/248: mknod d9/d18/d1a/d22/d24/c4b 0 2026-03-10T10:19:22.107 INFO:tasks.workunit.client.1.vm05.stdout:9/167: link d0/df/c30 d0/d1/c35 0 2026-03-10T10:19:22.108 INFO:tasks.workunit.client.1.vm05.stdout:3/224: creat dd/f52 x:0 0 0 2026-03-10T10:19:22.108 INFO:tasks.workunit.client.1.vm05.stdout:1/131: rename d4/l10 to d4/d20/l29 0 2026-03-10T10:19:22.108 INFO:tasks.workunit.client.1.vm05.stdout:0/170: creat d1/f38 x:0 0 0 2026-03-10T10:19:22.108 INFO:tasks.workunit.client.1.vm05.stdout:0/171: write d1/d2/d9/d31/d12/d20/f37 [181376,119603] 0 2026-03-10T10:19:22.108 INFO:tasks.workunit.client.0.vm02.stdout:5/339: write d1/db/f56 [930936,53844] 0 2026-03-10T10:19:22.108 INFO:tasks.workunit.client.0.vm02.stdout:1/239: symlink d4/da/d1a/d22/l51 0 2026-03-10T10:19:22.108 INFO:tasks.workunit.client.0.vm02.stdout:9/173: fdatasync da/f13 0 2026-03-10T10:19:22.108 INFO:tasks.workunit.client.0.vm02.stdout:9/174: write da/f14 [1661481,52595] 0 2026-03-10T10:19:22.108 INFO:tasks.workunit.client.0.vm02.stdout:6/205: rename d0/d8/d29/d2f/c35 to d0/c47 0 2026-03-10T10:19:22.108 INFO:tasks.workunit.client.0.vm02.stdout:1/240: unlink d4/da/d27/f46 0 2026-03-10T10:19:22.108 INFO:tasks.workunit.client.1.vm05.stdout:9/168: creat d0/d1/d16/f36 x:0 0 0 2026-03-10T10:19:22.109 INFO:tasks.workunit.client.1.vm05.stdout:1/132: creat d4/df/d1c/f2a x:0 0 0 2026-03-10T10:19:22.110 INFO:tasks.workunit.client.1.vm05.stdout:1/133: chown d4/dd/f21 5414 1 2026-03-10T10:19:22.113 INFO:tasks.workunit.client.1.vm05.stdout:7/195: creat d5/f34 x:0 0 0 2026-03-10T10:19:22.113 INFO:tasks.workunit.client.0.vm02.stdout:6/206: write d0/d8/d9/f14 [788669,107591] 0 
2026-03-10T10:19:22.113 INFO:tasks.workunit.client.1.vm05.stdout:3/225: dwrite dd/d20/f50 [0,4194304] 0 2026-03-10T10:19:22.113 INFO:tasks.workunit.client.1.vm05.stdout:2/164: creat db/f33 x:0 0 0 2026-03-10T10:19:22.117 INFO:tasks.workunit.client.0.vm02.stdout:6/207: dwrite d0/f43 [0,4194304] 0 2026-03-10T10:19:22.125 INFO:tasks.workunit.client.1.vm05.stdout:5/203: mknod da/db/d28/d32/c3e 0 2026-03-10T10:19:22.132 INFO:tasks.workunit.client.0.vm02.stdout:1/241: fdatasync d4/f5 0 2026-03-10T10:19:22.132 INFO:tasks.workunit.client.1.vm05.stdout:9/169: truncate d0/f2a 3918566 0 2026-03-10T10:19:22.132 INFO:tasks.workunit.client.1.vm05.stdout:9/170: stat d0/d1/d16/l1c 0 2026-03-10T10:19:22.132 INFO:tasks.workunit.client.1.vm05.stdout:6/155: rename dd/df/f32 to dd/d27/d30/f37 0 2026-03-10T10:19:22.132 INFO:tasks.workunit.client.1.vm05.stdout:5/204: write da/db/d26/d35/f31 [992155,67684] 0 2026-03-10T10:19:22.132 INFO:tasks.workunit.client.1.vm05.stdout:1/134: symlink d4/d20/l2b 0 2026-03-10T10:19:22.133 INFO:tasks.workunit.client.0.vm02.stdout:3/207: sync 2026-03-10T10:19:22.133 INFO:tasks.workunit.client.1.vm05.stdout:3/226: dwrite f3 [4194304,4194304] 0 2026-03-10T10:19:22.134 INFO:tasks.workunit.client.0.vm02.stdout:4/313: sync 2026-03-10T10:19:22.134 INFO:tasks.workunit.client.1.vm05.stdout:1/135: fsync d4/dd/f1f 0 2026-03-10T10:19:22.135 INFO:tasks.workunit.client.0.vm02.stdout:1/242: dread d4/f18 [0,4194304] 0 2026-03-10T10:19:22.140 INFO:tasks.workunit.client.1.vm05.stdout:1/136: write d4/df/d1c/f23 [1543367,11977] 0 2026-03-10T10:19:22.144 INFO:tasks.workunit.client.0.vm02.stdout:6/208: rmdir d0/d8/d9 39 2026-03-10T10:19:22.153 INFO:tasks.workunit.client.1.vm05.stdout:1/137: dread d4/df/d1c/f23 [0,4194304] 0 2026-03-10T10:19:22.153 INFO:tasks.workunit.client.1.vm05.stdout:0/172: mkdir d1/d2/d39 0 2026-03-10T10:19:22.153 INFO:tasks.workunit.client.1.vm05.stdout:0/173: readlink d1/d7/l22 0 2026-03-10T10:19:22.153 INFO:tasks.workunit.client.1.vm05.stdout:0/174: 
chown d1/d2/d9/d31/d12/d20 18111735 1 2026-03-10T10:19:22.153 INFO:tasks.workunit.client.0.vm02.stdout:5/340: creat d1/db/d11/d13/d28/d37/f72 x:0 0 0 2026-03-10T10:19:22.153 INFO:tasks.workunit.client.0.vm02.stdout:4/314: mknod d1/d10/db/c60 0 2026-03-10T10:19:22.153 INFO:tasks.workunit.client.0.vm02.stdout:4/315: fsync d1/d2/d37/f2e 0 2026-03-10T10:19:22.153 INFO:tasks.workunit.client.0.vm02.stdout:3/208: dwrite d1/d8/f3d [0,4194304] 0 2026-03-10T10:19:22.155 INFO:tasks.workunit.client.1.vm05.stdout:6/156: symlink dd/d27/l38 0 2026-03-10T10:19:22.155 INFO:tasks.workunit.client.1.vm05.stdout:7/196: unlink d5/c15 0 2026-03-10T10:19:22.157 INFO:tasks.workunit.client.0.vm02.stdout:0/249: rename d9/d18/d1a/d22/c36 to d9/d18/d1a/c4c 0 2026-03-10T10:19:22.158 INFO:tasks.workunit.client.0.vm02.stdout:3/209: dwrite d1/d6/f36 [0,4194304] 0 2026-03-10T10:19:22.170 INFO:tasks.workunit.client.1.vm05.stdout:2/165: symlink db/l34 0 2026-03-10T10:19:22.176 INFO:tasks.workunit.client.0.vm02.stdout:4/316: symlink d1/d2/l61 0 2026-03-10T10:19:22.178 INFO:tasks.workunit.client.1.vm05.stdout:1/138: read d4/df/f11 [1585041,23807] 0 2026-03-10T10:19:22.179 INFO:tasks.workunit.client.0.vm02.stdout:1/243: mknod d4/da/d1a/c52 0 2026-03-10T10:19:22.179 INFO:tasks.workunit.client.0.vm02.stdout:1/244: dread - d4/f4b zero size 2026-03-10T10:19:22.180 INFO:tasks.workunit.client.0.vm02.stdout:1/245: chown d4/da/d1a 23340233 1 2026-03-10T10:19:22.180 INFO:tasks.workunit.client.0.vm02.stdout:1/246: write d4/da/f25 [159412,117562] 0 2026-03-10T10:19:22.186 INFO:tasks.workunit.client.0.vm02.stdout:7/236: dwrite d1/f34 [4194304,4194304] 0 2026-03-10T10:19:22.188 INFO:tasks.workunit.client.0.vm02.stdout:7/237: stat d1/dc/d44/l47 0 2026-03-10T10:19:22.189 INFO:tasks.workunit.client.0.vm02.stdout:8/281: truncate d1/f40 3152782 0 2026-03-10T10:19:22.193 INFO:tasks.workunit.client.0.vm02.stdout:2/306: truncate d0/d1a/f53 3121842 0 2026-03-10T10:19:22.197 INFO:tasks.workunit.client.0.vm02.stdout:0/250: 
truncate d9/d18/f1e 2042571 0 2026-03-10T10:19:22.197 INFO:tasks.workunit.client.0.vm02.stdout:3/210: rename d1/f33 to d1/d6/f49 0 2026-03-10T10:19:22.198 INFO:tasks.workunit.client.0.vm02.stdout:3/211: fdatasync d1/d8/f3f 0 2026-03-10T10:19:22.199 INFO:tasks.workunit.client.1.vm05.stdout:7/197: mkdir d5/d1d/d20/d35 0 2026-03-10T10:19:22.201 INFO:tasks.workunit.client.1.vm05.stdout:6/157: rename dd/df/d12/d24/d28/c2d to dd/d27/d30/c39 0 2026-03-10T10:19:22.202 INFO:tasks.workunit.client.1.vm05.stdout:4/156: dwrite d1/d3/f5 [4194304,4194304] 0 2026-03-10T10:19:22.203 INFO:tasks.workunit.client.0.vm02.stdout:1/247: mkdir d4/d2c/d53 0 2026-03-10T10:19:22.204 INFO:tasks.workunit.client.1.vm05.stdout:2/166: creat db/d28/f35 x:0 0 0 2026-03-10T10:19:22.204 INFO:tasks.workunit.client.1.vm05.stdout:1/139: creat d4/d20/f2c x:0 0 0 2026-03-10T10:19:22.207 INFO:tasks.workunit.client.1.vm05.stdout:1/140: fsync d4/dd/f1f 0 2026-03-10T10:19:22.208 INFO:tasks.workunit.client.1.vm05.stdout:9/171: symlink d0/d1/d13/d26/l37 0 2026-03-10T10:19:22.208 INFO:tasks.workunit.client.0.vm02.stdout:2/307: write d0/d10/f46 [4254348,100239] 0 2026-03-10T10:19:22.208 INFO:tasks.workunit.client.0.vm02.stdout:2/308: readlink d0/l57 0 2026-03-10T10:19:22.211 INFO:tasks.workunit.client.0.vm02.stdout:5/341: rmdir d1/db/d11/d16/d29/d40/d6c 0 2026-03-10T10:19:22.213 INFO:tasks.workunit.client.1.vm05.stdout:1/141: dwrite d4/d20/f2c [0,4194304] 0 2026-03-10T10:19:22.216 INFO:tasks.workunit.client.1.vm05.stdout:1/142: truncate d4/dd/f21 9189 0 2026-03-10T10:19:22.217 INFO:tasks.workunit.client.0.vm02.stdout:1/248: creat d4/d2c/f54 x:0 0 0 2026-03-10T10:19:22.218 INFO:tasks.workunit.client.0.vm02.stdout:6/209: dread d0/f28 [0,4194304] 0 2026-03-10T10:19:22.219 INFO:tasks.workunit.client.1.vm05.stdout:1/143: chown d4/df/c1d 6 1 2026-03-10T10:19:22.220 INFO:tasks.workunit.client.0.vm02.stdout:8/282: mknod d1/d1c/d24/c55 0 2026-03-10T10:19:22.220 INFO:tasks.workunit.client.0.vm02.stdout:2/309: truncate d0/f9 
3169307 0 2026-03-10T10:19:22.222 INFO:tasks.workunit.client.0.vm02.stdout:3/212: link d1/f1c d1/d8/d21/f4a 0 2026-03-10T10:19:22.223 INFO:tasks.workunit.client.1.vm05.stdout:3/227: getdents dd/d15/d1f 0 2026-03-10T10:19:22.223 INFO:tasks.workunit.client.1.vm05.stdout:7/198: creat d5/d1d/d20/d35/f36 x:0 0 0 2026-03-10T10:19:22.224 INFO:tasks.workunit.client.0.vm02.stdout:5/342: creat d1/db/d11/d16/d29/d40/d4f/d5f/f73 x:0 0 0 2026-03-10T10:19:22.226 INFO:tasks.workunit.client.1.vm05.stdout:6/158: symlink dd/l3a 0 2026-03-10T10:19:22.226 INFO:tasks.workunit.client.0.vm02.stdout:0/251: creat d9/f4d x:0 0 0 2026-03-10T10:19:22.227 INFO:tasks.workunit.client.1.vm05.stdout:1/144: creat d4/d20/f2d x:0 0 0 2026-03-10T10:19:22.227 INFO:tasks.workunit.client.1.vm05.stdout:1/145: fsync d4/dd/f21 0 2026-03-10T10:19:22.227 INFO:tasks.workunit.client.1.vm05.stdout:6/159: chown dd/df/d12/d24 200 1 2026-03-10T10:19:22.230 INFO:tasks.workunit.client.0.vm02.stdout:0/252: dwrite d9/d18/d1a/d22/f3f [0,4194304] 0 2026-03-10T10:19:22.232 INFO:tasks.workunit.client.1.vm05.stdout:0/175: getdents d1/d2/d9/d31/d13 0 2026-03-10T10:19:22.232 INFO:tasks.workunit.client.0.vm02.stdout:3/213: creat d1/d20/f4b x:0 0 0 2026-03-10T10:19:22.232 INFO:tasks.workunit.client.0.vm02.stdout:5/343: creat d1/db/d11/d62/f74 x:0 0 0 2026-03-10T10:19:22.232 INFO:tasks.workunit.client.0.vm02.stdout:0/253: readlink d9/l2b 0 2026-03-10T10:19:22.233 INFO:tasks.workunit.client.0.vm02.stdout:3/214: chown d1/d6/f1b 137 1 2026-03-10T10:19:22.236 INFO:tasks.workunit.client.0.vm02.stdout:3/215: dwrite d1/d6/f32 [0,4194304] 0 2026-03-10T10:19:22.238 INFO:tasks.workunit.client.1.vm05.stdout:2/167: creat db/f36 x:0 0 0 2026-03-10T10:19:22.241 INFO:tasks.workunit.client.1.vm05.stdout:2/168: write db/d28/f30 [334862,105564] 0 2026-03-10T10:19:22.248 INFO:tasks.workunit.client.1.vm05.stdout:2/169: dread - db/d12/f31 zero size 2026-03-10T10:19:22.249 INFO:tasks.workunit.client.1.vm05.stdout:2/170: chown db/d12/f29 1601 1 
2026-03-10T10:19:22.256 INFO:tasks.workunit.client.0.vm02.stdout:2/310: creat d0/d10/f67 x:0 0 0 2026-03-10T10:19:22.256 INFO:tasks.workunit.client.1.vm05.stdout:2/171: truncate db/d28/f30 1257084 0 2026-03-10T10:19:22.257 INFO:tasks.workunit.client.1.vm05.stdout:3/228: creat dd/d15/d1f/f53 x:0 0 0 2026-03-10T10:19:22.257 INFO:tasks.workunit.client.1.vm05.stdout:3/229: readlink dd/l10 0 2026-03-10T10:19:22.257 INFO:tasks.workunit.client.1.vm05.stdout:9/172: creat d0/d1/d13/de/f38 x:0 0 0 2026-03-10T10:19:22.258 INFO:tasks.workunit.client.0.vm02.stdout:2/311: creat d0/d1a/d49/d5e/f68 x:0 0 0 2026-03-10T10:19:22.259 INFO:tasks.workunit.client.0.vm02.stdout:2/312: write d0/d10/f46 [2716435,95515] 0 2026-03-10T10:19:22.260 INFO:tasks.workunit.client.0.vm02.stdout:5/344: creat d1/db/d11/d13/d28/d37/d3d/f75 x:0 0 0 2026-03-10T10:19:22.261 INFO:tasks.workunit.client.0.vm02.stdout:5/345: truncate d1/db/d11/d16/d29/d40/f59 954197 0 2026-03-10T10:19:22.262 INFO:tasks.workunit.client.1.vm05.stdout:9/173: creat d0/df/f39 x:0 0 0 2026-03-10T10:19:22.265 INFO:tasks.workunit.client.0.vm02.stdout:5/346: creat d1/db/d11/d13/d28/d37/f76 x:0 0 0 2026-03-10T10:19:22.265 INFO:tasks.workunit.client.1.vm05.stdout:9/174: rename d0/d1/l1a to d0/d1/d13/de/d21/l3a 0 2026-03-10T10:19:22.265 INFO:tasks.workunit.client.1.vm05.stdout:9/175: stat d0/d1/d16/f36 0 2026-03-10T10:19:22.266 INFO:tasks.workunit.client.0.vm02.stdout:5/347: mknod d1/db/d11/d1a/c77 0 2026-03-10T10:19:22.267 INFO:tasks.workunit.client.0.vm02.stdout:5/348: fdatasync d1/db/d11/d13/d28/d37/f76 0 2026-03-10T10:19:22.267 INFO:tasks.workunit.client.1.vm05.stdout:9/176: creat d0/df/f3b x:0 0 0 2026-03-10T10:19:22.269 INFO:tasks.workunit.client.1.vm05.stdout:6/160: dread fb [0,4194304] 0 2026-03-10T10:19:22.270 INFO:tasks.workunit.client.0.vm02.stdout:5/349: dread d1/f12 [0,4194304] 0 2026-03-10T10:19:22.275 INFO:tasks.workunit.client.1.vm05.stdout:3/230: dwrite f6 [0,4194304] 0 2026-03-10T10:19:22.276 
INFO:tasks.workunit.client.1.vm05.stdout:9/177: chown d0/d1/d13/l6 270842546 1 2026-03-10T10:19:22.276 INFO:tasks.workunit.client.0.vm02.stdout:5/350: symlink d1/db/d11/d16/d29/l78 0 2026-03-10T10:19:22.277 INFO:tasks.workunit.client.1.vm05.stdout:3/231: dread - dd/d15/d24/f2f zero size 2026-03-10T10:19:22.278 INFO:tasks.workunit.client.0.vm02.stdout:5/351: readlink d1/lf 0 2026-03-10T10:19:22.278 INFO:tasks.workunit.client.1.vm05.stdout:6/161: mknod dd/c3b 0 2026-03-10T10:19:22.278 INFO:tasks.workunit.client.1.vm05.stdout:9/178: symlink d0/d1/d13/de/d21/l3c 0 2026-03-10T10:19:22.279 INFO:tasks.workunit.client.1.vm05.stdout:3/232: fdatasync dd/d15/d24/f42 0 2026-03-10T10:19:22.281 INFO:tasks.workunit.client.1.vm05.stdout:3/233: chown dd/d15/c43 38110 1 2026-03-10T10:19:22.284 INFO:tasks.workunit.client.1.vm05.stdout:6/162: creat dd/d1b/f3c x:0 0 0 2026-03-10T10:19:22.286 INFO:tasks.workunit.client.0.vm02.stdout:5/352: dread d1/f3 [0,4194304] 0 2026-03-10T10:19:22.286 INFO:tasks.workunit.client.1.vm05.stdout:3/234: rename dd/d15/d24/d2c/f2d to dd/d15/d1f/f54 0 2026-03-10T10:19:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:21 vm05.local ceph-mon[59051]: from='client.14654 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:21 vm05.local ceph-mon[59051]: pgmap v150: 65 pgs: 65 active+clean; 836 MiB data, 3.8 GiB used, 116 GiB / 120 GiB avail; 5.0 MiB/s rd, 104 MiB/s wr, 171 op/s 2026-03-10T10:19:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:21 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/780762303' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:19:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:21 vm05.local ceph-mon[59051]: from='client.? 
192.168.123.102:0/3571735718' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:19:22.290 INFO:tasks.workunit.client.0.vm02.stdout:5/353: unlink d1/db/d11/d16/d29/d40/f53 0 2026-03-10T10:19:22.290 INFO:tasks.workunit.client.1.vm05.stdout:3/235: dread dd/d15/d1f/f2a [0,4194304] 0 2026-03-10T10:19:22.291 INFO:tasks.workunit.client.0.vm02.stdout:5/354: mkdir d1/db/d11/d16/d79 0 2026-03-10T10:19:22.292 INFO:tasks.workunit.client.0.vm02.stdout:5/355: chown d1/l51 57152 1 2026-03-10T10:19:22.293 INFO:tasks.workunit.client.0.vm02.stdout:5/356: truncate d1/db/d11/d16/d29/d40/f66 434223 0 2026-03-10T10:19:22.294 INFO:tasks.workunit.client.0.vm02.stdout:5/357: readlink d1/db/d11/d16/d29/d40/d4f/d5f/d6d/l58 0 2026-03-10T10:19:22.302 INFO:tasks.workunit.client.1.vm05.stdout:3/236: rename dd/d39/f45 to dd/d15/d24/d2c/d3b/f55 0 2026-03-10T10:19:22.302 INFO:tasks.workunit.client.1.vm05.stdout:3/237: write f9 [1337425,99636] 0 2026-03-10T10:19:22.304 INFO:tasks.workunit.client.0.vm02.stdout:5/358: dread d1/fe [0,4194304] 0 2026-03-10T10:19:22.304 INFO:tasks.workunit.client.0.vm02.stdout:5/359: readlink d1/l8 0 2026-03-10T10:19:22.305 INFO:tasks.workunit.client.0.vm02.stdout:5/360: truncate d1/db/d11/d13/d28/d37/f3c 4366839 0 2026-03-10T10:19:22.306 INFO:tasks.workunit.client.1.vm05.stdout:3/238: mkdir dd/d20/d56 0 2026-03-10T10:19:22.334 INFO:tasks.workunit.client.0.vm02.stdout:5/361: creat d1/db/d11/d16/d29/d40/f7a x:0 0 0 2026-03-10T10:19:22.334 INFO:tasks.workunit.client.0.vm02.stdout:5/362: write d1/db/fd [1504813,126869] 0 2026-03-10T10:19:22.334 INFO:tasks.workunit.client.0.vm02.stdout:5/363: mkdir d1/db/d11/d7b 0 2026-03-10T10:19:22.334 INFO:tasks.workunit.client.1.vm05.stdout:3/239: mknod dd/d15/d24/d2c/d3b/c57 0 2026-03-10T10:19:22.334 INFO:tasks.workunit.client.1.vm05.stdout:3/240: symlink dd/l58 0 2026-03-10T10:19:22.334 INFO:tasks.workunit.client.1.vm05.stdout:3/241: stat dd/f52 0 2026-03-10T10:19:22.334 
INFO:tasks.workunit.client.1.vm05.stdout:3/242: rename dd/d15/d24/l28 to dd/d15/d24/l59 0 2026-03-10T10:19:22.336 INFO:tasks.workunit.client.0.vm02.stdout:5/364: dread d1/db/d11/d13/f1c [0,4194304] 0 2026-03-10T10:19:22.338 INFO:tasks.workunit.client.0.vm02.stdout:5/365: mknod d1/db/d11/d16/d29/d40/c7c 0 2026-03-10T10:19:22.342 INFO:tasks.workunit.client.0.vm02.stdout:5/366: rename d1/db/d11/f70 to d1/db/d11/f7d 0 2026-03-10T10:19:22.345 INFO:tasks.workunit.client.0.vm02.stdout:5/367: dwrite d1/db/d11/d13/d28/f31 [0,4194304] 0 2026-03-10T10:19:22.349 INFO:tasks.workunit.client.0.vm02.stdout:5/368: write d1/db/d11/d16/d29/d40/d4f/d5f/f6b [3427032,14604] 0 2026-03-10T10:19:22.351 INFO:tasks.workunit.client.0.vm02.stdout:5/369: symlink d1/db/d11/d16/d29/d40/d4f/l7e 0 2026-03-10T10:19:22.376 INFO:tasks.workunit.client.1.vm05.stdout:3/243: dread fb [0,4194304] 0 2026-03-10T10:19:22.377 INFO:tasks.workunit.client.1.vm05.stdout:3/244: mknod dd/d15/d1f/c5a 0 2026-03-10T10:19:22.380 INFO:tasks.workunit.client.1.vm05.stdout:0/176: sync 2026-03-10T10:19:22.380 INFO:tasks.workunit.client.1.vm05.stdout:2/172: sync 2026-03-10T10:19:22.388 INFO:tasks.workunit.client.1.vm05.stdout:2/173: stat db/d1c/l21 0 2026-03-10T10:19:22.395 INFO:tasks.workunit.client.1.vm05.stdout:8/140: write d7/f1c [2035510,33780] 0 2026-03-10T10:19:22.395 INFO:tasks.workunit.client.1.vm05.stdout:0/177: dread d1/d2/d9/f1d [0,4194304] 0 2026-03-10T10:19:22.400 INFO:tasks.workunit.client.1.vm05.stdout:0/178: mknod d1/d2/d9/d31/d13/d17/c3a 0 2026-03-10T10:19:22.414 INFO:tasks.workunit.client.1.vm05.stdout:0/179: rename d1/d2/d9/d31/d13/d15/c19 to d1/d2/d9/d31/d12/c3b 0 2026-03-10T10:19:22.414 INFO:tasks.workunit.client.1.vm05.stdout:0/180: creat d1/d7/f3c x:0 0 0 2026-03-10T10:19:22.414 INFO:tasks.workunit.client.1.vm05.stdout:0/181: mkdir d1/d2/d39/d3d 0 2026-03-10T10:19:22.414 INFO:tasks.workunit.client.1.vm05.stdout:0/182: link d1/d2/d9/d31/d12/d20/f37 d1/d2/d9/d31/d13/f3e 0 2026-03-10T10:19:22.420 
INFO:tasks.workunit.client.1.vm05.stdout:0/183: creat d1/d2/d9/f3f x:0 0 0 2026-03-10T10:19:22.426 INFO:tasks.workunit.client.1.vm05.stdout:2/174: sync 2026-03-10T10:19:22.427 INFO:tasks.workunit.client.1.vm05.stdout:0/184: rename d1/d2/d9/d31/d12/f2d to d1/d2/d9/f40 0 2026-03-10T10:19:22.428 INFO:tasks.workunit.client.1.vm05.stdout:2/175: fsync db/f2e 0 2026-03-10T10:19:22.437 INFO:tasks.workunit.client.1.vm05.stdout:0/185: dread d1/d2/d9/d31/d12/d20/f37 [0,4194304] 0 2026-03-10T10:19:22.442 INFO:tasks.workunit.client.1.vm05.stdout:0/186: mkdir d1/d2/d9/d31/d12/d41 0 2026-03-10T10:19:22.443 INFO:tasks.workunit.client.1.vm05.stdout:2/176: mknod db/c37 0 2026-03-10T10:19:22.447 INFO:tasks.workunit.client.0.vm02.stdout:5/370: fdatasync d1/db/d11/d13/d28/f31 0 2026-03-10T10:19:22.447 INFO:tasks.workunit.client.0.vm02.stdout:5/371: dread - d1/db/d11/d13/f1f zero size 2026-03-10T10:19:22.450 INFO:tasks.workunit.client.0.vm02.stdout:5/372: creat d1/f7f x:0 0 0 2026-03-10T10:19:22.454 INFO:tasks.workunit.client.0.vm02.stdout:5/373: creat d1/db/d11/d16/d29/d40/d4f/d5f/d6d/d71/f80 x:0 0 0 2026-03-10T10:19:22.455 INFO:tasks.workunit.client.0.vm02.stdout:5/374: write d1/db/f56 [1811809,29850] 0 2026-03-10T10:19:22.498 INFO:tasks.workunit.client.1.vm05.stdout:1/146: truncate d4/df/d1c/f23 1290721 0 2026-03-10T10:19:22.500 INFO:tasks.workunit.client.0.vm02.stdout:0/254: unlink d9/d18/d1a/c4c 0 2026-03-10T10:19:22.502 INFO:tasks.workunit.client.1.vm05.stdout:5/205: truncate f9 1190203 0 2026-03-10T10:19:22.502 INFO:tasks.workunit.client.0.vm02.stdout:0/255: creat d9/d34/d3d/f4e x:0 0 0 2026-03-10T10:19:22.502 INFO:tasks.workunit.client.0.vm02.stdout:0/256: readlink l7 0 2026-03-10T10:19:22.503 INFO:tasks.workunit.client.0.vm02.stdout:2/313: read d0/d10/f46 [2539603,81014] 0 2026-03-10T10:19:22.505 INFO:tasks.workunit.client.1.vm05.stdout:1/147: dwrite d4/df/d1c/f2a [0,4194304] 0 2026-03-10T10:19:22.505 INFO:tasks.workunit.client.0.vm02.stdout:0/257: creat d9/d18/d1a/d22/d24/f4f 
x:0 0 0 2026-03-10T10:19:22.506 INFO:tasks.workunit.client.0.vm02.stdout:0/258: write d9/d34/d3d/f4e [268361,61417] 0 2026-03-10T10:19:22.506 INFO:tasks.workunit.client.0.vm02.stdout:9/175: write da/d10/f2b [1868144,93270] 0 2026-03-10T10:19:22.515 INFO:tasks.workunit.client.0.vm02.stdout:2/314: unlink d0/d1a/d24/c3f 0 2026-03-10T10:19:22.515 INFO:tasks.workunit.client.1.vm05.stdout:1/148: mknod d4/dd/c2e 0 2026-03-10T10:19:22.515 INFO:tasks.workunit.client.1.vm05.stdout:5/206: creat da/db/d26/f3f x:0 0 0 2026-03-10T10:19:22.516 INFO:tasks.workunit.client.0.vm02.stdout:9/176: mkdir da/d10/d2c/d34/d35 0 2026-03-10T10:19:22.519 INFO:tasks.workunit.client.1.vm05.stdout:5/207: chown da/db/de/l19 7709792 1 2026-03-10T10:19:22.519 INFO:tasks.workunit.client.0.vm02.stdout:2/315: mkdir d0/d10/d69 0 2026-03-10T10:19:22.520 INFO:tasks.workunit.client.1.vm05.stdout:5/208: dread - da/db/d26/d35/f2a zero size 2026-03-10T10:19:22.527 INFO:tasks.workunit.client.1.vm05.stdout:5/209: dread da/db/d26/d35/f1c [0,4194304] 0 2026-03-10T10:19:22.527 INFO:tasks.workunit.client.1.vm05.stdout:5/210: stat da/f10 0 2026-03-10T10:19:22.540 INFO:tasks.workunit.client.0.vm02.stdout:9/177: dread da/f14 [0,4194304] 0 2026-03-10T10:19:22.542 INFO:tasks.workunit.client.0.vm02.stdout:4/317: write d1/d2/f34 [3215358,108863] 0 2026-03-10T10:19:22.546 INFO:tasks.workunit.client.0.vm02.stdout:7/238: dwrite d1/dc/d10/f27 [0,4194304] 0 2026-03-10T10:19:22.551 INFO:tasks.workunit.client.0.vm02.stdout:8/283: write d1/d2/f28 [40907,104624] 0 2026-03-10T10:19:22.553 INFO:tasks.workunit.client.0.vm02.stdout:9/178: creat da/d10/d2c/d34/f36 x:0 0 0 2026-03-10T10:19:22.554 INFO:tasks.workunit.client.0.vm02.stdout:9/179: truncate da/d10/f31 108947 0 2026-03-10T10:19:22.557 INFO:tasks.workunit.client.1.vm05.stdout:4/157: dwrite d1/d3/f12 [0,4194304] 0 2026-03-10T10:19:22.558 INFO:tasks.workunit.client.0.vm02.stdout:1/249: truncate d4/f3a 1956239 0 2026-03-10T10:19:22.559 
INFO:tasks.workunit.client.1.vm05.stdout:7/199: dwrite d5/ff [0,4194304] 0 2026-03-10T10:19:22.559 INFO:tasks.workunit.client.0.vm02.stdout:7/239: creat d1/dc/d44/f4a x:0 0 0 2026-03-10T10:19:22.560 INFO:tasks.workunit.client.1.vm05.stdout:7/200: truncate d5/dd/f1a 4505141 0 2026-03-10T10:19:22.560 INFO:tasks.workunit.client.1.vm05.stdout:7/201: chown d5/dd/f2f 8028703 1 2026-03-10T10:19:22.560 INFO:tasks.workunit.client.1.vm05.stdout:7/202: write d5/d17/f19 [4367393,8142] 0 2026-03-10T10:19:22.574 INFO:tasks.workunit.client.0.vm02.stdout:9/180: symlink da/d10/d2c/d34/l37 0 2026-03-10T10:19:22.574 INFO:tasks.workunit.client.1.vm05.stdout:4/158: symlink d1/d31/dc/l35 0 2026-03-10T10:19:22.574 INFO:tasks.workunit.client.0.vm02.stdout:6/210: truncate d0/f1c 2588995 0 2026-03-10T10:19:22.576 INFO:tasks.workunit.client.1.vm05.stdout:1/149: sync 2026-03-10T10:19:22.578 INFO:tasks.workunit.client.1.vm05.stdout:7/203: creat d5/d1d/d20/d35/f37 x:0 0 0 2026-03-10T10:19:22.580 INFO:tasks.workunit.client.0.vm02.stdout:3/216: truncate d1/d6/f36 2998257 0 2026-03-10T10:19:22.580 INFO:tasks.workunit.client.0.vm02.stdout:9/181: mkdir da/d10/d38 0 2026-03-10T10:19:22.580 INFO:tasks.workunit.client.0.vm02.stdout:3/217: dread - d1/d8/f3f zero size 2026-03-10T10:19:22.582 INFO:tasks.workunit.client.0.vm02.stdout:0/259: sync 2026-03-10T10:19:22.592 INFO:tasks.workunit.client.1.vm05.stdout:7/204: dread d5/dd/f12 [0,4194304] 0 2026-03-10T10:19:22.592 INFO:tasks.workunit.client.1.vm05.stdout:7/205: read d5/d17/f19 [2916812,59321] 0 2026-03-10T10:19:22.593 INFO:tasks.workunit.client.1.vm05.stdout:9/179: write d0/d1/d13/f22 [11295,96152] 0 2026-03-10T10:19:22.595 INFO:tasks.workunit.client.0.vm02.stdout:1/250: dread d4/da/d1a/f1c [0,4194304] 0 2026-03-10T10:19:22.596 INFO:tasks.workunit.client.0.vm02.stdout:1/251: write d4/da/f25 [802123,53061] 0 2026-03-10T10:19:22.596 INFO:tasks.workunit.client.0.vm02.stdout:1/252: stat d4/da/f25 0 2026-03-10T10:19:22.597 
INFO:tasks.workunit.client.1.vm05.stdout:9/180: write d0/df/f3b [322921,14323] 0 2026-03-10T10:19:22.599 INFO:tasks.workunit.client.0.vm02.stdout:3/218: creat d1/d8/d21/f4c x:0 0 0 2026-03-10T10:19:22.599 INFO:tasks.workunit.client.0.vm02.stdout:3/219: fsync d1/d6/f3a 0 2026-03-10T10:19:22.600 INFO:tasks.workunit.client.1.vm05.stdout:6/163: write dd/df/f1e [626052,27709] 0 2026-03-10T10:19:22.600 INFO:tasks.workunit.client.1.vm05.stdout:9/181: dwrite d0/d1/d13/f22 [4194304,4194304] 0 2026-03-10T10:19:22.601 INFO:tasks.workunit.client.1.vm05.stdout:9/182: dread - d0/df/f39 zero size 2026-03-10T10:19:22.608 INFO:tasks.workunit.client.1.vm05.stdout:9/183: dwrite d0/df/d11/f2c [0,4194304] 0 2026-03-10T10:19:22.609 INFO:tasks.workunit.client.0.vm02.stdout:6/211: rename d0/d8/d9/l41 to d0/d8/d9/d31/d32/l48 0 2026-03-10T10:19:22.610 INFO:tasks.workunit.client.0.vm02.stdout:7/240: link d1/f5 d1/d1b/d49/f4b 0 2026-03-10T10:19:22.613 INFO:tasks.workunit.client.1.vm05.stdout:1/150: creat d4/f2f x:0 0 0 2026-03-10T10:19:22.616 INFO:tasks.workunit.client.1.vm05.stdout:3/245: write dd/d15/d24/f2f [520350,28609] 0 2026-03-10T10:19:22.617 INFO:tasks.workunit.client.1.vm05.stdout:3/246: write dd/d15/d24/d2c/d3b/f40 [96280,124042] 0 2026-03-10T10:19:22.620 INFO:tasks.workunit.client.1.vm05.stdout:3/247: dread dd/d20/f50 [0,4194304] 0 2026-03-10T10:19:22.627 INFO:tasks.workunit.client.0.vm02.stdout:1/253: symlink d4/da/d27/d38/d3c/l55 0 2026-03-10T10:19:22.630 INFO:tasks.workunit.client.0.vm02.stdout:3/220: unlink d1/d8/f1a 0 2026-03-10T10:19:22.631 INFO:tasks.workunit.client.1.vm05.stdout:4/159: getdents d1 0 2026-03-10T10:19:22.634 INFO:tasks.workunit.client.0.vm02.stdout:4/318: dread d1/d32/f46 [0,4194304] 0 2026-03-10T10:19:22.634 INFO:tasks.workunit.client.0.vm02.stdout:6/212: mknod d0/d7/c49 0 2026-03-10T10:19:22.637 INFO:tasks.workunit.client.1.vm05.stdout:9/184: creat d0/d1/d16/f3d x:0 0 0 2026-03-10T10:19:22.637 INFO:tasks.workunit.client.1.vm05.stdout:6/164: mkdir 
dd/d27/d2a/d3d 0 2026-03-10T10:19:22.637 INFO:tasks.workunit.client.1.vm05.stdout:6/165: fsync dd/f14 0 2026-03-10T10:19:22.638 INFO:tasks.workunit.client.0.vm02.stdout:6/213: dwrite d0/d8/d9/d31/d32/f36 [0,4194304] 0 2026-03-10T10:19:22.639 INFO:tasks.workunit.client.0.vm02.stdout:6/214: chown d0/d8/d9/d31 6167146 1 2026-03-10T10:19:22.643 INFO:tasks.workunit.client.0.vm02.stdout:6/215: dread d0/d8/d29/d2f/f33 [0,4194304] 0 2026-03-10T10:19:22.650 INFO:tasks.workunit.client.1.vm05.stdout:1/151: mknod d4/df/c30 0 2026-03-10T10:19:22.655 INFO:tasks.workunit.client.1.vm05.stdout:3/248: unlink dd/d15/c1a 0 2026-03-10T10:19:22.655 INFO:tasks.workunit.client.0.vm02.stdout:3/221: dwrite d1/d20/f38 [0,4194304] 0 2026-03-10T10:19:22.660 INFO:tasks.workunit.client.0.vm02.stdout:9/182: link da/d10/c12 da/d10/d2c/c39 0 2026-03-10T10:19:22.661 INFO:tasks.workunit.client.1.vm05.stdout:1/152: dread d4/df/d1c/f2a [0,4194304] 0 2026-03-10T10:19:22.665 INFO:tasks.workunit.client.0.vm02.stdout:5/375: truncate d1/fe 3583306 0 2026-03-10T10:19:22.670 INFO:tasks.workunit.client.1.vm05.stdout:8/141: dwrite d7/f11 [0,4194304] 0 2026-03-10T10:19:22.670 INFO:tasks.workunit.client.0.vm02.stdout:5/376: dread d1/f32 [0,4194304] 0 2026-03-10T10:19:22.671 INFO:tasks.workunit.client.0.vm02.stdout:0/260: getdents d9/d18/d1a/d3c 0 2026-03-10T10:19:22.673 INFO:tasks.workunit.client.0.vm02.stdout:4/319: rmdir d1/d2/d37 39 2026-03-10T10:19:22.673 INFO:tasks.workunit.client.1.vm05.stdout:8/142: chown d7/fb 33220237 1 2026-03-10T10:19:22.675 INFO:tasks.workunit.client.1.vm05.stdout:9/185: write d0/d1/fb [4404888,49432] 0 2026-03-10T10:19:22.675 INFO:tasks.workunit.client.1.vm05.stdout:0/187: write d1/d2/fc [4499887,127861] 0 2026-03-10T10:19:22.676 INFO:tasks.workunit.client.1.vm05.stdout:5/211: rmdir da/db 39 2026-03-10T10:19:22.679 INFO:tasks.workunit.client.1.vm05.stdout:3/249: mknod dd/d39/c5b 0 2026-03-10T10:19:22.681 INFO:tasks.workunit.client.0.vm02.stdout:6/216: symlink d0/d3a/l4a 0 
2026-03-10T10:19:22.684 INFO:tasks.workunit.client.1.vm05.stdout:1/153: dwrite d4/f2f [0,4194304] 0 2026-03-10T10:19:22.684 INFO:tasks.workunit.client.1.vm05.stdout:9/186: fdatasync d0/df/f39 0 2026-03-10T10:19:22.685 INFO:tasks.workunit.client.1.vm05.stdout:3/250: dread - dd/d15/d24/f42 zero size 2026-03-10T10:19:22.688 INFO:tasks.workunit.client.1.vm05.stdout:9/187: fdatasync d0/df/d11/f24 0 2026-03-10T10:19:22.689 INFO:tasks.workunit.client.0.vm02.stdout:2/316: dwrite d0/d1a/f33 [0,4194304] 0 2026-03-10T10:19:22.690 INFO:tasks.workunit.client.1.vm05.stdout:2/177: dwrite db/d12/f29 [0,4194304] 0 2026-03-10T10:19:22.705 INFO:tasks.workunit.client.1.vm05.stdout:1/154: dwrite d4/d20/f2c [4194304,4194304] 0 2026-03-10T10:19:22.705 INFO:tasks.workunit.client.1.vm05.stdout:8/143: unlink d7/fb 0 2026-03-10T10:19:22.705 INFO:tasks.workunit.client.1.vm05.stdout:1/155: write d4/d20/f2d [478606,59634] 0 2026-03-10T10:19:22.705 INFO:tasks.workunit.client.1.vm05.stdout:6/166: mkdir dd/d27/d2a/d3d/d3e 0 2026-03-10T10:19:22.706 INFO:tasks.workunit.client.1.vm05.stdout:9/188: fsync d0/d1/d16/f36 0 2026-03-10T10:19:22.709 INFO:tasks.workunit.client.1.vm05.stdout:3/251: mkdir dd/d39/d5c 0 2026-03-10T10:19:22.709 INFO:tasks.workunit.client.1.vm05.stdout:9/189: dread - d0/d1/d16/f3d zero size 2026-03-10T10:19:22.713 INFO:tasks.workunit.client.0.vm02.stdout:9/183: creat da/d10/d2c/d34/f3a x:0 0 0 2026-03-10T10:19:22.715 INFO:tasks.workunit.client.1.vm05.stdout:9/190: write d0/d1/fb [7191123,15184] 0 2026-03-10T10:19:22.716 INFO:tasks.workunit.client.0.vm02.stdout:3/222: dwrite d1/d8/f2e [0,4194304] 0 2026-03-10T10:19:22.718 INFO:tasks.workunit.client.1.vm05.stdout:8/144: creat d7/d14/f25 x:0 0 0 2026-03-10T10:19:22.722 INFO:tasks.workunit.client.1.vm05.stdout:6/167: fdatasync dd/d27/d30/f37 0 2026-03-10T10:19:22.722 INFO:tasks.workunit.client.0.vm02.stdout:4/320: mknod d1/d32/c62 0 2026-03-10T10:19:22.725 INFO:tasks.workunit.client.0.vm02.stdout:1/254: mknod d4/da/d1a/d47/c56 0 
2026-03-10T10:19:22.727 INFO:tasks.workunit.client.0.vm02.stdout:0/261: dwrite d9/d34/d3d/f41 [0,4194304] 0 2026-03-10T10:19:22.730 INFO:tasks.workunit.client.1.vm05.stdout:9/191: mknod d0/df/d11/c3e 0 2026-03-10T10:19:22.730 INFO:tasks.workunit.client.0.vm02.stdout:0/262: readlink l7 0 2026-03-10T10:19:22.730 INFO:tasks.workunit.client.0.vm02.stdout:9/184: creat da/d10/f3b x:0 0 0 2026-03-10T10:19:22.733 INFO:tasks.workunit.client.0.vm02.stdout:5/377: mknod d1/c81 0 2026-03-10T10:19:22.735 INFO:tasks.workunit.client.0.vm02.stdout:4/321: unlink d1/d10/f45 0 2026-03-10T10:19:22.735 INFO:tasks.workunit.client.0.vm02.stdout:4/322: write d1/d10/db/f20 [3623827,41588] 0 2026-03-10T10:19:22.738 INFO:tasks.workunit.client.1.vm05.stdout:1/156: dwrite d4/df/f11 [4194304,4194304] 0 2026-03-10T10:19:22.744 INFO:tasks.workunit.client.0.vm02.stdout:5/378: creat d1/db/d11/d16/d29/f82 x:0 0 0 2026-03-10T10:19:22.747 INFO:tasks.workunit.client.1.vm05.stdout:3/252: dwrite dd/d15/d1f/f2a [0,4194304] 0 2026-03-10T10:19:22.759 INFO:tasks.workunit.client.0.vm02.stdout:2/317: creat d0/d10/f6a x:0 0 0 2026-03-10T10:19:22.759 INFO:tasks.workunit.client.0.vm02.stdout:1/255: mkdir d4/da/d57 0 2026-03-10T10:19:22.759 INFO:tasks.workunit.client.0.vm02.stdout:4/323: mkdir d1/d2/d37/d63 0 2026-03-10T10:19:22.759 INFO:tasks.workunit.client.0.vm02.stdout:1/256: dread d4/f21 [0,4194304] 0 2026-03-10T10:19:22.759 INFO:tasks.workunit.client.1.vm05.stdout:8/145: unlink d7/ce 0 2026-03-10T10:19:22.759 INFO:tasks.workunit.client.1.vm05.stdout:9/192: unlink d0/df/f31 0 2026-03-10T10:19:22.759 INFO:tasks.workunit.client.1.vm05.stdout:0/188: write d1/d2/d9/d31/d13/d2f/f33 [72126,53049] 0 2026-03-10T10:19:22.759 INFO:tasks.workunit.client.1.vm05.stdout:9/193: stat d0/d1/d16/f18 0 2026-03-10T10:19:22.759 INFO:tasks.workunit.client.1.vm05.stdout:9/194: dread d0/df/d11/f2d [0,4194304] 0 2026-03-10T10:19:22.759 INFO:tasks.workunit.client.1.vm05.stdout:9/195: chown d0/f2f 9 1 2026-03-10T10:19:22.759 
INFO:tasks.workunit.client.0.vm02.stdout:7/241: fsync d1/dc/d10/f27 0 2026-03-10T10:19:22.760 INFO:tasks.workunit.client.1.vm05.stdout:1/157: creat d4/d20/f31 x:0 0 0 2026-03-10T10:19:22.760 INFO:tasks.workunit.client.1.vm05.stdout:1/158: read d4/df/d1c/f23 [548590,109055] 0 2026-03-10T10:19:22.761 INFO:tasks.workunit.client.1.vm05.stdout:2/178: link db/d2d/l32 db/d28/l38 0 2026-03-10T10:19:22.763 INFO:tasks.workunit.client.0.vm02.stdout:4/324: symlink d1/d2/d37/l64 0 2026-03-10T10:19:22.781 INFO:tasks.workunit.client.0.vm02.stdout:7/242: creat d1/dc/d16/d28/d2d/f4c x:0 0 0 2026-03-10T10:19:22.782 INFO:tasks.workunit.client.0.vm02.stdout:7/243: write d1/d1b/f43 [3956352,16942] 0 2026-03-10T10:19:22.785 INFO:tasks.workunit.client.1.vm05.stdout:6/168: truncate dd/f14 3388797 0 2026-03-10T10:19:22.785 INFO:tasks.workunit.client.1.vm05.stdout:6/169: chown dd/d27 2617983 1 2026-03-10T10:19:22.786 INFO:tasks.workunit.client.1.vm05.stdout:6/170: write dd/df/f1e [523232,19032] 0 2026-03-10T10:19:22.787 INFO:tasks.workunit.client.0.vm02.stdout:2/318: creat d0/d10/f6b x:0 0 0 2026-03-10T10:19:22.788 INFO:tasks.workunit.client.0.vm02.stdout:2/319: chown d0/l1c 882080 1 2026-03-10T10:19:22.788 INFO:tasks.workunit.client.0.vm02.stdout:2/320: readlink d0/d1a/l18 0 2026-03-10T10:19:22.792 INFO:tasks.workunit.client.0.vm02.stdout:2/321: dwrite d0/f30 [0,4194304] 0 2026-03-10T10:19:22.792 INFO:tasks.workunit.client.0.vm02.stdout:2/322: write d0/d1a/d49/d5e/f68 [323702,104664] 0 2026-03-10T10:19:22.795 INFO:tasks.workunit.client.0.vm02.stdout:1/257: creat d4/d2c/d53/f58 x:0 0 0 2026-03-10T10:19:22.802 INFO:tasks.workunit.client.0.vm02.stdout:1/258: dread - d4/da/d27/d38/f4e zero size 2026-03-10T10:19:22.802 INFO:tasks.workunit.client.1.vm05.stdout:0/189: mknod d1/d2/c42 0 2026-03-10T10:19:22.802 INFO:tasks.workunit.client.1.vm05.stdout:2/179: mknod db/d28/c39 0 2026-03-10T10:19:22.802 INFO:tasks.workunit.client.1.vm05.stdout:8/146: link d7/f12 d7/d14/d24/f26 0 
2026-03-10T10:19:22.802 INFO:tasks.workunit.client.1.vm05.stdout:0/190: write d1/d2/d9/f3f [445479,70387] 0 2026-03-10T10:19:22.804 INFO:tasks.workunit.client.0.vm02.stdout:7/244: dread d1/dc/d10/f13 [0,4194304] 0 2026-03-10T10:19:22.806 INFO:tasks.workunit.client.1.vm05.stdout:0/191: write d1/d2/d9/d31/f36 [1024702,64034] 0 2026-03-10T10:19:22.809 INFO:tasks.workunit.client.1.vm05.stdout:8/147: dwrite d7/d14/f25 [0,4194304] 0 2026-03-10T10:19:22.824 INFO:tasks.workunit.client.0.vm02.stdout:7/245: chown d1/fd 241545356 1 2026-03-10T10:19:22.824 INFO:tasks.workunit.client.0.vm02.stdout:7/246: write d1/dc/f26 [7666224,43899] 0 2026-03-10T10:19:22.825 INFO:tasks.workunit.client.1.vm05.stdout:2/180: chown db/le 2285 1 2026-03-10T10:19:22.828 INFO:tasks.workunit.client.0.vm02.stdout:7/247: truncate d1/f5 4459855 0 2026-03-10T10:19:22.829 INFO:tasks.workunit.client.0.vm02.stdout:7/248: chown d1/d1b/d49/f4b 10422 1 2026-03-10T10:19:22.830 INFO:tasks.workunit.client.0.vm02.stdout:7/249: dread - d1/dc/d16/d28/d2d/f2f zero size 2026-03-10T10:19:22.847 INFO:tasks.workunit.client.0.vm02.stdout:7/250: mkdir d1/d1b/d4d 0 2026-03-10T10:19:22.847 INFO:tasks.workunit.client.1.vm05.stdout:2/181: dwrite db/f36 [0,4194304] 0 2026-03-10T10:19:22.847 INFO:tasks.workunit.client.1.vm05.stdout:8/148: link d7/l1a d7/l27 0 2026-03-10T10:19:22.847 INFO:tasks.workunit.client.1.vm05.stdout:8/149: fdatasync d7/f21 0 2026-03-10T10:19:22.847 INFO:tasks.workunit.client.1.vm05.stdout:8/150: symlink d7/l28 0 2026-03-10T10:19:22.847 INFO:tasks.workunit.client.1.vm05.stdout:2/182: dwrite db/d28/f30 [0,4194304] 0 2026-03-10T10:19:22.847 INFO:tasks.workunit.client.1.vm05.stdout:2/183: write db/d12/f31 [291810,24350] 0 2026-03-10T10:19:22.847 INFO:tasks.workunit.client.1.vm05.stdout:8/151: chown d7/d14/c20 2216032 1 2026-03-10T10:19:22.847 INFO:tasks.workunit.client.1.vm05.stdout:2/184: chown ca 344 1 2026-03-10T10:19:22.854 INFO:tasks.workunit.client.1.vm05.stdout:8/152: rename d7/f1e to d7/d14/f29 0 
2026-03-10T10:19:22.872 INFO:tasks.workunit.client.1.vm05.stdout:1/159: sync 2026-03-10T10:19:22.873 INFO:tasks.workunit.client.1.vm05.stdout:8/153: dread d7/f8 [0,4194304] 0 2026-03-10T10:19:22.874 INFO:tasks.workunit.client.1.vm05.stdout:6/171: dread dd/df/d12/f35 [0,4194304] 0 2026-03-10T10:19:22.881 INFO:tasks.workunit.client.0.vm02.stdout:4/325: dread d1/d2/f3f [0,4194304] 0 2026-03-10T10:19:22.892 INFO:tasks.workunit.client.0.vm02.stdout:4/326: dwrite d1/d52/f5a [0,4194304] 0 2026-03-10T10:19:22.893 INFO:tasks.workunit.client.0.vm02.stdout:4/327: write d1/d10/db/f35 [2496350,121699] 0 2026-03-10T10:19:22.893 INFO:tasks.workunit.client.1.vm05.stdout:8/154: symlink d7/d14/d15/l2a 0 2026-03-10T10:19:22.893 INFO:tasks.workunit.client.1.vm05.stdout:8/155: write d7/f1c [568591,117661] 0 2026-03-10T10:19:22.893 INFO:tasks.workunit.client.1.vm05.stdout:1/160: dwrite d4/d20/f31 [0,4194304] 0 2026-03-10T10:19:22.893 INFO:tasks.workunit.client.1.vm05.stdout:8/156: chown d7/f1c 1843 1 2026-03-10T10:19:22.893 INFO:tasks.workunit.client.1.vm05.stdout:1/161: mknod d4/df/c32 0 2026-03-10T10:19:22.919 INFO:tasks.workunit.client.1.vm05.stdout:0/192: sync 2026-03-10T10:19:22.927 INFO:tasks.workunit.client.0.vm02.stdout:8/284: dwrite d1/d1c/f1e [0,4194304] 0 2026-03-10T10:19:22.931 INFO:tasks.workunit.client.1.vm05.stdout:4/160: getdents d1/d31/dc 0 2026-03-10T10:19:22.931 INFO:tasks.workunit.client.0.vm02.stdout:8/285: fdatasync d1/d1c/d43/f53 0 2026-03-10T10:19:22.931 INFO:tasks.workunit.client.0.vm02.stdout:8/286: mkdir d1/d1c/d24/d35/d56 0 2026-03-10T10:19:22.931 INFO:tasks.workunit.client.0.vm02.stdout:8/287: rmdir d1/d1c/d23 39 2026-03-10T10:19:22.932 INFO:tasks.workunit.client.0.vm02.stdout:8/288: write d1/d1c/f42 [1435824,83269] 0 2026-03-10T10:19:22.933 INFO:tasks.workunit.client.1.vm05.stdout:7/206: dwrite d5/dd/f12 [4194304,4194304] 0 2026-03-10T10:19:22.935 INFO:tasks.workunit.client.0.vm02.stdout:8/289: mknod d1/d1c/d24/d35/d56/c57 0 2026-03-10T10:19:22.936 
INFO:tasks.workunit.client.0.vm02.stdout:8/290: write d1/d1c/f33 [2928433,80052] 0 2026-03-10T10:19:22.936 INFO:tasks.workunit.client.1.vm05.stdout:0/193: read d1/f11 [818167,108653] 0 2026-03-10T10:19:22.936 INFO:tasks.workunit.client.0.vm02.stdout:8/291: chown d1/d1c/c3a 2933 1 2026-03-10T10:19:22.937 INFO:tasks.workunit.client.1.vm05.stdout:6/172: sync 2026-03-10T10:19:22.938 INFO:tasks.workunit.client.1.vm05.stdout:4/161: creat d1/d31/f36 x:0 0 0 2026-03-10T10:19:22.946 INFO:tasks.workunit.client.0.vm02.stdout:8/292: mknod d1/d1c/d24/d35/c58 0 2026-03-10T10:19:22.946 INFO:tasks.workunit.client.1.vm05.stdout:9/196: getdents d0/d1/d16 0 2026-03-10T10:19:22.947 INFO:tasks.workunit.client.1.vm05.stdout:9/197: chown d0/f1e 68470 1 2026-03-10T10:19:22.947 INFO:tasks.workunit.client.1.vm05.stdout:9/198: truncate d0/df/f3b 1006497 0 2026-03-10T10:19:22.948 INFO:tasks.workunit.client.0.vm02.stdout:6/217: rename d0/d7 to d0/d8/d29/d2f/d4b 0 2026-03-10T10:19:22.949 INFO:tasks.workunit.client.0.vm02.stdout:6/218: dread - d0/d8/d29/d2f/f38 zero size 2026-03-10T10:19:22.949 INFO:tasks.workunit.client.1.vm05.stdout:6/173: chown dd/df/f22 480178974 1 2026-03-10T10:19:22.951 INFO:tasks.workunit.client.1.vm05.stdout:4/162: symlink d1/d31/dc/l37 0 2026-03-10T10:19:22.958 INFO:tasks.workunit.client.0.vm02.stdout:3/223: rename d1/d8/f34 to d1/d8/d21/f4d 0 2026-03-10T10:19:22.958 INFO:tasks.workunit.client.0.vm02.stdout:3/224: dwrite d1/d6/f39 [0,4194304] 0 2026-03-10T10:19:22.958 INFO:tasks.workunit.client.0.vm02.stdout:3/225: dread - d1/d20/f41 zero size 2026-03-10T10:19:22.959 INFO:tasks.workunit.client.0.vm02.stdout:3/226: dwrite d1/f3 [4194304,4194304] 0 2026-03-10T10:19:22.974 INFO:tasks.workunit.client.0.vm02.stdout:6/219: dread d0/f21 [4194304,4194304] 0 2026-03-10T10:19:22.975 INFO:tasks.workunit.client.0.vm02.stdout:6/220: write d0/d8/d29/d2f/f38 [396098,127507] 0 2026-03-10T10:19:22.986 INFO:tasks.workunit.client.0.vm02.stdout:8/293: creat d1/d1c/d24/f59 x:0 0 0 
2026-03-10T10:19:22.989 INFO:tasks.workunit.client.1.vm05.stdout:2/185: write db/d12/f1d [4740541,92936] 0 2026-03-10T10:19:22.993 INFO:tasks.workunit.client.1.vm05.stdout:6/174: rename dd/df to dd/d36/d3f 0 2026-03-10T10:19:22.995 INFO:tasks.workunit.client.0.vm02.stdout:0/263: rename c8 to d9/d18/d1a/d46/c50 0 2026-03-10T10:19:22.995 INFO:tasks.workunit.client.0.vm02.stdout:0/264: stat f2 0 2026-03-10T10:19:22.999 INFO:tasks.workunit.client.1.vm05.stdout:4/163: readlink d1/d3/le 0 2026-03-10T10:19:23.002 INFO:tasks.workunit.client.0.vm02.stdout:6/221: write d0/f28 [75775,130461] 0 2026-03-10T10:19:23.003 INFO:tasks.workunit.client.0.vm02.stdout:8/294: dread - d1/d1c/d43/f46 zero size 2026-03-10T10:19:23.005 INFO:tasks.workunit.client.0.vm02.stdout:6/222: dwrite d0/d8/d29/d2f/d4b/f17 [0,4194304] 0 2026-03-10T10:19:23.016 INFO:tasks.workunit.client.0.vm02.stdout:4/328: rename d1/d2/d1a/c2f to d1/d2/d37/d63/c65 0 2026-03-10T10:19:23.020 INFO:tasks.workunit.client.0.vm02.stdout:0/265: mkdir d9/d18/d1a/d22/d24/d51 0 2026-03-10T10:19:23.031 INFO:tasks.workunit.client.0.vm02.stdout:0/266: symlink d9/d18/d1a/d22/d24/l52 0 2026-03-10T10:19:23.031 INFO:tasks.workunit.client.0.vm02.stdout:3/227: creat d1/f4e x:0 0 0 2026-03-10T10:19:23.032 INFO:tasks.workunit.client.1.vm05.stdout:4/164: dwrite d1/f17 [0,4194304] 0 2026-03-10T10:19:23.032 INFO:tasks.workunit.client.1.vm05.stdout:1/162: fsync d4/df/f11 0 2026-03-10T10:19:23.032 INFO:tasks.workunit.client.1.vm05.stdout:2/186: rename db/l34 to db/d2d/l3a 0 2026-03-10T10:19:23.032 INFO:tasks.workunit.client.1.vm05.stdout:2/187: fdatasync db/d28/f35 0 2026-03-10T10:19:23.032 INFO:tasks.workunit.client.1.vm05.stdout:5/212: write da/db/f1d [196423,116854] 0 2026-03-10T10:19:23.032 INFO:tasks.workunit.client.1.vm05.stdout:7/207: getdents d5/dd 0 2026-03-10T10:19:23.033 INFO:tasks.workunit.client.1.vm05.stdout:9/199: creat d0/d1/d13/f3f x:0 0 0 2026-03-10T10:19:23.033 INFO:tasks.workunit.client.1.vm05.stdout:6/175: unlink dd/c11 0 
2026-03-10T10:19:23.034 INFO:tasks.workunit.client.0.vm02.stdout:4/329: fsync d1/d2/d1a/d49/f5c 0 2026-03-10T10:19:23.036 INFO:tasks.workunit.client.1.vm05.stdout:2/188: creat db/d12/f3b x:0 0 0 2026-03-10T10:19:23.050 INFO:tasks.workunit.client.1.vm05.stdout:6/176: stat dd/fe 0 2026-03-10T10:19:23.050 INFO:tasks.workunit.client.1.vm05.stdout:7/208: dwrite d5/d1d/d20/d35/f37 [0,4194304] 0 2026-03-10T10:19:23.051 INFO:tasks.workunit.client.1.vm05.stdout:1/163: rename d4/dd/l1e to d4/l33 0 2026-03-10T10:19:23.051 INFO:tasks.workunit.client.1.vm05.stdout:5/213: dwrite da/f10 [0,4194304] 0 2026-03-10T10:19:23.051 INFO:tasks.workunit.client.0.vm02.stdout:6/223: creat d0/f4c x:0 0 0 2026-03-10T10:19:23.051 INFO:tasks.workunit.client.0.vm02.stdout:4/330: creat d1/d52/d53/f66 x:0 0 0 2026-03-10T10:19:23.051 INFO:tasks.workunit.client.0.vm02.stdout:3/228: rename d1/c2 to d1/d8/d44/c4f 0 2026-03-10T10:19:23.051 INFO:tasks.workunit.client.0.vm02.stdout:6/224: readlink d0/l3 0 2026-03-10T10:19:23.051 INFO:tasks.workunit.client.0.vm02.stdout:3/229: dwrite d1/d8/fb [0,4194304] 0 2026-03-10T10:19:23.051 INFO:tasks.workunit.client.0.vm02.stdout:4/331: chown d1/d41/f58 150080 1 2026-03-10T10:19:23.051 INFO:tasks.workunit.client.0.vm02.stdout:6/225: rmdir d0/d3a 39 2026-03-10T10:19:23.053 INFO:tasks.workunit.client.1.vm05.stdout:9/200: dread d0/d1/d13/f27 [0,4194304] 0 2026-03-10T10:19:23.062 INFO:tasks.workunit.client.0.vm02.stdout:4/332: creat d1/d32/d3e/f67 x:0 0 0 2026-03-10T10:19:23.068 INFO:tasks.workunit.client.1.vm05.stdout:7/209: readlink d5/d1d/d29/l2e 0 2026-03-10T10:19:23.068 INFO:tasks.workunit.client.1.vm05.stdout:1/164: dwrite d4/dd/f1f [0,4194304] 0 2026-03-10T10:19:23.073 INFO:tasks.workunit.client.1.vm05.stdout:4/165: rename d1/d3/c1c to d1/d31/dc/c38 0 2026-03-10T10:19:23.078 INFO:tasks.workunit.client.0.vm02.stdout:4/333: symlink d1/d2/d55/l68 0 2026-03-10T10:19:23.088 INFO:tasks.workunit.client.0.vm02.stdout:4/334: write d1/d10/db/f35 [1925919,102601] 0 
2026-03-10T10:19:23.088 INFO:tasks.workunit.client.0.vm02.stdout:4/335: write d1/d10/db/f20 [554267,57532] 0 2026-03-10T10:19:23.088 INFO:tasks.workunit.client.0.vm02.stdout:4/336: rename d1/d2/d1a/f4b to d1/d32/f69 0 2026-03-10T10:19:23.088 INFO:tasks.workunit.client.0.vm02.stdout:3/230: getdents d1/d6 0 2026-03-10T10:19:23.088 INFO:tasks.workunit.client.0.vm02.stdout:4/337: mknod d1/d41/d5e/c6a 0 2026-03-10T10:19:23.088 INFO:tasks.workunit.client.1.vm05.stdout:5/214: mknod da/c40 0 2026-03-10T10:19:23.088 INFO:tasks.workunit.client.0.vm02.stdout:4/338: read - d1/d2/d55/f5f zero size 2026-03-10T10:19:23.089 INFO:tasks.workunit.client.1.vm05.stdout:7/210: mknod d5/d1d/d20/d2d/c38 0 2026-03-10T10:19:23.089 INFO:tasks.workunit.client.0.vm02.stdout:3/231: creat d1/f50 x:0 0 0 2026-03-10T10:19:23.092 INFO:tasks.workunit.client.0.vm02.stdout:3/232: creat d1/d20/f51 x:0 0 0 2026-03-10T10:19:23.093 INFO:tasks.workunit.client.1.vm05.stdout:7/211: creat d5/d26/f39 x:0 0 0 2026-03-10T10:19:23.094 INFO:tasks.workunit.client.0.vm02.stdout:4/339: mknod d1/d2/c6b 0 2026-03-10T10:19:23.094 INFO:tasks.workunit.client.0.vm02.stdout:4/340: stat d1/d32/c47 0 2026-03-10T10:19:23.106 INFO:tasks.workunit.client.1.vm05.stdout:7/212: dread - d5/dd/f28 zero size 2026-03-10T10:19:23.106 INFO:tasks.workunit.client.1.vm05.stdout:5/215: creat da/f41 x:0 0 0 2026-03-10T10:19:23.106 INFO:tasks.workunit.client.1.vm05.stdout:7/213: creat d5/d1d/d29/f3a x:0 0 0 2026-03-10T10:19:23.106 INFO:tasks.workunit.client.0.vm02.stdout:4/341: mkdir d1/d2/d37/d6c 0 2026-03-10T10:19:23.106 INFO:tasks.workunit.client.0.vm02.stdout:4/342: write d1/d41/f58 [282861,28630] 0 2026-03-10T10:19:23.106 INFO:tasks.workunit.client.0.vm02.stdout:4/343: mknod d1/d10/db/c6d 0 2026-03-10T10:19:23.106 INFO:tasks.workunit.client.0.vm02.stdout:4/344: link d1/d10/db/c6d d1/d52/c6e 0 2026-03-10T10:19:23.106 INFO:tasks.workunit.client.0.vm02.stdout:4/345: chown d1/d2/d37/d6c 2159 1 2026-03-10T10:19:23.112 
INFO:tasks.workunit.client.1.vm05.stdout:1/165: read d4/d20/f2d [171313,74871] 0 2026-03-10T10:19:23.116 INFO:tasks.workunit.client.1.vm05.stdout:5/216: dread da/db/d26/d35/f30 [0,4194304] 0 2026-03-10T10:19:23.118 INFO:tasks.workunit.client.1.vm05.stdout:5/217: symlink da/db/l42 0 2026-03-10T10:19:23.119 INFO:tasks.workunit.client.1.vm05.stdout:1/166: dwrite d4/dd/f1f [0,4194304] 0 2026-03-10T10:19:23.122 INFO:tasks.workunit.client.1.vm05.stdout:5/218: write da/db/fd [3235920,100695] 0 2026-03-10T10:19:23.122 INFO:tasks.workunit.client.1.vm05.stdout:5/219: rename da/db to da/db/d43 22 2026-03-10T10:19:23.133 INFO:tasks.workunit.client.1.vm05.stdout:5/220: getdents da/db/d26/d35/d38 0 2026-03-10T10:19:23.136 INFO:tasks.workunit.client.1.vm05.stdout:5/221: creat da/db/d28/f44 x:0 0 0 2026-03-10T10:19:23.136 INFO:tasks.workunit.client.1.vm05.stdout:5/222: fdatasync da/f41 0 2026-03-10T10:19:23.145 INFO:tasks.workunit.client.1.vm05.stdout:5/223: symlink da/db/d26/d35/d38/l45 0 2026-03-10T10:19:23.145 INFO:tasks.workunit.client.1.vm05.stdout:1/167: dread d4/df/d1c/f2a [0,4194304] 0 2026-03-10T10:19:23.148 INFO:tasks.workunit.client.1.vm05.stdout:5/224: chown da/f10 2671970 1 2026-03-10T10:19:23.152 INFO:tasks.workunit.client.1.vm05.stdout:1/168: mknod d4/df/d1c/c34 0 2026-03-10T10:19:23.157 INFO:tasks.workunit.client.1.vm05.stdout:5/225: getdents da/db/de 0 2026-03-10T10:19:23.162 INFO:tasks.workunit.client.1.vm05.stdout:1/169: dwrite d4/f2f [0,4194304] 0 2026-03-10T10:19:23.172 INFO:tasks.workunit.client.1.vm05.stdout:5/226: creat da/db/de/f46 x:0 0 0 2026-03-10T10:19:23.173 INFO:tasks.workunit.client.1.vm05.stdout:5/227: creat da/db/d28/f47 x:0 0 0 2026-03-10T10:19:23.173 INFO:tasks.workunit.client.1.vm05.stdout:1/170: rename d4/df/d1c/c34 to d4/df/c35 0 2026-03-10T10:19:23.173 INFO:tasks.workunit.client.1.vm05.stdout:5/228: dwrite da/f10 [0,4194304] 0 2026-03-10T10:19:23.189 INFO:tasks.workunit.client.1.vm05.stdout:5/229: rename da/db/d26/f3f to 
da/db/d26/d35/d38/f48 0 2026-03-10T10:19:23.191 INFO:tasks.workunit.client.1.vm05.stdout:1/171: dwrite d4/d20/f31 [0,4194304] 0 2026-03-10T10:19:23.199 INFO:tasks.workunit.client.1.vm05.stdout:6/177: read dd/d36/d3f/f1e [484075,94511] 0 2026-03-10T10:19:23.203 INFO:tasks.workunit.client.1.vm05.stdout:5/230: dwrite da/db/f1d [0,4194304] 0 2026-03-10T10:19:23.204 INFO:tasks.workunit.client.1.vm05.stdout:1/172: link d4/df/f11 d4/f36 0 2026-03-10T10:19:23.204 INFO:tasks.workunit.client.1.vm05.stdout:1/173: readlink d4/d20/l2b 0 2026-03-10T10:19:23.205 INFO:tasks.workunit.client.1.vm05.stdout:5/231: mknod da/db/d28/d32/c49 0 2026-03-10T10:19:23.208 INFO:tasks.workunit.client.0.vm02.stdout:6/226: dread d0/d8/d29/d2f/f38 [0,4194304] 0 2026-03-10T10:19:23.229 INFO:tasks.workunit.client.0.vm02.stdout:6/227: read - d0/d3a/f40 zero size 2026-03-10T10:19:23.232 INFO:tasks.workunit.client.1.vm05.stdout:5/232: symlink da/db/de/l4a 0 2026-03-10T10:19:23.232 INFO:tasks.workunit.client.0.vm02.stdout:6/228: dwrite d0/f4c [0,4194304] 0 2026-03-10T10:19:23.236 INFO:tasks.workunit.client.0.vm02.stdout:6/229: dread d0/d8/d29/d2f/d4b/f26 [0,4194304] 0 2026-03-10T10:19:23.239 INFO:tasks.workunit.client.0.vm02.stdout:6/230: dread d0/d8/d9/f14 [0,4194304] 0 2026-03-10T10:19:23.239 INFO:tasks.workunit.client.0.vm02.stdout:6/231: read d0/d8/d9/f14 [62968,77901] 0 2026-03-10T10:19:23.245 INFO:tasks.workunit.client.1.vm05.stdout:5/233: mkdir da/db/de/d4b 0 2026-03-10T10:19:23.245 INFO:tasks.workunit.client.1.vm05.stdout:9/201: getdents d0/df 0 2026-03-10T10:19:23.259 INFO:tasks.workunit.client.1.vm05.stdout:9/202: creat d0/d1/d16/f40 x:0 0 0 2026-03-10T10:19:23.259 INFO:tasks.workunit.client.0.vm02.stdout:6/232: mknod d0/d8/d29/d2f/d4b/c4d 0 2026-03-10T10:19:23.263 INFO:tasks.workunit.client.1.vm05.stdout:5/234: creat da/db/d26/f4c x:0 0 0 2026-03-10T10:19:23.267 INFO:tasks.workunit.client.0.vm02.stdout:5/379: dwrite d1/db/d11/d13/d28/f2c [0,4194304] 0 2026-03-10T10:19:23.269 
INFO:tasks.workunit.client.1.vm05.stdout:9/203: dwrite d0/f7 [0,4194304] 0 2026-03-10T10:19:23.271 INFO:tasks.workunit.client.1.vm05.stdout:9/204: readlink d0/d1/d13/l6 0 2026-03-10T10:19:23.274 INFO:tasks.workunit.client.0.vm02.stdout:6/233: rename d0/d8/f2a to d0/d8/d29/d2f/f4e 0 2026-03-10T10:19:23.304 INFO:tasks.workunit.client.1.vm05.stdout:1/174: rmdir d4/d20 39 2026-03-10T10:19:23.304 INFO:tasks.workunit.client.1.vm05.stdout:3/253: dwrite fb [0,4194304] 0 2026-03-10T10:19:23.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:23 vm05.local ceph-mon[59051]: from='client.14666 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:23.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:23 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/2376298377' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:19:23.305 INFO:tasks.workunit.client.0.vm02.stdout:5/380: mknod d1/db/d11/d62/c83 0 2026-03-10T10:19:23.305 INFO:tasks.workunit.client.0.vm02.stdout:5/381: readlink d1/l5e 0 2026-03-10T10:19:23.305 INFO:tasks.workunit.client.0.vm02.stdout:5/382: stat d1/db/d11/d13/f25 0 2026-03-10T10:19:23.305 INFO:tasks.workunit.client.0.vm02.stdout:6/234: creat d0/d8/d9/f4f x:0 0 0 2026-03-10T10:19:23.305 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:22 vm02.local ceph-mon[50200]: from='client.14666 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:23.305 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:22 vm02.local ceph-mon[50200]: from='client.? 
192.168.123.102:0/2376298377' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:19:23.305 INFO:tasks.workunit.client.1.vm05.stdout:1/175: stat d4/lc 0 2026-03-10T10:19:23.305 INFO:tasks.workunit.client.0.vm02.stdout:5/383: truncate d1/db/d11/f3e 3761716 0 2026-03-10T10:19:23.305 INFO:tasks.workunit.client.0.vm02.stdout:5/384: rename d1/db/d11/d16/d29 to d1/db/d11/d84 0 2026-03-10T10:19:23.305 INFO:tasks.workunit.client.1.vm05.stdout:8/157: getdents d7/d14/d24 0 2026-03-10T10:19:23.305 INFO:tasks.workunit.client.1.vm05.stdout:9/205: dwrite d0/d1/d13/f27 [0,4194304] 0 2026-03-10T10:19:23.307 INFO:tasks.workunit.client.0.vm02.stdout:5/385: mkdir d1/db/d11/d16/d79/d85 0 2026-03-10T10:19:23.309 INFO:tasks.workunit.client.0.vm02.stdout:2/323: dwrite d0/d10/f5f [4194304,4194304] 0 2026-03-10T10:19:23.313 INFO:tasks.workunit.client.1.vm05.stdout:1/176: readlink d4/d20/l29 0 2026-03-10T10:19:23.341 INFO:tasks.workunit.client.1.vm05.stdout:8/158: readlink d7/l1a 0 2026-03-10T10:19:23.341 INFO:tasks.workunit.client.1.vm05.stdout:3/254: symlink dd/d39/d5c/l5d 0 2026-03-10T10:19:23.341 INFO:tasks.workunit.client.1.vm05.stdout:3/255: rmdir dd/d15/d1f 39 2026-03-10T10:19:23.341 INFO:tasks.workunit.client.1.vm05.stdout:1/177: mkdir d4/d37 0 2026-03-10T10:19:23.341 INFO:tasks.workunit.client.0.vm02.stdout:5/386: symlink d1/db/d11/d84/d40/d4f/d5f/d6d/l86 0 2026-03-10T10:19:23.341 INFO:tasks.workunit.client.0.vm02.stdout:1/259: truncate d4/da/f12 3389748 0 2026-03-10T10:19:23.341 INFO:tasks.workunit.client.0.vm02.stdout:5/387: rmdir d1/db/d11/d84/d40/d4f 39 2026-03-10T10:19:23.342 INFO:tasks.workunit.client.0.vm02.stdout:5/388: dread - d1/db/d11/f7d zero size 2026-03-10T10:19:23.342 INFO:tasks.workunit.client.0.vm02.stdout:5/389: chown d1/c81 7856707 1 2026-03-10T10:19:23.342 INFO:tasks.workunit.client.0.vm02.stdout:1/260: mknod d4/d2c/d53/c59 0 2026-03-10T10:19:23.342 INFO:tasks.workunit.client.0.vm02.stdout:7/251: truncate 
d1/dc/d16/d28/d2d/f42 357160 0 2026-03-10T10:19:23.342 INFO:tasks.workunit.client.0.vm02.stdout:7/252: chown d1/dc/d10/f24 18717 1 2026-03-10T10:19:23.342 INFO:tasks.workunit.client.0.vm02.stdout:2/324: creat d0/d1a/d49/d5e/d65/f6c x:0 0 0 2026-03-10T10:19:23.342 INFO:tasks.workunit.client.0.vm02.stdout:5/390: mknod d1/db/d11/d16/d48/c87 0 2026-03-10T10:19:23.342 INFO:tasks.workunit.client.0.vm02.stdout:7/253: chown d1/f17 2702 1 2026-03-10T10:19:23.342 INFO:tasks.workunit.client.0.vm02.stdout:7/254: chown d1/f17 469 1 2026-03-10T10:19:23.342 INFO:tasks.workunit.client.0.vm02.stdout:2/325: mknod d0/d1a/d24/c6d 0 2026-03-10T10:19:23.342 INFO:tasks.workunit.client.0.vm02.stdout:2/326: dwrite d0/d1a/f66 [0,4194304] 0 2026-03-10T10:19:23.342 INFO:tasks.workunit.client.1.vm05.stdout:1/178: creat d4/df/d1c/f38 x:0 0 0 2026-03-10T10:19:23.344 INFO:tasks.workunit.client.1.vm05.stdout:8/159: creat d7/f2b x:0 0 0 2026-03-10T10:19:23.344 INFO:tasks.workunit.client.0.vm02.stdout:7/255: creat d1/dc/d16/d28/f4e x:0 0 0 2026-03-10T10:19:23.344 INFO:tasks.workunit.client.0.vm02.stdout:7/256: readlink d1/dc/d10/l11 0 2026-03-10T10:19:23.345 INFO:tasks.workunit.client.0.vm02.stdout:7/257: chown d1/dc/le 8750 1 2026-03-10T10:19:23.348 INFO:tasks.workunit.client.0.vm02.stdout:2/327: creat d0/d1a/d24/f6e x:0 0 0 2026-03-10T10:19:23.358 INFO:tasks.workunit.client.0.vm02.stdout:8/295: getdents d1/d1c/d24/d35 0 2026-03-10T10:19:23.358 INFO:tasks.workunit.client.0.vm02.stdout:3/233: getdents d1/d8/d21 0 2026-03-10T10:19:23.358 INFO:tasks.workunit.client.0.vm02.stdout:3/234: dread - d1/d20/f40 zero size 2026-03-10T10:19:23.358 INFO:tasks.workunit.client.0.vm02.stdout:1/261: link d4/da/d1a/d22/l51 d4/d4a/l5a 0 2026-03-10T10:19:23.358 INFO:tasks.workunit.client.0.vm02.stdout:1/262: dread d4/f21 [0,4194304] 0 2026-03-10T10:19:23.359 INFO:tasks.workunit.client.1.vm05.stdout:8/160: link f2 d7/d14/d24/f2c 0 2026-03-10T10:19:23.359 INFO:tasks.workunit.client.1.vm05.stdout:0/194: dwrite d1/d7/f16 
[0,4194304] 0 2026-03-10T10:19:23.360 INFO:tasks.workunit.client.0.vm02.stdout:2/328: fsync d0/d10/f4b 0 2026-03-10T10:19:23.365 INFO:tasks.workunit.client.0.vm02.stdout:5/391: creat d1/db/f88 x:0 0 0 2026-03-10T10:19:23.370 INFO:tasks.workunit.client.0.vm02.stdout:5/392: dwrite d1/db/d11/d84/d40/f66 [0,4194304] 0 2026-03-10T10:19:23.371 INFO:tasks.workunit.client.0.vm02.stdout:1/263: dread d4/d1b/f44 [0,4194304] 0 2026-03-10T10:19:23.373 INFO:tasks.workunit.client.0.vm02.stdout:3/235: unlink d1/d6/l1e 0 2026-03-10T10:19:23.374 INFO:tasks.workunit.client.1.vm05.stdout:0/195: symlink d1/d2/d9/d31/d12/d20/l43 0 2026-03-10T10:19:23.375 INFO:tasks.workunit.client.0.vm02.stdout:8/296: creat d1/d1c/d23/d3e/f5a x:0 0 0 2026-03-10T10:19:23.376 INFO:tasks.workunit.client.0.vm02.stdout:5/393: unlink d1/db/d11/d84/f52 0 2026-03-10T10:19:23.377 INFO:tasks.workunit.client.0.vm02.stdout:3/236: mkdir d1/d20/d52 0 2026-03-10T10:19:23.378 INFO:tasks.workunit.client.0.vm02.stdout:2/329: symlink d0/d10/d69/l6f 0 2026-03-10T10:19:23.379 INFO:tasks.workunit.client.0.vm02.stdout:8/297: mkdir d1/d1c/d43/d5b 0 2026-03-10T10:19:23.380 INFO:tasks.workunit.client.0.vm02.stdout:8/298: rename d1 to d1/d1c/d43/d5c 22 2026-03-10T10:19:23.383 INFO:tasks.workunit.client.0.vm02.stdout:8/299: dwrite d1/d1c/d23/d3e/f5a [0,4194304] 0 2026-03-10T10:19:23.385 INFO:tasks.workunit.client.0.vm02.stdout:2/330: creat d0/f70 x:0 0 0 2026-03-10T10:19:23.391 INFO:tasks.workunit.client.0.vm02.stdout:5/394: creat d1/db/d11/d62/d67/f89 x:0 0 0 2026-03-10T10:19:23.393 INFO:tasks.workunit.client.0.vm02.stdout:5/395: rename d1/db/d11/d13/d28/d37/f72 to d1/db/d11/d84/f8a 0 2026-03-10T10:19:23.394 INFO:tasks.workunit.client.0.vm02.stdout:8/300: creat d1/d1c/d23/d25/f5d x:0 0 0 2026-03-10T10:19:23.395 INFO:tasks.workunit.client.0.vm02.stdout:8/301: dread - d1/d1c/d23/d3e/f50 zero size 2026-03-10T10:19:23.395 INFO:tasks.workunit.client.0.vm02.stdout:5/396: mknod d1/c8b 0 2026-03-10T10:19:23.398 
INFO:tasks.workunit.client.0.vm02.stdout:5/397: symlink d1/db/d11/d13/d28/d37/l8c 0 2026-03-10T10:19:23.400 INFO:tasks.workunit.client.0.vm02.stdout:5/398: link d1/c81 d1/db/d11/d16/d79/c8d 0 2026-03-10T10:19:23.401 INFO:tasks.workunit.client.0.vm02.stdout:5/399: write d1/db/d11/d62/f65 [3515,110843] 0 2026-03-10T10:19:23.401 INFO:tasks.workunit.client.0.vm02.stdout:5/400: fsync d1/db/d11/d16/f19 0 2026-03-10T10:19:23.402 INFO:tasks.workunit.client.0.vm02.stdout:5/401: chown d1/db/f88 117 1 2026-03-10T10:19:23.406 INFO:tasks.workunit.client.0.vm02.stdout:5/402: link d1/db/d11/d13/d28/d37/l5a d1/db/d11/d84/d40/d4f/l8e 0 2026-03-10T10:19:23.407 INFO:tasks.workunit.client.0.vm02.stdout:5/403: unlink d1/db/fd 0 2026-03-10T10:19:23.414 INFO:tasks.workunit.client.0.vm02.stdout:8/302: dread d1/d1c/f2a [0,4194304] 0 2026-03-10T10:19:23.435 INFO:tasks.workunit.client.1.vm05.stdout:3/256: dread dd/d15/d24/d2c/d3b/f48 [0,4194304] 0 2026-03-10T10:19:23.445 INFO:tasks.workunit.client.1.vm05.stdout:3/257: getdents dd/d15/d24/d2c 0 2026-03-10T10:19:23.449 INFO:tasks.workunit.client.1.vm05.stdout:3/258: unlink dd/d15/d1f/f54 0 2026-03-10T10:19:23.451 INFO:tasks.workunit.client.1.vm05.stdout:3/259: write dd/d15/d1f/f3d [1403651,117113] 0 2026-03-10T10:19:23.523 INFO:tasks.workunit.client.1.vm05.stdout:2/189: getdents db 0 2026-03-10T10:19:23.525 INFO:tasks.workunit.client.1.vm05.stdout:2/190: creat db/d12/f3c x:0 0 0 2026-03-10T10:19:23.525 INFO:tasks.workunit.client.1.vm05.stdout:2/191: readlink db/d1c/l22 0 2026-03-10T10:19:23.528 INFO:tasks.workunit.client.1.vm05.stdout:2/192: write db/d12/f31 [444864,398] 0 2026-03-10T10:19:23.531 INFO:tasks.workunit.client.1.vm05.stdout:2/193: write db/f33 [538438,29299] 0 2026-03-10T10:19:23.543 INFO:tasks.workunit.client.1.vm05.stdout:2/194: unlink db/d1c/l22 0 2026-03-10T10:19:23.566 INFO:tasks.workunit.client.0.vm02.stdout:4/346: getdents d1/d41/d5e 0 2026-03-10T10:19:23.566 INFO:tasks.workunit.client.0.vm02.stdout:4/347: fdatasync 
d1/d2/d1a/f4c 0 2026-03-10T10:19:23.566 INFO:tasks.workunit.client.0.vm02.stdout:4/348: write d1/d41/f58 [1222182,47524] 0 2026-03-10T10:19:23.566 INFO:tasks.workunit.client.0.vm02.stdout:4/349: creat d1/f6f x:0 0 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:7/214: rmdir d5/d26 39 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:4/166: readlink d1/d3/l4 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:7/215: mkdir d5/d1d/d20/d3b 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:4/167: stat d1/d31/dc/f21 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:7/216: creat d5/d17/f3c x:0 0 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:7/217: stat d5/dd/f12 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:4/168: creat d1/f39 x:0 0 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:7/218: creat d5/d1d/d20/d2d/f3d x:0 0 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:4/169: creat d1/d31/dc/f3a x:0 0 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:7/219: mkdir d5/d1d/d29/d3e 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:7/220: write d5/f34 [1000348,58228] 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:4/170: read - d1/d31/f13 zero size 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:4/171: chown d1/d31/l32 34584 1 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:4/172: write d1/d31/f2f [335545,4388] 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:7/221: dwrite d5/d26/f2c [0,4194304] 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:4/173: mknod d1/d31/c3b 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:4/174: mknod d1/d31/c3c 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:4/175: write d1/f19 [831073,82396] 0 2026-03-10T10:19:23.567 
INFO:tasks.workunit.client.1.vm05.stdout:7/222: mknod d5/d1d/d20/d35/c3f 0 2026-03-10T10:19:23.567 INFO:tasks.workunit.client.1.vm05.stdout:4/176: creat d1/d31/dc/f3d x:0 0 0 2026-03-10T10:19:23.568 INFO:tasks.workunit.client.1.vm05.stdout:7/223: truncate d5/d26/f39 567141 0 2026-03-10T10:19:23.569 INFO:tasks.workunit.client.1.vm05.stdout:7/224: stat d5/dd/f2f 0 2026-03-10T10:19:23.570 INFO:tasks.workunit.client.1.vm05.stdout:7/225: creat d5/d17/f40 x:0 0 0 2026-03-10T10:19:23.571 INFO:tasks.workunit.client.1.vm05.stdout:4/177: link d1/d31/dc/l37 d1/d31/l3e 0 2026-03-10T10:19:23.571 INFO:tasks.workunit.client.1.vm05.stdout:7/226: chown d5/fe 386 1 2026-03-10T10:19:23.571 INFO:tasks.workunit.client.1.vm05.stdout:4/178: dread - d1/d31/f34 zero size 2026-03-10T10:19:23.574 INFO:tasks.workunit.client.1.vm05.stdout:7/227: getdents d5/d26 0 2026-03-10T10:19:23.580 INFO:tasks.workunit.client.1.vm05.stdout:4/179: dread d1/f17 [4194304,4194304] 0 2026-03-10T10:19:23.580 INFO:tasks.workunit.client.1.vm05.stdout:7/228: link d5/d1d/d29/f3a d5/d26/f41 0 2026-03-10T10:19:23.580 INFO:tasks.workunit.client.1.vm05.stdout:4/180: mknod d1/d31/dc/c3f 0 2026-03-10T10:19:23.582 INFO:tasks.workunit.client.1.vm05.stdout:4/181: dread d1/d31/f1b [0,4194304] 0 2026-03-10T10:19:23.583 INFO:tasks.workunit.client.1.vm05.stdout:4/182: write d1/d31/dc/f1f [4500083,125842] 0 2026-03-10T10:19:23.586 INFO:tasks.workunit.client.1.vm05.stdout:4/183: truncate d1/f19 1304876 0 2026-03-10T10:19:23.587 INFO:tasks.workunit.client.1.vm05.stdout:4/184: write d1/d31/fd [2258837,96215] 0 2026-03-10T10:19:23.589 INFO:tasks.workunit.client.1.vm05.stdout:4/185: mkdir d1/d31/dc/d40 0 2026-03-10T10:19:23.598 INFO:tasks.workunit.client.1.vm05.stdout:8/161: read f6 [408995,89589] 0 2026-03-10T10:19:23.608 INFO:tasks.workunit.client.1.vm05.stdout:4/186: dwrite d1/d31/f34 [0,4194304] 0 2026-03-10T10:19:23.613 INFO:tasks.workunit.client.1.vm05.stdout:4/187: write d1/d3/f12 [2470900,39127] 0 2026-03-10T10:19:23.619 
INFO:tasks.workunit.client.1.vm05.stdout:4/188: creat d1/d31/f41 x:0 0 0 2026-03-10T10:19:23.822 INFO:tasks.workunit.client.1.vm05.stdout:2/195: sync 2026-03-10T10:19:23.826 INFO:tasks.workunit.client.1.vm05.stdout:2/196: write db/f36 [3015490,100857] 0 2026-03-10T10:19:23.827 INFO:tasks.workunit.client.1.vm05.stdout:2/197: creat db/d1c/f3d x:0 0 0 2026-03-10T10:19:23.828 INFO:tasks.workunit.client.1.vm05.stdout:2/198: stat db/d28/c39 0 2026-03-10T10:19:23.828 INFO:tasks.workunit.client.1.vm05.stdout:2/199: fdatasync f1 0 2026-03-10T10:19:23.828 INFO:tasks.workunit.client.1.vm05.stdout:2/200: fdatasync f1 0 2026-03-10T10:19:23.828 INFO:tasks.workunit.client.1.vm05.stdout:2/201: stat db/d12/f31 0 2026-03-10T10:19:23.830 INFO:tasks.workunit.client.1.vm05.stdout:2/202: rmdir db/d28 39 2026-03-10T10:19:23.832 INFO:tasks.workunit.client.1.vm05.stdout:2/203: truncate db/d12/f31 916838 0 2026-03-10T10:19:23.833 INFO:tasks.workunit.client.1.vm05.stdout:2/204: symlink db/d2d/l3e 0 2026-03-10T10:19:23.837 INFO:tasks.workunit.client.1.vm05.stdout:2/205: dwrite db/f25 [0,4194304] 0 2026-03-10T10:19:23.859 INFO:tasks.workunit.client.1.vm05.stdout:2/206: dwrite db/f33 [0,4194304] 0 2026-03-10T10:19:23.863 INFO:tasks.workunit.client.1.vm05.stdout:2/207: write db/f36 [1429776,17804] 0 2026-03-10T10:19:23.930 INFO:tasks.workunit.client.0.vm02.stdout:9/185: dread da/ff [0,4194304] 0 2026-03-10T10:19:23.931 INFO:tasks.workunit.client.0.vm02.stdout:9/186: mkdir da/d3c 0 2026-03-10T10:19:23.931 INFO:tasks.workunit.client.0.vm02.stdout:9/187: read - da/d10/f3b zero size 2026-03-10T10:19:23.932 INFO:tasks.workunit.client.0.vm02.stdout:9/188: getdents da/d10/d38 0 2026-03-10T10:19:23.933 INFO:tasks.workunit.client.0.vm02.stdout:9/189: dread - da/d10/f23 zero size 2026-03-10T10:19:23.933 INFO:tasks.workunit.client.0.vm02.stdout:9/190: dread - da/d10/d2c/f32 zero size 2026-03-10T10:19:23.934 INFO:tasks.workunit.client.0.vm02.stdout:9/191: read - da/f30 zero size 2026-03-10T10:19:23.934 
INFO:tasks.workunit.client.0.vm02.stdout:9/192: chown da/c11 113186617 1 2026-03-10T10:19:23.936 INFO:tasks.workunit.client.0.vm02.stdout:9/193: read da/f28 [3689196,42822] 0 2026-03-10T10:19:23.937 INFO:tasks.workunit.client.0.vm02.stdout:9/194: creat da/d10/d2c/d34/f3d x:0 0 0 2026-03-10T10:19:23.938 INFO:tasks.workunit.client.0.vm02.stdout:9/195: rename da/d10/f20 to da/d3c/f3e 0 2026-03-10T10:19:23.939 INFO:tasks.workunit.client.0.vm02.stdout:9/196: readlink da/d10/d2c/l2e 0 2026-03-10T10:19:23.940 INFO:tasks.workunit.client.0.vm02.stdout:9/197: write da/d10/f19 [939168,29854] 0 2026-03-10T10:19:23.942 INFO:tasks.workunit.client.0.vm02.stdout:9/198: dread da/f14 [0,4194304] 0 2026-03-10T10:19:23.944 INFO:tasks.workunit.client.0.vm02.stdout:9/199: symlink da/d10/d2c/d34/l3f 0 2026-03-10T10:19:23.944 INFO:tasks.workunit.client.0.vm02.stdout:9/200: fdatasync da/f30 0 2026-03-10T10:19:23.946 INFO:tasks.workunit.client.0.vm02.stdout:9/201: rename da/d10/d2c/l2e to da/l40 0 2026-03-10T10:19:23.947 INFO:tasks.workunit.client.0.vm02.stdout:9/202: creat da/d10/f41 x:0 0 0 2026-03-10T10:19:23.948 INFO:tasks.workunit.client.0.vm02.stdout:9/203: mknod da/d10/d2c/d34/c42 0 2026-03-10T10:19:23.948 INFO:tasks.workunit.client.0.vm02.stdout:9/204: symlink da/d3c/l43 0 2026-03-10T10:19:23.998 INFO:tasks.workunit.client.1.vm05.stdout:5/235: rename da/db/de to da/db/d26/d35/d38/d4d 0 2026-03-10T10:19:24.012 INFO:tasks.workunit.client.1.vm05.stdout:7/229: rename d5/f25 to d5/d1d/d29/d3e/f42 0 2026-03-10T10:19:24.016 INFO:tasks.workunit.client.1.vm05.stdout:5/236: creat da/db/d26/d35/d38/d4d/d4b/f4e x:0 0 0 2026-03-10T10:19:24.023 INFO:tasks.workunit.client.1.vm05.stdout:4/189: rename d1/l2 to d1/d31/dc/d40/l42 0 2026-03-10T10:19:24.027 INFO:tasks.workunit.client.1.vm05.stdout:5/237: dread da/db/f1d [0,4194304] 0 2026-03-10T10:19:24.027 INFO:tasks.workunit.client.1.vm05.stdout:7/230: dwrite d5/ff [0,4194304] 0 2026-03-10T10:19:24.029 INFO:tasks.workunit.client.1.vm05.stdout:2/208: 
rename db/f33 to db/d28/f3f 0 2026-03-10T10:19:24.033 INFO:tasks.workunit.client.0.vm02.stdout:3/237: dread d1/f1c [0,4194304] 0 2026-03-10T10:19:24.034 INFO:tasks.workunit.client.1.vm05.stdout:7/231: symlink d5/d17/l43 0 2026-03-10T10:19:24.037 INFO:tasks.workunit.client.1.vm05.stdout:5/238: mknod da/db/d26/d35/c4f 0 2026-03-10T10:19:24.038 INFO:tasks.workunit.client.0.vm02.stdout:3/238: dwrite d1/d6/f49 [0,4194304] 0 2026-03-10T10:19:24.075 INFO:tasks.workunit.client.0.vm02.stdout:3/239: creat d1/d6/f53 x:0 0 0 2026-03-10T10:19:24.075 INFO:tasks.workunit.client.0.vm02.stdout:3/240: truncate d1/d20/f4b 550806 0 2026-03-10T10:19:24.075 INFO:tasks.workunit.client.0.vm02.stdout:3/241: write d1/d8/d21/f4d [17368,127732] 0 2026-03-10T10:19:24.075 INFO:tasks.workunit.client.1.vm05.stdout:4/190: dwrite d1/d31/dc/f25 [0,4194304] 0 2026-03-10T10:19:24.075 INFO:tasks.workunit.client.1.vm05.stdout:5/239: rename da/db/f24 to da/db/d26/d35/d38/d4d/f50 0 2026-03-10T10:19:24.075 INFO:tasks.workunit.client.1.vm05.stdout:5/240: creat da/db/d26/d35/d38/f51 x:0 0 0 2026-03-10T10:19:24.075 INFO:tasks.workunit.client.1.vm05.stdout:4/191: link d1/d31/c16 d1/d31/dc/d40/c43 0 2026-03-10T10:19:24.075 INFO:tasks.workunit.client.1.vm05.stdout:5/241: fsync f9 0 2026-03-10T10:19:24.075 INFO:tasks.workunit.client.1.vm05.stdout:4/192: dwrite d1/f17 [4194304,4194304] 0 2026-03-10T10:19:24.213 INFO:tasks.workunit.client.1.vm05.stdout:2/209: sync 2026-03-10T10:19:24.213 INFO:tasks.workunit.client.1.vm05.stdout:7/232: sync 2026-03-10T10:19:24.215 INFO:tasks.workunit.client.1.vm05.stdout:2/210: readlink db/d2d/l32 0 2026-03-10T10:19:24.223 INFO:tasks.workunit.client.1.vm05.stdout:2/211: dwrite db/f25 [0,4194304] 0 2026-03-10T10:19:24.226 INFO:tasks.workunit.client.1.vm05.stdout:7/233: dwrite d5/d17/f40 [0,4194304] 0 2026-03-10T10:19:24.231 INFO:tasks.workunit.client.1.vm05.stdout:2/212: dwrite db/f36 [4194304,4194304] 0 2026-03-10T10:19:24.232 INFO:tasks.workunit.client.1.vm05.stdout:6/178: truncate 
dd/f14 3133245 0 2026-03-10T10:19:24.240 INFO:tasks.workunit.client.1.vm05.stdout:7/234: mknod d5/d1d/d20/d3b/c44 0 2026-03-10T10:19:24.244 INFO:tasks.workunit.client.1.vm05.stdout:2/213: truncate db/d12/f1a 2754360 0 2026-03-10T10:19:24.244 INFO:tasks.workunit.client.1.vm05.stdout:7/235: write d5/dd/f12 [1915714,126901] 0 2026-03-10T10:19:24.245 INFO:tasks.workunit.client.1.vm05.stdout:7/236: dread - d5/d26/f41 zero size 2026-03-10T10:19:24.245 INFO:tasks.workunit.client.1.vm05.stdout:7/237: chown d5/d1d/f31 153 1 2026-03-10T10:19:24.246 INFO:tasks.workunit.client.1.vm05.stdout:2/214: mkdir db/d1c/d40 0 2026-03-10T10:19:24.246 INFO:tasks.workunit.client.1.vm05.stdout:7/238: chown d5/dd/f23 887 1 2026-03-10T10:19:24.250 INFO:tasks.workunit.client.1.vm05.stdout:7/239: symlink d5/d1d/d20/d3b/l45 0 2026-03-10T10:19:24.251 INFO:tasks.workunit.client.1.vm05.stdout:7/240: chown d5/d1d/d20/d35/f37 201 1 2026-03-10T10:19:24.252 INFO:tasks.workunit.client.1.vm05.stdout:2/215: mknod db/d1c/d40/c41 0 2026-03-10T10:19:24.266 INFO:tasks.workunit.client.1.vm05.stdout:2/216: link db/d1c/d40/c41 db/c42 0 2026-03-10T10:19:24.303 INFO:tasks.workunit.client.1.vm05.stdout:2/217: dread db/d28/f30 [0,4194304] 0 2026-03-10T10:19:24.303 INFO:tasks.workunit.client.1.vm05.stdout:5/242: dread da/f10 [8388608,4194304] 0 2026-03-10T10:19:24.303 INFO:tasks.workunit.client.1.vm05.stdout:2/218: dread - db/d12/f3b zero size 2026-03-10T10:19:24.303 INFO:tasks.workunit.client.1.vm05.stdout:2/219: dread - db/d1c/f3d zero size 2026-03-10T10:19:24.304 INFO:tasks.workunit.client.1.vm05.stdout:2/220: chown db/d28/f30 128715 1 2026-03-10T10:19:24.304 INFO:tasks.workunit.client.1.vm05.stdout:2/221: symlink db/d1c/d40/l43 0 2026-03-10T10:19:24.304 INFO:tasks.workunit.client.1.vm05.stdout:2/222: chown ca 17 1 2026-03-10T10:19:24.304 INFO:tasks.workunit.client.1.vm05.stdout:2/223: dwrite db/d28/f35 [0,4194304] 0 2026-03-10T10:19:24.494 INFO:tasks.workunit.client.1.vm05.stdout:7/241: sync 
2026-03-10T10:19:24.494 INFO:tasks.workunit.client.1.vm05.stdout:7/242: read d5/d17/f19 [47887,81879] 0 2026-03-10T10:19:24.495 INFO:tasks.workunit.client.1.vm05.stdout:7/243: stat d5/cc 0 2026-03-10T10:19:24.505 INFO:tasks.workunit.client.1.vm05.stdout:5/243: sync 2026-03-10T10:19:24.591 INFO:tasks.workunit.client.0.vm02.stdout:0/267: dwrite d9/d18/f1e [0,4194304] 0 2026-03-10T10:19:24.604 INFO:tasks.workunit.client.1.vm05.stdout:1/179: rmdir d4/df/d1c 39 2026-03-10T10:19:24.606 INFO:tasks.workunit.client.1.vm05.stdout:9/206: dwrite d0/df/d11/f2d [0,4194304] 0 2026-03-10T10:19:24.607 INFO:tasks.workunit.client.0.vm02.stdout:0/268: creat d9/d18/d1a/d43/d49/f53 x:0 0 0 2026-03-10T10:19:24.608 INFO:tasks.workunit.client.0.vm02.stdout:7/258: truncate d1/dc/d16/f1e 1868237 0 2026-03-10T10:19:24.612 INFO:tasks.workunit.client.1.vm05.stdout:0/196: write d1/d2/d9/d31/d12/f1e [374743,45285] 0 2026-03-10T10:19:24.614 INFO:tasks.workunit.client.0.vm02.stdout:2/331: write d0/d1a/d49/d5e/d65/f6c [948770,35767] 0 2026-03-10T10:19:24.617 INFO:tasks.workunit.client.0.vm02.stdout:5/404: rmdir d1/db/d11/d62 39 2026-03-10T10:19:24.624 INFO:tasks.workunit.client.1.vm05.stdout:9/207: dread d0/d1/d16/f18 [0,4194304] 0 2026-03-10T10:19:24.624 INFO:tasks.workunit.client.1.vm05.stdout:0/197: creat d1/d2/d39/d3d/f44 x:0 0 0 2026-03-10T10:19:24.626 INFO:tasks.workunit.client.0.vm02.stdout:5/405: link d1/db/d11/d62/d67/f89 d1/db/d11/d62/f8f 0 2026-03-10T10:19:24.627 INFO:tasks.workunit.client.1.vm05.stdout:9/208: symlink d0/d1/d13/d26/l41 0 2026-03-10T10:19:24.629 INFO:tasks.workunit.client.0.vm02.stdout:5/406: read d1/db/d11/f4a [427321,114714] 0 2026-03-10T10:19:24.630 INFO:tasks.workunit.client.0.vm02.stdout:5/407: write d1/f68 [1015627,53306] 0 2026-03-10T10:19:24.630 INFO:tasks.workunit.client.1.vm05.stdout:9/209: fdatasync d0/d1/d13/de/f38 0 2026-03-10T10:19:24.631 INFO:tasks.workunit.client.1.vm05.stdout:9/210: dread - d0/d1/d16/f40 zero size 2026-03-10T10:19:24.635 
INFO:tasks.workunit.client.0.vm02.stdout:8/303: dwrite d1/f12 [0,4194304] 0 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.0.vm02.stdout:8/304: rmdir d1/d1c/d43 39 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.0.vm02.stdout:5/408: truncate d1/db/d11/d16/f19 6124858 0 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.0.vm02.stdout:8/305: symlink d1/d1c/d23/d3e/l5e 0 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.0.vm02.stdout:8/306: write d1/d1c/d23/f3b [3289322,84339] 0 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.0.vm02.stdout:8/307: mkdir d1/d1c/d23/d5f 0 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.0.vm02.stdout:8/308: creat d1/d1c/d43/d5b/f60 x:0 0 0 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.1.vm05.stdout:0/198: symlink d1/d2/d9/d31/d13/d15/l45 0 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.1.vm05.stdout:9/211: rmdir d0/df/d11 39 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.1.vm05.stdout:9/212: dread - d0/d1/d16/f36 zero size 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.1.vm05.stdout:9/213: truncate d0/d1/d13/de/f38 934805 0 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.1.vm05.stdout:9/214: fsync d0/d1/fb 0 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.1.vm05.stdout:0/199: symlink d1/l46 0 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.1.vm05.stdout:9/215: mknod d0/d1/d13/de/d21/c42 0 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.1.vm05.stdout:0/200: unlink d1/d2/d9/l1c 0 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.1.vm05.stdout:0/201: read - d1/d7/f3c zero size 2026-03-10T10:19:24.652 INFO:tasks.workunit.client.1.vm05.stdout:9/216: rmdir d0/df/d11 39 2026-03-10T10:19:24.655 INFO:tasks.workunit.client.0.vm02.stdout:8/309: mknod d1/d1c/d23/c61 0 2026-03-10T10:19:24.657 INFO:tasks.workunit.client.0.vm02.stdout:8/310: rename d1/d1c/c37 to d1/d1c/d23/d3e/c62 0 2026-03-10T10:19:24.659 INFO:tasks.workunit.client.0.vm02.stdout:8/311: unlink d1/d2/f28 0 2026-03-10T10:19:24.661 
INFO:tasks.workunit.client.0.vm02.stdout:8/312: link d1/d1c/d43/f46 d1/d1c/d43/d5b/f63 0 2026-03-10T10:19:24.662 INFO:tasks.workunit.client.0.vm02.stdout:8/313: fdatasync d1/d1c/d23/d25/f3d 0 2026-03-10T10:19:24.668 INFO:tasks.workunit.client.0.vm02.stdout:8/314: dread d1/d1c/d24/d35/f44 [0,4194304] 0 2026-03-10T10:19:24.672 INFO:tasks.workunit.client.0.vm02.stdout:8/315: dwrite d1/d1c/d43/f53 [0,4194304] 0 2026-03-10T10:19:24.679 INFO:tasks.workunit.client.0.vm02.stdout:8/316: unlink d1/d1c/d43/f45 0 2026-03-10T10:19:24.680 INFO:tasks.workunit.client.0.vm02.stdout:8/317: creat d1/d1c/d23/d25/f64 x:0 0 0 2026-03-10T10:19:24.683 INFO:tasks.workunit.client.0.vm02.stdout:8/318: dwrite d1/d1c/d23/d25/f64 [0,4194304] 0 2026-03-10T10:19:24.684 INFO:tasks.workunit.client.0.vm02.stdout:8/319: write d1/d1c/d24/d35/f4f [456891,7692] 0 2026-03-10T10:19:24.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:24 vm02.local ceph-mon[50200]: pgmap v151: 65 pgs: 65 active+clean; 1003 MiB data, 4.2 GiB used, 116 GiB / 120 GiB avail; 9.6 MiB/s rd, 128 MiB/s wr, 220 op/s 2026-03-10T10:19:24.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:24 vm05.local ceph-mon[59051]: pgmap v151: 65 pgs: 65 active+clean; 1003 MiB data, 4.2 GiB used, 116 GiB / 120 GiB avail; 9.6 MiB/s rd, 128 MiB/s wr, 220 op/s 2026-03-10T10:19:24.792 INFO:tasks.workunit.client.1.vm05.stdout:3/260: getdents dd/d15/d1f 0 2026-03-10T10:19:24.793 INFO:tasks.workunit.client.1.vm05.stdout:3/261: stat dd/d20/l21 0 2026-03-10T10:19:24.793 INFO:tasks.workunit.client.1.vm05.stdout:3/262: stat dd/d15/c43 0 2026-03-10T10:19:24.793 INFO:tasks.workunit.client.1.vm05.stdout:3/263: fsync f6 0 2026-03-10T10:19:24.796 INFO:tasks.workunit.client.1.vm05.stdout:3/264: mkdir dd/d20/d56/d5e 0 2026-03-10T10:19:24.797 INFO:tasks.workunit.client.1.vm05.stdout:3/265: chown dd/d20/f50 31 1 2026-03-10T10:19:24.801 INFO:tasks.workunit.client.1.vm05.stdout:3/266: mkdir dd/d39/d5f 0 2026-03-10T10:19:24.801 
INFO:tasks.workunit.client.0.vm02.stdout:4/350: dwrite d1/d10/f30 [0,4194304] 0
2026-03-10T10:19:24.807 INFO:tasks.workunit.client.1.vm05.stdout:3/267: creat dd/d15/d24/d2c/f60 x:0 0 0
2026-03-10T10:19:24.809 INFO:tasks.workunit.client.1.vm05.stdout:8/162: write d7/d14/f22 [397481,75426] 0
2026-03-10T10:19:24.810 INFO:tasks.workunit.client.0.vm02.stdout:3/242: truncate d1/d8/d21/f3c 504699 0
2026-03-10T10:19:24.814 INFO:tasks.workunit.client.1.vm05.stdout:8/163: write d7/d14/f25 [4105923,35292] 0
2026-03-10T10:19:24.815 INFO:tasks.workunit.client.1.vm05.stdout:3/268: dwrite dd/d15/d24/f2f [0,4194304] 0
2026-03-10T10:19:24.821 INFO:tasks.workunit.client.1.vm05.stdout:8/164: mknod d7/d14/d24/c2d 0
2026-03-10T10:19:24.831 INFO:tasks.workunit.client.1.vm05.stdout:8/165: readlink d7/l1a 0
2026-03-10T10:19:24.832 INFO:tasks.workunit.client.0.vm02.stdout:9/205: rmdir da 39
2026-03-10T10:19:24.839 INFO:tasks.workunit.client.1.vm05.stdout:8/166: dwrite d7/f11 [4194304,4194304] 0
2026-03-10T10:19:24.846 INFO:tasks.workunit.client.1.vm05.stdout:4/193: getdents d1 0
2026-03-10T10:19:24.848 INFO:tasks.workunit.client.1.vm05.stdout:4/194: write d1/d31/dc/f2e [553132,107236] 0
2026-03-10T10:19:24.851 INFO:tasks.workunit.client.1.vm05.stdout:8/167: creat d7/d14/d15/f2e x:0 0 0
2026-03-10T10:19:24.855 INFO:tasks.workunit.client.1.vm05.stdout:5/244: rename da/db/d26/d35/d38/d4d to da/db/d26/d35/d52 0
2026-03-10T10:19:24.857 INFO:tasks.workunit.client.1.vm05.stdout:5/245: mknod da/db/d26/d35/d38/c53 0
2026-03-10T10:19:24.857 INFO:tasks.workunit.client.1.vm05.stdout:0/202: rename d1/d2/d9/d31/l26 to d1/d2/d39/l47 0
2026-03-10T10:19:24.858 INFO:tasks.workunit.client.1.vm05.stdout:5/246: readlink da/db/d26/d35/d52/l4a 0
2026-03-10T10:19:24.874 INFO:tasks.workunit.client.1.vm05.stdout:9/217: rename d0/df/f39 to d0/d1/d13/d26/f43 0
2026-03-10T10:19:24.880 INFO:tasks.workunit.client.1.vm05.stdout:5/247: creat da/db/d28/d32/f54 x:0 0 0
2026-03-10T10:19:24.881 INFO:tasks.workunit.client.0.vm02.stdout:5/409: write d1/db/d11/f3e [2648939,4314] 0
2026-03-10T10:19:24.881 INFO:tasks.workunit.client.0.vm02.stdout:5/410: link d1/db/c1d d1/db/d11/d62/c90 0
2026-03-10T10:19:24.881 INFO:tasks.workunit.client.0.vm02.stdout:5/411: fsync d1/db/d11/d13/f1f 0
2026-03-10T10:19:24.881 INFO:tasks.workunit.client.1.vm05.stdout:5/248: symlink da/db/d26/d35/d38/l55 0
2026-03-10T10:19:24.881 INFO:tasks.workunit.client.1.vm05.stdout:9/218: creat d0/df/f44 x:0 0 0
2026-03-10T10:19:24.881 INFO:tasks.workunit.client.1.vm05.stdout:2/224: truncate db/d12/f1d 4580675 0
2026-03-10T10:19:24.881 INFO:tasks.workunit.client.1.vm05.stdout:9/219: rmdir d0/d1/d13/de/d21 39
2026-03-10T10:19:24.885 INFO:tasks.workunit.client.1.vm05.stdout:2/225: stat db/c18 0
2026-03-10T10:19:24.887 INFO:tasks.workunit.client.1.vm05.stdout:6/179: read dd/f14 [742931,79367] 0
2026-03-10T10:19:24.887 INFO:tasks.workunit.client.1.vm05.stdout:5/249: getdents da/db/d28 0
2026-03-10T10:19:24.888 INFO:tasks.workunit.client.1.vm05.stdout:7/244: write d5/d1d/d20/d2d/f30 [57689,39865] 0
2026-03-10T10:19:24.893 INFO:tasks.workunit.client.0.vm02.stdout:1/264: truncate d4/da/f12 2905161 0
2026-03-10T10:19:24.894 INFO:tasks.workunit.client.1.vm05.stdout:5/250: write da/db/d28/d32/f54 [838753,437] 0
2026-03-10T10:19:24.894 INFO:tasks.workunit.client.1.vm05.stdout:7/245: write d5/f13 [4486817,20109] 0
2026-03-10T10:19:24.901 INFO:tasks.workunit.client.0.vm02.stdout:1/265: dwrite d4/da/d1a/f3d [0,4194304] 0
2026-03-10T10:19:24.901 INFO:tasks.workunit.client.1.vm05.stdout:6/180: link dd/f14 dd/d1b/f40 0
2026-03-10T10:19:24.902 INFO:tasks.workunit.client.0.vm02.stdout:1/266: dread - d4/da/d1a/d22/f49 zero size
2026-03-10T10:19:24.912 INFO:tasks.workunit.client.1.vm05.stdout:5/251: link da/db/d28/d32/f54 da/db/d28/f56 0
2026-03-10T10:19:24.912 INFO:tasks.workunit.client.1.vm05.stdout:5/252: read - da/db/f3b zero size
2026-03-10T10:19:24.922 INFO:tasks.workunit.client.1.vm05.stdout:6/181: getdents dd/d27 0
2026-03-10T10:19:25.028 INFO:tasks.workunit.client.0.vm02.stdout:5/412: sync
2026-03-10T10:19:25.038 INFO:tasks.workunit.client.0.vm02.stdout:5/413: dread d1/db/d11/f47 [0,4194304] 0
2026-03-10T10:19:25.040 INFO:tasks.workunit.client.1.vm05.stdout:1/180: mkdir d4/d39 0
2026-03-10T10:19:25.040 INFO:tasks.workunit.client.0.vm02.stdout:6/235: write d0/f1c [1907664,39347] 0
2026-03-10T10:19:25.042 INFO:tasks.workunit.client.0.vm02.stdout:0/269: write d9/d18/d1a/d22/d24/f2f [369790,57624] 0
2026-03-10T10:19:25.050 INFO:tasks.workunit.client.0.vm02.stdout:5/414: dread d1/db/f56 [0,4194304] 0
2026-03-10T10:19:25.052 INFO:tasks.workunit.client.1.vm05.stdout:1/181: creat d4/d39/f3a x:0 0 0
2026-03-10T10:19:25.055 INFO:tasks.workunit.client.0.vm02.stdout:6/236: dread d0/d8/d9/d31/f3d [0,4194304] 0
2026-03-10T10:19:25.056 INFO:tasks.workunit.client.0.vm02.stdout:6/237: truncate d0/d8/d29/d2f/f38 1044569 0
2026-03-10T10:19:25.056 INFO:tasks.workunit.client.0.vm02.stdout:6/238: readlink d0/d8/l22 0
2026-03-10T10:19:25.057 INFO:tasks.workunit.client.0.vm02.stdout:6/239: write d0/f1c [2602110,105638] 0
2026-03-10T10:19:25.062 INFO:tasks.workunit.client.0.vm02.stdout:0/270: mknod d9/d18/d1a/d3c/c54 0
2026-03-10T10:19:25.064 INFO:tasks.workunit.client.0.vm02.stdout:6/240: mkdir d0/d8/d29/d2f/d50 0
2026-03-10T10:19:25.064 INFO:tasks.workunit.client.0.vm02.stdout:6/241: chown d0/d8/d29/d2f/d4b/c10 1 1
2026-03-10T10:19:25.065 INFO:tasks.workunit.client.0.vm02.stdout:6/242: truncate d0/d8/d29/d2f/f4e 4224787 0
2026-03-10T10:19:25.070 INFO:tasks.workunit.client.0.vm02.stdout:5/415: dread d1/db/d11/d13/f25 [0,4194304] 0
2026-03-10T10:19:25.076 INFO:tasks.workunit.client.0.vm02.stdout:5/416: rename d1/db/d11/d13/d28/f2c to d1/db/d11/d13/d28/f91 0
2026-03-10T10:19:25.077 INFO:tasks.workunit.client.0.vm02.stdout:5/417: dread - d1/db/d11/d84/d40/d4f/d5f/d6d/d71/f80 zero size
2026-03-10T10:19:25.093 INFO:tasks.workunit.client.0.vm02.stdout:4/351: dwrite d1/d10/f30 [4194304,4194304] 0
2026-03-10T10:19:25.095 INFO:tasks.workunit.client.0.vm02.stdout:4/352: chown d1/d10/db/l27 3941 1
2026-03-10T10:19:25.107 INFO:tasks.workunit.client.0.vm02.stdout:4/353: dread d1/d52/d53/f5b [0,4194304] 0
2026-03-10T10:19:25.111 INFO:tasks.workunit.client.0.vm02.stdout:1/267: mkdir d4/da/d1a/d5b 0
2026-03-10T10:19:25.115 INFO:tasks.workunit.client.0.vm02.stdout:1/268: symlink d4/da/d27/d38/d3c/l5c 0
2026-03-10T10:19:25.122 INFO:tasks.workunit.client.0.vm02.stdout:9/206: write da/d10/f23 [174303,128373] 0
2026-03-10T10:19:25.124 INFO:tasks.workunit.client.0.vm02.stdout:9/207: dread - da/f1f zero size
2026-03-10T10:19:25.124 INFO:tasks.workunit.client.0.vm02.stdout:9/208: dread - da/f30 zero size
2026-03-10T10:19:25.126 INFO:tasks.workunit.client.0.vm02.stdout:9/209: dread da/f14 [0,4194304] 0
2026-03-10T10:19:25.127 INFO:tasks.workunit.client.0.vm02.stdout:9/210: symlink da/d10/d2c/l44 0
2026-03-10T10:19:25.129 INFO:tasks.workunit.client.0.vm02.stdout:9/211: creat da/d10/d38/f45 x:0 0 0
2026-03-10T10:19:25.129 INFO:tasks.workunit.client.0.vm02.stdout:9/212: read - da/d10/d2c/f32 zero size
2026-03-10T10:19:25.131 INFO:tasks.workunit.client.0.vm02.stdout:9/213: symlink da/d10/d2c/d34/d35/l46 0
2026-03-10T10:19:25.132 INFO:tasks.workunit.client.0.vm02.stdout:9/214: chown da/d10/d2c/d34/f3d 11861897 1
2026-03-10T10:19:25.134 INFO:tasks.workunit.client.1.vm05.stdout:4/195: mknod d1/d3/c44 0
2026-03-10T10:19:25.136 INFO:tasks.workunit.client.0.vm02.stdout:9/215: dwrite da/d10/d2c/d34/f3d [0,4194304] 0
2026-03-10T10:19:25.147 INFO:tasks.workunit.client.0.vm02.stdout:9/216: link da/d10/f27 da/d10/d38/f47 0
2026-03-10T10:19:25.147 INFO:tasks.workunit.client.1.vm05.stdout:4/196: dread d1/d3/f12 [0,4194304] 0
2026-03-10T10:19:25.161 INFO:tasks.workunit.client.1.vm05.stdout:0/203: dread d1/d2/d9/d31/f36 [0,4194304] 0
2026-03-10T10:19:25.165 INFO:tasks.workunit.client.1.vm05.stdout:0/204: dread d1/d7/f16 [0,4194304] 0
2026-03-10T10:19:25.166 INFO:tasks.workunit.client.1.vm05.stdout:0/205: readlink d1/d2/d9/d31/d13/d15/l2c 0
2026-03-10T10:19:25.172 INFO:tasks.workunit.client.1.vm05.stdout:0/206: dwrite d1/f38 [0,4194304] 0
2026-03-10T10:19:25.175 INFO:tasks.workunit.client.1.vm05.stdout:0/207: fdatasync d1/d2/d39/d3d/f44 0
2026-03-10T10:19:25.175 INFO:tasks.workunit.client.1.vm05.stdout:0/208: readlink d1/l46 0
2026-03-10T10:19:25.180 INFO:tasks.workunit.client.1.vm05.stdout:0/209: read d1/f11 [111113,50030] 0
2026-03-10T10:19:25.206 INFO:tasks.workunit.client.1.vm05.stdout:7/246: creat d5/d1d/f46 x:0 0 0
2026-03-10T10:19:25.208 INFO:tasks.workunit.client.1.vm05.stdout:7/247: creat d5/d1d/d20/d35/f47 x:0 0 0
2026-03-10T10:19:25.209 INFO:tasks.workunit.client.1.vm05.stdout:7/248: read d5/ff [433943,32039] 0
2026-03-10T10:19:25.209 INFO:tasks.workunit.client.1.vm05.stdout:0/210: creat d1/d2/d9/d31/d13/f48 x:0 0 0
2026-03-10T10:19:25.210 INFO:tasks.workunit.client.1.vm05.stdout:0/211: fdatasync d1/d7/f27 0
2026-03-10T10:19:25.214 INFO:tasks.workunit.client.1.vm05.stdout:2/226: dread db/d12/f1d [0,4194304] 0
2026-03-10T10:19:25.216 INFO:tasks.workunit.client.1.vm05.stdout:7/249: symlink d5/d1d/d29/l48 0
2026-03-10T10:19:25.217 INFO:tasks.workunit.client.1.vm05.stdout:7/250: dread - d5/d1d/f46 zero size
2026-03-10T10:19:25.217 INFO:tasks.workunit.client.1.vm05.stdout:3/269: mknod dd/c61 0
2026-03-10T10:19:25.218 INFO:tasks.workunit.client.1.vm05.stdout:7/251: mknod d5/d1d/c49 0
2026-03-10T10:19:25.219 INFO:tasks.workunit.client.0.vm02.stdout:6/243: link d0/c1 d0/d8/c51 0
2026-03-10T10:19:25.220 INFO:tasks.workunit.client.0.vm02.stdout:6/244: fsync d0/f21 0
2026-03-10T10:19:25.223 INFO:tasks.workunit.client.1.vm05.stdout:7/252: symlink d5/d1d/d20/d35/l4a 0
2026-03-10T10:19:25.224 INFO:tasks.workunit.client.1.vm05.stdout:7/253: dread - d5/d1d/d20/d2d/f3d zero size
2026-03-10T10:19:25.225 INFO:tasks.workunit.client.1.vm05.stdout:7/254: fsync d5/d1d/d20/d2d/f3d 0
2026-03-10T10:19:25.226 INFO:tasks.workunit.client.1.vm05.stdout:3/270: chown dd/d15/d24/d2c/f3e 145111 1
2026-03-10T10:19:25.226 INFO:tasks.workunit.client.1.vm05.stdout:7/255: fdatasync d5/d26/f33 0
2026-03-10T10:19:25.226 INFO:tasks.workunit.client.1.vm05.stdout:3/271: write fb [1839966,81882] 0
2026-03-10T10:19:25.227 INFO:tasks.workunit.client.1.vm05.stdout:7/256: write d5/dd/f12 [8771013,6891] 0
2026-03-10T10:19:25.228 INFO:tasks.workunit.client.1.vm05.stdout:7/257: chown d5/d17 324507 1
2026-03-10T10:19:25.230 INFO:tasks.workunit.client.1.vm05.stdout:3/272: symlink dd/d39/d5c/l62 0
2026-03-10T10:19:25.230 INFO:tasks.workunit.client.1.vm05.stdout:7/258: write d5/d17/f3c [390907,11535] 0
2026-03-10T10:19:25.232 INFO:tasks.workunit.client.1.vm05.stdout:3/273: write f6 [2811283,70182] 0
2026-03-10T10:19:25.236 INFO:tasks.workunit.client.1.vm05.stdout:7/259: mknod d5/d1d/d20/d2d/c4b 0
2026-03-10T10:19:25.279 INFO:tasks.workunit.client.1.vm05.stdout:5/253: dwrite da/db/f29 [0,4194304] 0
2026-03-10T10:19:25.283 INFO:tasks.workunit.client.1.vm05.stdout:5/254: symlink da/db/d26/d35/d52/d4b/l57 0
2026-03-10T10:19:25.285 INFO:tasks.workunit.client.1.vm05.stdout:5/255: mkdir da/db/d26/d35/d58 0
2026-03-10T10:19:25.285 INFO:tasks.workunit.client.1.vm05.stdout:5/256: dread - da/f41 zero size
2026-03-10T10:19:25.286 INFO:tasks.workunit.client.1.vm05.stdout:5/257: fdatasync f9 0
2026-03-10T10:19:25.288 INFO:tasks.workunit.client.1.vm05.stdout:5/258: fdatasync da/db/d26/d35/d52/f2c 0
2026-03-10T10:19:25.290 INFO:tasks.workunit.client.1.vm05.stdout:5/259: mknod da/db/d26/d35/d58/c59 0
2026-03-10T10:19:25.297 INFO:tasks.workunit.client.1.vm05.stdout:5/260: dwrite da/f41 [0,4194304] 0
2026-03-10T10:19:25.302 INFO:tasks.workunit.client.1.vm05.stdout:5/261: truncate da/db/d26/d35/f2a 985573 0
2026-03-10T10:19:25.306 INFO:tasks.workunit.client.1.vm05.stdout:5/262: symlink da/db/d26/d35/d52/d4b/l5a 0
2026-03-10T10:19:25.314 INFO:tasks.workunit.client.1.vm05.stdout:5/263: rename da/db/d26/d35/f30 to da/db/d26/d35/d38/f5b 0
2026-03-10T10:19:25.314 INFO:tasks.workunit.client.0.vm02.stdout:2/332: mkdir d0/d71 0
2026-03-10T10:19:25.315 INFO:tasks.workunit.client.1.vm05.stdout:5/264: write da/db/d26/d35/d52/d4b/f4e [1042606,101369] 0
2026-03-10T10:19:25.330 INFO:tasks.workunit.client.1.vm05.stdout:5/265: dread da/db/f1e [0,4194304] 0
2026-03-10T10:19:25.404 INFO:tasks.workunit.client.0.vm02.stdout:7/259: truncate d1/dc/f25 236804 0
2026-03-10T10:19:25.405 INFO:tasks.workunit.client.0.vm02.stdout:7/260: mknod d1/dc/d10/c4f 0
2026-03-10T10:19:25.442 INFO:tasks.workunit.client.0.vm02.stdout:8/320: rename d1/f21 to d1/f65 0
2026-03-10T10:19:25.443 INFO:tasks.workunit.client.1.vm05.stdout:1/182: truncate d4/d20/f31 1579405 0
2026-03-10T10:19:25.446 INFO:tasks.workunit.client.0.vm02.stdout:0/271: dwrite d9/d18/d1a/d22/d24/d25/f3a [4194304,4194304] 0
2026-03-10T10:19:25.447 INFO:tasks.workunit.client.0.vm02.stdout:0/272: write d9/d18/d1a/d43/d49/f53 [219020,34370] 0
2026-03-10T10:19:25.452 INFO:tasks.workunit.client.0.vm02.stdout:0/273: dwrite d9/d18/d1a/d22/d24/d25/f3a [4194304,4194304] 0
2026-03-10T10:19:25.453 INFO:tasks.workunit.client.1.vm05.stdout:1/183: creat d4/d39/f3b x:0 0 0
2026-03-10T10:19:25.454 INFO:tasks.workunit.client.0.vm02.stdout:5/418: write d1/db/d11/d1a/f27 [2369539,34019] 0
2026-03-10T10:19:25.454 INFO:tasks.workunit.client.1.vm05.stdout:3/274: creat dd/d15/d24/f63 x:0 0 0
2026-03-10T10:19:25.454 INFO:tasks.workunit.client.1.vm05.stdout:1/184: chown d4/d20 850 1
2026-03-10T10:19:25.467 INFO:tasks.workunit.client.0.vm02.stdout:0/274: creat d9/d18/d1a/d43/f55 x:0 0 0
2026-03-10T10:19:25.468 INFO:tasks.workunit.client.1.vm05.stdout:3/275: truncate dd/d15/f23 14489 0
2026-03-10T10:19:25.468 INFO:tasks.workunit.client.1.vm05.stdout:1/185: rename d4/df/c1b to d4/d20/c3c 0
2026-03-10T10:19:25.471 INFO:tasks.workunit.client.0.vm02.stdout:5/419: dread d1/db/d11/d13/f1c [0,4194304] 0
2026-03-10T10:19:25.472 INFO:tasks.workunit.client.1.vm05.stdout:1/186: dwrite d4/d39/f3a [0,4194304] 0
2026-03-10T10:19:25.473 INFO:tasks.workunit.client.1.vm05.stdout:6/182: unlink dd/d36/d3f/d12/d24/d28/l34 0
2026-03-10T10:19:25.473 INFO:tasks.workunit.client.0.vm02.stdout:0/275: mknod d9/d18/d1a/d3c/c56 0
2026-03-10T10:19:25.474 INFO:tasks.workunit.client.1.vm05.stdout:1/187: chown d4/d39 15 1
2026-03-10T10:19:25.475 INFO:tasks.workunit.client.0.vm02.stdout:5/420: creat d1/db/d11/d16/d48/f92 x:0 0 0
2026-03-10T10:19:25.476 INFO:tasks.workunit.client.1.vm05.stdout:3/276: mknod dd/d20/d56/c64 0
2026-03-10T10:19:25.476 INFO:tasks.workunit.client.1.vm05.stdout:3/277: dread - dd/d15/d24/f42 zero size
2026-03-10T10:19:25.480 INFO:tasks.workunit.client.0.vm02.stdout:0/276: mkdir d9/d18/d1a/d43/d57 0
2026-03-10T10:19:25.481 INFO:tasks.workunit.client.0.vm02.stdout:0/277: fdatasync d9/d34/d3d/f4e 0
2026-03-10T10:19:25.486 INFO:tasks.workunit.client.0.vm02.stdout:0/278: creat d9/d34/d3d/f58 x:0 0 0
2026-03-10T10:19:25.488 INFO:tasks.workunit.client.1.vm05.stdout:1/188: mkdir d4/d3d 0
2026-03-10T10:19:25.489 INFO:tasks.workunit.client.1.vm05.stdout:3/278: getdents dd/d20/d56/d5e 0
2026-03-10T10:19:25.490 INFO:tasks.workunit.client.1.vm05.stdout:1/189: chown d4/df/d1c/f38 334 1
2026-03-10T10:19:25.492 INFO:tasks.workunit.client.1.vm05.stdout:1/190: rmdir d4/df/d1c 39
2026-03-10T10:19:25.494 INFO:tasks.workunit.client.1.vm05.stdout:3/279: link dd/d15/d1f/f53 dd/f65 0
2026-03-10T10:19:25.496 INFO:tasks.workunit.client.1.vm05.stdout:1/191: dwrite d4/dd/f1f [0,4194304] 0
2026-03-10T10:19:25.507 INFO:tasks.workunit.client.1.vm05.stdout:1/192: write d4/df/d1c/f2a [2155202,59750] 0
2026-03-10T10:19:25.511 INFO:tasks.workunit.client.1.vm05.stdout:1/193: mkdir d4/d39/d3e 0
2026-03-10T10:19:25.511 INFO:tasks.workunit.client.1.vm05.stdout:1/194: dread - d4/df/d1c/f38 zero size
2026-03-10T10:19:25.514 INFO:tasks.workunit.client.1.vm05.stdout:1/195: creat d4/d39/d3e/f3f x:0 0 0
2026-03-10T10:19:25.516 INFO:tasks.workunit.client.1.vm05.stdout:1/196: chown d4/ca 2 1
2026-03-10T10:19:25.522 INFO:tasks.workunit.client.1.vm05.stdout:1/197: rename d4/df/c1d to d4/d39/d3e/c40 0
2026-03-10T10:19:25.527 INFO:tasks.workunit.client.1.vm05.stdout:1/198: creat d4/d37/f41 x:0 0 0
2026-03-10T10:19:25.527 INFO:tasks.workunit.client.1.vm05.stdout:1/199: truncate d4/dd/f1f 4303249 0
2026-03-10T10:19:25.530 INFO:tasks.workunit.client.1.vm05.stdout:1/200: symlink d4/l42 0
2026-03-10T10:19:25.533 INFO:tasks.workunit.client.1.vm05.stdout:1/201: chown d4/df/c35 19055 1
2026-03-10T10:19:25.533 INFO:tasks.workunit.client.1.vm05.stdout:1/202: rename d4/dd/c2e to d4/d37/c43 0
2026-03-10T10:19:25.534 INFO:tasks.workunit.client.1.vm05.stdout:1/203: chown d4/d39/d3e 36 1
2026-03-10T10:19:25.539 INFO:tasks.workunit.client.1.vm05.stdout:7/260: sync
2026-03-10T10:19:25.546 INFO:tasks.workunit.client.0.vm02.stdout:0/279: sync
2026-03-10T10:19:25.546 INFO:tasks.workunit.client.0.vm02.stdout:5/421: sync
2026-03-10T10:19:25.546 INFO:tasks.workunit.client.1.vm05.stdout:7/261: creat d5/d1d/d20/d2d/f4c x:0 0 0
2026-03-10T10:19:25.546 INFO:tasks.workunit.client.0.vm02.stdout:5/422: readlink d1/db/d11/l34 0
2026-03-10T10:19:25.549 INFO:tasks.workunit.client.1.vm05.stdout:7/262: creat d5/d26/f4d x:0 0 0
2026-03-10T10:19:25.553 INFO:tasks.workunit.client.0.vm02.stdout:0/280: mknod d9/d18/d1a/d22/d24/d25/c59 0
2026-03-10T10:19:25.558 INFO:tasks.workunit.client.0.vm02.stdout:0/281: symlink d9/d34/l5a 0
2026-03-10T10:19:25.559 INFO:tasks.workunit.client.1.vm05.stdout:7/263: dread d5/dd/f12 [0,4194304] 0
2026-03-10T10:19:25.559 INFO:tasks.workunit.client.1.vm05.stdout:7/264: dread - d5/d1d/d20/d35/f47 zero size
2026-03-10T10:19:25.559 INFO:tasks.workunit.client.1.vm05.stdout:7/265: dread - d5/d1d/d29/f3a zero size
2026-03-10T10:19:25.559 INFO:tasks.workunit.client.1.vm05.stdout:7/266: fdatasync d5/d1d/d20/d2d/f30 0
2026-03-10T10:19:25.561 INFO:tasks.workunit.client.0.vm02.stdout:0/282: dwrite d9/d18/d1a/d22/d24/f40 [0,4194304] 0
2026-03-10T10:19:25.568 INFO:tasks.workunit.client.0.vm02.stdout:4/354: write d1/d2/f31 [79339,104755] 0
2026-03-10T10:19:25.575 INFO:tasks.workunit.client.0.vm02.stdout:1/269: dwrite d4/d1b/f44 [0,4194304] 0
2026-03-10T10:19:25.575 INFO:tasks.workunit.client.0.vm02.stdout:9/217: rename da/l2d to da/d10/d2c/d34/d35/l48 0
2026-03-10T10:19:25.578 INFO:tasks.workunit.client.1.vm05.stdout:9/220: creat d0/f45 x:0 0 0
2026-03-10T10:19:25.587 INFO:tasks.workunit.client.1.vm05.stdout:8/168: mkdir d7/d2f 0
2026-03-10T10:19:25.589 INFO:tasks.workunit.client.0.vm02.stdout:3/243: creat d1/f54 x:0 0 0
2026-03-10T10:19:25.589 INFO:tasks.workunit.client.0.vm02.stdout:3/244: write d1/d8/f46 [975134,128178] 0
2026-03-10T10:19:25.598 INFO:tasks.workunit.client.0.vm02.stdout:0/283: symlink d9/d18/l5b 0
2026-03-10T10:19:25.598 INFO:tasks.workunit.client.1.vm05.stdout:4/197: write f0 [2913119,82514] 0
2026-03-10T10:19:25.599 INFO:tasks.workunit.client.0.vm02.stdout:4/355: creat d1/d52/d53/f70 x:0 0 0
2026-03-10T10:19:25.603 INFO:tasks.workunit.client.0.vm02.stdout:9/218: creat da/d10/f49 x:0 0 0
2026-03-10T10:19:25.605 INFO:tasks.workunit.client.1.vm05.stdout:2/227: mknod db/c44 0
2026-03-10T10:19:25.605 INFO:tasks.workunit.client.1.vm05.stdout:4/198: write d1/d31/dc/f3a [167279,14687] 0
2026-03-10T10:19:25.610 INFO:tasks.workunit.client.1.vm05.stdout:0/212: write d1/f11 [1468911,54636] 0
2026-03-10T10:19:25.611 INFO:tasks.workunit.client.1.vm05.stdout:0/213: write d1/f11 [998104,95323] 0
2026-03-10T10:19:25.611 INFO:tasks.workunit.client.1.vm05.stdout:7/267: creat d5/f4e x:0 0 0
2026-03-10T10:19:25.612 INFO:tasks.workunit.client.1.vm05.stdout:7/268: chown d5/d1d/d29 44558213 1
2026-03-10T10:19:25.618 INFO:tasks.workunit.client.0.vm02.stdout:6/245: dwrite d0/d8/d29/d2f/d4b/f26 [0,4194304] 0
2026-03-10T10:19:25.631 INFO:tasks.workunit.client.0.vm02.stdout:2/333: truncate d0/d10/f46 3812484 0
2026-03-10T10:19:25.640 INFO:tasks.workunit.client.1.vm05.stdout:5/266: dwrite da/db/d26/d35/d52/f50 [0,4194304] 0
2026-03-10T10:19:25.644 INFO:tasks.workunit.client.0.vm02.stdout:1/270: dread d4/da/d1a/f19 [4194304,4194304] 0
2026-03-10T10:19:25.645 INFO:tasks.workunit.client.0.vm02.stdout:1/271: dread - d4/d2c/f54 zero size
2026-03-10T10:19:25.645 INFO:tasks.workunit.client.0.vm02.stdout:1/272: stat d4/da/d27/d38/d3c 0
2026-03-10T10:19:25.646 INFO:tasks.workunit.client.0.vm02.stdout:7/261: unlink d1/dc/d16/d28/d2d/f42 0
2026-03-10T10:19:25.654 INFO:tasks.workunit.client.1.vm05.stdout:4/199: mkdir d1/d31/dc/d40/d45 0
2026-03-10T10:19:25.658 INFO:tasks.workunit.client.1.vm05.stdout:9/221: creat d0/d1/d13/de/f46 x:0 0 0
2026-03-10T10:19:25.662 INFO:tasks.workunit.client.1.vm05.stdout:8/169: link d7/d14/d15/l2a d7/d14/d15/l30 0
2026-03-10T10:19:25.663 INFO:tasks.workunit.client.0.vm02.stdout:8/321: write d1/f40 [187737,121253] 0
2026-03-10T10:19:25.665 INFO:tasks.workunit.client.1.vm05.stdout:0/214: mkdir d1/d2/d9/d31/d13/d2f/d49 0
2026-03-10T10:19:25.668 INFO:tasks.workunit.client.0.vm02.stdout:6/246: mkdir d0/d8/d29/d52 0
2026-03-10T10:19:25.676 INFO:tasks.workunit.client.1.vm05.stdout:7/269: creat d5/d17/f4f x:0 0 0
2026-03-10T10:19:25.676 INFO:tasks.workunit.client.1.vm05.stdout:5/267: rename da/db/d26/d35/d52 to da/db/d26/d5c 0
2026-03-10T10:19:25.676 INFO:tasks.workunit.client.0.vm02.stdout:1/273: creat d4/d1b/f5d x:0 0 0
2026-03-10T10:19:25.676 INFO:tasks.workunit.client.0.vm02.stdout:7/262: unlink d1/f32 0
2026-03-10T10:19:25.676 INFO:tasks.workunit.client.1.vm05.stdout:2/228: dwrite db/f19 [4194304,4194304] 0
2026-03-10T10:19:25.683 INFO:tasks.workunit.client.1.vm05.stdout:6/183: dwrite dd/f29 [0,4194304] 0
2026-03-10T10:19:25.691 INFO:tasks.workunit.client.1.vm05.stdout:6/184: chown dd/d27/f2f 29 1
2026-03-10T10:19:25.695 INFO:tasks.workunit.client.1.vm05.stdout:8/170: symlink d7/d14/d24/l31 0
2026-03-10T10:19:25.695 INFO:tasks.workunit.client.1.vm05.stdout:8/171: write d7/d14/f22 [1121969,90394] 0
2026-03-10T10:19:25.695 INFO:tasks.workunit.client.1.vm05.stdout:0/215: write d1/d2/d9/d31/f36 [1150588,96351] 0
2026-03-10T10:19:25.699 INFO:tasks.workunit.client.1.vm05.stdout:3/280: dwrite dd/d15/d24/d2c/f3f [0,4194304] 0
2026-03-10T10:19:25.715 INFO:tasks.workunit.client.1.vm05.stdout:2/229: rmdir db/d1c 39
2026-03-10T10:19:25.716 INFO:tasks.workunit.client.0.vm02.stdout:5/423: write d1/db/f1e [4913387,14371] 0
2026-03-10T10:19:25.717 INFO:tasks.workunit.client.1.vm05.stdout:1/204: dwrite d4/df/f11 [4194304,4194304] 0
2026-03-10T10:19:25.729 INFO:tasks.workunit.client.1.vm05.stdout:1/205: dwrite d4/d39/d3e/f3f [0,4194304] 0
2026-03-10T10:19:25.731 INFO:tasks.workunit.client.1.vm05.stdout:1/206: write d4/dd/f1f [4172894,74119] 0
2026-03-10T10:19:25.731 INFO:tasks.workunit.client.1.vm05.stdout:1/207: chown d4/d39 8535 1
2026-03-10T10:19:25.734 INFO:tasks.workunit.client.0.vm02.stdout:3/245: link d1/d8/d21/c3e d1/d6/c55 0
2026-03-10T10:19:25.742 INFO:tasks.workunit.client.1.vm05.stdout:8/172: rename d7/l17 to d7/d2f/l32 0
2026-03-10T10:19:25.744 INFO:tasks.workunit.client.0.vm02.stdout:4/356: creat d1/d10/f71 x:0 0 0
2026-03-10T10:19:25.744 INFO:tasks.workunit.client.0.vm02.stdout:4/357: chown d1/d2/d37 110826863 1
2026-03-10T10:19:25.745 INFO:tasks.workunit.client.1.vm05.stdout:3/281: mkdir dd/d39/d66 0
2026-03-10T10:19:25.745 INFO:tasks.workunit.client.1.vm05.stdout:1/208: dwrite d4/df/f11 [0,4194304] 0
2026-03-10T10:19:25.752 INFO:tasks.workunit.client.0.vm02.stdout:0/284: dwrite f2 [0,4194304] 0
2026-03-10T10:19:25.753 INFO:tasks.workunit.client.0.vm02.stdout:0/285: readlink d9/d18/l44 0
2026-03-10T10:19:25.753 INFO:tasks.workunit.client.0.vm02.stdout:0/286: chown d9/d18/d1a/d22/f3f 310 1
2026-03-10T10:19:25.754 INFO:tasks.workunit.client.1.vm05.stdout:4/200: truncate d1/d3/f10 1052181 0
2026-03-10T10:19:25.756 INFO:tasks.workunit.client.0.vm02.stdout:2/334: chown d0/d1a/c3d 0 1
2026-03-10T10:19:25.761 INFO:tasks.workunit.client.0.vm02.stdout:9/219: dwrite da/d10/f1d [0,4194304] 0
2026-03-10T10:19:25.763 INFO:tasks.workunit.client.0.vm02.stdout:1/274: truncate d4/da/d27/f35 3526202 0
2026-03-10T10:19:25.767 INFO:tasks.workunit.client.0.vm02.stdout:9/220: dwrite da/d10/f41 [0,4194304] 0
2026-03-10T10:19:25.767 INFO:tasks.workunit.client.1.vm05.stdout:2/230: dwrite db/d28/f3f [0,4194304] 0
2026-03-10T10:19:25.773 INFO:tasks.workunit.client.0.vm02.stdout:9/221: dread da/d10/f1d [0,4194304] 0
2026-03-10T10:19:25.784 INFO:tasks.workunit.client.1.vm05.stdout:2/231: dread db/d28/f3f [0,4194304] 0
2026-03-10T10:19:25.789 INFO:tasks.workunit.client.0.vm02.stdout:7/263: symlink d1/dc/d10/d38/l50 0
2026-03-10T10:19:25.807 INFO:tasks.workunit.client.1.vm05.stdout:8/173: creat d7/d14/f33 x:0 0 0
2026-03-10T10:19:25.807 INFO:tasks.workunit.client.1.vm05.stdout:1/209: creat d4/dd/f44 x:0 0 0
2026-03-10T10:19:25.807 INFO:tasks.workunit.client.1.vm05.stdout:0/216: creat d1/d2/d9/d31/d13/d17/f4a x:0 0 0
2026-03-10T10:19:25.807 INFO:tasks.workunit.client.1.vm05.stdout:1/210: truncate d4/dd/f44 341417 0
2026-03-10T10:19:25.807 INFO:tasks.workunit.client.1.vm05.stdout:9/222: link d0/d1/d13/de/d21/l3c d0/d1/d16/l47 0
2026-03-10T10:19:25.807 INFO:tasks.workunit.client.0.vm02.stdout:3/246: rename d1/f4e to d1/d8/d44/f56 0
2026-03-10T10:19:25.807 INFO:tasks.workunit.client.0.vm02.stdout:3/247: dread - d1/d6/f48 zero size
2026-03-10T10:19:25.807 INFO:tasks.workunit.client.0.vm02.stdout:8/322: unlink d1/d1c/d23/f2d 0
2026-03-10T10:19:25.807 INFO:tasks.workunit.client.0.vm02.stdout:4/358: symlink d1/d2/d1a/d49/l72 0
2026-03-10T10:19:25.808 INFO:tasks.workunit.client.0.vm02.stdout:9/222: mkdir da/d10/d38/d4a 0
2026-03-10T10:19:25.808 INFO:tasks.workunit.client.0.vm02.stdout:9/223: truncate da/d10/f33 338975 0
2026-03-10T10:19:25.810 INFO:tasks.workunit.client.1.vm05.stdout:7/270: getdents d5/d1d/d20/d35 0
2026-03-10T10:19:25.820 INFO:tasks.workunit.client.0.vm02.stdout:3/248: symlink d1/d6/l57 0
2026-03-10T10:19:25.821 INFO:tasks.workunit.client.0.vm02.stdout:4/359: mknod d1/d52/c73 0
2026-03-10T10:19:25.824 INFO:tasks.workunit.client.0.vm02.stdout:3/249: dwrite d1/d20/f40 [0,4194304] 0
2026-03-10T10:19:25.829 INFO:tasks.workunit.client.1.vm05.stdout:3/282: creat dd/d15/d24/d2c/d3b/f67 x:0 0 0
2026-03-10T10:19:25.829 INFO:tasks.workunit.client.0.vm02.stdout:3/250: write d1/d6/f53 [858971,28868] 0
2026-03-10T10:19:25.835 INFO:tasks.workunit.client.0.vm02.stdout:3/251: dread d1/d8/f46 [0,4194304] 0
2026-03-10T10:19:25.839 INFO:tasks.workunit.client.0.vm02.stdout:3/252: dwrite d1/d8/d21/f4c [0,4194304] 0
2026-03-10T10:19:25.839 INFO:tasks.workunit.client.0.vm02.stdout:3/253: chown d1/d8/f2e 365737 1
2026-03-10T10:19:25.848 INFO:tasks.workunit.client.1.vm05.stdout:4/201: creat d1/d3/f46 x:0 0 0
2026-03-10T10:19:25.855 INFO:tasks.workunit.client.1.vm05.stdout:6/185: dwrite dd/d1b/f40 [0,4194304] 0
2026-03-10T10:19:25.856 INFO:tasks.workunit.client.1.vm05.stdout:9/223: dwrite d0/df/d11/f2d [0,4194304] 0
2026-03-10T10:19:25.858 INFO:tasks.workunit.client.0.vm02.stdout:4/360: creat d1/d2/d37/d63/f74 x:0 0 0
2026-03-10T10:19:25.860 INFO:tasks.workunit.client.0.vm02.stdout:2/335: creat d0/f72 x:0 0 0
2026-03-10T10:19:25.860 INFO:tasks.workunit.client.0.vm02.stdout:2/336: read - d0/d10/f6a zero size
2026-03-10T10:19:25.861 INFO:tasks.workunit.client.1.vm05.stdout:3/283: truncate dd/d15/d24/d2c/f38 239339 0
2026-03-10T10:19:25.864 INFO:tasks.workunit.client.1.vm05.stdout:4/202: unlink d1/d31/dc/l35 0
2026-03-10T10:19:25.869 INFO:tasks.workunit.client.1.vm05.stdout:5/268: truncate f5 3309316 0
2026-03-10T10:19:25.875 INFO:tasks.workunit.client.1.vm05.stdout:9/224: symlink d0/d1/d13/d26/l48 0
2026-03-10T10:19:25.876 INFO:tasks.workunit.client.0.vm02.stdout:5/424: getdents d1/db/d11/d13/d28/d37/d3d 0
2026-03-10T10:19:25.876 INFO:tasks.workunit.client.0.vm02.stdout:5/425: dread - d1/db/d11/d13/f4e zero size
2026-03-10T10:19:25.876 INFO:tasks.workunit.client.0.vm02.stdout:6/247: dwrite d0/d8/d9/f13 [0,4194304] 0
2026-03-10T10:19:25.876 INFO:tasks.workunit.client.1.vm05.stdout:5/269: dwrite da/db/fd [4194304,4194304] 0
2026-03-10T10:19:25.876 INFO:tasks.workunit.client.1.vm05.stdout:0/217: getdents d1/d7 0
2026-03-10T10:19:25.876 INFO:tasks.workunit.client.1.vm05.stdout:5/270: truncate da/db/d26/d35/d38/f51 817670 0
2026-03-10T10:19:25.886 INFO:tasks.workunit.client.1.vm05.stdout:3/284: rename dd/d20/f26 to dd/d20/d56/f68 0
2026-03-10T10:19:25.887 INFO:tasks.workunit.client.0.vm02.stdout:0/287: rename d9/d18/d1a/d22/l39 to d9/d18/d1a/l5c 0
2026-03-10T10:19:25.887 INFO:tasks.workunit.client.0.vm02.stdout:8/323: link d1/d1c/d23/d25/f64 d1/d1c/f66 0
2026-03-10T10:19:25.894 INFO:tasks.workunit.client.1.vm05.stdout:4/203: symlink d1/d31/dc/d40/l47 0
2026-03-10T10:19:25.900 INFO:tasks.workunit.client.1.vm05.stdout:0/218: fsync d1/d7/f16 0
2026-03-10T10:19:25.902 INFO:tasks.workunit.client.0.vm02.stdout:3/254: mkdir d1/d58 0
2026-03-10T10:19:25.902 INFO:tasks.workunit.client.0.vm02.stdout:9/224: creat da/f4b x:0 0 0
2026-03-10T10:19:25.906 INFO:tasks.workunit.client.1.vm05.stdout:9/225: mknod d0/c49 0
2026-03-10T10:19:25.906 INFO:tasks.workunit.client.1.vm05.stdout:9/226: read - d0/d1/d13/d26/f43 zero size
2026-03-10T10:19:25.907 INFO:tasks.workunit.client.0.vm02.stdout:6/248: chown d0/d8/d9/le 63985019 1
2026-03-10T10:19:25.910 INFO:tasks.workunit.client.0.vm02.stdout:6/249: dread d0/d8/d9/d31/f3d [0,4194304] 0
2026-03-10T10:19:25.927 INFO:tasks.workunit.client.1.vm05.stdout:0/219: unlink d1/d2/d9/f3f 0
2026-03-10T10:19:25.927 INFO:tasks.workunit.client.1.vm05.stdout:3/285: mkdir dd/d15/d69 0
2026-03-10T10:19:25.927 INFO:tasks.workunit.client.1.vm05.stdout:5/271: getdents da/db 0
2026-03-10T10:19:25.927 INFO:tasks.workunit.client.1.vm05.stdout:9/227: rename d0/df/f25 to d0/d1/f4a 0
2026-03-10T10:19:25.927 INFO:tasks.workunit.client.0.vm02.stdout:8/324: rename d1/d1c/d24/f59 to d1/d2/f67 0
2026-03-10T10:19:25.927 INFO:tasks.workunit.client.0.vm02.stdout:4/361: mkdir d1/d75 0
2026-03-10T10:19:25.927 INFO:tasks.workunit.client.0.vm02.stdout:5/426: mkdir d1/db/d11/d16/d79/d85/d93 0
2026-03-10T10:19:25.928 INFO:tasks.workunit.client.0.vm02.stdout:3/255: rename d1/l2c to d1/d20/l59 0
2026-03-10T10:19:25.928 INFO:tasks.workunit.client.0.vm02.stdout:6/250: link d0/d8/d9/f13 d0/d8/d29/d2f/d4b/f53 0
2026-03-10T10:19:25.928 INFO:tasks.workunit.client.0.vm02.stdout:6/251: write d0/d8/d29/d2f/d4b/f39 [128860,11651] 0
2026-03-10T10:19:25.928 INFO:tasks.workunit.client.1.vm05.stdout:5/272: unlink da/c40 0
2026-03-10T10:19:25.931 INFO:tasks.workunit.client.0.vm02.stdout:6/252: creat d0/d8/d9/f54 x:0 0 0
2026-03-10T10:19:25.932 INFO:tasks.workunit.client.1.vm05.stdout:3/286: link dd/d15/d24/d2c/f3f dd/d15/f6a 0
2026-03-10T10:19:25.932 INFO:tasks.workunit.client.0.vm02.stdout:0/288: getdents d9/d18/d1a/d22/d24/d25 0
2026-03-10T10:19:25.934 INFO:tasks.workunit.client.0.vm02.stdout:6/253: creat d0/d8/d29/d2f/f55 x:0 0 0
2026-03-10T10:19:25.941 INFO:tasks.workunit.client.0.vm02.stdout:0/289: rmdir d9/d18 39
2026-03-10T10:19:25.941 INFO:tasks.workunit.client.0.vm02.stdout:6/254: dwrite d0/d8/d29/d2f/d4b/f17 [0,4194304] 0
2026-03-10T10:19:25.944 INFO:tasks.workunit.client.0.vm02.stdout:0/290: stat d9/d34/c3b 0
2026-03-10T10:19:25.970 INFO:tasks.workunit.client.1.vm05.stdout:3/287: creat dd/d39/d5c/f6b x:0 0 0
2026-03-10T10:19:25.970 INFO:tasks.workunit.client.1.vm05.stdout:3/288: readlink dd/d39/d5c/l5d 0
2026-03-10T10:19:25.970 INFO:tasks.workunit.client.1.vm05.stdout:9/228: dwrite d0/d1/f4a [0,4194304] 0
2026-03-10T10:19:25.970 INFO:tasks.workunit.client.1.vm05.stdout:3/289: chown dd/d20 0 1
2026-03-10T10:19:25.970 INFO:tasks.workunit.client.0.vm02.stdout:0/291: fsync d9/d34/d3d/f41 0
2026-03-10T10:19:25.970 INFO:tasks.workunit.client.0.vm02.stdout:6/255: write d0/d8/f45 [1747737,108013] 0
2026-03-10T10:19:25.970 INFO:tasks.workunit.client.0.vm02.stdout:6/256: write d0/d8/d29/d2f/f55 [30714,39955] 0
2026-03-10T10:19:25.970 INFO:tasks.workunit.client.0.vm02.stdout:0/292: unlink d9/f4d 0
2026-03-10T10:19:25.970 INFO:tasks.workunit.client.0.vm02.stdout:6/257: creat d0/d8/f56 x:0 0 0
2026-03-10T10:19:25.970 INFO:tasks.workunit.client.0.vm02.stdout:6/258: stat d0/d8/d9 0
2026-03-10T10:19:25.970 INFO:tasks.workunit.client.0.vm02.stdout:6/259: mknod d0/d8/d29/d52/c57 0
2026-03-10T10:19:25.970 INFO:tasks.workunit.client.0.vm02.stdout:6/260: mknod d0/d8/d9/c58 0
2026-03-10T10:19:25.988 INFO:tasks.workunit.client.1.vm05.stdout:6/186: dread dd/d36/d3f/f22 [0,4194304] 0
2026-03-10T10:19:25.991 INFO:tasks.workunit.client.0.vm02.stdout:1/275: write d4/da/f13 [2679246,36617] 0
2026-03-10T10:19:25.992 INFO:tasks.workunit.client.0.vm02.stdout:1/276: truncate d4/d2c/d53/f58 928697 0
2026-03-10T10:19:25.994 INFO:tasks.workunit.client.0.vm02.stdout:8/325: dread d1/d1c/f3f [0,4194304] 0
2026-03-10T10:19:25.994 INFO:tasks.workunit.client.0.vm02.stdout:8/326: stat d1/d1c/f14 0
2026-03-10T10:19:25.995 INFO:tasks.workunit.client.0.vm02.stdout:3/256: sync
2026-03-10T10:19:25.996 INFO:tasks.workunit.client.0.vm02.stdout:3/257: truncate d1/f50 408034 0
2026-03-10T10:19:25.997 INFO:tasks.workunit.client.0.vm02.stdout:9/225: rename da/d10 to da/d3c/d4c 0
2026-03-10T10:19:25.999 INFO:tasks.workunit.client.0.vm02.stdout:9/226: chown da/d3c/d4c/f23 645977 1
2026-03-10T10:19:26.007 INFO:tasks.workunit.client.1.vm05.stdout:1/211: chown d4/d20/f31 2298 1
2026-03-10T10:19:26.007 INFO:tasks.workunit.client.1.vm05.stdout:1/212: write d4/d37/f41 [646691,83910] 0
2026-03-10T10:19:26.007 INFO:tasks.workunit.client.1.vm05.stdout:2/232: write db/d12/f1a [3706447,12858] 0
2026-03-10T10:19:26.007 INFO:tasks.workunit.client.0.vm02.stdout:7/264: dwrite d1/f17 [0,4194304] 0
2026-03-10T10:19:26.007 INFO:tasks.workunit.client.0.vm02.stdout:7/265: fdatasync d1/dc/d16/d28/f4e 0
2026-03-10T10:19:26.007 INFO:tasks.workunit.client.0.vm02.stdout:8/327: rmdir d1/d1c/d24/d35/d56 39
2026-03-10T10:19:26.007 INFO:tasks.workunit.client.1.vm05.stdout:8/174: dwrite d7/f8 [0,4194304] 0
2026-03-10T10:19:26.010 INFO:tasks.workunit.client.1.vm05.stdout:7/271: write d5/dd/f12 [3352785,121295] 0
2026-03-10T10:19:26.011 INFO:tasks.workunit.client.0.vm02.stdout:3/258: write d1/d8/d44/f56 [520376,55047] 0
2026-03-10T10:19:26.013 INFO:tasks.workunit.client.1.vm05.stdout:7/272: chown d5/l11 55 1
2026-03-10T10:19:26.015 INFO:tasks.workunit.client.1.vm05.stdout:7/273: chown d5/d1d/d20/d35/l4a 382236320 1
2026-03-10T10:19:26.017 INFO:tasks.workunit.client.0.vm02.stdout:3/259: dwrite d1/d6/f48 [0,4194304] 0
2026-03-10T10:19:26.018 INFO:tasks.workunit.client.0.vm02.stdout:3/260: chown d1/d6/f32 6634349 1
2026-03-10T10:19:26.019 INFO:tasks.workunit.client.1.vm05.stdout:4/204: read d1/d31/dc/f2e [741090,46579] 0
2026-03-10T10:19:26.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:25 vm02.local ceph-mon[50200]: pgmap v152: 65 pgs: 65 active+clean; 1.1 GiB data, 4.3 GiB used, 116 GiB / 120 GiB avail; 11 MiB/s rd, 109 MiB/s wr, 165 op/s
2026-03-10T10:19:26.032 INFO:tasks.workunit.client.0.vm02.stdout:4/362: rename d1/d41/f58 to d1/d2/d44/f76 0
2026-03-10T10:19:26.032 INFO:tasks.workunit.client.1.vm05.stdout:6/187: dread dd/d27/f2f [0,4194304] 0
2026-03-10T10:19:26.032 INFO:tasks.workunit.client.1.vm05.stdout:8/175: dwrite d7/f11 [4194304,4194304] 0
2026-03-10T10:19:26.032 INFO:tasks.workunit.client.0.vm02.stdout:4/363: read d1/d10/db/f35 [3322120,34671] 0
2026-03-10T10:19:26.033 INFO:tasks.workunit.client.0.vm02.stdout:7/266: mknod d1/dc/d16/d28/c51 0
2026-03-10T10:19:26.035 INFO:tasks.workunit.client.0.vm02.stdout:8/328: creat d1/f68 x:0 0 0
2026-03-10T10:19:26.036 INFO:tasks.workunit.client.0.vm02.stdout:0/293: dread d9/d18/d1a/d43/f45 [0,4194304] 0
2026-03-10T10:19:26.037 INFO:tasks.workunit.client.0.vm02.stdout:0/294: write d9/d34/d3d/f41 [387015,37749] 0
2026-03-10T10:19:26.038 INFO:tasks.workunit.client.0.vm02.stdout:0/295: fsync d9/d18/d1a/d22/d24/f40 0
2026-03-10T10:19:26.039 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:25 vm05.local ceph-mon[59051]: pgmap v152: 65 pgs: 65 active+clean; 1.1 GiB data, 4.3 GiB used, 116 GiB / 120 GiB avail; 11 MiB/s rd, 109 MiB/s wr, 165 op/s
2026-03-10T10:19:26.045 INFO:tasks.workunit.client.0.vm02.stdout:9/227: dread da/d3c/d4c/f26 [0,4194304] 0
2026-03-10T10:19:26.053 INFO:tasks.workunit.client.1.vm05.stdout:3/290: dread f2 [4194304,4194304] 0
2026-03-10T10:19:26.080 INFO:tasks.workunit.client.0.vm02.stdout:5/427: rename d1/db/d11/d62/d67/f89 to d1/db/d11/d16/d79/d85/f94 0
2026-03-10T10:19:26.080 INFO:tasks.workunit.client.0.vm02.stdout:5/428: truncate d1/db/d11/d84/f82 645465 0
2026-03-10T10:19:26.080 INFO:tasks.workunit.client.0.vm02.stdout:5/429: dwrite d1/db/d11/d62/f74 [0,4194304] 0
2026-03-10T10:19:26.080 INFO:tasks.workunit.client.0.vm02.stdout:0/296: mkdir d9/d18/d1a/d46/d5d 0
2026-03-10T10:19:26.080 INFO:tasks.workunit.client.0.vm02.stdout:0/297: write f2 [1271806,68875] 0
2026-03-10T10:19:26.081 INFO:tasks.workunit.client.0.vm02.stdout:9/228: creat da/d3c/d4c/d2c/d34/f4d x:0 0 0
2026-03-10T10:19:26.081 INFO:tasks.workunit.client.0.vm02.stdout:1/277: rename d4/f4b to d4/da/d27/d38/f5e 0
2026-03-10T10:19:26.081 INFO:tasks.workunit.client.0.vm02.stdout:7/267: mknod d1/c52 0
2026-03-10T10:19:26.081 INFO:tasks.workunit.client.1.vm05.stdout:7/274: rmdir d5/d1d/d20/d3b 39
2026-03-10T10:19:26.081 INFO:tasks.workunit.client.1.vm05.stdout:4/205: rename d1/f17 to d1/d31/dc/d40/d45/f48 0
2026-03-10T10:19:26.081 INFO:tasks.workunit.client.1.vm05.stdout:4/206: readlink d1/d31/dc/l28 0
2026-03-10T10:19:26.081 INFO:tasks.workunit.client.1.vm05.stdout:2/233: symlink db/d1c/l45 0
2026-03-10T10:19:26.081 INFO:tasks.workunit.client.1.vm05.stdout:2/234: dread - db/d12/f3c zero size
2026-03-10T10:19:26.081 INFO:tasks.workunit.client.1.vm05.stdout:2/235: dread db/d12/f29 [0,4194304] 0
2026-03-10T10:19:26.081 INFO:tasks.workunit.client.1.vm05.stdout:8/176: link d7/d14/f23 d7/d14/d24/f34 0
2026-03-10T10:19:26.081 INFO:tasks.workunit.client.1.vm05.stdout:3/291: mknod dd/d20/d56/d5e/c6c 0
2026-03-10T10:19:26.083 INFO:tasks.workunit.client.1.vm05.stdout:3/292: dwrite dd/d15/d1f/f2a [0,4194304] 0
2026-03-10T10:19:26.084 INFO:tasks.workunit.client.0.vm02.stdout:1/278: rename d4/da/l17 to d4/da/d27/d38/d3c/l5f 0
2026-03-10T10:19:26.085 INFO:tasks.workunit.client.1.vm05.stdout:3/293: dwrite dd/d39/d5c/f6b [0,4194304] 0
2026-03-10T10:19:26.090 INFO:tasks.workunit.client.1.vm05.stdout:4/207: dread - d1/d31/dc/f21 zero size
2026-03-10T10:19:26.090 INFO:tasks.workunit.client.1.vm05.stdout:6/188: creat dd/d36/d3f/f41 x:0 0 0
2026-03-10T10:19:26.092 INFO:tasks.workunit.client.1.vm05.stdout:8/177: write f6 [1857764,60404] 0
2026-03-10T10:19:26.093 INFO:tasks.workunit.client.1.vm05.stdout:3/294: rmdir dd 39
2026-03-10T10:19:26.094 INFO:tasks.workunit.client.0.vm02.stdout:5/430: getdents d1/db/d11/d13 0
2026-03-10T10:19:26.095 INFO:tasks.workunit.client.0.vm02.stdout:5/431: truncate d1/db/d11/d84/d40/f7a 21499 0
2026-03-10T10:19:26.103 INFO:tasks.workunit.client.1.vm05.stdout:7/275: rename d5/d1d/d20/d35/c3f to d5/c50 0
2026-03-10T10:19:26.120 INFO:tasks.workunit.client.0.vm02.stdout:5/432: mkdir d1/db/d11/d84/d95 0
2026-03-10T10:19:26.120 INFO:tasks.workunit.client.0.vm02.stdout:5/433: dread - d1/db/d11/d84/d40/d4f/d5f/f73 zero size
2026-03-10T10:19:26.120 INFO:tasks.workunit.client.0.vm02.stdout:5/434: read d1/db/d11/f33 [652715,12462] 0
2026-03-10T10:19:26.120 INFO:tasks.workunit.client.0.vm02.stdout:5/435: getdents d1/db/d11/d84/d40/d4f/d5f/d6d/d71 0
2026-03-10T10:19:26.120 INFO:tasks.workunit.client.1.vm05.stdout:8/178: creat d7/d14/d24/f35 x:0 0 0
2026-03-10T10:19:26.120 INFO:tasks.workunit.client.1.vm05.stdout:3/295: readlink dd/l58 0
2026-03-10T10:19:26.120 INFO:tasks.workunit.client.1.vm05.stdout:7/276: symlink d5/d1d/d29/l51 0
2026-03-10T10:19:26.120 INFO:tasks.workunit.client.1.vm05.stdout:7/277: truncate d5/d17/f4f 755311 0
2026-03-10T10:19:26.120 INFO:tasks.workunit.client.1.vm05.stdout:8/179: rename d7/d14/c20 to d7/d14/d24/c36
0 2026-03-10T10:19:26.120 INFO:tasks.workunit.client.1.vm05.stdout:7/278: creat d5/d17/f52 x:0 0 0 2026-03-10T10:19:26.120 INFO:tasks.workunit.client.1.vm05.stdout:3/296: mkdir dd/d15/d24/d2c/d6d 0 2026-03-10T10:19:26.120 INFO:tasks.workunit.client.1.vm05.stdout:3/297: chown dd/l1e 242681152 1 2026-03-10T10:19:26.120 INFO:tasks.workunit.client.1.vm05.stdout:3/298: dwrite dd/d15/d1f/f2b [4194304,4194304] 0 2026-03-10T10:19:26.122 INFO:tasks.workunit.client.1.vm05.stdout:3/299: creat dd/d39/d66/f6e x:0 0 0 2026-03-10T10:19:26.130 INFO:tasks.workunit.client.1.vm05.stdout:3/300: read dd/d15/d24/d2c/d3b/f48 [81709,72953] 0 2026-03-10T10:19:26.131 INFO:tasks.workunit.client.1.vm05.stdout:3/301: link dd/fe dd/d39/f6f 0 2026-03-10T10:19:26.131 INFO:tasks.workunit.client.1.vm05.stdout:3/302: fdatasync f2 0 2026-03-10T10:19:26.131 INFO:tasks.workunit.client.1.vm05.stdout:6/189: dread dd/d36/d3f/d12/f20 [0,4194304] 0 2026-03-10T10:19:26.137 INFO:tasks.workunit.client.1.vm05.stdout:6/190: fsync dd/d36/d3f/f1e 0 2026-03-10T10:19:26.138 INFO:tasks.workunit.client.1.vm05.stdout:3/303: dread dd/d39/d5c/f6b [0,4194304] 0 2026-03-10T10:19:26.139 INFO:tasks.workunit.client.1.vm05.stdout:6/191: creat dd/d27/d30/f42 x:0 0 0 2026-03-10T10:19:26.140 INFO:tasks.workunit.client.1.vm05.stdout:3/304: rename dd/d15/d24/c47 to dd/d39/d5c/c70 0 2026-03-10T10:19:26.141 INFO:tasks.workunit.client.1.vm05.stdout:3/305: truncate dd/d15/d24/d2c/d3b/f55 22747 0 2026-03-10T10:19:26.142 INFO:tasks.workunit.client.1.vm05.stdout:6/192: rename dd/c3b to dd/d27/d2a/c43 0 2026-03-10T10:19:26.148 INFO:tasks.workunit.client.1.vm05.stdout:3/306: link dd/d15/d1f/l22 dd/d15/d24/d2c/d6d/l71 0 2026-03-10T10:19:26.156 INFO:tasks.workunit.client.1.vm05.stdout:3/307: mknod dd/d15/d4c/c72 0 2026-03-10T10:19:26.169 INFO:tasks.workunit.client.0.vm02.stdout:4/364: sync 2026-03-10T10:19:26.169 INFO:tasks.workunit.client.0.vm02.stdout:4/365: fsync d1/d10/f8 0 2026-03-10T10:19:26.175 
INFO:tasks.workunit.client.0.vm02.stdout:0/298: sync 2026-03-10T10:19:26.175 INFO:tasks.workunit.client.0.vm02.stdout:9/229: sync 2026-03-10T10:19:26.176 INFO:tasks.workunit.client.0.vm02.stdout:9/230: write da/d3c/d4c/f3b [355941,79067] 0 2026-03-10T10:19:26.179 INFO:tasks.workunit.client.0.vm02.stdout:0/299: dread d9/d18/d1a/d22/d24/d25/f3a [4194304,4194304] 0 2026-03-10T10:19:26.184 INFO:tasks.workunit.client.0.vm02.stdout:0/300: dwrite d9/d18/d1a/d22/d24/f2f [0,4194304] 0 2026-03-10T10:19:26.202 INFO:tasks.workunit.client.0.vm02.stdout:4/366: rmdir d1/d2/d37/d6c 0 2026-03-10T10:19:26.202 INFO:tasks.workunit.client.0.vm02.stdout:0/301: creat d9/d18/d1a/d43/d49/f5e x:0 0 0 2026-03-10T10:19:26.202 INFO:tasks.workunit.client.0.vm02.stdout:4/367: creat d1/d52/f77 x:0 0 0 2026-03-10T10:19:26.202 INFO:tasks.workunit.client.0.vm02.stdout:4/368: write d1/d2/f34 [2458869,38944] 0 2026-03-10T10:19:26.202 INFO:tasks.workunit.client.0.vm02.stdout:0/302: mknod d9/d18/d1a/d22/d24/d51/c5f 0 2026-03-10T10:19:26.202 INFO:tasks.workunit.client.0.vm02.stdout:2/337: chown d0/f36 24699 1 2026-03-10T10:19:26.209 INFO:tasks.workunit.client.0.vm02.stdout:0/303: rmdir d9/d18/d1a/d22 39 2026-03-10T10:19:26.211 INFO:tasks.workunit.client.0.vm02.stdout:0/304: symlink d9/d18/d1a/d43/d57/l60 0 2026-03-10T10:19:26.212 INFO:tasks.workunit.client.0.vm02.stdout:0/305: readlink d9/d18/l44 0 2026-03-10T10:19:26.212 INFO:tasks.workunit.client.0.vm02.stdout:2/338: getdents d0/d10/d69 0 2026-03-10T10:19:26.213 INFO:tasks.workunit.client.0.vm02.stdout:0/306: chown d9/d18/d1a/d22/d24/f2f 25358299 1 2026-03-10T10:19:26.220 INFO:tasks.workunit.client.0.vm02.stdout:0/307: dwrite d9/d18/d1a/d43/f45 [0,4194304] 0 2026-03-10T10:19:26.220 INFO:tasks.workunit.client.1.vm05.stdout:0/220: write d1/d2/d9/d31/d12/d20/f37 [696049,114067] 0 2026-03-10T10:19:26.220 INFO:tasks.workunit.client.0.vm02.stdout:0/308: write f2 [2973684,51224] 0 2026-03-10T10:19:26.223 INFO:tasks.workunit.client.0.vm02.stdout:0/309: mkdir 
d9/d18/d1a/d43/d57/d61 0 2026-03-10T10:19:26.226 INFO:tasks.workunit.client.0.vm02.stdout:9/231: sync 2026-03-10T10:19:26.227 INFO:tasks.workunit.client.0.vm02.stdout:0/310: write d9/d18/d1a/d22/d24/f26 [3318168,96251] 0 2026-03-10T10:19:26.227 INFO:tasks.workunit.client.1.vm05.stdout:5/273: sync 2026-03-10T10:19:26.227 INFO:tasks.workunit.client.0.vm02.stdout:9/232: chown da/d3c/d4c/f1d 195268 1 2026-03-10T10:19:26.228 INFO:tasks.workunit.client.1.vm05.stdout:5/274: fdatasync da/f20 0 2026-03-10T10:19:26.229 INFO:tasks.workunit.client.1.vm05.stdout:0/221: getdents d1/d2/d9/d31/d13/d17 0 2026-03-10T10:19:26.229 INFO:tasks.workunit.client.1.vm05.stdout:0/222: readlink d1/d2/d9/d31/d13/d15/l45 0 2026-03-10T10:19:26.230 INFO:tasks.workunit.client.0.vm02.stdout:9/233: symlink da/d3c/d4c/d2c/d34/l4e 0 2026-03-10T10:19:26.231 INFO:tasks.workunit.client.1.vm05.stdout:5/275: truncate da/db/d26/d5c/f2c 38463 0 2026-03-10T10:19:26.253 INFO:tasks.workunit.client.0.vm02.stdout:9/234: mknod da/d3c/d4c/d2c/d34/d35/c4f 0 2026-03-10T10:19:26.253 INFO:tasks.workunit.client.0.vm02.stdout:9/235: dread - da/d3c/d4c/d2c/d34/f3a zero size 2026-03-10T10:19:26.253 INFO:tasks.workunit.client.1.vm05.stdout:1/213: sync 2026-03-10T10:19:26.253 INFO:tasks.workunit.client.1.vm05.stdout:9/229: sync 2026-03-10T10:19:26.253 INFO:tasks.workunit.client.1.vm05.stdout:2/236: sync 2026-03-10T10:19:26.253 INFO:tasks.workunit.client.1.vm05.stdout:7/279: sync 2026-03-10T10:19:26.253 INFO:tasks.workunit.client.1.vm05.stdout:3/308: sync 2026-03-10T10:19:26.253 INFO:tasks.workunit.client.1.vm05.stdout:5/276: symlink da/db/d26/d5c/d4b/l5d 0 2026-03-10T10:19:26.254 INFO:tasks.workunit.client.1.vm05.stdout:2/237: write db/f24 [8647104,56253] 0 2026-03-10T10:19:26.254 INFO:tasks.workunit.client.1.vm05.stdout:3/309: write dd/d15/d24/d2c/d3b/f40 [901030,93912] 0 2026-03-10T10:19:26.254 INFO:tasks.workunit.client.1.vm05.stdout:1/214: dread d4/df/d1c/f23 [0,4194304] 0 2026-03-10T10:19:26.254 
INFO:tasks.workunit.client.1.vm05.stdout:1/215: dread d4/f2f [0,4194304] 0 2026-03-10T10:19:26.254 INFO:tasks.workunit.client.1.vm05.stdout:2/238: symlink db/d12/l46 0 2026-03-10T10:19:26.254 INFO:tasks.workunit.client.1.vm05.stdout:3/310: truncate dd/d15/f1c 4731285 0 2026-03-10T10:19:26.254 INFO:tasks.workunit.client.1.vm05.stdout:5/277: creat da/f5e x:0 0 0 2026-03-10T10:19:26.254 INFO:tasks.workunit.client.1.vm05.stdout:2/239: creat db/d2d/f47 x:0 0 0 2026-03-10T10:19:26.254 INFO:tasks.workunit.client.1.vm05.stdout:1/216: unlink d4/f2f 0 2026-03-10T10:19:26.254 INFO:tasks.workunit.client.1.vm05.stdout:2/240: truncate db/d12/f3b 901367 0 2026-03-10T10:19:26.254 INFO:tasks.workunit.client.1.vm05.stdout:3/311: truncate dd/d15/d24/d2c/d3b/f48 1187344 0 2026-03-10T10:19:26.254 INFO:tasks.workunit.client.1.vm05.stdout:2/241: dread db/f25 [0,4194304] 0 2026-03-10T10:19:26.269 INFO:tasks.workunit.client.0.vm02.stdout:2/339: read d0/d10/f46 [1701926,3650] 0 2026-03-10T10:19:26.271 INFO:tasks.workunit.client.0.vm02.stdout:2/340: rmdir d0/d10 39 2026-03-10T10:19:26.298 INFO:tasks.workunit.client.1.vm05.stdout:0/223: dread d1/d2/d9/d31/d12/f1e [0,4194304] 0 2026-03-10T10:19:26.315 INFO:tasks.workunit.client.1.vm05.stdout:5/278: dread da/db/fd [0,4194304] 0 2026-03-10T10:19:26.317 INFO:tasks.workunit.client.1.vm05.stdout:5/279: symlink da/l5f 0 2026-03-10T10:19:26.344 INFO:tasks.workunit.client.1.vm05.stdout:1/217: sync 2026-03-10T10:19:26.352 INFO:tasks.workunit.client.1.vm05.stdout:2/242: sync 2026-03-10T10:19:26.352 INFO:tasks.workunit.client.1.vm05.stdout:0/224: sync 2026-03-10T10:19:26.358 INFO:tasks.workunit.client.1.vm05.stdout:2/243: creat db/d2d/f48 x:0 0 0 2026-03-10T10:19:26.368 INFO:tasks.workunit.client.1.vm05.stdout:2/244: dwrite db/f25 [0,4194304] 0 2026-03-10T10:19:26.372 INFO:tasks.workunit.client.1.vm05.stdout:2/245: symlink db/d1c/d40/l49 0 2026-03-10T10:19:26.375 INFO:tasks.workunit.client.1.vm05.stdout:2/246: stat db/d12/f3c 0 2026-03-10T10:19:26.394 
INFO:tasks.workunit.client.1.vm05.stdout:4/208: dread d1/d31/dc/f1f [0,4194304] 0 2026-03-10T10:19:26.395 INFO:tasks.workunit.client.1.vm05.stdout:2/247: dread db/f26 [4194304,4194304] 0 2026-03-10T10:19:26.400 INFO:tasks.workunit.client.1.vm05.stdout:2/248: readlink db/l10 0 2026-03-10T10:19:26.404 INFO:tasks.workunit.client.0.vm02.stdout:9/236: dread da/f15 [0,4194304] 0 2026-03-10T10:19:26.404 INFO:tasks.workunit.client.1.vm05.stdout:4/209: getdents d1/d3 0 2026-03-10T10:19:26.414 INFO:tasks.workunit.client.1.vm05.stdout:4/210: dwrite d1/d31/dc/f33 [0,4194304] 0 2026-03-10T10:19:26.417 INFO:tasks.workunit.client.1.vm05.stdout:4/211: chown d1/d31/dc/d40/d45 13 1 2026-03-10T10:19:26.420 INFO:tasks.workunit.client.1.vm05.stdout:2/249: dwrite db/f14 [0,4194304] 0 2026-03-10T10:19:26.421 INFO:tasks.workunit.client.1.vm05.stdout:4/212: write d1/fb [7591956,126416] 0 2026-03-10T10:19:26.430 INFO:tasks.workunit.client.1.vm05.stdout:4/213: symlink d1/d31/l49 0 2026-03-10T10:19:26.432 INFO:tasks.workunit.client.1.vm05.stdout:2/250: creat db/f4a x:0 0 0 2026-03-10T10:19:26.433 INFO:tasks.workunit.client.1.vm05.stdout:2/251: mknod db/c4b 0 2026-03-10T10:19:26.433 INFO:tasks.workunit.client.1.vm05.stdout:2/252: chown db/d12/f3b 10 1 2026-03-10T10:19:26.455 INFO:tasks.workunit.client.0.vm02.stdout:3/261: dwrite d1/d8/d21/f29 [0,4194304] 0 2026-03-10T10:19:26.457 INFO:tasks.workunit.client.0.vm02.stdout:8/329: write d1/d2/f29 [310408,24105] 0 2026-03-10T10:19:26.457 INFO:tasks.workunit.client.0.vm02.stdout:3/262: chown d1/d8/f46 44 1 2026-03-10T10:19:26.474 INFO:tasks.workunit.client.0.vm02.stdout:3/263: mknod d1/c5a 0 2026-03-10T10:19:26.480 INFO:tasks.workunit.client.0.vm02.stdout:1/279: write d4/fe [2191344,36604] 0 2026-03-10T10:19:26.483 INFO:tasks.workunit.client.0.vm02.stdout:3/264: dread d1/d8/f46 [0,4194304] 0 2026-03-10T10:19:26.483 INFO:tasks.workunit.client.0.vm02.stdout:7/268: write d1/f5 [4659108,120981] 0 2026-03-10T10:19:26.484 
INFO:tasks.workunit.client.0.vm02.stdout:3/265: stat d1/d8/d21/c3e 0 2026-03-10T10:19:26.485 INFO:tasks.workunit.client.0.vm02.stdout:3/266: write d1/d6/f49 [2483613,50591] 0 2026-03-10T10:19:26.485 INFO:tasks.workunit.client.0.vm02.stdout:3/267: fsync d1/d8/d44/f56 0 2026-03-10T10:19:26.494 INFO:tasks.workunit.client.0.vm02.stdout:3/268: mknod d1/d8/c5b 0 2026-03-10T10:19:26.496 INFO:tasks.workunit.client.0.vm02.stdout:5/436: write d1/db/d11/d16/f19 [4304254,123423] 0 2026-03-10T10:19:26.504 INFO:tasks.workunit.client.0.vm02.stdout:5/437: read d1/db/d11/d84/f82 [608130,67969] 0 2026-03-10T10:19:26.505 INFO:tasks.workunit.client.0.vm02.stdout:5/438: truncate d1/db/d11/d13/f25 640750 0 2026-03-10T10:19:26.508 INFO:tasks.workunit.client.1.vm05.stdout:6/193: rename dd/d27 to dd/d36/d3f/d12/d44 0 2026-03-10T10:19:26.508 INFO:tasks.workunit.client.0.vm02.stdout:5/439: creat d1/db/f96 x:0 0 0 2026-03-10T10:19:26.509 INFO:tasks.workunit.client.0.vm02.stdout:5/440: creat d1/db/d11/d84/d40/d4f/f97 x:0 0 0 2026-03-10T10:19:26.517 INFO:tasks.workunit.client.0.vm02.stdout:1/280: read d4/f26 [88894,58725] 0 2026-03-10T10:19:26.522 INFO:tasks.workunit.client.1.vm05.stdout:8/180: dwrite f2 [0,4194304] 0 2026-03-10T10:19:26.527 INFO:tasks.workunit.client.1.vm05.stdout:9/230: rename d0/df/d11/c32 to d0/d1/d13/c4b 0 2026-03-10T10:19:26.528 INFO:tasks.workunit.client.0.vm02.stdout:3/269: dread d1/d8/d21/f4d [0,4194304] 0 2026-03-10T10:19:26.530 INFO:tasks.workunit.client.1.vm05.stdout:5/280: rename da/db/d28/d32/c3e to da/db/d26/d35/d38/c60 0 2026-03-10T10:19:26.530 INFO:tasks.workunit.client.0.vm02.stdout:3/270: symlink d1/l5c 0 2026-03-10T10:19:26.531 INFO:tasks.workunit.client.0.vm02.stdout:3/271: mkdir d1/d5d 0 2026-03-10T10:19:26.535 INFO:tasks.workunit.client.0.vm02.stdout:8/330: dread d1/d2/f29 [0,4194304] 0 2026-03-10T10:19:26.536 INFO:tasks.workunit.client.1.vm05.stdout:5/281: dread - da/db/d26/d35/d38/f48 zero size 2026-03-10T10:19:26.539 
INFO:tasks.workunit.client.0.vm02.stdout:3/272: link d1/d20/f41 d1/d8/d21/f5e 0 2026-03-10T10:19:26.539 INFO:tasks.workunit.client.1.vm05.stdout:6/194: dread dd/d1b/f1d [0,4194304] 0 2026-03-10T10:19:26.539 INFO:tasks.workunit.client.0.vm02.stdout:3/273: chown d1/d6/f3a 76312 1 2026-03-10T10:19:26.539 INFO:tasks.workunit.client.0.vm02.stdout:8/331: dwrite d1/d2/f29 [0,4194304] 0 2026-03-10T10:19:26.541 INFO:tasks.workunit.client.0.vm02.stdout:3/274: truncate d1/f50 1038140 0 2026-03-10T10:19:26.547 INFO:tasks.workunit.client.1.vm05.stdout:6/195: symlink dd/d36/d3f/d12/d44/l45 0 2026-03-10T10:19:26.555 INFO:tasks.workunit.client.0.vm02.stdout:4/369: rename d1/d2 to d1/d41/d5e/d78 0 2026-03-10T10:19:26.555 INFO:tasks.workunit.client.1.vm05.stdout:5/282: rename da/db/d26/d35/l22 to da/l61 0 2026-03-10T10:19:26.557 INFO:tasks.workunit.client.0.vm02.stdout:4/370: creat d1/d52/d53/f79 x:0 0 0 2026-03-10T10:19:26.559 INFO:tasks.workunit.client.0.vm02.stdout:0/311: rename d9/d18/d1a/f1f to d9/d18/d1a/d22/d4a/f62 0 2026-03-10T10:19:26.559 INFO:tasks.workunit.client.0.vm02.stdout:0/312: fdatasync d9/d34/d3d/f58 0 2026-03-10T10:19:26.561 INFO:tasks.workunit.client.1.vm05.stdout:5/283: mknod da/db/d26/d5c/c62 0 2026-03-10T10:19:26.561 INFO:tasks.workunit.client.1.vm05.stdout:6/196: dwrite dd/d36/d3f/d12/f20 [0,4194304] 0 2026-03-10T10:19:26.563 INFO:tasks.workunit.client.0.vm02.stdout:0/313: mknod d9/d18/d1a/d43/d57/c63 0 2026-03-10T10:19:26.563 INFO:tasks.workunit.client.1.vm05.stdout:6/197: readlink dd/d36/d3f/d12/d44/l38 0 2026-03-10T10:19:26.564 INFO:tasks.workunit.client.0.vm02.stdout:4/371: dread d1/d41/d5e/d78/d37/f2e [0,4194304] 0 2026-03-10T10:19:26.565 INFO:tasks.workunit.client.0.vm02.stdout:2/341: rename d0/d10/c5a to d0/d10/d69/c73 0 2026-03-10T10:19:26.573 INFO:tasks.workunit.client.1.vm05.stdout:6/198: creat dd/d36/d3f/d12/d44/f46 x:0 0 0 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.1.vm05.stdout:6/199: symlink dd/d36/d3f/d12/d44/l47 0 
2026-03-10T10:19:26.593 INFO:tasks.workunit.client.1.vm05.stdout:6/200: mkdir dd/d36/d3f/d12/d44/d2a/d3d/d48 0 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.1.vm05.stdout:7/280: write d5/dd/f23 [527230,81] 0 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.1.vm05.stdout:7/281: chown d5/d26/f4d 1442 1 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.1.vm05.stdout:6/201: dwrite dd/d36/d3f/d12/d44/d30/f42 [0,4194304] 0 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.0.vm02.stdout:4/372: creat d1/d41/d5e/d78/d1a/d49/f7a x:0 0 0 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.0.vm02.stdout:5/441: rename d1/db/d11/d16/c1b to d1/db/d11/d84/c98 0 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.0.vm02.stdout:5/442: chown d1/db/d11/d62/d67 3563 1 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.0.vm02.stdout:9/237: getdents da/d3c/d4c/d2c/d34/d35 0 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.0.vm02.stdout:4/373: readlink d1/l4e 0 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.0.vm02.stdout:3/275: rename d1/d6/f32 to d1/d8/d44/f5f 0 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.0.vm02.stdout:3/276: truncate d1/f54 402068 0 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.0.vm02.stdout:3/277: stat d1/d6/l57 0 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.0.vm02.stdout:3/278: stat d1/d8/d21/c3e 0 2026-03-10T10:19:26.593 INFO:tasks.workunit.client.0.vm02.stdout:5/443: creat d1/db/d11/d84/d40/d4f/f99 x:0 0 0 2026-03-10T10:19:26.594 INFO:tasks.workunit.client.1.vm05.stdout:8/181: sync 2026-03-10T10:19:26.597 INFO:tasks.workunit.client.1.vm05.stdout:7/282: creat d5/d1d/f53 x:0 0 0 2026-03-10T10:19:26.600 INFO:tasks.workunit.client.0.vm02.stdout:9/238: mkdir da/d3c/d4c/d50 0 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.1.vm05.stdout:7/283: fsync d5/d1d/f32 0 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.1.vm05.stdout:5/284: dread da/f10 [0,4194304] 0 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.1.vm05.stdout:1/218: getdents 
d4 0 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.1.vm05.stdout:3/312: truncate dd/d15/d1f/f2b 2800723 0 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.1.vm05.stdout:8/182: dwrite d7/d14/f22 [0,4194304] 0 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.1.vm05.stdout:0/225: dwrite d1/d2/d9/d31/d13/d17/f1b [0,4194304] 0 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.1.vm05.stdout:3/313: dread dd/d20/f50 [0,4194304] 0 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.0.vm02.stdout:0/314: link d9/d18/d1a/d3c/c54 d9/d34/d3d/c64 0 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.0.vm02.stdout:4/374: creat d1/d32/f7b x:0 0 0 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.0.vm02.stdout:6/261: dread d0/d8/d29/d2f/f4e [0,4194304] 0 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.0.vm02.stdout:5/444: rename d1/l5e to d1/db/d11/d13/d28/d37/d3d/l9a 0 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.0.vm02.stdout:9/239: dread - da/d3c/d4c/d38/f47 zero size 2026-03-10T10:19:26.624 INFO:tasks.workunit.client.0.vm02.stdout:0/315: mkdir d9/d34/d3d/d65 0 2026-03-10T10:19:26.625 INFO:tasks.workunit.client.0.vm02.stdout:4/375: rename d1/d41/d5e/d78/d55/f5f to d1/d41/d5e/d78/d55/f7c 0 2026-03-10T10:19:26.627 INFO:tasks.workunit.client.0.vm02.stdout:6/262: creat d0/d8/d29/f59 x:0 0 0 2026-03-10T10:19:26.631 INFO:tasks.workunit.client.1.vm05.stdout:0/226: dwrite d1/f38 [0,4194304] 0 2026-03-10T10:19:26.633 INFO:tasks.workunit.client.1.vm05.stdout:1/219: rename d4/df/c30 to d4/c45 0 2026-03-10T10:19:26.640 INFO:tasks.workunit.client.0.vm02.stdout:0/316: truncate d9/d18/f2a 231430 0 2026-03-10T10:19:26.650 INFO:tasks.workunit.client.1.vm05.stdout:3/314: creat dd/d15/d4c/f73 x:0 0 0 2026-03-10T10:19:26.650 INFO:tasks.workunit.client.1.vm05.stdout:3/315: stat dd/d15/d24/d2c/d3b/f55 0 2026-03-10T10:19:26.650 INFO:tasks.workunit.client.0.vm02.stdout:9/240: mknod da/d3c/d4c/d38/d4a/c51 0 2026-03-10T10:19:26.650 INFO:tasks.workunit.client.0.vm02.stdout:9/241: write 
da/d3c/d4c/f49 [660411,76666] 0 2026-03-10T10:19:26.650 INFO:tasks.workunit.client.0.vm02.stdout:5/445: link d1/db/d11/d13/f4e d1/db/d11/d13/d28/d37/d3d/f9b 0 2026-03-10T10:19:26.651 INFO:tasks.workunit.client.0.vm02.stdout:0/317: dread d9/f28 [0,4194304] 0 2026-03-10T10:19:26.654 INFO:tasks.workunit.client.0.vm02.stdout:9/242: symlink da/d3c/d4c/d38/d4a/l52 0 2026-03-10T10:19:26.654 INFO:tasks.workunit.client.1.vm05.stdout:0/227: dwrite d1/d2/d9/f40 [0,4194304] 0 2026-03-10T10:19:26.654 INFO:tasks.workunit.client.0.vm02.stdout:6/263: creat d0/d8/f5a x:0 0 0 2026-03-10T10:19:26.655 INFO:tasks.workunit.client.1.vm05.stdout:3/316: mkdir dd/d15/d24/d74 0 2026-03-10T10:19:26.657 INFO:tasks.workunit.client.0.vm02.stdout:0/318: unlink d9/d18/d1a/d22/d24/c3e 0 2026-03-10T10:19:26.661 INFO:tasks.workunit.client.1.vm05.stdout:0/228: rename d1/l46 to d1/d2/d9/d31/d13/d15/l4b 0 2026-03-10T10:19:26.677 INFO:tasks.workunit.client.1.vm05.stdout:8/183: creat d7/f37 x:0 0 0 2026-03-10T10:19:26.678 INFO:tasks.workunit.client.1.vm05.stdout:8/184: truncate d7/d14/f23 733580 0 2026-03-10T10:19:26.678 INFO:tasks.workunit.client.1.vm05.stdout:0/229: unlink d1/d7/f3c 0 2026-03-10T10:19:26.679 INFO:tasks.workunit.client.0.vm02.stdout:6/264: dwrite d0/d8/d9/f13 [0,4194304] 0 2026-03-10T10:19:26.679 INFO:tasks.workunit.client.0.vm02.stdout:6/265: symlink d0/d8/d9/d31/d32/l5b 0 2026-03-10T10:19:26.679 INFO:tasks.workunit.client.0.vm02.stdout:6/266: dread - d0/d8/d29/f59 zero size 2026-03-10T10:19:26.679 INFO:tasks.workunit.client.0.vm02.stdout:6/267: fsync d0/f28 0 2026-03-10T10:19:26.679 INFO:tasks.workunit.client.0.vm02.stdout:6/268: write d0/d8/d9/f4f [536021,4950] 0 2026-03-10T10:19:26.679 INFO:tasks.workunit.client.0.vm02.stdout:6/269: write d0/d8/d9/f13 [331911,55660] 0 2026-03-10T10:19:26.683 INFO:tasks.workunit.client.0.vm02.stdout:6/270: mknod d0/d3a/c5c 0 2026-03-10T10:19:26.684 INFO:tasks.workunit.client.0.vm02.stdout:6/271: chown d0/d8/d29/f59 3 1 2026-03-10T10:19:26.684 
INFO:tasks.workunit.client.1.vm05.stdout:7/284: sync 2026-03-10T10:19:26.685 INFO:tasks.workunit.client.1.vm05.stdout:0/230: creat d1/d2/d9/d31/d13/f4c x:0 0 0 2026-03-10T10:19:26.687 INFO:tasks.workunit.client.1.vm05.stdout:4/214: fsync d1/d31/dc/f33 0 2026-03-10T10:19:26.689 INFO:tasks.workunit.client.0.vm02.stdout:6/272: getdents d0/d8/d29/d52 0 2026-03-10T10:19:26.690 INFO:tasks.workunit.client.0.vm02.stdout:6/273: stat d0/d8/d29/d2f/f33 0 2026-03-10T10:19:26.694 INFO:tasks.workunit.client.0.vm02.stdout:6/274: dwrite d0/f4c [4194304,4194304] 0 2026-03-10T10:19:26.732 INFO:tasks.workunit.client.1.vm05.stdout:1/220: dread d4/dd/f15 [0,4194304] 0 2026-03-10T10:19:26.733 INFO:tasks.workunit.client.1.vm05.stdout:0/231: rmdir d1/d2/d39/d3d 39 2026-03-10T10:19:26.733 INFO:tasks.workunit.client.1.vm05.stdout:0/232: write d1/d2/d9/d31/d13/f3e [793925,80608] 0 2026-03-10T10:19:26.733 INFO:tasks.workunit.client.1.vm05.stdout:4/215: rename d1/fb to d1/d3/f4a 0 2026-03-10T10:19:26.733 INFO:tasks.workunit.client.1.vm05.stdout:4/216: readlink d1/d31/l29 0 2026-03-10T10:19:26.733 INFO:tasks.workunit.client.0.vm02.stdout:6/275: write d0/d8/d29/f59 [163758,44520] 0 2026-03-10T10:19:26.734 INFO:tasks.workunit.client.0.vm02.stdout:6/276: dread d0/d8/d29/d2f/d4b/f53 [0,4194304] 0 2026-03-10T10:19:26.734 INFO:tasks.workunit.client.0.vm02.stdout:6/277: creat d0/f5d x:0 0 0 2026-03-10T10:19:26.734 INFO:tasks.workunit.client.0.vm02.stdout:6/278: symlink d0/d8/d9/l5e 0 2026-03-10T10:19:26.734 INFO:tasks.workunit.client.0.vm02.stdout:6/279: dwrite d0/d8/d9/d31/d32/f36 [0,4194304] 0 2026-03-10T10:19:26.734 INFO:tasks.workunit.client.0.vm02.stdout:6/280: mknod d0/d8/d29/d2f/d50/c5f 0 2026-03-10T10:19:26.734 INFO:tasks.workunit.client.0.vm02.stdout:6/281: rmdir d0/d8/d9/d31/d32 39 2026-03-10T10:19:26.739 INFO:tasks.workunit.client.0.vm02.stdout:6/282: dwrite d0/d8/d29/d2f/d4b/f26 [0,4194304] 0 2026-03-10T10:19:26.747 INFO:tasks.workunit.client.0.vm02.stdout:6/283: dwrite d0/d8/d9/f13 
[0,4194304] 0 2026-03-10T10:19:26.748 INFO:tasks.workunit.client.0.vm02.stdout:6/284: write d0/f28 [316113,33562] 0 2026-03-10T10:19:26.750 INFO:tasks.workunit.client.0.vm02.stdout:0/319: sync 2026-03-10T10:19:26.753 INFO:tasks.workunit.client.0.vm02.stdout:6/285: dwrite d0/d3a/f40 [0,4194304] 0 2026-03-10T10:19:26.763 INFO:tasks.workunit.client.0.vm02.stdout:0/320: creat d9/d18/d1a/d46/d5d/f66 x:0 0 0 2026-03-10T10:19:26.768 INFO:tasks.workunit.client.0.vm02.stdout:0/321: mkdir d9/d34/d3d/d67 0 2026-03-10T10:19:26.770 INFO:tasks.workunit.client.0.vm02.stdout:6/286: mkdir d0/d8/d9/d31/d32/d60 0 2026-03-10T10:19:26.771 INFO:tasks.workunit.client.0.vm02.stdout:0/322: dread d9/d18/d1a/d43/d49/f53 [0,4194304] 0 2026-03-10T10:19:26.771 INFO:tasks.workunit.client.0.vm02.stdout:0/323: chown d9/d34/l5a 25601 1 2026-03-10T10:19:26.771 INFO:tasks.workunit.client.0.vm02.stdout:6/287: write d0/f21 [2471202,6816] 0 2026-03-10T10:19:26.776 INFO:tasks.workunit.client.1.vm05.stdout:7/285: sync 2026-03-10T10:19:26.780 INFO:tasks.workunit.client.0.vm02.stdout:6/288: dwrite d0/d8/d29/d2f/f33 [8388608,4194304] 0 2026-03-10T10:19:26.791 INFO:tasks.workunit.client.0.vm02.stdout:6/289: dwrite d0/d8/d29/d2f/f33 [0,4194304] 0 2026-03-10T10:19:26.791 INFO:tasks.workunit.client.0.vm02.stdout:6/290: chown d0/d8/d9/d31/d32 416478 1 2026-03-10T10:19:26.791 INFO:tasks.workunit.client.1.vm05.stdout:1/221: sync 2026-03-10T10:19:26.791 INFO:tasks.workunit.client.1.vm05.stdout:0/233: sync 2026-03-10T10:19:26.791 INFO:tasks.workunit.client.1.vm05.stdout:0/234: read - d1/d2/d9/d31/d13/f48 zero size 2026-03-10T10:19:26.791 INFO:tasks.workunit.client.1.vm05.stdout:7/286: mknod d5/d1d/d29/d3e/c54 0 2026-03-10T10:19:26.799 INFO:tasks.workunit.client.0.vm02.stdout:6/291: creat d0/d8/d29/d2f/f61 x:0 0 0 2026-03-10T10:19:26.799 INFO:tasks.workunit.client.1.vm05.stdout:7/287: unlink d5/l9 0 2026-03-10T10:19:26.800 INFO:tasks.workunit.client.1.vm05.stdout:7/288: write d5/f22 [1298267,93409] 0 
2026-03-10T10:19:26.801 INFO:tasks.workunit.client.1.vm05.stdout:0/235: link d1/d2/d9/d31/d13/l1f d1/d2/d9/d31/d12/d41/l4d 0 2026-03-10T10:19:26.801 INFO:tasks.workunit.client.1.vm05.stdout:7/289: read d5/d17/f19 [496652,75096] 0 2026-03-10T10:19:26.804 INFO:tasks.workunit.client.1.vm05.stdout:0/236: mkdir d1/d2/d9/d31/d13/d15/d4e 0 2026-03-10T10:19:26.808 INFO:tasks.workunit.client.1.vm05.stdout:0/237: creat d1/d2/d9/d31/d13/d2f/d49/f4f x:0 0 0 2026-03-10T10:19:26.810 INFO:tasks.workunit.client.0.vm02.stdout:6/292: dread d0/f1c [0,4194304] 0 2026-03-10T10:19:26.811 INFO:tasks.workunit.client.1.vm05.stdout:0/238: mkdir d1/d2/d9/d50 0 2026-03-10T10:19:26.811 INFO:tasks.workunit.client.0.vm02.stdout:6/293: symlink d0/d3a/l62 0 2026-03-10T10:19:26.812 INFO:tasks.workunit.client.0.vm02.stdout:6/294: write d0/f1c [2911455,117324] 0 2026-03-10T10:19:26.812 INFO:tasks.workunit.client.1.vm05.stdout:0/239: write d1/d2/d9/d31/f36 [911235,75848] 0 2026-03-10T10:19:26.812 INFO:tasks.workunit.client.0.vm02.stdout:6/295: write d0/f4c [5329090,125580] 0 2026-03-10T10:19:26.813 INFO:tasks.workunit.client.1.vm05.stdout:1/222: sync 2026-03-10T10:19:26.817 INFO:tasks.workunit.client.1.vm05.stdout:7/290: read d5/d26/f39 [79950,23873] 0 2026-03-10T10:19:26.825 INFO:tasks.workunit.client.1.vm05.stdout:0/240: mkdir d1/d2/d9/d31/d13/d15/d4e/d51 0 2026-03-10T10:19:26.828 INFO:tasks.workunit.client.1.vm05.stdout:1/223: truncate d4/d39/f3b 722029 0 2026-03-10T10:19:26.829 INFO:tasks.workunit.client.1.vm05.stdout:7/291: dwrite d5/f34 [0,4194304] 0 2026-03-10T10:19:26.829 INFO:tasks.workunit.client.1.vm05.stdout:1/224: chown d4/l9 179764 1 2026-03-10T10:19:26.830 INFO:tasks.workunit.client.1.vm05.stdout:7/292: creat d5/d1d/d20/d2d/f55 x:0 0 0 2026-03-10T10:19:26.831 INFO:tasks.workunit.client.1.vm05.stdout:7/293: write d5/dd/f1a [3270071,87348] 0 2026-03-10T10:19:26.841 INFO:tasks.workunit.client.1.vm05.stdout:7/294: dwrite d5/f13 [4194304,4194304] 0 2026-03-10T10:19:26.845 
INFO:tasks.workunit.client.1.vm05.stdout:2/253: truncate db/f23 1221227 0 2026-03-10T10:19:26.847 INFO:tasks.workunit.client.0.vm02.stdout:7/269: write d1/dc/f25 [284741,57624] 0 2026-03-10T10:19:26.854 INFO:tasks.workunit.client.0.vm02.stdout:1/281: write d4/da/d1a/d22/f23 [201089,113379] 0 2026-03-10T10:19:26.855 INFO:tasks.workunit.client.1.vm05.stdout:9/231: getdents d0/df/d11 0 2026-03-10T10:19:26.857 INFO:tasks.workunit.client.1.vm05.stdout:2/254: rmdir db/d12 39 2026-03-10T10:19:26.858 INFO:tasks.workunit.client.1.vm05.stdout:2/255: stat db/d28/f35 0 2026-03-10T10:19:26.858 INFO:tasks.workunit.client.1.vm05.stdout:2/256: chown db/d2d/l3e 271 1 2026-03-10T10:19:26.858 INFO:tasks.workunit.client.1.vm05.stdout:9/232: dread d0/df/d11/f2d [0,4194304] 0 2026-03-10T10:19:26.863 INFO:tasks.workunit.client.1.vm05.stdout:8/185: fsync d7/d14/d15/f1f 0 2026-03-10T10:19:26.869 INFO:tasks.workunit.client.1.vm05.stdout:2/257: read f1 [1251165,9623] 0 2026-03-10T10:19:26.870 INFO:tasks.workunit.client.1.vm05.stdout:2/258: dread - db/f4a zero size 2026-03-10T10:19:26.871 INFO:tasks.workunit.client.1.vm05.stdout:8/186: dread d7/d14/f22 [0,4194304] 0 2026-03-10T10:19:26.876 INFO:tasks.workunit.client.1.vm05.stdout:8/187: rmdir d7/d14 39 2026-03-10T10:19:26.876 INFO:tasks.workunit.client.1.vm05.stdout:2/259: dwrite db/d28/f3f [0,4194304] 0 2026-03-10T10:19:26.885 INFO:tasks.workunit.client.0.vm02.stdout:8/332: write d1/d1c/d23/d25/f3d [482427,27586] 0 2026-03-10T10:19:26.885 INFO:tasks.workunit.client.0.vm02.stdout:8/333: stat d1/d2/c48 0 2026-03-10T10:19:26.886 INFO:tasks.workunit.client.0.vm02.stdout:8/334: chown d1/d1c/d23/d3e/f5a 1879712 1 2026-03-10T10:19:26.887 INFO:tasks.workunit.client.1.vm05.stdout:2/260: symlink db/d12/l4c 0 2026-03-10T10:19:26.896 INFO:tasks.workunit.client.0.vm02.stdout:8/335: getdents d1/d1c/d43 0 2026-03-10T10:19:26.901 INFO:tasks.workunit.client.1.vm05.stdout:2/261: link db/d12/f29 db/d1c/d40/f4d 0 2026-03-10T10:19:26.903 
INFO:tasks.workunit.client.0.vm02.stdout:8/336: dwrite d1/d2/f67 [0,4194304] 0 2026-03-10T10:19:26.906 INFO:tasks.workunit.client.1.vm05.stdout:2/262: dwrite db/f36 [8388608,4194304] 0 2026-03-10T10:19:26.916 INFO:tasks.workunit.client.1.vm05.stdout:2/263: dwrite db/f36 [8388608,4194304] 0 2026-03-10T10:19:26.919 INFO:tasks.workunit.client.1.vm05.stdout:2/264: write db/f24 [1567640,73187] 0 2026-03-10T10:19:26.921 INFO:tasks.workunit.client.0.vm02.stdout:8/337: unlink d1/l4d 0 2026-03-10T10:19:26.927 INFO:tasks.workunit.client.1.vm05.stdout:2/265: unlink db/c42 0 2026-03-10T10:19:26.937 INFO:tasks.workunit.client.0.vm02.stdout:8/338: dread d1/d1c/f34 [0,4194304] 0 2026-03-10T10:19:26.938 INFO:tasks.workunit.client.1.vm05.stdout:2/266: truncate db/d12/f29 2155415 0 2026-03-10T10:19:26.942 INFO:tasks.workunit.client.1.vm05.stdout:2/267: unlink db/d12/f29 0 2026-03-10T10:19:26.946 INFO:tasks.workunit.client.0.vm02.stdout:8/339: mknod d1/d1c/d23/c69 0 2026-03-10T10:19:26.946 INFO:tasks.workunit.client.0.vm02.stdout:8/340: chown d1/d1c/d24/d35 35572 1 2026-03-10T10:19:26.949 INFO:tasks.workunit.client.0.vm02.stdout:8/341: mkdir d1/d1c/d43/d6a 0 2026-03-10T10:19:26.950 INFO:tasks.workunit.client.0.vm02.stdout:8/342: fdatasync d1/d1c/d23/d25/f4c 0 2026-03-10T10:19:26.953 INFO:tasks.workunit.client.0.vm02.stdout:8/343: creat d1/d1c/d24/f6b x:0 0 0 2026-03-10T10:19:26.955 INFO:tasks.workunit.client.0.vm02.stdout:8/344: mknod d1/d1c/d24/d35/c6c 0 2026-03-10T10:19:26.956 INFO:tasks.workunit.client.0.vm02.stdout:8/345: creat d1/f6d x:0 0 0 2026-03-10T10:19:27.008 INFO:tasks.workunit.client.1.vm05.stdout:7/295: dread d5/d17/f40 [0,4194304] 0 2026-03-10T10:19:27.012 INFO:tasks.workunit.client.1.vm05.stdout:7/296: creat d5/d1d/f56 x:0 0 0 2026-03-10T10:19:27.014 INFO:tasks.workunit.client.1.vm05.stdout:7/297: mknod d5/d1d/d29/c57 0 2026-03-10T10:19:27.026 INFO:tasks.workunit.client.1.vm05.stdout:7/298: dwrite d5/d26/f2c [0,4194304] 0 2026-03-10T10:19:27.026 
INFO:tasks.workunit.client.1.vm05.stdout:7/299: unlink d5/dd/f23 0 2026-03-10T10:19:27.028 INFO:tasks.workunit.client.1.vm05.stdout:7/300: link d5/d17/f19 d5/d1d/d20/d2d/f58 0 2026-03-10T10:19:27.029 INFO:tasks.workunit.client.1.vm05.stdout:7/301: readlink d5/l2b 0 2026-03-10T10:19:27.030 INFO:tasks.workunit.client.1.vm05.stdout:7/302: readlink l4 0 2026-03-10T10:19:27.031 INFO:tasks.workunit.client.1.vm05.stdout:7/303: dread - d5/d17/f52 zero size 2026-03-10T10:19:27.037 INFO:tasks.workunit.client.1.vm05.stdout:7/304: mknod d5/d1d/d20/c59 0 2026-03-10T10:19:27.038 INFO:tasks.workunit.client.1.vm05.stdout:7/305: creat d5/d26/f5a x:0 0 0 2026-03-10T10:19:27.042 INFO:tasks.workunit.client.0.vm02.stdout:2/342: dwrite d0/d1a/d24/f34 [0,4194304] 0 2026-03-10T10:19:27.046 INFO:tasks.workunit.client.0.vm02.stdout:2/343: symlink d0/d1a/d49/d5e/l74 0 2026-03-10T10:19:27.046 INFO:tasks.workunit.client.1.vm05.stdout:7/306: symlink d5/d1d/d20/d3b/l5b 0 2026-03-10T10:19:27.050 INFO:tasks.workunit.client.0.vm02.stdout:2/344: dwrite d0/d1a/d49/f54 [0,4194304] 0 2026-03-10T10:19:27.054 INFO:tasks.workunit.client.1.vm05.stdout:7/307: write d5/d1d/f31 [4495389,71490] 0 2026-03-10T10:19:27.056 INFO:tasks.workunit.client.1.vm05.stdout:7/308: creat d5/d1d/d29/f5c x:0 0 0 2026-03-10T10:19:27.100 INFO:tasks.workunit.client.1.vm05.stdout:6/202: write f3 [5302960,117145] 0 2026-03-10T10:19:27.102 INFO:tasks.workunit.client.1.vm05.stdout:6/203: truncate dd/f29 5052439 0 2026-03-10T10:19:27.104 INFO:tasks.workunit.client.1.vm05.stdout:6/204: mknod dd/d36/d3f/d12/d24/c49 0 2026-03-10T10:19:27.108 INFO:tasks.workunit.client.1.vm05.stdout:6/205: dwrite dd/d36/d3f/d12/d44/d30/f37 [0,4194304] 0 2026-03-10T10:19:27.120 INFO:tasks.workunit.client.1.vm05.stdout:6/206: mkdir dd/d36/d3f/d12/d44/d30/d4a 0 2026-03-10T10:19:27.121 INFO:tasks.workunit.client.1.vm05.stdout:6/207: unlink dd/d1b/f3c 0 2026-03-10T10:19:27.124 INFO:tasks.workunit.client.1.vm05.stdout:6/208: dread dd/d36/d3f/d12/f35 [0,4194304] 
0 2026-03-10T10:19:27.129 INFO:tasks.workunit.client.1.vm05.stdout:6/209: dwrite dd/d36/d3f/d12/d44/d30/f42 [4194304,4194304] 0 2026-03-10T10:19:27.137 INFO:tasks.workunit.client.1.vm05.stdout:6/210: dread dd/d36/d3f/d12/f35 [0,4194304] 0 2026-03-10T10:19:27.138 INFO:tasks.workunit.client.1.vm05.stdout:6/211: chown f3 37 1 2026-03-10T10:19:27.141 INFO:tasks.workunit.client.1.vm05.stdout:6/212: rename dd/d36/d3f/d12/d44/d30/f37 to dd/d36/d3f/d12/d44/d2a/d3d/d48/f4b 0 2026-03-10T10:19:27.144 INFO:tasks.workunit.client.1.vm05.stdout:6/213: write dd/d36/d3f/d12/f35 [603892,12918] 0 2026-03-10T10:19:27.150 INFO:tasks.workunit.client.1.vm05.stdout:6/214: dwrite f3 [4194304,4194304] 0 2026-03-10T10:19:27.163 INFO:tasks.workunit.client.1.vm05.stdout:6/215: dwrite dd/f29 [0,4194304] 0 2026-03-10T10:19:27.171 INFO:tasks.workunit.client.0.vm02.stdout:3/279: truncate d1/d6/f49 1502931 0 2026-03-10T10:19:27.179 INFO:tasks.workunit.client.0.vm02.stdout:4/376: write d1/d10/db/f15 [4287701,11291] 0 2026-03-10T10:19:27.181 INFO:tasks.workunit.client.1.vm05.stdout:5/285: truncate da/db/f29 2780674 0 2026-03-10T10:19:27.181 INFO:tasks.workunit.client.1.vm05.stdout:3/317: fsync dd/d15/d4c/f73 0 2026-03-10T10:19:27.184 INFO:tasks.workunit.client.0.vm02.stdout:5/446: write d1/db/d11/d13/f1c [1551781,130754] 0 2026-03-10T10:19:27.186 INFO:tasks.workunit.client.0.vm02.stdout:4/377: creat d1/d32/d3e/f7d x:0 0 0 2026-03-10T10:19:27.188 INFO:tasks.workunit.client.1.vm05.stdout:3/318: creat dd/d15/d1f/f75 x:0 0 0 2026-03-10T10:19:27.219 INFO:tasks.workunit.client.1.vm05.stdout:5/286: mkdir da/d63 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.0.vm02.stdout:9/243: write da/f13 [1415663,59964] 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.0.vm02.stdout:9/244: readlink da/d3c/d4c/d2c/d34/d35/l46 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.0.vm02.stdout:4/378: dread d1/d32/f46 [0,4194304] 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.0.vm02.stdout:4/379: readlink 
d1/d41/d5e/d78/d37/l17 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.0.vm02.stdout:9/245: mkdir da/d3c/d53 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.0.vm02.stdout:9/246: creat da/d3c/d4c/d38/d4a/f54 x:0 0 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.0.vm02.stdout:9/247: stat da/d3c/d4c/f17 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.0.vm02.stdout:9/248: dwrite da/d3c/d4c/f49 [0,4194304] 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.1.vm05.stdout:0/241: dread - d1/d7/f27 zero size 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.1.vm05.stdout:0/242: chown d1/d2/d9/d31/d13/d17/l23 34 1 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.1.vm05.stdout:5/287: fsync f5 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.1.vm05.stdout:0/243: read - d1/d2/d9/d31/d12/d20/f2e zero size 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.1.vm05.stdout:0/244: creat d1/d2/d9/d31/d13/d15/f52 x:0 0 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.1.vm05.stdout:0/245: write d1/d2/d9/d31/d13/f48 [174137,59528] 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.1.vm05.stdout:5/288: getdents da/db/d26/d5c/d4b 0 2026-03-10T10:19:27.220 INFO:tasks.workunit.client.1.vm05.stdout:5/289: dwrite da/db/d26/d35/f1c [0,4194304] 0 2026-03-10T10:19:27.224 INFO:tasks.workunit.client.1.vm05.stdout:5/290: write da/db/f1e [4682077,12936] 0 2026-03-10T10:19:27.226 INFO:tasks.workunit.client.1.vm05.stdout:0/246: dwrite d1/d2/d9/f1d [4194304,4194304] 0 2026-03-10T10:19:27.230 INFO:tasks.workunit.client.1.vm05.stdout:0/247: write d1/d2/d9/f1d [5067356,63722] 0 2026-03-10T10:19:27.236 INFO:tasks.workunit.client.1.vm05.stdout:5/291: dread da/db/d26/d35/f31 [0,4194304] 0 2026-03-10T10:19:27.240 INFO:tasks.workunit.client.1.vm05.stdout:0/248: link d1/d2/d9/d31/d13/d15/l2c d1/d2/d9/d31/d12/d20/l53 0 2026-03-10T10:19:27.260 INFO:tasks.workunit.client.0.vm02.stdout:5/447: dread d1/db/d11/d84/d40/d4f/d5f/f6b [0,4194304] 0 2026-03-10T10:19:27.261 
INFO:tasks.workunit.client.0.vm02.stdout:5/448: mkdir d1/d9c 0 2026-03-10T10:19:27.261 INFO:tasks.workunit.client.0.vm02.stdout:5/449: dread - d1/db/d11/f7d zero size 2026-03-10T10:19:27.292 INFO:tasks.workunit.client.0.vm02.stdout:4/380: sync 2026-03-10T10:19:27.292 INFO:tasks.workunit.client.0.vm02.stdout:4/381: stat d1/d41/d5e/d78/d37/f2e 0 2026-03-10T10:19:27.293 INFO:tasks.workunit.client.0.vm02.stdout:4/382: fsync d1/d32/f46 0 2026-03-10T10:19:27.294 INFO:tasks.workunit.client.0.vm02.stdout:4/383: mkdir d1/d41/d7e 0 2026-03-10T10:19:27.298 INFO:tasks.workunit.client.0.vm02.stdout:4/384: rename d1/d41/d5e/d78/d37/d63 to d1/d41/d5e/d78/d7f 0 2026-03-10T10:19:27.300 INFO:tasks.workunit.client.0.vm02.stdout:4/385: getdents d1/d41/d5e/d78/d1a/d49 0 2026-03-10T10:19:27.301 INFO:tasks.workunit.client.0.vm02.stdout:4/386: dread - d1/d52/d53/f70 zero size 2026-03-10T10:19:27.302 INFO:tasks.workunit.client.0.vm02.stdout:4/387: symlink d1/l80 0 2026-03-10T10:19:27.316 INFO:tasks.workunit.client.1.vm05.stdout:4/217: stat d1/d3/c22 0 2026-03-10T10:19:27.342 INFO:tasks.workunit.client.1.vm05.stdout:4/218: chown d1/f39 530 1 2026-03-10T10:19:27.342 INFO:tasks.workunit.client.1.vm05.stdout:4/219: rmdir d1/d31/dc/d40/d45 39 2026-03-10T10:19:27.342 INFO:tasks.workunit.client.1.vm05.stdout:1/225: truncate d4/df/f11 451336 0 2026-03-10T10:19:27.342 INFO:tasks.workunit.client.1.vm05.stdout:1/226: dwrite d4/d39/d3e/f3f [0,4194304] 0 2026-03-10T10:19:27.342 INFO:tasks.workunit.client.0.vm02.stdout:0/324: truncate d9/d18/d1a/d22/d24/f2f 3882753 0 2026-03-10T10:19:27.342 INFO:tasks.workunit.client.0.vm02.stdout:0/325: fsync d9/d34/d3d/f58 0 2026-03-10T10:19:27.342 INFO:tasks.workunit.client.0.vm02.stdout:0/326: readlink d9/d34/l37 0 2026-03-10T10:19:27.342 INFO:tasks.workunit.client.0.vm02.stdout:0/327: stat d9/d34/d3d/d67 0 2026-03-10T10:19:27.342 INFO:tasks.workunit.client.0.vm02.stdout:6/296: rmdir d0/d8/d29 39 2026-03-10T10:19:27.342 
INFO:tasks.workunit.client.0.vm02.stdout:0/328: rmdir d9/d18/d1a/d43/d49 39 2026-03-10T10:19:27.343 INFO:tasks.workunit.client.0.vm02.stdout:0/329: dread d9/d18/d1a/d43/f45 [0,4194304] 0 2026-03-10T10:19:27.343 INFO:tasks.workunit.client.0.vm02.stdout:6/297: creat d0/d8/d29/d52/f63 x:0 0 0 2026-03-10T10:19:27.343 INFO:tasks.workunit.client.0.vm02.stdout:6/298: write d0/f21 [4459835,104182] 0 2026-03-10T10:19:27.343 INFO:tasks.workunit.client.0.vm02.stdout:6/299: readlink d0/d3a/l62 0 2026-03-10T10:19:27.343 INFO:tasks.workunit.client.0.vm02.stdout:6/300: dread d0/f21 [0,4194304] 0 2026-03-10T10:19:27.343 INFO:tasks.workunit.client.0.vm02.stdout:1/282: write d4/da/f12 [3656795,6267] 0 2026-03-10T10:19:27.343 INFO:tasks.workunit.client.0.vm02.stdout:1/283: dread - d4/da/d27/d38/f5e zero size 2026-03-10T10:19:27.352 INFO:tasks.workunit.client.1.vm05.stdout:9/233: dwrite d0/f2a [4194304,4194304] 0 2026-03-10T10:19:27.358 INFO:tasks.workunit.client.1.vm05.stdout:8/188: dwrite d7/d14/d24/f26 [0,4194304] 0 2026-03-10T10:19:27.358 INFO:tasks.workunit.client.1.vm05.stdout:4/220: getdents d1/d31/dc/d40 0 2026-03-10T10:19:27.359 INFO:tasks.workunit.client.0.vm02.stdout:0/330: rename d9/d18/d1a/d3c/c54 to d9/d18/d1a/c68 0 2026-03-10T10:19:27.369 INFO:tasks.workunit.client.0.vm02.stdout:5/450: dread d1/db/d11/d62/f65 [0,4194304] 0 2026-03-10T10:19:27.370 INFO:tasks.workunit.client.0.vm02.stdout:6/301: sync 2026-03-10T10:19:27.376 INFO:tasks.workunit.client.0.vm02.stdout:1/284: dread d4/da/f13 [0,4194304] 0 2026-03-10T10:19:27.377 INFO:tasks.workunit.client.1.vm05.stdout:0/249: sync 2026-03-10T10:19:27.379 INFO:tasks.workunit.client.1.vm05.stdout:0/250: chown d1/d2/d9/d31/d12/f1e 647661 1 2026-03-10T10:19:27.387 INFO:tasks.workunit.client.1.vm05.stdout:2/268: truncate db/f26 5265586 0 2026-03-10T10:19:27.389 INFO:tasks.workunit.client.1.vm05.stdout:8/189: rmdir d7/d2f 39 2026-03-10T10:19:27.389 INFO:tasks.workunit.client.1.vm05.stdout:8/190: chown d7/l1b 7140 1 
2026-03-10T10:19:27.393 INFO:tasks.workunit.client.0.vm02.stdout:1/285: dwrite d4/da/d27/d38/f4e [0,4194304] 0 2026-03-10T10:19:27.395 INFO:tasks.workunit.client.1.vm05.stdout:1/227: write d4/d20/f2d [891518,103823] 0 2026-03-10T10:19:27.396 INFO:tasks.workunit.client.1.vm05.stdout:4/221: rmdir d1/d3 39 2026-03-10T10:19:27.397 INFO:tasks.workunit.client.1.vm05.stdout:4/222: write d1/d31/dc/f3a [97826,48048] 0 2026-03-10T10:19:27.398 INFO:tasks.workunit.client.0.vm02.stdout:5/451: symlink d1/db/d11/l9d 0 2026-03-10T10:19:27.406 INFO:tasks.workunit.client.0.vm02.stdout:7/270: dread d1/f5 [0,4194304] 0 2026-03-10T10:19:27.412 INFO:tasks.workunit.client.0.vm02.stdout:8/346: write d1/d1c/f34 [2879318,15181] 0 2026-03-10T10:19:27.417 INFO:tasks.workunit.client.1.vm05.stdout:7/309: getdents d5/d26 0 2026-03-10T10:19:27.423 INFO:tasks.workunit.client.1.vm05.stdout:8/191: creat d7/d14/f38 x:0 0 0 2026-03-10T10:19:27.423 INFO:tasks.workunit.client.0.vm02.stdout:1/286: fsync d4/da/d27/d38/f5e 0 2026-03-10T10:19:27.423 INFO:tasks.workunit.client.0.vm02.stdout:6/302: creat d0/d8/f64 x:0 0 0 2026-03-10T10:19:27.424 INFO:tasks.workunit.client.0.vm02.stdout:5/452: creat d1/d6a/f9e x:0 0 0 2026-03-10T10:19:27.427 INFO:tasks.workunit.client.1.vm05.stdout:2/269: dwrite db/d28/f30 [4194304,4194304] 0 2026-03-10T10:19:27.434 INFO:tasks.workunit.client.1.vm05.stdout:7/310: mkdir d5/d1d/d20/d2d/d5d 0 2026-03-10T10:19:27.434 INFO:tasks.workunit.client.1.vm05.stdout:7/311: dread - d5/d17/f52 zero size 2026-03-10T10:19:27.436 INFO:tasks.workunit.client.1.vm05.stdout:7/312: read d5/d17/f19 [3400435,8591] 0 2026-03-10T10:19:27.442 INFO:tasks.workunit.client.0.vm02.stdout:5/453: dwrite d1/db/f56 [0,4194304] 0 2026-03-10T10:19:27.444 INFO:tasks.workunit.client.0.vm02.stdout:5/454: chown d1/db/d11/l9d 3 1 2026-03-10T10:19:27.447 INFO:tasks.workunit.client.1.vm05.stdout:0/251: rename d1/d7 to d1/d2/d9/d31/d54 0 2026-03-10T10:19:27.450 INFO:tasks.workunit.client.0.vm02.stdout:2/345: dwrite d0/f9 
[0,4194304] 0 2026-03-10T10:19:27.457 INFO:tasks.workunit.client.0.vm02.stdout:2/346: readlink d0/d1a/d49/d5e/l74 0 2026-03-10T10:19:27.457 INFO:tasks.workunit.client.0.vm02.stdout:3/280: write d1/f12 [4231059,117044] 0 2026-03-10T10:19:27.457 INFO:tasks.workunit.client.0.vm02.stdout:3/281: dread d1/f54 [0,4194304] 0 2026-03-10T10:19:27.460 INFO:tasks.workunit.client.0.vm02.stdout:1/287: truncate d4/da/d27/f35 3688360 0 2026-03-10T10:19:27.461 INFO:tasks.workunit.client.1.vm05.stdout:6/216: dwrite dd/d1b/f1d [0,4194304] 0 2026-03-10T10:19:27.461 INFO:tasks.workunit.client.0.vm02.stdout:7/271: creat d1/d33/f53 x:0 0 0 2026-03-10T10:19:27.466 INFO:tasks.workunit.client.1.vm05.stdout:9/234: getdents d0/d1/d13/d26 0 2026-03-10T10:19:27.470 INFO:tasks.workunit.client.1.vm05.stdout:3/319: dwrite dd/d15/d24/d2c/f3e [0,4194304] 0 2026-03-10T10:19:27.476 INFO:tasks.workunit.client.0.vm02.stdout:5/455: dwrite d1/db/d11/d13/d28/f91 [4194304,4194304] 0 2026-03-10T10:19:27.484 INFO:tasks.workunit.client.1.vm05.stdout:7/313: creat d5/d26/f5e x:0 0 0 2026-03-10T10:19:27.489 INFO:tasks.workunit.client.1.vm05.stdout:8/192: link d7/f11 d7/d14/d15/f39 0 2026-03-10T10:19:27.490 INFO:tasks.workunit.client.1.vm05.stdout:8/193: read - d7/f37 zero size 2026-03-10T10:19:27.494 INFO:tasks.workunit.client.1.vm05.stdout:4/223: dread d1/d3/f4a [4194304,4194304] 0 2026-03-10T10:19:27.499 INFO:tasks.workunit.client.0.vm02.stdout:3/282: fdatasync d1/f28 0 2026-03-10T10:19:27.505 INFO:tasks.workunit.client.1.vm05.stdout:1/228: creat d4/f46 x:0 0 0 2026-03-10T10:19:27.505 INFO:tasks.workunit.client.1.vm05.stdout:1/229: dread - d4/df/d1c/f38 zero size 2026-03-10T10:19:27.505 INFO:tasks.workunit.client.0.vm02.stdout:9/249: fsync da/d3c/d4c/d38/d4a/f54 0 2026-03-10T10:19:27.505 INFO:tasks.workunit.client.0.vm02.stdout:9/250: dwrite da/f13 [0,4194304] 0 2026-03-10T10:19:27.506 INFO:tasks.workunit.client.1.vm05.stdout:0/252: dread - d1/d2/d9/d31/d13/d2f/d49/f4f zero size 2026-03-10T10:19:27.506 
INFO:tasks.workunit.client.1.vm05.stdout:4/224: dwrite d1/d31/f36 [0,4194304] 0 2026-03-10T10:19:27.506 INFO:tasks.workunit.client.1.vm05.stdout:0/253: chown d1/d2/d9 7435497 1 2026-03-10T10:19:27.507 INFO:tasks.workunit.client.1.vm05.stdout:4/225: write d1/d31/f2f [500242,120286] 0 2026-03-10T10:19:27.508 INFO:tasks.workunit.client.1.vm05.stdout:4/226: dread - d1/f39 zero size 2026-03-10T10:19:27.511 INFO:tasks.workunit.client.0.vm02.stdout:8/347: fsync d1/d1c/d43/f4b 0 2026-03-10T10:19:27.512 INFO:tasks.workunit.client.0.vm02.stdout:8/348: truncate d1/d1c/f1e 4840209 0 2026-03-10T10:19:27.513 INFO:tasks.workunit.client.1.vm05.stdout:5/292: write da/f2e [469194,80035] 0 2026-03-10T10:19:27.515 INFO:tasks.workunit.client.0.vm02.stdout:1/288: truncate d4/f26 49975 0 2026-03-10T10:19:27.519 INFO:tasks.workunit.client.0.vm02.stdout:7/272: rmdir d1/dc 39 2026-03-10T10:19:27.534 INFO:tasks.workunit.client.0.vm02.stdout:2/347: truncate d0/f1b 1948109 0 2026-03-10T10:19:27.534 INFO:tasks.workunit.client.0.vm02.stdout:3/283: rename d1/d20/f40 to d1/d58/f60 0 2026-03-10T10:19:27.534 INFO:tasks.workunit.client.1.vm05.stdout:9/235: dread - d0/df/d11/f33 zero size 2026-03-10T10:19:27.534 INFO:tasks.workunit.client.1.vm05.stdout:7/314: symlink d5/d17/l5f 0 2026-03-10T10:19:27.534 INFO:tasks.workunit.client.1.vm05.stdout:8/194: mkdir d7/d14/d3a 0 2026-03-10T10:19:27.534 INFO:tasks.workunit.client.1.vm05.stdout:1/230: unlink d4/dd/f44 0 2026-03-10T10:19:27.534 INFO:tasks.workunit.client.1.vm05.stdout:0/254: dread - d1/d2/d39/d3d/f44 zero size 2026-03-10T10:19:27.535 INFO:tasks.workunit.client.0.vm02.stdout:1/289: mknod d4/d2c/c60 0 2026-03-10T10:19:27.538 INFO:tasks.workunit.client.0.vm02.stdout:2/348: creat d0/d1a/d49/d5e/d65/f75 x:0 0 0 2026-03-10T10:19:27.539 INFO:tasks.workunit.client.1.vm05.stdout:5/293: creat da/db/d26/f64 x:0 0 0 2026-03-10T10:19:27.540 INFO:tasks.workunit.client.1.vm05.stdout:5/294: chown l0 118431 1 2026-03-10T10:19:27.541 
INFO:tasks.workunit.client.1.vm05.stdout:7/315: mkdir d5/d1d/d29/d60 0 2026-03-10T10:19:27.542 INFO:tasks.workunit.client.1.vm05.stdout:5/295: stat da/db/d26/d35/d38 0 2026-03-10T10:19:27.543 INFO:tasks.workunit.client.1.vm05.stdout:0/255: dread d1/d2/f21 [0,4194304] 0 2026-03-10T10:19:27.543 INFO:tasks.workunit.client.0.vm02.stdout:5/456: sync 2026-03-10T10:19:27.543 INFO:tasks.workunit.client.1.vm05.stdout:8/195: mkdir d7/d14/d15/d3b 0 2026-03-10T10:19:27.545 INFO:tasks.workunit.client.1.vm05.stdout:1/231: dwrite d4/df/d1c/f38 [0,4194304] 0 2026-03-10T10:19:27.551 INFO:tasks.workunit.client.1.vm05.stdout:7/316: truncate d5/d1d/f46 102442 0 2026-03-10T10:19:27.553 INFO:tasks.workunit.client.0.vm02.stdout:1/290: dread - d4/d1b/f34 zero size 2026-03-10T10:19:27.554 INFO:tasks.workunit.client.0.vm02.stdout:1/291: write d4/d2c/d53/f58 [1587304,23105] 0 2026-03-10T10:19:27.559 INFO:tasks.workunit.client.1.vm05.stdout:1/232: dwrite d4/dd/f1f [0,4194304] 0 2026-03-10T10:19:27.562 INFO:tasks.workunit.client.1.vm05.stdout:3/320: sync 2026-03-10T10:19:27.562 INFO:tasks.workunit.client.1.vm05.stdout:3/321: chown dd/d39/d5c 14938003 1 2026-03-10T10:19:27.565 INFO:tasks.workunit.client.0.vm02.stdout:7/273: rename d1/dc/d16/d28/d2d/d36/l3f to d1/d1b/l54 0 2026-03-10T10:19:27.565 INFO:tasks.workunit.client.1.vm05.stdout:3/322: dread dd/d15/d24/d2c/d3b/f55 [0,4194304] 0 2026-03-10T10:19:27.567 INFO:tasks.workunit.client.1.vm05.stdout:4/227: read d1/d31/dc/f3a [53828,64025] 0 2026-03-10T10:19:27.569 INFO:tasks.workunit.client.0.vm02.stdout:9/251: link da/d3c/d4c/d2c/d34/d35/l46 da/d3c/d4c/d2c/d34/d35/l55 0 2026-03-10T10:19:27.569 INFO:tasks.workunit.client.0.vm02.stdout:9/252: dread - da/d3c/d4c/d2c/d34/f36 zero size 2026-03-10T10:19:27.569 INFO:tasks.workunit.client.1.vm05.stdout:5/296: creat da/db/d26/d35/d38/f65 x:0 0 0 2026-03-10T10:19:27.570 INFO:tasks.workunit.client.1.vm05.stdout:8/196: rename d7/d14/f29 to d7/d14/d15/f3c 0 2026-03-10T10:19:27.570 
INFO:tasks.workunit.client.0.vm02.stdout:9/253: write da/f13 [3192561,23261] 0 2026-03-10T10:19:27.571 INFO:tasks.workunit.client.1.vm05.stdout:7/317: mknod d5/d1d/d20/d3b/c61 0 2026-03-10T10:19:27.572 INFO:tasks.workunit.client.0.vm02.stdout:7/274: fdatasync d1/f5 0 2026-03-10T10:19:27.572 INFO:tasks.workunit.client.1.vm05.stdout:7/318: dread - d5/dd/f28 zero size 2026-03-10T10:19:27.574 INFO:tasks.workunit.client.1.vm05.stdout:3/323: dwrite dd/d15/d24/d2c/f3e [0,4194304] 0 2026-03-10T10:19:27.575 INFO:tasks.workunit.client.1.vm05.stdout:7/319: fsync d5/d26/f4d 0 2026-03-10T10:19:27.575 INFO:tasks.workunit.client.1.vm05.stdout:4/228: dwrite d1/d31/f36 [0,4194304] 0 2026-03-10T10:19:27.575 INFO:tasks.workunit.client.1.vm05.stdout:7/320: chown d5/d17/f3c 2 1 2026-03-10T10:19:27.575 INFO:tasks.workunit.client.0.vm02.stdout:5/457: link d1/db/d11/d84/d40/d4f/d5f/d6d/d71/f80 d1/db/d11/d16/d79/d85/f9f 0 2026-03-10T10:19:27.578 INFO:tasks.workunit.client.1.vm05.stdout:1/233: dread d4/dd/f15 [4194304,4194304] 0 2026-03-10T10:19:27.578 INFO:tasks.workunit.client.0.vm02.stdout:9/254: mkdir da/d3c/d4c/d56 0 2026-03-10T10:19:27.578 INFO:tasks.workunit.client.0.vm02.stdout:9/255: stat da/d3c/d4c/d2c/d34/d35/c4f 0 2026-03-10T10:19:27.581 INFO:tasks.workunit.client.0.vm02.stdout:1/292: mknod d4/da/d1a/c61 0 2026-03-10T10:19:27.585 INFO:tasks.workunit.client.1.vm05.stdout:5/297: mknod da/db/c66 0 2026-03-10T10:19:27.586 INFO:tasks.workunit.client.0.vm02.stdout:5/458: rename d1/db/d11/d84/d40/d4f/f99 to d1/db/d11/d16/d79/d85/fa0 0 2026-03-10T10:19:27.586 INFO:tasks.workunit.client.0.vm02.stdout:9/256: creat da/d3c/d4c/d2c/d34/f57 x:0 0 0 2026-03-10T10:19:27.587 INFO:tasks.workunit.client.0.vm02.stdout:1/293: creat d4/da/d1a/d22/f62 x:0 0 0 2026-03-10T10:19:27.590 INFO:tasks.workunit.client.0.vm02.stdout:9/257: dread da/d3c/d4c/f3b [0,4194304] 0 2026-03-10T10:19:27.592 INFO:tasks.workunit.client.1.vm05.stdout:4/229: mkdir d1/d31/d4b 0 2026-03-10T10:19:27.612 
INFO:tasks.workunit.client.0.vm02.stdout:1/294: fsync d4/d1b/f34 0 2026-03-10T10:19:27.612 INFO:tasks.workunit.client.0.vm02.stdout:9/258: fdatasync da/d3c/f3e 0 2026-03-10T10:19:27.612 INFO:tasks.workunit.client.0.vm02.stdout:9/259: rmdir da/d3c/d4c/d2c/d34/d35 39 2026-03-10T10:19:27.612 INFO:tasks.workunit.client.1.vm05.stdout:1/234: unlink d4/l9 0 2026-03-10T10:19:27.612 INFO:tasks.workunit.client.1.vm05.stdout:4/230: chown d1/d3/l4 1218180 1 2026-03-10T10:19:27.612 INFO:tasks.workunit.client.1.vm05.stdout:1/235: mknod d4/d39/d3e/c47 0 2026-03-10T10:19:27.612 INFO:tasks.workunit.client.1.vm05.stdout:1/236: write d4/dd/f1f [4531323,30403] 0 2026-03-10T10:19:27.612 INFO:tasks.workunit.client.1.vm05.stdout:8/197: link d7/l18 d7/l3d 0 2026-03-10T10:19:27.615 INFO:tasks.workunit.client.1.vm05.stdout:8/198: dwrite d7/f11 [4194304,4194304] 0 2026-03-10T10:19:27.620 INFO:tasks.workunit.client.1.vm05.stdout:8/199: mknod d7/d14/d24/c3e 0 2026-03-10T10:19:27.621 INFO:tasks.workunit.client.1.vm05.stdout:8/200: dread - d7/d14/d24/f35 zero size 2026-03-10T10:19:27.623 INFO:tasks.workunit.client.1.vm05.stdout:8/201: write d7/f11 [3180838,14414] 0 2026-03-10T10:19:27.630 INFO:tasks.workunit.client.1.vm05.stdout:1/237: sync 2026-03-10T10:19:27.631 INFO:tasks.workunit.client.1.vm05.stdout:0/256: fsync d1/d2/d9/d31/d13/f48 0 2026-03-10T10:19:27.632 INFO:tasks.workunit.client.1.vm05.stdout:1/238: write d4/f46 [663243,96756] 0 2026-03-10T10:19:27.633 INFO:tasks.workunit.client.1.vm05.stdout:1/239: symlink d4/d20/l48 0 2026-03-10T10:19:27.634 INFO:tasks.workunit.client.1.vm05.stdout:0/257: write d1/d2/d9/d31/d12/d20/f37 [302978,92918] 0 2026-03-10T10:19:27.653 INFO:tasks.workunit.client.1.vm05.stdout:0/258: dread d1/d2/d9/d31/f36 [0,4194304] 0 2026-03-10T10:19:27.703 INFO:tasks.workunit.client.0.vm02.stdout:4/388: write d1/d41/d5e/d78/d37/f14 [397244,130686] 0 2026-03-10T10:19:27.708 INFO:tasks.workunit.client.1.vm05.stdout:1/240: rename d4/df/f11 to d4/d20/f49 0 
2026-03-10T10:19:27.709 INFO:tasks.workunit.client.0.vm02.stdout:4/389: mkdir d1/d41/d5e/d78/d1a/d49/d81 0 2026-03-10T10:19:27.714 INFO:tasks.workunit.client.1.vm05.stdout:1/241: dwrite d4/dd/f1f [0,4194304] 0 2026-03-10T10:19:27.715 INFO:tasks.workunit.client.0.vm02.stdout:4/390: mkdir d1/d41/d5e/d78/d7f/d82 0 2026-03-10T10:19:27.716 INFO:tasks.workunit.client.0.vm02.stdout:4/391: write d1/d52/d53/f70 [1024431,42132] 0 2026-03-10T10:19:27.737 INFO:tasks.workunit.client.1.vm05.stdout:1/242: creat d4/d39/d3e/f4a x:0 0 0 2026-03-10T10:19:27.737 INFO:tasks.workunit.client.1.vm05.stdout:7/321: getdents d5/d1d/d20/d2d 0 2026-03-10T10:19:27.737 INFO:tasks.workunit.client.0.vm02.stdout:6/303: write d0/d8/d9/f14 [544952,44847] 0 2026-03-10T10:19:27.742 INFO:tasks.workunit.client.0.vm02.stdout:6/304: dwrite d0/d8/d29/d2f/d4b/f39 [0,4194304] 0 2026-03-10T10:19:27.747 INFO:tasks.workunit.client.1.vm05.stdout:1/243: dread d4/d39/d3e/f3f [0,4194304] 0 2026-03-10T10:19:27.757 INFO:tasks.workunit.client.1.vm05.stdout:7/322: creat d5/dd/f62 x:0 0 0 2026-03-10T10:19:27.759 INFO:tasks.workunit.client.0.vm02.stdout:6/305: rmdir d0/d8/d9/d31/d32 39 2026-03-10T10:19:27.761 INFO:tasks.workunit.client.1.vm05.stdout:6/217: write fb [3625392,77301] 0 2026-03-10T10:19:27.763 INFO:tasks.workunit.client.1.vm05.stdout:6/218: write dd/d36/d3f/d12/d44/d30/f42 [4646190,116147] 0 2026-03-10T10:19:27.778 INFO:tasks.workunit.client.1.vm05.stdout:2/270: truncate db/f24 4860181 0 2026-03-10T10:19:27.778 INFO:tasks.workunit.client.1.vm05.stdout:7/323: mknod d5/d1d/d29/c63 0 2026-03-10T10:19:27.778 INFO:tasks.workunit.client.0.vm02.stdout:4/392: dread d1/d41/d5e/d78/f3f [4194304,4194304] 0 2026-03-10T10:19:27.778 INFO:tasks.workunit.client.0.vm02.stdout:4/393: unlink d1/d41/d5e/d78/d44/f76 0 2026-03-10T10:19:27.778 INFO:tasks.workunit.client.0.vm02.stdout:4/394: creat d1/d52/d53/f83 x:0 0 0 2026-03-10T10:19:27.779 INFO:tasks.workunit.client.1.vm05.stdout:2/271: rmdir db/d2d 39 2026-03-10T10:19:27.782 
INFO:tasks.workunit.client.1.vm05.stdout:7/324: symlink d5/d1d/l64 0 2026-03-10T10:19:27.783 INFO:tasks.workunit.client.1.vm05.stdout:2/272: fdatasync db/d2d/f47 0 2026-03-10T10:19:27.789 INFO:tasks.workunit.client.1.vm05.stdout:7/325: creat d5/d1d/d29/d3e/f65 x:0 0 0 2026-03-10T10:19:27.790 INFO:tasks.workunit.client.1.vm05.stdout:1/244: rename d4/d37/c43 to d4/d3d/c4b 0 2026-03-10T10:19:27.790 INFO:tasks.workunit.client.1.vm05.stdout:2/273: mkdir db/d4e 0 2026-03-10T10:19:27.791 INFO:tasks.workunit.client.1.vm05.stdout:1/245: creat d4/d3d/f4c x:0 0 0 2026-03-10T10:19:27.794 INFO:tasks.workunit.client.1.vm05.stdout:2/274: mkdir db/d28/d4f 0 2026-03-10T10:19:27.795 INFO:tasks.workunit.client.0.vm02.stdout:4/395: dread d1/d10/db/f20 [0,4194304] 0 2026-03-10T10:19:27.795 INFO:tasks.workunit.client.1.vm05.stdout:1/246: chown d4/l6 0 1 2026-03-10T10:19:27.796 INFO:tasks.workunit.client.0.vm02.stdout:4/396: write d1/d41/d5e/d78/d55/f7c [132259,5127] 0 2026-03-10T10:19:27.800 INFO:tasks.workunit.client.0.vm02.stdout:4/397: symlink d1/d41/l84 0 2026-03-10T10:19:27.808 INFO:tasks.workunit.client.1.vm05.stdout:1/247: creat d4/d39/d3e/f4d x:0 0 0 2026-03-10T10:19:27.809 INFO:tasks.workunit.client.0.vm02.stdout:4/398: dread d1/d41/d5e/d78/d1a/f4c [0,4194304] 0 2026-03-10T10:19:27.810 INFO:tasks.workunit.client.1.vm05.stdout:7/326: dread d5/d1d/d20/d2d/f30 [0,4194304] 0 2026-03-10T10:19:27.815 INFO:tasks.workunit.client.1.vm05.stdout:7/327: dread - d5/d26/f5e zero size 2026-03-10T10:19:27.827 INFO:tasks.workunit.client.0.vm02.stdout:3/284: write d1/d6/f43 [359306,552] 0 2026-03-10T10:19:27.830 INFO:tasks.workunit.client.0.vm02.stdout:8/349: truncate d1/d1c/d23/f3b 3730629 0 2026-03-10T10:19:27.833 INFO:tasks.workunit.client.0.vm02.stdout:8/350: truncate d1/d1c/f14 856007 0 2026-03-10T10:19:27.837 INFO:tasks.workunit.client.1.vm05.stdout:9/236: truncate d0/d1/d13/de/f38 798769 0 2026-03-10T10:19:27.843 INFO:tasks.workunit.client.0.vm02.stdout:8/351: dwrite d1/d1c/f33 
[0,4194304] 0 2026-03-10T10:19:27.849 INFO:tasks.workunit.client.0.vm02.stdout:8/352: rmdir d1/d1c/d24/d35/d56 39 2026-03-10T10:19:27.858 INFO:tasks.workunit.client.0.vm02.stdout:2/349: dwrite d0/d1a/f53 [0,4194304] 0 2026-03-10T10:19:27.877 INFO:tasks.workunit.client.0.vm02.stdout:2/350: write d0/f70 [1043372,90578] 0 2026-03-10T10:19:27.877 INFO:tasks.workunit.client.0.vm02.stdout:2/351: mknod d0/d1a/d24/c76 0 2026-03-10T10:19:27.877 INFO:tasks.workunit.client.0.vm02.stdout:5/459: write d1/db/d11/d13/f21 [331866,50589] 0 2026-03-10T10:19:27.877 INFO:tasks.workunit.client.0.vm02.stdout:5/460: chown d1/db/d11/d84/d40/d4f 293586 1 2026-03-10T10:19:27.877 INFO:tasks.workunit.client.0.vm02.stdout:5/461: rmdir d1/db/d11/d1a 39 2026-03-10T10:19:27.877 INFO:tasks.workunit.client.1.vm05.stdout:5/298: readlink da/db/l42 0 2026-03-10T10:19:27.878 INFO:tasks.workunit.client.1.vm05.stdout:3/324: write f1 [2905031,36992] 0 2026-03-10T10:19:27.878 INFO:tasks.workunit.client.1.vm05.stdout:3/325: dread - dd/d39/d66/f6e zero size 2026-03-10T10:19:27.878 INFO:tasks.workunit.client.1.vm05.stdout:5/299: dread - da/db/d28/f47 zero size 2026-03-10T10:19:27.878 INFO:tasks.workunit.client.1.vm05.stdout:3/326: symlink dd/d15/d4c/l76 0 2026-03-10T10:19:27.878 INFO:tasks.workunit.client.1.vm05.stdout:9/237: dread d0/d1/f9 [0,4194304] 0 2026-03-10T10:19:27.878 INFO:tasks.workunit.client.1.vm05.stdout:3/327: creat dd/d15/d24/d2c/d3b/f77 x:0 0 0 2026-03-10T10:19:27.880 INFO:tasks.workunit.client.0.vm02.stdout:2/352: rename d0/d1a/d49/c5d to d0/c77 0 2026-03-10T10:19:27.885 INFO:tasks.workunit.client.0.vm02.stdout:2/353: creat d0/d1a/d49/f78 x:0 0 0 2026-03-10T10:19:27.885 INFO:tasks.workunit.client.1.vm05.stdout:5/300: rename l0 to da/db/d26/d35/d58/l67 0 2026-03-10T10:19:27.885 INFO:tasks.workunit.client.1.vm05.stdout:9/238: mkdir d0/d1/d4c 0 2026-03-10T10:19:27.885 INFO:tasks.workunit.client.0.vm02.stdout:2/354: stat d0/d1a/f47 0 2026-03-10T10:19:27.885 
INFO:tasks.workunit.client.1.vm05.stdout:9/239: dread - d0/d1/d16/f36 zero size 2026-03-10T10:19:27.886 INFO:tasks.workunit.client.1.vm05.stdout:5/301: rename da/db/d28/f47 to da/db/d26/d5c/f68 0 2026-03-10T10:19:27.886 INFO:tasks.workunit.client.0.vm02.stdout:5/462: creat d1/db/d11/d16/fa1 x:0 0 0 2026-03-10T10:19:27.891 INFO:tasks.workunit.client.0.vm02.stdout:2/355: symlink d0/d10/l79 0 2026-03-10T10:19:27.891 INFO:tasks.workunit.client.1.vm05.stdout:3/328: link dd/l13 dd/d15/d1f/l78 0 2026-03-10T10:19:27.892 INFO:tasks.workunit.client.1.vm05.stdout:9/240: mknod d0/df/c4d 0 2026-03-10T10:19:27.893 INFO:tasks.workunit.client.1.vm05.stdout:3/329: chown dd/d39/d66/f6e 51009 1 2026-03-10T10:19:27.895 INFO:tasks.workunit.client.0.vm02.stdout:4/399: sync 2026-03-10T10:19:27.895 INFO:tasks.workunit.client.0.vm02.stdout:4/400: stat d1/d41 0 2026-03-10T10:19:27.898 INFO:tasks.workunit.client.1.vm05.stdout:5/302: dwrite da/db/d26/d35/d38/f5b [0,4194304] 0 2026-03-10T10:19:27.898 INFO:tasks.workunit.client.1.vm05.stdout:9/241: creat d0/d1/d13/d26/f4e x:0 0 0 2026-03-10T10:19:27.899 INFO:tasks.workunit.client.0.vm02.stdout:4/401: creat d1/d75/f85 x:0 0 0 2026-03-10T10:19:27.902 INFO:tasks.workunit.client.0.vm02.stdout:4/402: link d1/d41/d5e/c6a d1/d10/c86 0 2026-03-10T10:19:27.902 INFO:tasks.workunit.client.0.vm02.stdout:4/403: chown d1/d41/d5e/d78/l29 12557163 1 2026-03-10T10:19:27.904 INFO:tasks.workunit.client.1.vm05.stdout:9/242: truncate d0/d1/d16/f3d 425477 0 2026-03-10T10:19:27.907 INFO:tasks.workunit.client.1.vm05.stdout:7/328: sync 2026-03-10T10:19:27.912 INFO:tasks.workunit.client.1.vm05.stdout:3/330: dwrite dd/d15/d24/d2c/d3b/f55 [0,4194304] 0 2026-03-10T10:19:27.915 INFO:tasks.workunit.client.1.vm05.stdout:7/329: mkdir d5/d17/d66 0 2026-03-10T10:19:27.922 INFO:tasks.workunit.client.1.vm05.stdout:5/303: stat da/db/f1d 0 2026-03-10T10:19:27.922 INFO:tasks.workunit.client.0.vm02.stdout:1/295: write d4/d1b/f4c [1027785,72168] 0 2026-03-10T10:19:27.922 
INFO:tasks.workunit.client.1.vm05.stdout:5/304: chown da/f10 0 1 2026-03-10T10:19:27.922 INFO:tasks.workunit.client.1.vm05.stdout:7/330: write d5/d1d/f32 [980705,88171] 0 2026-03-10T10:19:27.922 INFO:tasks.workunit.client.1.vm05.stdout:7/331: write d5/f34 [1333381,103041] 0 2026-03-10T10:19:27.922 INFO:tasks.workunit.client.1.vm05.stdout:9/243: rename d0/f2a to d0/d1/d13/d26/f4f 0 2026-03-10T10:19:27.923 INFO:tasks.workunit.client.1.vm05.stdout:5/305: chown da/db/d26/d5c/f50 1 1 2026-03-10T10:19:27.923 INFO:tasks.workunit.client.1.vm05.stdout:5/306: stat da/db/d26/d5c/f50 0 2026-03-10T10:19:27.924 INFO:tasks.workunit.client.0.vm02.stdout:1/296: symlink d4/da/d1a/d22/l63 0 2026-03-10T10:19:27.926 INFO:tasks.workunit.client.1.vm05.stdout:3/331: stat dd/d39/d5c/c70 0 2026-03-10T10:19:27.932 INFO:tasks.workunit.client.1.vm05.stdout:9/244: creat d0/df/d11/f50 x:0 0 0 2026-03-10T10:19:27.933 INFO:tasks.workunit.client.1.vm05.stdout:5/307: creat da/db/d28/d32/f69 x:0 0 0 2026-03-10T10:19:27.933 INFO:tasks.workunit.client.1.vm05.stdout:3/332: creat dd/d15/d24/f79 x:0 0 0 2026-03-10T10:19:27.938 INFO:tasks.workunit.client.1.vm05.stdout:5/308: creat da/db/d26/d5c/d4b/f6a x:0 0 0 2026-03-10T10:19:27.938 INFO:tasks.workunit.client.1.vm05.stdout:3/333: write dd/d39/d66/f6e [300649,44371] 0 2026-03-10T10:19:27.941 INFO:tasks.workunit.client.0.vm02.stdout:4/404: sync 2026-03-10T10:19:27.942 INFO:tasks.workunit.client.1.vm05.stdout:9/245: link d0/d1/d13/de/l2b d0/d1/d13/d26/l51 0 2026-03-10T10:19:27.946 INFO:tasks.workunit.client.1.vm05.stdout:5/309: dread da/db/d26/d35/f1c [0,4194304] 0 2026-03-10T10:19:27.946 INFO:tasks.workunit.client.1.vm05.stdout:3/334: dwrite f9 [4194304,4194304] 0 2026-03-10T10:19:27.948 INFO:tasks.workunit.client.0.vm02.stdout:4/405: rename d1/d10/db/f1e to d1/d41/d5e/f87 0 2026-03-10T10:19:27.951 INFO:tasks.workunit.client.0.vm02.stdout:4/406: read d1/d52/f5a [1254014,98386] 0 2026-03-10T10:19:27.951 INFO:tasks.workunit.client.1.vm05.stdout:9/246: rename 
d0/df/f44 to d0/df/d11/f52 0
2026-03-10T10:19:27.955 INFO:tasks.workunit.client.0.vm02.stdout:4/407: dwrite d1/d10/f30 [0,4194304] 0
2026-03-10T10:19:27.955 INFO:tasks.workunit.client.1.vm05.stdout:5/310: chown da/db/d26/d35/d58 29130323 1
2026-03-10T10:19:27.955 INFO:tasks.workunit.client.0.vm02.stdout:4/408: write d1/d10/f8 [4369922,66337] 0
2026-03-10T10:19:27.956 INFO:tasks.workunit.client.1.vm05.stdout:3/335: dread - dd/d15/d24/f79 zero size
2026-03-10T10:19:27.956 INFO:tasks.workunit.client.0.vm02.stdout:4/409: write d1/d10/f30 [5134362,18245] 0
2026-03-10T10:19:27.967 INFO:tasks.workunit.client.0.vm02.stdout:4/410: getdents d1/d41/d5e/d78/d1a/d49/d81 0
2026-03-10T10:19:27.969 INFO:tasks.workunit.client.0.vm02.stdout:4/411: mkdir d1/d10/d88 0
2026-03-10T10:19:27.970 INFO:tasks.workunit.client.0.vm02.stdout:4/412: unlink d1/d41/d5e/d78/d37/l25 0
2026-03-10T10:19:27.971 INFO:tasks.workunit.client.0.vm02.stdout:4/413: mknod d1/d41/c89 0
2026-03-10T10:19:27.971 INFO:tasks.workunit.client.0.vm02.stdout:4/414: rename d1 to d1/d75/d8a 22
2026-03-10T10:19:27.972 INFO:tasks.workunit.client.0.vm02.stdout:4/415: dread - d1/d41/d5e/d78/d1a/d49/f7a zero size
2026-03-10T10:19:27.972 INFO:tasks.workunit.client.0.vm02.stdout:4/416: read - d1/f6f zero size
2026-03-10T10:19:27.974 INFO:tasks.workunit.client.0.vm02.stdout:4/417: mknod d1/d41/d5e/d78/d37/c8b 0
2026-03-10T10:19:27.976 INFO:tasks.workunit.client.1.vm05.stdout:4/231: write d1/d31/dc/f21 [689906,121325] 0
2026-03-10T10:19:27.985 INFO:tasks.workunit.client.1.vm05.stdout:9/247: dread d0/d1/d13/f27 [0,4194304] 0
2026-03-10T10:19:27.985 INFO:tasks.workunit.client.0.vm02.stdout:9/260: dwrite da/d3c/d4c/f17 [4194304,4194304] 0
2026-03-10T10:19:27.986 INFO:tasks.workunit.client.1.vm05.stdout:4/232: dread - d1/d3/f46 zero size
2026-03-10T10:19:27.999 INFO:tasks.workunit.client.1.vm05.stdout:0/259: write d1/d2/d9/d31/d12/d20/f2e [1037110,933] 0
2026-03-10T10:19:28.004 INFO:tasks.workunit.client.1.vm05.stdout:9/248: dwrite d0/df/d11/f2d [0,4194304] 0
2026-03-10T10:19:28.011 INFO:tasks.workunit.client.1.vm05.stdout:0/260: dread - d1/d2/d9/d31/d13/d2f/d49/f4f zero size
2026-03-10T10:19:28.012 INFO:tasks.workunit.client.1.vm05.stdout:8/202: dwrite d7/d14/f23 [0,4194304] 0
2026-03-10T10:19:28.013 INFO:tasks.workunit.client.0.vm02.stdout:4/418: sync
2026-03-10T10:19:28.014 INFO:tasks.workunit.client.0.vm02.stdout:4/419: read - d1/d52/f77 zero size
2026-03-10T10:19:28.016 INFO:tasks.workunit.client.1.vm05.stdout:5/311: sync
2026-03-10T10:19:28.017 INFO:tasks.workunit.client.0.vm02.stdout:4/420: creat d1/d41/d5e/d78/d1a/f8c x:0 0 0
2026-03-10T10:19:28.018 INFO:tasks.workunit.client.0.vm02.stdout:4/421: chown d1/d10/db/f15 15767 1
2026-03-10T10:19:28.019 INFO:tasks.workunit.client.0.vm02.stdout:9/261: sync
2026-03-10T10:19:28.030 INFO:tasks.workunit.client.1.vm05.stdout:5/312: dwrite da/f2e [0,4194304] 0
2026-03-10T10:19:28.030 INFO:tasks.workunit.client.0.vm02.stdout:0/331: truncate d9/d18/d1a/d22/d24/f2f 703744 0
2026-03-10T10:19:28.030 INFO:tasks.workunit.client.0.vm02.stdout:0/332: fsync d9/d18/d1a/d43/f55 0
2026-03-10T10:19:28.030 INFO:tasks.workunit.client.0.vm02.stdout:4/422: mkdir d1/d41/d5e/d78/d37/d8d 0
2026-03-10T10:19:28.031 INFO:tasks.workunit.client.1.vm05.stdout:0/261: readlink d1/d2/d9/d31/d12/d20/l53 0
2026-03-10T10:19:28.031 INFO:tasks.workunit.client.1.vm05.stdout:0/262: chown d1/d2/d9/d31/d12/d20/f37 242912127 1
2026-03-10T10:19:28.032 INFO:tasks.workunit.client.0.vm02.stdout:0/333: creat d9/d34/d3d/f69 x:0 0 0
2026-03-10T10:19:28.032 INFO:tasks.workunit.client.0.vm02.stdout:0/334: chown d9/d34/l37 102 1
2026-03-10T10:19:28.040 INFO:tasks.workunit.client.0.vm02.stdout:0/335: unlink d9/d18/d1a/d43/d57/l60 0
2026-03-10T10:19:28.042 INFO:tasks.workunit.client.0.vm02.stdout:9/262: sync
2026-03-10T10:19:28.043 INFO:tasks.workunit.client.1.vm05.stdout:5/313: creat da/db/d26/d5c/f6b x:0 0 0
2026-03-10T10:19:28.046 INFO:tasks.workunit.client.0.vm02.stdout:0/336: rename d9/d18/d1a/d43/f55 to d9/d18/f6a 0
2026-03-10T10:19:28.048 INFO:tasks.workunit.client.1.vm05.stdout:0/263: dwrite d1/f11 [0,4194304] 0
2026-03-10T10:19:28.048 INFO:tasks.workunit.client.0.vm02.stdout:0/337: write d9/d18/d1a/d22/d24/f40 [3410988,123920] 0
2026-03-10T10:19:28.051 INFO:tasks.workunit.client.0.vm02.stdout:9/263: chown da/c21 0 1
2026-03-10T10:19:28.063 INFO:tasks.workunit.client.0.vm02.stdout:9/264: dread da/f28 [0,4194304] 0
2026-03-10T10:19:28.065 INFO:tasks.workunit.client.1.vm05.stdout:0/264: fdatasync d1/d2/d9/d31/f36 0
2026-03-10T10:19:28.066 INFO:tasks.workunit.client.0.vm02.stdout:9/265: truncate da/f28 3544834 0
2026-03-10T10:19:28.067 INFO:tasks.workunit.client.0.vm02.stdout:9/266: fsync da/f13 0
2026-03-10T10:19:28.068 INFO:tasks.workunit.client.0.vm02.stdout:0/338: mkdir d9/d18/d1a/d43/d49/d6b 0
2026-03-10T10:19:28.069 INFO:tasks.workunit.client.0.vm02.stdout:9/267: fdatasync da/f15 0
2026-03-10T10:19:28.072 INFO:tasks.workunit.client.0.vm02.stdout:9/268: rename da/d3c/d4c/d2c/d34/l37 to da/d3c/d4c/l58 0
2026-03-10T10:19:28.076 INFO:tasks.workunit.client.0.vm02.stdout:9/269: dwrite da/d3c/d4c/d2c/d34/f3d [0,4194304] 0
2026-03-10T10:19:28.079 INFO:tasks.workunit.client.0.vm02.stdout:9/270: chown l2 190605556 1
2026-03-10T10:19:28.080 INFO:tasks.workunit.client.0.vm02.stdout:9/271: dread da/d3c/d4c/f33 [0,4194304] 0
2026-03-10T10:19:28.088 INFO:tasks.workunit.client.0.vm02.stdout:0/339: link d9/d18/d1a/d22/d4a/f62 d9/f6c 0
2026-03-10T10:19:28.088 INFO:tasks.workunit.client.1.vm05.stdout:5/314: getdents da/db/d26/d35/d38 0
2026-03-10T10:19:28.089 INFO:tasks.workunit.client.0.vm02.stdout:0/340: dread d9/d18/d1a/d43/d49/f53 [0,4194304] 0
2026-03-10T10:19:28.090 INFO:tasks.workunit.client.1.vm05.stdout:5/315: write da/db/d26/d35/f2a [766628,66205] 0
2026-03-10T10:19:28.097 INFO:tasks.workunit.client.1.vm05.stdout:5/316: read - da/db/f3b zero size
2026-03-10T10:19:28.097 INFO:tasks.workunit.client.1.vm05.stdout:5/317: readlink da/db/d28/l2f 0
2026-03-10T10:19:28.097 INFO:tasks.workunit.client.0.vm02.stdout:0/341: rmdir d9/d18/d1a/d3c 39
2026-03-10T10:19:28.097 INFO:tasks.workunit.client.0.vm02.stdout:0/342: creat d9/d34/d3d/d65/f6d x:0 0 0
2026-03-10T10:19:28.097 INFO:tasks.workunit.client.0.vm02.stdout:0/343: rmdir d9/d18/d1a/d43/d57/d61 0
2026-03-10T10:19:28.098 INFO:tasks.workunit.client.0.vm02.stdout:0/344: dread - d9/d34/d3d/f69 zero size
2026-03-10T10:19:28.098 INFO:tasks.workunit.client.0.vm02.stdout:0/345: dread d9/f6c [0,4194304] 0
2026-03-10T10:19:28.105 INFO:tasks.workunit.client.1.vm05.stdout:5/318: unlink da/db/d28/d32/f54 0
2026-03-10T10:19:28.106 INFO:tasks.workunit.client.1.vm05.stdout:5/319: creat da/db/d26/d35/d38/f6c x:0 0 0
2026-03-10T10:19:28.107 INFO:tasks.workunit.client.1.vm05.stdout:5/320: dread - da/db/d28/d32/f69 zero size
2026-03-10T10:19:28.107 INFO:tasks.workunit.client.0.vm02.stdout:9/272: sync
2026-03-10T10:19:28.115 INFO:tasks.workunit.client.1.vm05.stdout:5/321: dwrite da/db/d26/d35/f2a [0,4194304] 0
2026-03-10T10:19:28.129 INFO:tasks.workunit.client.1.vm05.stdout:5/322: creat da/db/f6d x:0 0 0
2026-03-10T10:19:28.133 INFO:tasks.workunit.client.1.vm05.stdout:5/323: mkdir da/db/d28/d6e 0
2026-03-10T10:19:28.136 INFO:tasks.workunit.client.1.vm05.stdout:5/324: dread da/db/fd [0,4194304] 0
2026-03-10T10:19:28.138 INFO:tasks.workunit.client.1.vm05.stdout:5/325: truncate da/db/d26/d35/d38/f65 636396 0
2026-03-10T10:19:28.145 INFO:tasks.workunit.client.1.vm05.stdout:5/326: mknod da/c6f 0
2026-03-10T10:19:28.145 INFO:tasks.workunit.client.1.vm05.stdout:5/327: mkdir da/db/d26/d70 0
2026-03-10T10:19:28.152 INFO:tasks.workunit.client.1.vm05.stdout:5/328: unlink da/db/d28/c39 0
2026-03-10T10:19:28.153 INFO:tasks.workunit.client.0.vm02.stdout:7/275: dread d1/dc/f25 [0,4194304] 0
2026-03-10T10:19:28.155 INFO:tasks.workunit.client.1.vm05.stdout:5/329: creat da/db/d28/d32/f71 x:0 0 0
2026-03-10T10:19:28.156 INFO:tasks.workunit.client.0.vm02.stdout:7/276: truncate d1/dc/ff 1873942 0
2026-03-10T10:19:28.158 INFO:tasks.workunit.client.0.vm02.stdout:7/277: unlink d1/dc/d10/f24 0
2026-03-10T10:19:28.159 INFO:tasks.workunit.client.0.vm02.stdout:7/278: mkdir d1/dc/d55 0
2026-03-10T10:19:28.160 INFO:tasks.workunit.client.1.vm05.stdout:5/330: mkdir da/db/d26/d70/d72 0
2026-03-10T10:19:28.160 INFO:tasks.workunit.client.0.vm02.stdout:7/279: symlink d1/dc/d10/d38/l56 0
2026-03-10T10:19:28.161 INFO:tasks.workunit.client.0.vm02.stdout:7/280: stat d1/d1b 0
2026-03-10T10:19:28.161 INFO:tasks.workunit.client.0.vm02.stdout:7/281: dread - d1/d33/f53 zero size
2026-03-10T10:19:28.162 INFO:tasks.workunit.client.0.vm02.stdout:7/282: chown d1/dc/d10/c4f 133727287 1
2026-03-10T10:19:28.163 INFO:tasks.workunit.client.1.vm05.stdout:5/331: write da/db/d26/f64 [311415,41884] 0
2026-03-10T10:19:28.163 INFO:tasks.workunit.client.0.vm02.stdout:7/283: dread - d1/dc/d16/f48 zero size
2026-03-10T10:19:28.166 INFO:tasks.workunit.client.0.vm02.stdout:7/284: rename d1/dc/d16/d28/d2c/l30 to d1/d1b/d49/l57 0
2026-03-10T10:19:28.167 INFO:tasks.workunit.client.1.vm05.stdout:5/332: chown da/db/d26/d5c/c62 199473234 1
2026-03-10T10:19:28.171 INFO:tasks.workunit.client.1.vm05.stdout:5/333: readlink da/db/d26/d5c/d4b/l57 0
2026-03-10T10:19:28.176 INFO:tasks.workunit.client.0.vm02.stdout:6/306: truncate d0/d8/d9/f13 3355519 0
2026-03-10T10:19:28.182 INFO:tasks.workunit.client.1.vm05.stdout:5/334: rmdir da/db/d26/d35 39
2026-03-10T10:19:28.182 INFO:tasks.workunit.client.0.vm02.stdout:6/307: symlink d0/d8/d29/d2f/l65 0
2026-03-10T10:19:28.183 INFO:tasks.workunit.client.0.vm02.stdout:6/308: unlink d0/d8/d9/d31/l44 0
2026-03-10T10:19:28.183 INFO:tasks.workunit.client.0.vm02.stdout:6/309: stat d0/f28 0
2026-03-10T10:19:28.183 INFO:tasks.workunit.client.0.vm02.stdout:6/310: chown d0/d8/d29/d2f/d4b/l2e 42061558 1
2026-03-10T10:19:28.183 INFO:tasks.workunit.client.0.vm02.stdout:6/311: write d0/f5d [967717,105255] 0
2026-03-10T10:19:28.183 INFO:tasks.workunit.client.0.vm02.stdout:7/285: sync
2026-03-10T10:19:28.185 INFO:tasks.workunit.client.1.vm05.stdout:9/249: dread d0/d1/d13/de/d21/f34 [0,4194304] 0
2026-03-10T10:19:28.187 INFO:tasks.workunit.client.0.vm02.stdout:7/286: truncate d1/dc/d10/f13 2776622 0
2026-03-10T10:19:28.188 INFO:tasks.workunit.client.0.vm02.stdout:6/312: link d0/d8/l1f d0/d8/d29/d2f/d4b/l66 0
2026-03-10T10:19:28.189 INFO:tasks.workunit.client.1.vm05.stdout:9/250: creat d0/d1/d13/de/d21/f53 x:0 0 0
2026-03-10T10:19:28.189 INFO:tasks.workunit.client.0.vm02.stdout:7/287: dread - d1/dc/d44/f4a zero size
2026-03-10T10:19:28.189 INFO:tasks.workunit.client.1.vm05.stdout:9/251: chown d0/f1e 56 1
2026-03-10T10:19:28.189 INFO:tasks.workunit.client.0.vm02.stdout:7/288: chown d1/dc/d16/d28/d2d 31405017 1
2026-03-10T10:19:28.190 INFO:tasks.workunit.client.0.vm02.stdout:6/313: creat d0/d8/d29/d2f/f67 x:0 0 0
2026-03-10T10:19:28.193 INFO:tasks.workunit.client.0.vm02.stdout:6/314: dwrite d0/f4c [0,4194304] 0
2026-03-10T10:19:28.206 INFO:tasks.workunit.client.0.vm02.stdout:6/315: symlink d0/d8/d29/d2f/d4b/l68 0
2026-03-10T10:19:28.207 INFO:tasks.workunit.client.1.vm05.stdout:6/219: dwrite dd/fe [0,4194304] 0
2026-03-10T10:19:28.207 INFO:tasks.workunit.client.0.vm02.stdout:6/316: symlink d0/d8/d29/l69 0
2026-03-10T10:19:28.210 INFO:tasks.workunit.client.0.vm02.stdout:6/317: creat d0/d8/d9/f6a x:0 0 0
2026-03-10T10:19:28.210 INFO:tasks.workunit.client.0.vm02.stdout:6/318: chown d0/d8/d9/d31 1 1
2026-03-10T10:19:28.218 INFO:tasks.workunit.client.0.vm02.stdout:6/319: dread d0/d8/d29/d2f/f33 [4194304,4194304] 0
2026-03-10T10:19:28.218 INFO:tasks.workunit.client.1.vm05.stdout:6/220: dread - dd/d36/d3f/f41 zero size
2026-03-10T10:19:28.218 INFO:tasks.workunit.client.1.vm05.stdout:6/221: truncate dd/d36/d3f/f22 1109285 0
2026-03-10T10:19:28.220 INFO:tasks.workunit.client.0.vm02.stdout:6/320: creat d0/f6b x:0 0 0
2026-03-10T10:19:28.223 INFO:tasks.workunit.client.1.vm05.stdout:6/222: creat dd/d36/d3f/d12/d44/d2a/d3d/d3e/f4c x:0 0 0
2026-03-10T10:19:28.224 INFO:tasks.workunit.client.0.vm02.stdout:6/321: mknod d0/d3a/c6c 0
2026-03-10T10:19:28.224 INFO:tasks.workunit.client.0.vm02.stdout:6/322: chown d0/d8/f56 15957 1
2026-03-10T10:19:28.230 INFO:tasks.workunit.client.0.vm02.stdout:6/323: getdents d0/d8/d9/d31/d32/d60 0
2026-03-10T10:19:28.241 INFO:tasks.workunit.client.0.vm02.stdout:3/285: write d1/d6/f1b [4796107,75097] 0
2026-03-10T10:19:28.250 INFO:tasks.workunit.client.1.vm05.stdout:1/248: dwrite d4/dd/f15 [8388608,4194304] 0
2026-03-10T10:19:28.251 INFO:tasks.workunit.client.0.vm02.stdout:8/353: write d1/d1c/d24/f31 [3970968,74578] 0
2026-03-10T10:19:28.258 INFO:tasks.workunit.client.1.vm05.stdout:5/335: readlink da/db/d26/d35/d58/l67 0
2026-03-10T10:19:28.259 INFO:tasks.workunit.client.1.vm05.stdout:1/249: dwrite d4/df/d1c/f2a [4194304,4194304] 0
2026-03-10T10:19:28.260 INFO:tasks.workunit.client.0.vm02.stdout:5/463: dwrite d1/f3 [4194304,4194304] 0
2026-03-10T10:19:28.266 INFO:tasks.workunit.client.0.vm02.stdout:2/356: dwrite d0/d10/f1f [0,4194304] 0
2026-03-10T10:19:28.266 INFO:tasks.workunit.client.1.vm05.stdout:1/250: mkdir d4/d37/d4e 0
2026-03-10T10:19:28.274 INFO:tasks.workunit.client.1.vm05.stdout:5/336: rename da/db/d26/d35/d58 to da/db/d26/d35/d73 0
2026-03-10T10:19:28.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:27 vm02.local ceph-mon[50200]: pgmap v153: 65 pgs: 65 active+clean; 1.2 GiB data, 4.8 GiB used, 115 GiB / 120 GiB avail; 16 MiB/s rd, 128 MiB/s wr, 190 op/s
2026-03-10T10:19:28.284 INFO:tasks.workunit.client.0.vm02.stdout:8/354: creat d1/d1c/d24/d35/f6e x:0 0 0
2026-03-10T10:19:28.284 INFO:tasks.workunit.client.1.vm05.stdout:7/332: rmdir d5 39
2026-03-10T10:19:28.285 INFO:tasks.workunit.client.1.vm05.stdout:1/251: mknod d4/d3d/c4f 0
2026-03-10T10:19:28.286 INFO:tasks.workunit.client.1.vm05.stdout:5/337: dwrite da/db/d26/d35/d38/f6c [0,4194304] 0
2026-03-10T10:19:28.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:27 vm05.local ceph-mon[59051]: pgmap v153: 65 pgs: 65 active+clean; 1.2 GiB data, 4.8 GiB used, 115 GiB / 120 GiB avail; 16 MiB/s rd, 128 MiB/s wr, 190 op/s
2026-03-10T10:19:28.298 INFO:tasks.workunit.client.0.vm02.stdout:1/297: dwrite d4/f5 [0,4194304] 0
2026-03-10T10:19:28.299 INFO:tasks.workunit.client.0.vm02.stdout:1/298: write d4/da/f12 [17071,1436] 0
2026-03-10T10:19:28.302 INFO:tasks.workunit.client.0.vm02.stdout:5/464: link d1/db/d11/d16/d48/c87 d1/db/d11/d13/d28/d37/d3d/ca2 0
2026-03-10T10:19:28.303 INFO:tasks.workunit.client.0.vm02.stdout:5/465: chown d1/db/d11/d84/d40/d4f/d5f/d6d/d71 0 1
2026-03-10T10:19:28.303 INFO:tasks.workunit.client.1.vm05.stdout:7/333: dread - d5/dd/f28 zero size
2026-03-10T10:19:28.305 INFO:tasks.workunit.client.1.vm05.stdout:1/252: unlink d4/d39/d3e/f4a 0
2026-03-10T10:19:28.306 INFO:tasks.workunit.client.0.vm02.stdout:8/355: mknod d1/d1c/d23/d25/c6f 0
2026-03-10T10:19:28.314 INFO:tasks.workunit.client.0.vm02.stdout:5/466: read d1/db/d11/d13/d28/f91 [3657501,52468] 0
2026-03-10T10:19:28.319 INFO:tasks.workunit.client.0.vm02.stdout:5/467: dwrite d1/db/d11/d13/f1f [0,4194304] 0
2026-03-10T10:19:28.338 INFO:tasks.workunit.client.1.vm05.stdout:7/334: creat d5/d1d/d20/d2d/d5d/f67 x:0 0 0
2026-03-10T10:19:28.338 INFO:tasks.workunit.client.0.vm02.stdout:8/356: symlink d1/d2/l70 0
2026-03-10T10:19:28.342 INFO:tasks.workunit.client.0.vm02.stdout:8/357: dwrite d1/d1c/d24/d35/f4f [0,4194304] 0
2026-03-10T10:19:28.344 INFO:tasks.workunit.client.0.vm02.stdout:2/357: dread d0/d1a/f25 [0,4194304] 0
2026-03-10T10:19:28.346 INFO:tasks.workunit.client.0.vm02.stdout:1/299: mknod d4/d4a/c64 0
2026-03-10T10:19:28.346 INFO:tasks.workunit.client.1.vm05.stdout:3/336: write dd/d15/d24/d2c/f3f [892107,36267] 0
2026-03-10T10:19:28.346 INFO:tasks.workunit.client.0.vm02.stdout:1/300: chown d4/d2c/f54 3431769 1
2026-03-10T10:19:28.349 INFO:tasks.workunit.client.0.vm02.stdout:5/468: mkdir d1/db/d11/d13/d28/d37/d3d/da3 0
2026-03-10T10:19:28.352 INFO:tasks.workunit.client.1.vm05.stdout:3/337: readlink dd/d15/d24/l59 0
2026-03-10T10:19:28.355 INFO:tasks.workunit.client.0.vm02.stdout:8/358: write d1/d1c/d23/d25/f2b [1545378,15773] 0
2026-03-10T10:19:28.361 INFO:tasks.workunit.client.0.vm02.stdout:1/301: unlink d4/da/f25 0
2026-03-10T10:19:28.363 INFO:tasks.workunit.client.1.vm05.stdout:4/233: dwrite d1/d3/f26 [0,4194304] 0
2026-03-10T10:19:28.379 INFO:tasks.workunit.client.1.vm05.stdout:4/234: symlink d1/d31/l4c 0
2026-03-10T10:19:28.381 INFO:tasks.workunit.client.0.vm02.stdout:4/423: dwrite d1/d32/d3e/f42 [0,4194304] 0
2026-03-10T10:19:28.384 INFO:tasks.workunit.client.1.vm05.stdout:8/203: truncate d7/f8 297128 0
2026-03-10T10:19:28.385 INFO:tasks.workunit.client.1.vm05.stdout:8/204: dread - d7/d14/f33 zero size
2026-03-10T10:19:28.388 INFO:tasks.workunit.client.1.vm05.stdout:8/205: mkdir d7/d14/d24/d3f 0
2026-03-10T10:19:28.397 INFO:tasks.workunit.client.1.vm05.stdout:0/265: truncate d1/d2/d9/fd 229466 0
2026-03-10T10:19:28.398 INFO:tasks.workunit.client.0.vm02.stdout:8/359: rmdir d1/d1c/d23/d5f 0
2026-03-10T10:19:28.398 INFO:tasks.workunit.client.0.vm02.stdout:9/273: rmdir da/d3c/d4c/d2c/d34 39
2026-03-10T10:19:28.398 INFO:tasks.workunit.client.0.vm02.stdout:8/360: fsync d1/d1c/d23/d25/f5d 0
2026-03-10T10:19:28.398 INFO:tasks.workunit.client.0.vm02.stdout:9/274: dread da/f15 [0,4194304] 0
2026-03-10T10:19:28.398 INFO:tasks.workunit.client.1.vm05.stdout:0/266: mkdir d1/d2/d9/d31/d13/d55 0
2026-03-10T10:19:28.402 INFO:tasks.workunit.client.0.vm02.stdout:0/346: dwrite d9/d18/d1a/d22/d24/f4f [0,4194304] 0
2026-03-10T10:19:28.420 INFO:tasks.workunit.client.0.vm02.stdout:8/361: mkdir d1/d1c/d24/d71 0
2026-03-10T10:19:28.423 INFO:tasks.workunit.client.1.vm05.stdout:7/335: sync
2026-03-10T10:19:28.427 INFO:tasks.workunit.client.1.vm05.stdout:2/275: dwrite db/d12/f1d [0,4194304] 0
2026-03-10T10:19:28.428 INFO:tasks.workunit.client.0.vm02.stdout:0/347: symlink d9/d34/d3d/l6e 0
2026-03-10T10:19:28.429 INFO:tasks.workunit.client.0.vm02.stdout:0/348: chown d9/d18/d1a/d43 482838174 1
2026-03-10T10:19:28.430 INFO:tasks.workunit.client.0.vm02.stdout:4/424: getdents d1/d41/d5e/d78/d1a/d49 0
2026-03-10T10:19:28.431 INFO:tasks.workunit.client.1.vm05.stdout:2/276: creat db/d1c/d40/f50 x:0 0 0
2026-03-10T10:19:28.452 INFO:tasks.workunit.client.0.vm02.stdout:6/324: rename d0/d8/d9/d31 to d0/d8/d29/d6d 0
2026-03-10T10:19:28.452 INFO:tasks.workunit.client.0.vm02.stdout:8/362: rmdir d1/d1c/d43/d5b 39
2026-03-10T10:19:28.452 INFO:tasks.workunit.client.0.vm02.stdout:7/289: unlink d1/dc/d10/f13 0
2026-03-10T10:19:28.452 INFO:tasks.workunit.client.0.vm02.stdout:7/290: dread - d1/dc/d16/d28/d2d/f2f zero size
2026-03-10T10:19:28.452 INFO:tasks.workunit.client.0.vm02.stdout:9/275: getdents da/d3c 0
2026-03-10T10:19:28.452 INFO:tasks.workunit.client.0.vm02.stdout:4/425: getdents d1/d41/d5e/d78/d1a/d49/d81 0
2026-03-10T10:19:28.452 INFO:tasks.workunit.client.1.vm05.stdout:0/267: rmdir d1/d2/d9/d31/d13/d15/d4e/d51 0
2026-03-10T10:19:28.452 INFO:tasks.workunit.client.1.vm05.stdout:0/268: truncate d1/d2/d9/d31/d13/d17/f4a 1035862 0
2026-03-10T10:19:28.452 INFO:tasks.workunit.client.1.vm05.stdout:2/277: stat db/l1b 0
2026-03-10T10:19:28.452 INFO:tasks.workunit.client.1.vm05.stdout:2/278: write db/f25 [5225710,98804] 0
2026-03-10T10:19:28.452 INFO:tasks.workunit.client.1.vm05.stdout:0/269: creat d1/d2/d9/d31/d13/d17/f56 x:0 0 0
2026-03-10T10:19:28.452 INFO:tasks.workunit.client.1.vm05.stdout:0/270: creat d1/d2/d9/d31/d13/d17/f57 x:0 0 0
2026-03-10T10:19:28.453 INFO:tasks.workunit.client.1.vm05.stdout:6/223: truncate dd/d36/d3f/d12/d44/d2a/d3d/d48/f4b 2231095 0
2026-03-10T10:19:28.453 INFO:tasks.workunit.client.1.vm05.stdout:6/224: chown dd/d36/d3f/d12 25748335 1
2026-03-10T10:19:28.457 INFO:tasks.workunit.client.1.vm05.stdout:6/225: write dd/d36/d3f/d12/f35 [1325231,12206] 0
2026-03-10T10:19:28.464 INFO:tasks.workunit.client.0.vm02.stdout:0/349: creat d9/d18/d1a/f6f x:0 0 0
2026-03-10T10:19:28.488 INFO:tasks.workunit.client.1.vm05.stdout:9/252: dwrite d0/d1/d13/de/f38 [0,4194304] 0
2026-03-10T10:19:28.488 INFO:tasks.workunit.client.1.vm05.stdout:6/226: mkdir dd/d36/d3f/d12/d24/d28/d4d 0
2026-03-10T10:19:28.488 INFO:tasks.workunit.client.1.vm05.stdout:6/227: read dd/d36/d3f/d12/d44/f2f [196877,4115] 0
2026-03-10T10:19:28.488 INFO:tasks.workunit.client.1.vm05.stdout:0/271: symlink d1/d2/d9/d31/d54/l58 0
2026-03-10T10:19:28.488 INFO:tasks.workunit.client.0.vm02.stdout:3/286: truncate d1/d8/fb 3282657 0
2026-03-10T10:19:28.488 INFO:tasks.workunit.client.0.vm02.stdout:4/426: fdatasync d1/d10/db/f43 0
2026-03-10T10:19:28.488 INFO:tasks.workunit.client.0.vm02.stdout:4/427: write d1/d41/d5e/d78/d37/f2e [964045,62814] 0
2026-03-10T10:19:28.488 INFO:tasks.workunit.client.0.vm02.stdout:4/428: write d1/d10/db/f16 [2230348,30906] 0
2026-03-10T10:19:28.488 INFO:tasks.workunit.client.0.vm02.stdout:4/429: write d1/f6f [234070,121594] 0
2026-03-10T10:19:28.488 INFO:tasks.workunit.client.0.vm02.stdout:0/350: symlink d9/d18/d1a/d43/d49/l70 0
2026-03-10T10:19:28.491 INFO:tasks.workunit.client.0.vm02.stdout:0/351: dwrite d9/d18/f1e [0,4194304] 0
2026-03-10T10:19:28.492 INFO:tasks.workunit.client.0.vm02.stdout:0/352: rename d9 to d9/d34/d3d/d67/d71 22
2026-03-10T10:19:28.501 INFO:tasks.workunit.client.1.vm05.stdout:8/206: read d7/f11 [661122,58907] 0
2026-03-10T10:19:28.510 INFO:tasks.workunit.client.1.vm05.stdout:0/272: symlink d1/d2/d9/d31/d13/d15/l59 0
2026-03-10T10:19:28.512 INFO:tasks.workunit.client.0.vm02.stdout:4/430: creat d1/d41/d5e/d78/d7f/f8e x:0 0 0
2026-03-10T10:19:28.512 INFO:tasks.workunit.client.1.vm05.stdout:9/253: symlink d0/df/l54 0
2026-03-10T10:19:28.512 INFO:tasks.workunit.client.1.vm05.stdout:0/273: dread - d1/d2/d9/d31/d13/d17/f56 zero size
2026-03-10T10:19:28.515 INFO:tasks.workunit.client.0.vm02.stdout:8/363: creat d1/d1c/f72 x:0 0 0
2026-03-10T10:19:28.517 INFO:tasks.workunit.client.0.vm02.stdout:9/276: link da/d3c/d4c/f31 da/d3c/d4c/d38/d4a/f59 0
2026-03-10T10:19:28.520 INFO:tasks.workunit.client.1.vm05.stdout:0/274: dwrite d1/d2/d9/d31/d13/d17/f1b [0,4194304] 0
2026-03-10T10:19:28.530 INFO:tasks.workunit.client.1.vm05.stdout:1/253: write d4/d39/d3e/f3f [4641782,128173] 0
2026-03-10T10:19:28.531 INFO:tasks.workunit.client.0.vm02.stdout:2/358: dwrite d0/d1a/f25 [0,4194304] 0
2026-03-10T10:19:28.542 INFO:tasks.workunit.client.1.vm05.stdout:3/338: write dd/f52 [681969,79184] 0
2026-03-10T10:19:28.543 INFO:tasks.workunit.client.0.vm02.stdout:5/469: dwrite d1/db/d11/f33 [0,4194304] 0
2026-03-10T10:19:28.546 INFO:tasks.workunit.client.0.vm02.stdout:5/470: chown d1/db/d11/d84/d40/d4f/d5f/f73 0 1
2026-03-10T10:19:28.548 INFO:tasks.workunit.client.0.vm02.stdout:1/302: write d4/da/d1a/f19 [8628051,94062] 0
2026-03-10T10:19:28.555 INFO:tasks.workunit.client.1.vm05.stdout:8/207: getdents d7/d14/d15/d3b 0
2026-03-10T10:19:28.556 INFO:tasks.workunit.client.1.vm05.stdout:0/275: dwrite d1/d2/d9/d31/d13/d17/f4a [0,4194304] 0
2026-03-10T10:19:28.556 INFO:tasks.workunit.client.1.vm05.stdout:6/228: rename dd/d36/d3f/d12/d44/d30/f42 to dd/d36/d3f/d12/d24/f4e 0
2026-03-10T10:19:28.556 INFO:tasks.workunit.client.1.vm05.stdout:6/229: read f2 [7960737,83768] 0
2026-03-10T10:19:28.560 INFO:tasks.workunit.client.0.vm02.stdout:3/287: mknod d1/c61 0
2026-03-10T10:19:28.563 INFO:tasks.workunit.client.1.vm05.stdout:4/235: dwrite d1/d31/f13 [0,4194304] 0
2026-03-10T10:19:28.564 INFO:tasks.workunit.client.1.vm05.stdout:4/236: chown d1/d31/f1a 733133 1
2026-03-10T10:19:28.564 INFO:tasks.workunit.client.0.vm02.stdout:4/431: rename d1/d41/d5e/d78/d1a/l3d to d1/d41/d5e/d78/d7f/d82/l8f 0
2026-03-10T10:19:28.567 INFO:tasks.workunit.client.0.vm02.stdout:8/364: unlink d1/l17 0
2026-03-10T10:19:28.571 INFO:tasks.workunit.client.0.vm02.stdout:9/277: unlink da/d3c/d4c/d2c/l44 0
2026-03-10T10:19:28.572 INFO:tasks.workunit.client.1.vm05.stdout:4/237: dwrite d1/d3/f46 [0,4194304] 0
2026-03-10T10:19:28.575 INFO:tasks.workunit.client.1.vm05.stdout:8/208: creat d7/d14/f40 x:0 0 0
2026-03-10T10:19:28.583 INFO:tasks.workunit.client.1.vm05.stdout:6/230: creat dd/d36/d3f/d12/f4f x:0 0 0
2026-03-10T10:19:28.585 INFO:tasks.workunit.client.1.vm05.stdout:6/231: write dd/f14 [4213881,62178] 0
2026-03-10T10:19:28.587 INFO:tasks.workunit.client.1.vm05.stdout:0/276: rename d1/d2/d9/d31/d13/f48 to d1/d2/d9/d31/d13/d17/f5a 0
2026-03-10T10:19:28.605 INFO:tasks.workunit.client.0.vm02.stdout:1/303: dread d4/f26 [0,4194304] 0
2026-03-10T10:19:28.626 INFO:tasks.workunit.client.1.vm05.stdout:3/339: truncate dd/d15/d24/d2c/f3c 4379203 0
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.1.vm05.stdout:8/209: dwrite d7/f11 [0,4194304] 0
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.1.vm05.stdout:8/210: fsync d7/f37 0
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.1.vm05.stdout:6/232: rmdir dd/d36/d3f/d12/d24 39
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.1.vm05.stdout:8/211: dwrite d7/f37 [0,4194304] 0
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.0.vm02.stdout:3/288: readlink d1/d20/l30 0
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.0.vm02.stdout:9/278: truncate da/d3c/d4c/d2c/d34/f36 414890 0
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.0.vm02.stdout:9/279: chown da/d3c/d4c/d2c/c2f 18770 1
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.0.vm02.stdout:9/280: dread - da/d3c/d4c/d38/d4a/f54 zero size
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.0.vm02.stdout:2/359: mkdir d0/d7a 0
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.0.vm02.stdout:5/471: mknod d1/db/d11/d1a/ca4 0
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.0.vm02.stdout:1/304: mkdir d4/da/d1a/d47/d65 0
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.0.vm02.stdout:1/305: truncate d4/d1b/f5d 905921 0
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.0.vm02.stdout:9/281: symlink da/d3c/d4c/d38/d4a/l5a 0
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.0.vm02.stdout:9/282: dwrite da/d3c/d4c/d2c/d34/f4d [0,4194304] 0
2026-03-10T10:19:28.627 INFO:tasks.workunit.client.0.vm02.stdout:5/472: rename d1/db/d11/d1a/c77 to d1/d6a/ca5 0
2026-03-10T10:19:28.632 INFO:tasks.workunit.client.0.vm02.stdout:5/473: dwrite d1/db/d11/d84/d40/d4f/f97 [0,4194304] 0
2026-03-10T10:19:28.632 INFO:tasks.workunit.client.1.vm05.stdout:1/254: sync
2026-03-10T10:19:28.632 INFO:tasks.workunit.client.1.vm05.stdout:8/212: write d7/d14/f38 [644791,105177] 0
2026-03-10T10:19:28.634 INFO:tasks.workunit.client.0.vm02.stdout:5/474: truncate d1/db/f88 671120 0
2026-03-10T10:19:28.635 INFO:tasks.workunit.client.0.vm02.stdout:9/283: rmdir da/d3c 39
2026-03-10T10:19:28.640 INFO:tasks.workunit.client.1.vm05.stdout:6/233: sync
2026-03-10T10:19:28.649 INFO:tasks.workunit.client.1.vm05.stdout:7/336: truncate d5/f13 639723 0
2026-03-10T10:19:28.654 INFO:tasks.workunit.client.1.vm05.stdout:2/279: write db/f23 [1205277,31357] 0
2026-03-10T10:19:28.657 INFO:tasks.workunit.client.0.vm02.stdout:6/325: truncate d0/d8/d9/f14 3867835 0
2026-03-10T10:19:28.660 INFO:tasks.workunit.client.0.vm02.stdout:7/291: dwrite d1/dc/d16/d28/d2d/f2f [0,4194304] 0
2026-03-10T10:19:28.660 INFO:tasks.workunit.client.0.vm02.stdout:7/292: chown d1/d33 1590 1
2026-03-10T10:19:28.661 INFO:tasks.workunit.client.0.vm02.stdout:7/293: chown d1/dc/f26 554544 1
2026-03-10T10:19:28.662 INFO:tasks.workunit.client.0.vm02.stdout:7/294: chown d1/dc/d16/d28/d2d/f4c 868980978 1
2026-03-10T10:19:28.670 INFO:tasks.workunit.client.0.vm02.stdout:5/475: rename d1/d6a/f9e to d1/db/d11/d62/d67/fa6 0
2026-03-10T10:19:28.676 INFO:tasks.workunit.client.0.vm02.stdout:3/289: getdents d1/d20 0
2026-03-10T10:19:28.680 INFO:tasks.workunit.client.0.vm02.stdout:1/306: fdatasync d4/da/d1a/f19 0
2026-03-10T10:19:28.680 INFO:tasks.workunit.client.1.vm05.stdout:8/213: unlink d7/c13 0
2026-03-10T10:19:28.683 INFO:tasks.workunit.client.0.vm02.stdout:1/307: dwrite d4/da/d27/d38/f4e [0,4194304] 0
2026-03-10T10:19:28.684 INFO:tasks.workunit.client.1.vm05.stdout:8/214: dread d7/f37 [0,4194304] 0
2026-03-10T10:19:28.685 INFO:tasks.workunit.client.0.vm02.stdout:1/308: fsync d4/da/d1a/d22/f62 0
2026-03-10T10:19:28.689 INFO:tasks.workunit.client.0.vm02.stdout:1/309: dwrite d4/da/d1a/d22/f62 [0,4194304] 0
2026-03-10T10:19:28.695 INFO:tasks.workunit.client.0.vm02.stdout:1/310: chown d4/da/d27/d38/d3c/l55 53 1
2026-03-10T10:19:28.695 INFO:tasks.workunit.client.1.vm05.stdout:7/337: mkdir d5/d1d/d20/d2d/d68 0
2026-03-10T10:19:28.695 INFO:tasks.workunit.client.1.vm05.stdout:7/338: readlink d5/d1d/d20/d35/l4a 0
2026-03-10T10:19:28.696 INFO:tasks.workunit.client.1.vm05.stdout:7/339: dread d5/f22 [0,4194304] 0
2026-03-10T10:19:28.697 INFO:tasks.workunit.client.1.vm05.stdout:2/280: symlink db/d2d/l51 0
2026-03-10T10:19:28.698 INFO:tasks.workunit.client.1.vm05.stdout:6/234: dwrite f3 [4194304,4194304] 0
2026-03-10T10:19:28.706 INFO:tasks.workunit.client.0.vm02.stdout:7/295: readlink d1/dc/l9 0
2026-03-10T10:19:28.714 INFO:tasks.workunit.client.0.vm02.stdout:5/476: unlink d1/f3 0
2026-03-10T10:19:28.714 INFO:tasks.workunit.client.0.vm02.stdout:1/311: truncate d4/f18 960993 0
2026-03-10T10:19:28.715 INFO:tasks.workunit.client.1.vm05.stdout:3/340: link dd/d15/c25 dd/d15/c7a 0
2026-03-10T10:19:28.716 INFO:tasks.workunit.client.0.vm02.stdout:7/296: mknod d1/dc/d44/c58 0
2026-03-10T10:19:28.717 INFO:tasks.workunit.client.1.vm05.stdout:8/215: mknod d7/d14/d15/d3b/c41 0
2026-03-10T10:19:28.720 INFO:tasks.workunit.client.1.vm05.stdout:8/216: read d7/d14/d24/f2c [1329209,103501] 0
2026-03-10T10:19:28.727 INFO:tasks.workunit.client.0.vm02.stdout:5/477: fsync d1/db/d11/d16/d48/f5b 0
2026-03-10T10:19:28.727 INFO:tasks.workunit.client.1.vm05.stdout:2/281: fdatasync db/f2e 0
2026-03-10T10:19:28.727 INFO:tasks.workunit.client.0.vm02.stdout:3/290: truncate d1/d8/d21/f3c 1071779 0
2026-03-10T10:19:28.728 INFO:tasks.workunit.client.0.vm02.stdout:6/326: link d0/d3a/c6c d0/d8/d29/d6d/d32/d60/c6e 0
2026-03-10T10:19:28.730 INFO:tasks.workunit.client.1.vm05.stdout:7/340: sync
2026-03-10T10:19:28.731 INFO:tasks.workunit.client.0.vm02.stdout:6/327: dwrite d0/d8/f64 [0,4194304] 0
2026-03-10T10:19:28.745 INFO:tasks.workunit.client.0.vm02.stdout:7/297: creat d1/dc/d16/d28/d2d/d36/f59 x:0 0 0
2026-03-10T10:19:28.746 INFO:tasks.workunit.client.1.vm05.stdout:8/217: creat d7/d14/d24/f42 x:0 0 0
2026-03-10T10:19:28.748 INFO:tasks.workunit.client.1.vm05.stdout:8/218: truncate d7/d14/d15/f2e 671627 0
2026-03-10T10:19:28.749 INFO:tasks.workunit.client.1.vm05.stdout:8/219: read d7/d14/d15/f39 [975937,95047] 0
2026-03-10T10:19:28.760 INFO:tasks.workunit.client.0.vm02.stdout:7/298: rmdir d1/d33 39
2026-03-10T10:19:28.762 INFO:tasks.workunit.client.1.vm05.stdout:3/341: creat dd/d15/d24/d2c/f7b x:0 0 0
2026-03-10T10:19:28.763 INFO:tasks.workunit.client.1.vm05.stdout:8/220: rename d7/d14/f25 to d7/d14/d15/d3b/f43 0
2026-03-10T10:19:28.764 INFO:tasks.workunit.client.1.vm05.stdout:8/221: fdatasync d7/d14/f23 0
2026-03-10T10:19:28.765 INFO:tasks.workunit.client.0.vm02.stdout:9/284: truncate da/d3c/d4c/d38/d4a/f59 144264 0
2026-03-10T10:19:28.770 INFO:tasks.workunit.client.0.vm02.stdout:5/478: mkdir d1/db/d11/d13/d28/da7 0
2026-03-10T10:19:28.775 INFO:tasks.workunit.client.0.vm02.stdout:7/299: mknod d1/dc/d44/c5a 0
2026-03-10T10:19:28.777 INFO:tasks.workunit.client.0.vm02.stdout:3/291: rmdir d1/d5d 0
2026-03-10T10:19:28.782 INFO:tasks.workunit.client.1.vm05.stdout:2/282: sync
2026-03-10T10:19:28.805 INFO:tasks.workunit.client.1.vm05.stdout:2/283: write db/d28/f30 [7488730,11426] 0
2026-03-10T10:19:28.805 INFO:tasks.workunit.client.1.vm05.stdout:2/284: creat db/d2d/f52 x:0 0 0
2026-03-10T10:19:28.805 INFO:tasks.workunit.client.0.vm02.stdout:7/300: dread - d1/d33/f53 zero size
2026-03-10T10:19:28.805 INFO:tasks.workunit.client.0.vm02.stdout:5/479: link d1/db/f96 d1/d9c/fa8 0
2026-03-10T10:19:28.805 INFO:tasks.workunit.client.0.vm02.stdout:3/292: getdents d1/d20/d52 0
2026-03-10T10:19:28.805 INFO:tasks.workunit.client.0.vm02.stdout:3/293: fsync d1/d8/d21/f4a 0
2026-03-10T10:19:28.805 INFO:tasks.workunit.client.0.vm02.stdout:5/480: creat d1/d9c/fa9 x:0 0 0
2026-03-10T10:19:28.805 INFO:tasks.workunit.client.0.vm02.stdout:3/294: creat d1/d6/f62 x:0 0 0
2026-03-10T10:19:28.809 INFO:tasks.workunit.client.0.vm02.stdout:3/295: dwrite d1/d20/f4b [0,4194304] 0
2026-03-10T10:19:28.812 INFO:tasks.workunit.client.0.vm02.stdout:5/481: rename d1/db/d11/d16/f19 to d1/d6a/faa 0
2026-03-10T10:19:28.813 INFO:tasks.workunit.client.0.vm02.stdout:5/482: write d1/db/d11/d84/d40/d4f/f97 [158325,95985] 0
2026-03-10T10:19:28.815 INFO:tasks.workunit.client.0.vm02.stdout:3/296: creat d1/d6/f63 x:0 0 0
2026-03-10T10:19:28.817 INFO:tasks.workunit.client.0.vm02.stdout:5/483: symlink d1/db/d11/d13/lab 0
2026-03-10T10:19:28.818 INFO:tasks.workunit.client.0.vm02.stdout:5/484: write d1/d9c/fa8 [14961,106545] 0
2026-03-10T10:19:28.822 INFO:tasks.workunit.client.0.vm02.stdout:5/485: fsync d1/db/d11/d13/f25 0
2026-03-10T10:19:28.824 INFO:tasks.workunit.client.1.vm05.stdout:7/341: sync
2026-03-10T10:19:28.825 INFO:tasks.workunit.client.0.vm02.stdout:6/328: sync
2026-03-10T10:19:28.827 INFO:tasks.workunit.client.1.vm05.stdout:5/338: dread f5 [0,4194304] 0
2026-03-10T10:19:28.828 INFO:tasks.workunit.client.0.vm02.stdout:5/486: dwrite d1/db/d11/d16/d79/d85/fa0 [0,4194304] 0
2026-03-10T10:19:28.829 INFO:tasks.workunit.client.1.vm05.stdout:7/342: fsync d5/d17/f19 0
2026-03-10T10:19:28.830 INFO:tasks.workunit.client.1.vm05.stdout:7/343: dread - d5/d1d/d20/d2d/f55 zero size
2026-03-10T10:19:28.834 INFO:tasks.workunit.client.0.vm02.stdout:5/487: rmdir d1/db/d11 39
2026-03-10T10:19:28.841 INFO:tasks.workunit.client.0.vm02.stdout:5/488: symlink d1/db/d11/d16/d79/lac 0
2026-03-10T10:19:28.841 INFO:tasks.workunit.client.0.vm02.stdout:5/489: stat d1/db/d11/f47 0
2026-03-10T10:19:28.842 INFO:tasks.workunit.client.0.vm02.stdout:5/490: stat d1/db/d11/d13/d28/d37/d3d/f75 0
2026-03-10T10:19:28.843 INFO:tasks.workunit.client.0.vm02.stdout:5/491: unlink d1/c20 0
2026-03-10T10:19:28.845 INFO:tasks.workunit.client.0.vm02.stdout:5/492: chown d1/db/d11/d13/d28/d37/d3d/l9a 712 1
2026-03-10T10:19:28.845 INFO:tasks.workunit.client.0.vm02.stdout:5/493: fdatasync d1/db/d11/d13/f21 0
2026-03-10T10:19:28.847 INFO:tasks.workunit.client.0.vm02.stdout:5/494: mknod d1/db/d11/cad 0
2026-03-10T10:19:28.847 INFO:tasks.workunit.client.0.vm02.stdout:5/495: readlink d1/db/d11/d13/d28/d37/l8c 0
2026-03-10T10:19:28.848 INFO:tasks.workunit.client.0.vm02.stdout:5/496: chown d1/db/d11/d16/d79/d85/fa0 47906 1
2026-03-10T10:19:28.849 INFO:tasks.workunit.client.0.vm02.stdout:5/497: rmdir d1/db/d11/d16/d48 39
2026-03-10T10:19:28.850 INFO:tasks.workunit.client.0.vm02.stdout:5/498: chown d1/c7 3 1
2026-03-10T10:19:28.855 INFO:tasks.workunit.client.0.vm02.stdout:5/499: stat d1/db/d11/d84/d40/d4f 0
2026-03-10T10:19:28.855 INFO:tasks.workunit.client.0.vm02.stdout:5/500: symlink d1/db/d11/d62/d67/lae 0
2026-03-10T10:19:28.855 INFO:tasks.workunit.client.0.vm02.stdout:5/501: link d1/db/d11/f33 d1/db/d11/d62/d67/faf 0
2026-03-10T10:19:28.886 INFO:tasks.workunit.client.1.vm05.stdout:9/254: dread d0/d1/fb [0,4194304] 0
2026-03-10T10:19:28.887 INFO:tasks.workunit.client.1.vm05.stdout:9/255: fdatasync d0/df/d11/f2d 0
2026-03-10T10:19:28.889 INFO:tasks.workunit.client.1.vm05.stdout:7/344: dread d5/d1d/f32 [0,4194304] 0
2026-03-10T10:19:28.890 INFO:tasks.workunit.client.1.vm05.stdout:7/345: write d5/d1d/d29/d3e/f65 [18951,125861] 0
2026-03-10T10:19:28.891 INFO:tasks.workunit.client.1.vm05.stdout:7/346: dread - d5/d1d/d20/d2d/f4c zero size
2026-03-10T10:19:28.893 INFO:tasks.workunit.client.1.vm05.stdout:7/347: chown d5/c14 41 1
2026-03-10T10:19:28.896 INFO:tasks.workunit.client.0.vm02.stdout:7/301: dread d1/fd [0,4194304] 0
2026-03-10T10:19:28.897 INFO:tasks.workunit.client.1.vm05.stdout:7/348: symlink d5/d1d/d29/d3e/l69 0
2026-03-10T10:19:28.901 INFO:tasks.workunit.client.0.vm02.stdout:7/302: rmdir d1/d1b/d4d 0
2026-03-10T10:19:28.903 INFO:tasks.workunit.client.0.vm02.stdout:7/303: write d1/dc/f3 [8047989,15619] 0
2026-03-10T10:19:28.912 INFO:tasks.workunit.client.0.vm02.stdout:7/304: stat d1/dc/d44/l47 0
2026-03-10T10:19:28.912 INFO:tasks.workunit.client.0.vm02.stdout:2/360: dread d0/f70 [0,4194304] 0
2026-03-10T10:19:28.912 INFO:tasks.workunit.client.0.vm02.stdout:2/361: write d0/d1a/d24/f6e [769765,107195] 0
2026-03-10T10:19:28.912 INFO:tasks.workunit.client.0.vm02.stdout:2/362: getdents d0/d1a/d24 0
2026-03-10T10:19:28.912 INFO:tasks.workunit.client.0.vm02.stdout:2/363: stat d0/d1a/d24/f48 0
2026-03-10T10:19:28.920 INFO:tasks.workunit.client.0.vm02.stdout:2/364: rmdir d0/d7a 0
2026-03-10T10:19:28.921 INFO:tasks.workunit.client.0.vm02.stdout:2/365: mknod d0/d1a/d49/d5e/c7b 0
2026-03-10T10:19:28.922 INFO:tasks.workunit.client.0.vm02.stdout:2/366: write d0/d1a/f25 [564640,47383] 0
2026-03-10T10:19:28.925 INFO:tasks.workunit.client.0.vm02.stdout:2/367: truncate d0/fe 6810234 0
2026-03-10T10:19:28.925 INFO:tasks.workunit.client.0.vm02.stdout:2/368: chown d0/d1a/d49/f64 786780 1
2026-03-10T10:19:28.957 INFO:tasks.workunit.client.0.vm02.stdout:7/305: sync
2026-03-10T10:19:28.959 INFO:tasks.workunit.client.0.vm02.stdout:5/502: read d1/db/d11/d1a/f27 [1255292,103171] 0
2026-03-10T10:19:28.960 INFO:tasks.workunit.client.0.vm02.stdout:7/306: symlink d1/dc/d16/d28/l5b 0
2026-03-10T10:19:28.963 INFO:tasks.workunit.client.0.vm02.stdout:7/307: fdatasync d1/d1b/f43 0
2026-03-10T10:19:28.965 INFO:tasks.workunit.client.0.vm02.stdout:7/308: dread d1/dc/f25 [0,4194304] 0
2026-03-10T10:19:28.966 INFO:tasks.workunit.client.0.vm02.stdout:3/297: dread d1/d8/d21/f35 [0,4194304] 0
2026-03-10T10:19:28.975 INFO:tasks.workunit.client.0.vm02.stdout:7/309: creat d1/dc/d16/d28/d2d/d36/f5c x:0 0 0
2026-03-10T10:19:28.975 INFO:tasks.workunit.client.0.vm02.stdout:3/298: creat d1/d20/f64 x:0 0 0
2026-03-10T10:19:28.975 INFO:tasks.workunit.client.0.vm02.stdout:7/310: symlink d1/dc/d10/l5d 0
2026-03-10T10:19:28.975 INFO:tasks.workunit.client.0.vm02.stdout:7/311: symlink d1/dc/d16/d28/l5e 0
2026-03-10T10:19:28.975 INFO:tasks.workunit.client.0.vm02.stdout:7/312: mkdir d1/dc/d44/d5f 0
2026-03-10T10:19:28.979 INFO:tasks.workunit.client.0.vm02.stdout:7/313: rename d1/d33 to d1/dc/d60 0
2026-03-10T10:19:28.980 INFO:tasks.workunit.client.0.vm02.stdout:7/314: creat d1/d1b/f61 x:0 0 0
2026-03-10T10:19:28.981 INFO:tasks.workunit.client.0.vm02.stdout:7/315: dread d1/dc/f25 [0,4194304] 0
2026-03-10T10:19:28.984 INFO:tasks.workunit.client.0.vm02.stdout:7/316: dread d1/dc/d16/d28/d2d/f2f [0,4194304] 0
2026-03-10T10:19:28.985 INFO:tasks.workunit.client.0.vm02.stdout:7/317: fdatasync d1/f17 0
2026-03-10T10:19:29.000 INFO:tasks.workunit.client.0.vm02.stdout:7/318: sync
2026-03-10T10:19:29.002 INFO:tasks.workunit.client.0.vm02.stdout:4/432: rmdir d1/d41/d5e/d78/d1a 39
2026-03-10T10:19:29.003 INFO:tasks.workunit.client.0.vm02.stdout:0/353: truncate d9/d18/d1a/d22/d24/f26 3614044 0
2026-03-10T10:19:29.005 INFO:tasks.workunit.client.0.vm02.stdout:4/433: dread d1/d32/f69 [0,4194304] 0
2026-03-10T10:19:29.009 INFO:tasks.workunit.client.1.vm05.stdout:4/238: write d1/d31/f1b [770081,130131] 0
2026-03-10T10:19:29.018 INFO:tasks.workunit.client.0.vm02.stdout:4/434: unlink d1/d41/d5e/d78/d55/c56 0
2026-03-10T10:19:29.020 INFO:tasks.workunit.client.0.vm02.stdout:8/365: write d1/d1c/f20 [851940,92987] 0
2026-03-10T10:19:29.021 INFO:tasks.workunit.client.0.vm02.stdout:8/366: write d1/f40 [1715317,48180] 0
2026-03-10T10:19:29.028 INFO:tasks.workunit.client.1.vm05.stdout:0/277: dwrite d1/d2/d9/f32 [0,4194304] 0
2026-03-10T10:19:29.029 INFO:tasks.workunit.client.1.vm05.stdout:0/278: chown d1/d2/fc 12820748 1
2026-03-10T10:19:29.031 INFO:tasks.workunit.client.0.vm02.stdout:4/435: creat d1/d41/d5e/d78/d44/f90 x:0 0 0
2026-03-10T10:19:29.036 INFO:tasks.workunit.client.0.vm02.stdout:4/436: dwrite d1/d52/d53/f5b [0,4194304] 0
2026-03-10T10:19:29.042 INFO:tasks.workunit.client.0.vm02.stdout:7/319: dread d1/f34 [4194304,4194304] 0
2026-03-10T10:19:29.050 INFO:tasks.workunit.client.0.vm02.stdout:7/320: dwrite d1/dc/d16/d28/f4e [0,4194304] 0
2026-03-10T10:19:29.057 INFO:tasks.workunit.client.0.vm02.stdout:7/321: dwrite d1/d1b/f61 [0,4194304] 0
2026-03-10T10:19:29.062 INFO:tasks.workunit.client.0.vm02.stdout:7/322: chown d1/dc/d55 0 1
2026-03-10T10:19:29.071 INFO:tasks.workunit.client.1.vm05.stdout:1/255: dwrite d4/d20/f31 [0,4194304] 0
2026-03-10T10:19:29.072 INFO:tasks.workunit.client.0.vm02.stdout:0/354: creat d9/d18/d1a/d43/f72 x:0 0 0
2026-03-10T10:19:29.073 INFO:tasks.workunit.client.0.vm02.stdout:0/355: read d9/d18/f1e [3097412,75573] 0
2026-03-10T10:19:29.077 INFO:tasks.workunit.client.1.vm05.stdout:1/256: dread d4/d39/d3e/f3f [0,4194304] 0
2026-03-10T10:19:29.079 INFO:tasks.workunit.client.1.vm05.stdout:1/257: write d4/f46 [532601,3003] 0
2026-03-10T10:19:29.082 INFO:tasks.workunit.client.0.vm02.stdout:8/367: dread - d1/d1c/d43/d5b/f63 zero size
2026-03-10T10:19:29.093 INFO:tasks.workunit.client.0.vm02.stdout:7/323: write d1/dc/f25 [1219736,105654] 0
2026-03-10T10:19:29.094 INFO:tasks.workunit.client.0.vm02.stdout:7/324: chown d1/dc/d60/f53 15855 1
2026-03-10T10:19:29.095 INFO:tasks.workunit.client.1.vm05.stdout:4/239: symlink d1/d31/l4d 0
2026-03-10T10:19:29.099 INFO:tasks.workunit.client.0.vm02.stdout:0/356: rename d9/d34/l37 to d9/d18/d1a/d43/d49/l73 0
2026-03-10T10:19:29.106 INFO:tasks.workunit.client.0.vm02.stdout:8/368: chown d1/d1c/d23/d25/c30 0 1
2026-03-10T10:19:29.109 INFO:tasks.workunit.client.0.vm02.stdout:8/369: dwrite d1/d1c/f1e [0,4194304] 0
2026-03-10T10:19:29.111 INFO:tasks.workunit.client.0.vm02.stdout:8/370: readlink d1/d1c/l41 0
2026-03-10T10:19:29.119 INFO:tasks.workunit.client.1.vm05.stdout:1/258: creat d4/d39/f50 x:0 0 0
2026-03-10T10:19:29.122 INFO:tasks.workunit.client.0.vm02.stdout:1/312: dwrite d4/f26 [0,4194304] 0
2026-03-10T10:19:29.124 INFO:tasks.workunit.client.0.vm02.stdout:1/313: stat d4/da/d1a/d22 0
2026-03-10T10:19:29.126
INFO:tasks.workunit.client.1.vm05.stdout:4/240: rmdir d1/d31/dc/d40 39 2026-03-10T10:19:29.126 INFO:tasks.workunit.client.1.vm05.stdout:6/235: truncate f2 4257416 0 2026-03-10T10:19:29.130 INFO:tasks.workunit.client.0.vm02.stdout:4/437: link d1/d10/db/f24 d1/d10/db/f91 0 2026-03-10T10:19:29.138 INFO:tasks.workunit.client.1.vm05.stdout:4/241: chown d1/d31/dc/d40/d45 222 1 2026-03-10T10:19:29.139 INFO:tasks.workunit.client.1.vm05.stdout:6/236: unlink dd/d36/d3f/d12/d44/d2a/d3d/d3e/f4c 0 2026-03-10T10:19:29.141 INFO:tasks.workunit.client.0.vm02.stdout:1/314: creat d4/da/d27/f66 x:0 0 0 2026-03-10T10:19:29.144 INFO:tasks.workunit.client.1.vm05.stdout:1/259: dwrite d4/df/d1c/f23 [0,4194304] 0 2026-03-10T10:19:29.154 INFO:tasks.workunit.client.0.vm02.stdout:0/357: rename d9/d18/d1a/d22/d4a to d9/d18/d1a/d43/d74 0 2026-03-10T10:19:29.154 INFO:tasks.workunit.client.1.vm05.stdout:8/222: dwrite d7/f21 [0,4194304] 0 2026-03-10T10:19:29.154 INFO:tasks.workunit.client.1.vm05.stdout:3/342: dwrite dd/fe [0,4194304] 0 2026-03-10T10:19:29.154 INFO:tasks.workunit.client.1.vm05.stdout:2/285: truncate db/d1c/d40/f4d 3089178 0 2026-03-10T10:19:29.154 INFO:tasks.workunit.client.1.vm05.stdout:2/286: chown db/d1c/f3d 26086103 1 2026-03-10T10:19:29.154 INFO:tasks.workunit.client.1.vm05.stdout:8/223: chown d7/d14/f23 54 1 2026-03-10T10:19:29.157 INFO:tasks.workunit.client.0.vm02.stdout:0/358: symlink d9/d34/d3d/d67/l75 0 2026-03-10T10:19:29.161 INFO:tasks.workunit.client.1.vm05.stdout:9/256: write d0/df/f3b [511552,11723] 0 2026-03-10T10:19:29.162 INFO:tasks.workunit.client.1.vm05.stdout:7/349: write d5/d26/f41 [327554,58740] 0 2026-03-10T10:19:29.163 INFO:tasks.workunit.client.1.vm05.stdout:9/257: readlink d0/d1/d13/de/d21/l29 0 2026-03-10T10:19:29.165 INFO:tasks.workunit.client.1.vm05.stdout:7/350: chown d5/dd/f12 139818 1 2026-03-10T10:19:29.165 INFO:tasks.workunit.client.1.vm05.stdout:3/343: dwrite dd/d15/d24/f2f [0,4194304] 0 2026-03-10T10:19:29.172 
INFO:tasks.workunit.client.1.vm05.stdout:5/339: dwrite f9 [0,4194304] 0 2026-03-10T10:19:29.173 INFO:tasks.workunit.client.1.vm05.stdout:3/344: truncate dd/d15/d24/d2c/f60 834055 0 2026-03-10T10:19:29.174 INFO:tasks.workunit.client.0.vm02.stdout:0/359: mknod d9/d18/d1a/d22/d24/c76 0 2026-03-10T10:19:29.176 INFO:tasks.workunit.client.0.vm02.stdout:0/360: write d9/d18/d1a/d22/d24/f4f [1546416,30390] 0 2026-03-10T10:19:29.176 INFO:tasks.workunit.client.1.vm05.stdout:4/242: dwrite d1/d3/f12 [0,4194304] 0 2026-03-10T10:19:29.179 INFO:tasks.workunit.client.0.vm02.stdout:1/315: link d4/da/d1a/l1e d4/l67 0 2026-03-10T10:19:29.188 INFO:tasks.workunit.client.1.vm05.stdout:5/340: dwrite da/f41 [0,4194304] 0 2026-03-10T10:19:29.192 INFO:tasks.workunit.client.1.vm05.stdout:5/341: dread da/f41 [0,4194304] 0 2026-03-10T10:19:29.200 INFO:tasks.workunit.client.0.vm02.stdout:0/361: fdatasync d9/f28 0 2026-03-10T10:19:29.205 INFO:tasks.workunit.client.0.vm02.stdout:2/369: dwrite d0/f2c [4194304,4194304] 0 2026-03-10T10:19:29.220 INFO:tasks.workunit.client.0.vm02.stdout:5/503: write d1/db/d11/d62/f8f [264357,89498] 0 2026-03-10T10:19:29.220 INFO:tasks.workunit.client.1.vm05.stdout:7/351: symlink d5/d1d/d20/d3b/l6a 0 2026-03-10T10:19:29.222 INFO:tasks.workunit.client.0.vm02.stdout:0/362: rename l7 to d9/d18/d1a/d22/d24/l77 0 2026-03-10T10:19:29.223 INFO:tasks.workunit.client.0.vm02.stdout:0/363: chown d9/d18/d1a/l1d 265482 1 2026-03-10T10:19:29.227 INFO:tasks.workunit.client.0.vm02.stdout:5/504: dwrite d1/db/d11/f3e [0,4194304] 0 2026-03-10T10:19:29.229 INFO:tasks.workunit.client.0.vm02.stdout:7/325: write d1/d1b/f43 [4109142,83707] 0 2026-03-10T10:19:29.231 INFO:tasks.workunit.client.0.vm02.stdout:0/364: dwrite d9/d18/d1a/d22/d24/f4f [0,4194304] 0 2026-03-10T10:19:29.235 INFO:tasks.workunit.client.0.vm02.stdout:2/370: mknod d0/d10/d69/c7c 0 2026-03-10T10:19:29.236 INFO:tasks.workunit.client.1.vm05.stdout:2/287: mknod db/d4e/c53 0 2026-03-10T10:19:29.240 
INFO:tasks.workunit.client.0.vm02.stdout:3/299: write d1/f54 [1257549,67440] 0 2026-03-10T10:19:29.243 INFO:tasks.workunit.client.1.vm05.stdout:4/243: rename d1/d31/c1e to d1/d31/dc/d40/c4e 0 2026-03-10T10:19:29.244 INFO:tasks.workunit.client.1.vm05.stdout:5/342: sync 2026-03-10T10:19:29.252 INFO:tasks.workunit.client.1.vm05.stdout:3/345: mkdir dd/d15/d24/d2c/d3b/d7c 0 2026-03-10T10:19:29.273 INFO:tasks.workunit.client.0.vm02.stdout:0/365: symlink d9/d18/d1a/d46/l78 0 2026-03-10T10:19:29.277 INFO:tasks.workunit.client.0.vm02.stdout:2/371: symlink d0/d1a/d49/l7d 0 2026-03-10T10:19:29.278 INFO:tasks.workunit.client.0.vm02.stdout:2/372: write d0/d10/f6a [579834,111212] 0 2026-03-10T10:19:29.282 INFO:tasks.workunit.client.0.vm02.stdout:5/505: dread d1/db/d11/d84/d40/f66 [0,4194304] 0 2026-03-10T10:19:29.289 INFO:tasks.workunit.client.1.vm05.stdout:2/288: creat db/d1c/f54 x:0 0 0 2026-03-10T10:19:29.289 INFO:tasks.workunit.client.0.vm02.stdout:7/326: symlink d1/dc/d55/l62 0 2026-03-10T10:19:29.289 INFO:tasks.workunit.client.0.vm02.stdout:7/327: chown d1/dc/f3 99167120 1 2026-03-10T10:19:29.289 INFO:tasks.workunit.client.0.vm02.stdout:0/366: mkdir d9/d18/d1a/d22/d24/d79 0 2026-03-10T10:19:29.289 INFO:tasks.workunit.client.0.vm02.stdout:0/367: chown d9/d34/d3d/f69 2875610 1 2026-03-10T10:19:29.289 INFO:tasks.workunit.client.0.vm02.stdout:2/373: rmdir d0/d10/d69 39 2026-03-10T10:19:29.289 INFO:tasks.workunit.client.0.vm02.stdout:3/300: symlink d1/l65 0 2026-03-10T10:19:29.290 INFO:tasks.workunit.client.0.vm02.stdout:5/506: creat d1/db/d11/d16/d79/d85/fb0 x:0 0 0 2026-03-10T10:19:29.291 INFO:tasks.workunit.client.0.vm02.stdout:0/368: creat d9/d34/d3d/d65/f7a x:0 0 0 2026-03-10T10:19:29.295 INFO:tasks.workunit.client.0.vm02.stdout:5/507: dwrite d1/db/d11/d16/d79/d85/f94 [0,4194304] 0 2026-03-10T10:19:29.297 INFO:tasks.workunit.client.0.vm02.stdout:5/508: dread d1/db/f56 [0,4194304] 0 2026-03-10T10:19:29.299 INFO:tasks.workunit.client.1.vm05.stdout:8/224: link d7/d14/f40 
d7/f44 0 2026-03-10T10:19:29.300 INFO:tasks.workunit.client.0.vm02.stdout:7/328: dread d1/f17 [0,4194304] 0 2026-03-10T10:19:29.309 INFO:tasks.workunit.client.0.vm02.stdout:3/301: mkdir d1/d8/d21/d66 0 2026-03-10T10:19:29.309 INFO:tasks.workunit.client.1.vm05.stdout:1/260: link d4/l42 d4/d39/l51 0 2026-03-10T10:19:29.319 INFO:tasks.workunit.client.1.vm05.stdout:2/289: unlink db/d28/c39 0 2026-03-10T10:19:29.320 INFO:tasks.workunit.client.1.vm05.stdout:4/244: fdatasync d1/d3/f5 0 2026-03-10T10:19:29.324 INFO:tasks.workunit.client.0.vm02.stdout:5/509: mknod d1/db/d11/d13/cb1 0 2026-03-10T10:19:29.336 INFO:tasks.workunit.client.1.vm05.stdout:2/290: mkdir db/d4e/d55 0 2026-03-10T10:19:29.336 INFO:tasks.workunit.client.0.vm02.stdout:7/329: mknod d1/dc/d44/d5f/c63 0 2026-03-10T10:19:29.336 INFO:tasks.workunit.client.0.vm02.stdout:3/302: mknod d1/d8/d21/d66/c67 0 2026-03-10T10:19:29.338 INFO:tasks.workunit.client.1.vm05.stdout:1/261: chown d4/l42 11225066 1 2026-03-10T10:19:29.339 INFO:tasks.workunit.client.1.vm05.stdout:1/262: fdatasync d4/d39/f50 0 2026-03-10T10:19:29.341 INFO:tasks.workunit.client.1.vm05.stdout:4/245: truncate d1/d3/f10 863243 0 2026-03-10T10:19:29.345 INFO:tasks.workunit.client.1.vm05.stdout:2/291: link f1 db/d1c/f56 0 2026-03-10T10:19:29.349 INFO:tasks.workunit.client.1.vm05.stdout:4/246: write d1/d3/f4a [9023673,2343] 0 2026-03-10T10:19:29.353 INFO:tasks.workunit.client.1.vm05.stdout:2/292: symlink db/d4e/l57 0 2026-03-10T10:19:29.357 INFO:tasks.workunit.client.1.vm05.stdout:4/247: rename d1/d31/dc/c3f to d1/d31/c4f 0 2026-03-10T10:19:29.363 INFO:tasks.workunit.client.1.vm05.stdout:1/263: dread d4/d39/f3a [0,4194304] 0 2026-03-10T10:19:29.364 INFO:tasks.workunit.client.0.vm02.stdout:3/303: getdents d1/d58 0 2026-03-10T10:19:29.364 INFO:tasks.workunit.client.0.vm02.stdout:3/304: stat d1/d6/f36 0 2026-03-10T10:19:29.364 INFO:tasks.workunit.client.0.vm02.stdout:3/305: symlink d1/d6/l68 0 2026-03-10T10:19:29.370 
INFO:tasks.workunit.client.1.vm05.stdout:1/264: dwrite d4/df/d1c/f38 [0,4194304] 0 2026-03-10T10:19:29.370 INFO:tasks.workunit.client.1.vm05.stdout:2/293: mknod db/d28/d4f/c58 0 2026-03-10T10:19:29.373 INFO:tasks.workunit.client.0.vm02.stdout:3/306: dread d1/d8/f46 [0,4194304] 0 2026-03-10T10:19:29.377 INFO:tasks.workunit.client.1.vm05.stdout:6/237: dread dd/d36/d3f/d12/d44/f2f [0,4194304] 0 2026-03-10T10:19:29.377 INFO:tasks.workunit.client.1.vm05.stdout:1/265: rmdir d4/df/d1c 39 2026-03-10T10:19:29.379 INFO:tasks.workunit.client.0.vm02.stdout:3/307: symlink d1/d6/l69 0 2026-03-10T10:19:29.380 INFO:tasks.workunit.client.0.vm02.stdout:9/285: write da/f28 [3228899,120022] 0 2026-03-10T10:19:29.382 INFO:tasks.workunit.client.0.vm02.stdout:9/286: symlink da/d3c/d4c/d38/l5b 0 2026-03-10T10:19:29.383 INFO:tasks.workunit.client.0.vm02.stdout:9/287: chown da/d3c/l43 13615079 1 2026-03-10T10:19:29.391 INFO:tasks.workunit.client.0.vm02.stdout:3/308: creat d1/f6a x:0 0 0 2026-03-10T10:19:29.393 INFO:tasks.workunit.client.0.vm02.stdout:9/288: creat da/f5c x:0 0 0 2026-03-10T10:19:29.396 INFO:tasks.workunit.client.0.vm02.stdout:9/289: mknod da/d3c/d4c/d38/d4a/c5d 0 2026-03-10T10:19:29.397 INFO:tasks.workunit.client.0.vm02.stdout:9/290: mknod da/d3c/c5e 0 2026-03-10T10:19:29.398 INFO:tasks.workunit.client.0.vm02.stdout:9/291: chown da/d3c/d4c 354772 1 2026-03-10T10:19:29.399 INFO:tasks.workunit.client.1.vm05.stdout:2/294: getdents db 0 2026-03-10T10:19:29.402 INFO:tasks.workunit.client.1.vm05.stdout:2/295: rmdir db 39 2026-03-10T10:19:29.405 INFO:tasks.workunit.client.1.vm05.stdout:2/296: fdatasync db/f23 0 2026-03-10T10:19:29.406 INFO:tasks.workunit.client.1.vm05.stdout:2/297: chown db/f26 1473516 1 2026-03-10T10:19:29.409 INFO:tasks.workunit.client.1.vm05.stdout:2/298: write db/f23 [1020578,21532] 0 2026-03-10T10:19:29.410 INFO:tasks.workunit.client.1.vm05.stdout:2/299: mkdir db/d28/d4f/d59 0 2026-03-10T10:19:29.422 INFO:tasks.workunit.client.1.vm05.stdout:2/300: sync 
2026-03-10T10:19:29.422 INFO:tasks.workunit.client.1.vm05.stdout:2/301: dread - db/d1c/f3d zero size 2026-03-10T10:19:29.448 INFO:tasks.workunit.client.0.vm02.stdout:6/329: truncate d0/d8/d9/f14 1733206 0 2026-03-10T10:19:29.450 INFO:tasks.workunit.client.0.vm02.stdout:6/330: mkdir d0/d8/d29/d6d/d32/d60/d6f 0 2026-03-10T10:19:29.451 INFO:tasks.workunit.client.0.vm02.stdout:6/331: creat d0/d8/d29/d6d/d32/f70 x:0 0 0 2026-03-10T10:19:29.453 INFO:tasks.workunit.client.0.vm02.stdout:6/332: mknod d0/d8/d29/c71 0 2026-03-10T10:19:29.453 INFO:tasks.workunit.client.0.vm02.stdout:6/333: readlink d0/d8/d29/d2f/l65 0 2026-03-10T10:19:29.454 INFO:tasks.workunit.client.0.vm02.stdout:6/334: read d0/d8/f64 [1733482,102912] 0 2026-03-10T10:19:29.481 INFO:tasks.workunit.client.1.vm05.stdout:6/238: write f3 [3029932,25193] 0 2026-03-10T10:19:29.483 INFO:tasks.workunit.client.0.vm02.stdout:8/371: dwrite d1/d2/f27 [0,4194304] 0 2026-03-10T10:19:29.483 INFO:tasks.workunit.client.1.vm05.stdout:6/239: dread dd/d1b/f1d [0,4194304] 0 2026-03-10T10:19:29.489 INFO:tasks.workunit.client.0.vm02.stdout:8/372: fdatasync d1/d1c/f3f 0 2026-03-10T10:19:29.493 INFO:tasks.workunit.client.0.vm02.stdout:8/373: dwrite d1/d1c/d24/d35/f6e [0,4194304] 0 2026-03-10T10:19:29.497 INFO:tasks.workunit.client.0.vm02.stdout:8/374: creat d1/f73 x:0 0 0 2026-03-10T10:19:29.497 INFO:tasks.workunit.client.0.vm02.stdout:8/375: fsync d1/f40 0 2026-03-10T10:19:29.498 INFO:tasks.workunit.client.0.vm02.stdout:8/376: truncate d1/d1c/f20 4645334 0 2026-03-10T10:19:29.523 INFO:tasks.workunit.client.0.vm02.stdout:4/438: write d1/d10/db/f91 [707583,67935] 0 2026-03-10T10:19:29.526 INFO:tasks.workunit.client.0.vm02.stdout:8/377: symlink d1/d1c/d23/d25/l74 0 2026-03-10T10:19:29.536 INFO:tasks.workunit.client.1.vm05.stdout:9/258: dwrite d0/df/d11/f2c [0,4194304] 0 2026-03-10T10:19:29.538 INFO:tasks.workunit.client.1.vm05.stdout:9/259: fsync d0/d1/d16/f40 0 2026-03-10T10:19:29.538 INFO:tasks.workunit.client.0.vm02.stdout:1/316: 
dwrite d4/da/d27/d38/f3b [0,4194304] 0 2026-03-10T10:19:29.552 INFO:tasks.workunit.client.1.vm05.stdout:9/260: fdatasync d0/f7 0 2026-03-10T10:19:29.559 INFO:tasks.workunit.client.1.vm05.stdout:4/248: creat d1/d31/dc/d40/d45/f50 x:0 0 0 2026-03-10T10:19:29.559 INFO:tasks.workunit.client.0.vm02.stdout:1/317: getdents d4/da/d27/d38 0 2026-03-10T10:19:29.561 INFO:tasks.workunit.client.1.vm05.stdout:9/261: mkdir d0/d1/d13/d55 0 2026-03-10T10:19:29.562 INFO:tasks.workunit.client.1.vm05.stdout:4/249: truncate d1/d31/dc/f2a 4442830 0 2026-03-10T10:19:29.562 INFO:tasks.workunit.client.1.vm05.stdout:0/279: write d1/d2/d9/fd [491543,121086] 0 2026-03-10T10:19:29.565 INFO:tasks.workunit.client.1.vm05.stdout:9/262: symlink d0/df/d11/l56 0 2026-03-10T10:19:29.565 INFO:tasks.workunit.client.1.vm05.stdout:7/352: write d5/d26/f39 [1581803,55987] 0 2026-03-10T10:19:29.566 INFO:tasks.workunit.client.0.vm02.stdout:0/369: rmdir d9/d18/d1a 39 2026-03-10T10:19:29.568 INFO:tasks.workunit.client.0.vm02.stdout:1/318: rename d4/da/d1a/d22/l63 to d4/da/d1a/l68 0 2026-03-10T10:19:29.569 INFO:tasks.workunit.client.1.vm05.stdout:3/346: write fa [539096,64744] 0 2026-03-10T10:19:29.571 INFO:tasks.workunit.client.1.vm05.stdout:0/280: creat d1/d2/d9/d31/d12/f5b x:0 0 0 2026-03-10T10:19:29.571 INFO:tasks.workunit.client.1.vm05.stdout:4/250: dwrite d1/d3/f12 [0,4194304] 0 2026-03-10T10:19:29.571 INFO:tasks.workunit.client.1.vm05.stdout:3/347: read - dd/d15/d1f/f53 zero size 2026-03-10T10:19:29.579 INFO:tasks.workunit.client.1.vm05.stdout:7/353: symlink d5/d1d/d20/d35/l6b 0 2026-03-10T10:19:29.582 INFO:tasks.workunit.client.1.vm05.stdout:0/281: sync 2026-03-10T10:19:29.582 INFO:tasks.workunit.client.1.vm05.stdout:3/348: sync 2026-03-10T10:19:29.582 INFO:tasks.workunit.client.1.vm05.stdout:7/354: dread - d5/dd/f28 zero size 2026-03-10T10:19:29.583 INFO:tasks.workunit.client.1.vm05.stdout:3/349: stat dd/l4a 0 2026-03-10T10:19:29.584 INFO:tasks.workunit.client.1.vm05.stdout:7/355: chown 
d5/d1d/d29/d3e/l69 32 1 2026-03-10T10:19:29.584 INFO:tasks.workunit.client.1.vm05.stdout:7/356: fdatasync d5/d17/f4f 0 2026-03-10T10:19:29.586 INFO:tasks.workunit.client.0.vm02.stdout:2/374: write d0/d1a/f47 [1594,109562] 0 2026-03-10T10:19:29.590 INFO:tasks.workunit.client.0.vm02.stdout:2/375: dwrite d0/d1a/f31 [0,4194304] 0 2026-03-10T10:19:29.595 INFO:tasks.workunit.client.0.vm02.stdout:2/376: dwrite d0/d1a/d24/f6e [0,4194304] 0 2026-03-10T10:19:29.597 INFO:tasks.workunit.client.0.vm02.stdout:2/377: write d0/d1a/d24/f34 [130597,105002] 0 2026-03-10T10:19:29.603 INFO:tasks.workunit.client.1.vm05.stdout:5/343: write da/db/f29 [3701738,109867] 0 2026-03-10T10:19:29.611 INFO:tasks.workunit.client.1.vm05.stdout:8/225: write d7/f1c [1196158,84021] 0 2026-03-10T10:19:29.627 INFO:tasks.workunit.client.0.vm02.stdout:5/510: write d1/f7f [384716,9205] 0 2026-03-10T10:19:29.627 INFO:tasks.workunit.client.1.vm05.stdout:8/226: dwrite d7/d14/d15/f2e [0,4194304] 0 2026-03-10T10:19:29.627 INFO:tasks.workunit.client.1.vm05.stdout:0/282: creat d1/d2/d9/d31/d13/d2f/d49/f5c x:0 0 0 2026-03-10T10:19:29.627 INFO:tasks.workunit.client.1.vm05.stdout:4/251: creat d1/d31/d4b/f51 x:0 0 0 2026-03-10T10:19:29.627 INFO:tasks.workunit.client.1.vm05.stdout:4/252: dwrite f0 [0,4194304] 0 2026-03-10T10:19:29.635 INFO:tasks.workunit.client.1.vm05.stdout:3/350: creat dd/d20/d56/f7d x:0 0 0 2026-03-10T10:19:29.641 INFO:tasks.workunit.client.0.vm02.stdout:7/330: truncate d1/dc/f25 1112061 0 2026-03-10T10:19:29.644 INFO:tasks.workunit.client.0.vm02.stdout:5/511: fsync d1/db/d11/d13/d28/f35 0 2026-03-10T10:19:29.646 INFO:tasks.workunit.client.0.vm02.stdout:7/331: creat d1/dc/d55/f64 x:0 0 0 2026-03-10T10:19:29.670 INFO:tasks.workunit.client.0.vm02.stdout:9/292: write da/d3c/d4c/f2b [1911585,6923] 0 2026-03-10T10:19:29.670 INFO:tasks.workunit.client.0.vm02.stdout:7/332: fdatasync d1/fd 0 2026-03-10T10:19:29.670 INFO:tasks.workunit.client.0.vm02.stdout:3/309: write d1/d8/d21/f35 [764674,16493] 0 
2026-03-10T10:19:29.670 INFO:tasks.workunit.client.1.vm05.stdout:0/283: readlink d1/d2/d9/d31/d13/d15/l4b 0 2026-03-10T10:19:29.671 INFO:tasks.workunit.client.0.vm02.stdout:9/293: truncate da/f5c 833866 0 2026-03-10T10:19:29.671 INFO:tasks.workunit.client.0.vm02.stdout:3/310: chown d1/d6/l69 2329 1 2026-03-10T10:19:29.672 INFO:tasks.workunit.client.0.vm02.stdout:3/311: chown d1/d8/d21/f29 175223777 1 2026-03-10T10:19:29.672 INFO:tasks.workunit.client.1.vm05.stdout:0/284: chown d1/d2/c42 200 1 2026-03-10T10:19:29.676 INFO:tasks.workunit.client.1.vm05.stdout:1/266: dwrite d4/d20/f49 [0,4194304] 0 2026-03-10T10:19:29.676 INFO:tasks.workunit.client.1.vm05.stdout:0/285: dread d1/f11 [0,4194304] 0 2026-03-10T10:19:29.680 INFO:tasks.workunit.client.1.vm05.stdout:3/351: creat dd/d39/d66/f7e x:0 0 0 2026-03-10T10:19:29.685 INFO:tasks.workunit.client.1.vm05.stdout:1/267: dwrite d4/d20/f49 [0,4194304] 0 2026-03-10T10:19:29.685 INFO:tasks.workunit.client.1.vm05.stdout:4/253: dwrite d1/d31/dc/f3a [0,4194304] 0 2026-03-10T10:19:29.685 INFO:tasks.workunit.client.1.vm05.stdout:0/286: dread - d1/d2/d9/d31/d13/d17/f56 zero size 2026-03-10T10:19:29.685 INFO:tasks.workunit.client.1.vm05.stdout:3/352: write dd/d15/d4c/f73 [831771,39349] 0 2026-03-10T10:19:29.685 INFO:tasks.workunit.client.1.vm05.stdout:4/254: stat d1/d31/f1b 0 2026-03-10T10:19:29.692 INFO:tasks.workunit.client.1.vm05.stdout:1/268: truncate d4/d39/f50 901945 0 2026-03-10T10:19:29.692 INFO:tasks.workunit.client.1.vm05.stdout:2/302: dwrite db/d1c/f1f [0,4194304] 0 2026-03-10T10:19:29.693 INFO:tasks.workunit.client.1.vm05.stdout:1/269: read - d4/d39/d3e/f4d zero size 2026-03-10T10:19:29.695 INFO:tasks.workunit.client.0.vm02.stdout:9/294: chown da/d3c/d4c/c12 393542355 1 2026-03-10T10:19:29.696 INFO:tasks.workunit.client.0.vm02.stdout:9/295: write da/f13 [4007509,93460] 0 2026-03-10T10:19:29.697 INFO:tasks.workunit.client.0.vm02.stdout:6/335: dwrite d0/d8/d29/d2f/d4b/f53 [0,4194304] 0 2026-03-10T10:19:29.697 
INFO:tasks.workunit.client.0.vm02.stdout:9/296: dread - da/d3c/d4c/d38/d4a/f54 zero size 2026-03-10T10:19:29.701 INFO:tasks.workunit.client.0.vm02.stdout:9/297: stat da/d3c/d4c/d38/l5b 0 2026-03-10T10:19:29.708 INFO:tasks.workunit.client.0.vm02.stdout:3/312: readlink d1/l37 0 2026-03-10T10:19:29.709 INFO:tasks.workunit.client.1.vm05.stdout:4/255: readlink d1/d3/le 0 2026-03-10T10:19:29.711 INFO:tasks.workunit.client.0.vm02.stdout:3/313: dread d1/d8/f46 [0,4194304] 0 2026-03-10T10:19:29.713 INFO:tasks.workunit.client.1.vm05.stdout:4/256: dwrite d1/d31/dc/f33 [0,4194304] 0 2026-03-10T10:19:29.729 INFO:tasks.workunit.client.1.vm05.stdout:3/353: symlink dd/d15/d24/d2c/d3b/l7f 0 2026-03-10T10:19:29.737 INFO:tasks.workunit.client.1.vm05.stdout:8/227: link d7/f1c d7/d2f/f45 0 2026-03-10T10:19:29.746 INFO:tasks.workunit.client.1.vm05.stdout:2/303: mknod db/d1c/d40/c5a 0 2026-03-10T10:19:29.747 INFO:tasks.workunit.client.1.vm05.stdout:1/270: chown d4/df/d1c/f2a 63539224 1 2026-03-10T10:19:29.749 INFO:tasks.workunit.client.1.vm05.stdout:4/257: creat d1/d31/dc/d40/d45/f52 x:0 0 0 2026-03-10T10:19:29.749 INFO:tasks.workunit.client.1.vm05.stdout:4/258: readlink d1/d31/l32 0 2026-03-10T10:19:29.755 INFO:tasks.workunit.client.1.vm05.stdout:1/271: chown d4/d39/d3e/c40 31 1 2026-03-10T10:19:29.778 INFO:tasks.workunit.client.1.vm05.stdout:0/287: dread d1/d2/d9/d31/d54/f16 [0,4194304] 0 2026-03-10T10:19:29.778 INFO:tasks.workunit.client.1.vm05.stdout:2/304: symlink db/d28/d4f/d59/l5b 0 2026-03-10T10:19:29.778 INFO:tasks.workunit.client.1.vm05.stdout:2/305: dread - db/d2d/f48 zero size 2026-03-10T10:19:29.778 INFO:tasks.workunit.client.1.vm05.stdout:1/272: mknod d4/d37/d4e/c52 0 2026-03-10T10:19:29.778 INFO:tasks.workunit.client.1.vm05.stdout:2/306: chown db/d28/d4f/d59/l5b 13339988 1 2026-03-10T10:19:29.778 INFO:tasks.workunit.client.1.vm05.stdout:2/307: mknod db/d28/d4f/d59/c5c 0 2026-03-10T10:19:29.778 INFO:tasks.workunit.client.1.vm05.stdout:2/308: link db/f26 db/d2d/f5d 0 
2026-03-10T10:19:29.778 INFO:tasks.workunit.client.1.vm05.stdout:2/309: getdents db/d12 0 2026-03-10T10:19:29.778 INFO:tasks.workunit.client.1.vm05.stdout:2/310: mkdir db/d2d/d5e 0 2026-03-10T10:19:29.778 INFO:tasks.workunit.client.1.vm05.stdout:2/311: readlink db/d4e/l57 0 2026-03-10T10:19:29.778 INFO:tasks.workunit.client.1.vm05.stdout:2/312: write db/f14 [3698825,40206] 0 2026-03-10T10:19:29.778 INFO:tasks.workunit.client.1.vm05.stdout:1/273: dread d4/d37/f41 [0,4194304] 0 2026-03-10T10:19:29.779 INFO:tasks.workunit.client.1.vm05.stdout:1/274: readlink d4/l6 0 2026-03-10T10:19:29.779 INFO:tasks.workunit.client.1.vm05.stdout:1/275: stat d4/d39/d3e/f4d 0 2026-03-10T10:19:29.781 INFO:tasks.workunit.client.1.vm05.stdout:1/276: mkdir d4/df/d1c/d53 0 2026-03-10T10:19:29.783 INFO:tasks.workunit.client.1.vm05.stdout:1/277: rename d4/d39/f50 to d4/d39/f54 0 2026-03-10T10:19:29.815 INFO:tasks.workunit.client.0.vm02.stdout:4/439: symlink d1/d41/d5e/d78/d37/l92 0 2026-03-10T10:19:29.815 INFO:tasks.workunit.client.0.vm02.stdout:4/440: chown d1/f6f 8752 1 2026-03-10T10:19:29.818 INFO:tasks.workunit.client.0.vm02.stdout:4/441: write d1/d32/f69 [806829,38299] 0 2026-03-10T10:19:29.822 INFO:tasks.workunit.client.0.vm02.stdout:9/298: sync 2026-03-10T10:19:29.823 INFO:tasks.workunit.client.0.vm02.stdout:9/299: chown da/d3c/d4c/d2c/d34/f4d 581155696 1 2026-03-10T10:19:29.830 INFO:tasks.workunit.client.0.vm02.stdout:9/300: mknod da/d3c/d4c/d38/c5f 0 2026-03-10T10:19:29.832 INFO:tasks.workunit.client.0.vm02.stdout:4/442: creat d1/d41/d5e/d78/d1a/f93 x:0 0 0 2026-03-10T10:19:29.833 INFO:tasks.workunit.client.0.vm02.stdout:9/301: rename da/d3c/d4c/f19 to da/d3c/d4c/f60 0 2026-03-10T10:19:29.835 INFO:tasks.workunit.client.0.vm02.stdout:9/302: rmdir da/d3c/d4c/d2c 39 2026-03-10T10:19:29.839 INFO:tasks.workunit.client.0.vm02.stdout:9/303: dwrite da/f5c [0,4194304] 0 2026-03-10T10:19:29.842 INFO:tasks.workunit.client.0.vm02.stdout:9/304: chown da/d3c/d4c/c1a 122 1 2026-03-10T10:19:29.843 
INFO:tasks.workunit.client.0.vm02.stdout:4/443: creat d1/f94 x:0 0 0 2026-03-10T10:19:29.848 INFO:tasks.workunit.client.0.vm02.stdout:4/444: dwrite d1/d41/d5e/d78/d37/f2e [0,4194304] 0 2026-03-10T10:19:29.869 INFO:tasks.workunit.client.0.vm02.stdout:4/445: creat d1/d32/f95 x:0 0 0 2026-03-10T10:19:29.872 INFO:tasks.workunit.client.0.vm02.stdout:4/446: dread d1/d10/db/f24 [0,4194304] 0 2026-03-10T10:19:29.872 INFO:tasks.workunit.client.0.vm02.stdout:4/447: write d1/d32/d3e/f7d [875554,89577] 0 2026-03-10T10:19:29.873 INFO:tasks.workunit.client.0.vm02.stdout:4/448: truncate d1/d52/d53/f70 1799261 0 2026-03-10T10:19:29.879 INFO:tasks.workunit.client.0.vm02.stdout:2/378: rename d0/d10/c59 to d0/d1a/d49/c7e 0 2026-03-10T10:19:29.890 INFO:tasks.workunit.client.0.vm02.stdout:2/379: stat d0/d1a 0 2026-03-10T10:19:29.890 INFO:tasks.workunit.client.0.vm02.stdout:1/319: write d4/da/f28 [459017,92216] 0 2026-03-10T10:19:29.891 INFO:tasks.workunit.client.0.vm02.stdout:4/449: symlink d1/d32/l96 0 2026-03-10T10:19:29.891 INFO:tasks.workunit.client.0.vm02.stdout:1/320: creat d4/d2c/d53/f69 x:0 0 0 2026-03-10T10:19:29.897 INFO:tasks.workunit.client.0.vm02.stdout:4/450: dread d1/d32/d3e/f7d [0,4194304] 0 2026-03-10T10:19:29.914 INFO:tasks.workunit.client.0.vm02.stdout:1/321: dread d4/d1b/f44 [0,4194304] 0 2026-03-10T10:19:29.922 INFO:tasks.workunit.client.1.vm05.stdout:7/357: write d5/d1d/d20/d35/f36 [130454,29732] 0 2026-03-10T10:19:29.928 INFO:tasks.workunit.client.0.vm02.stdout:1/322: creat d4/da/d27/f6a x:0 0 0 2026-03-10T10:19:29.937 INFO:tasks.workunit.client.0.vm02.stdout:7/333: write d1/dc/d16/f1f [5135343,102941] 0 2026-03-10T10:19:29.938 INFO:tasks.workunit.client.0.vm02.stdout:7/334: stat d1/dc/d10/c4f 0 2026-03-10T10:19:29.939 INFO:tasks.workunit.client.1.vm05.stdout:7/358: dread d5/dd/f12 [0,4194304] 0 2026-03-10T10:19:29.940 INFO:tasks.workunit.client.1.vm05.stdout:7/359: chown d5/dd/f2f 1209 1 2026-03-10T10:19:29.953 INFO:tasks.workunit.client.1.vm05.stdout:9/263: 
mkdir d0/d1/d57 0 2026-03-10T10:19:29.953 INFO:tasks.workunit.client.0.vm02.stdout:5/512: creat d1/db/d11/d84/fb2 x:0 0 0 2026-03-10T10:19:29.953 INFO:tasks.workunit.client.1.vm05.stdout:9/264: truncate d0/d1/d16/f3d 504032 0 2026-03-10T10:19:29.954 INFO:tasks.workunit.client.1.vm05.stdout:9/265: read - d0/df/d11/f50 zero size 2026-03-10T10:19:29.954 INFO:tasks.workunit.client.1.vm05.stdout:6/240: mknod dd/d36/d3f/d12/d44/c50 0 2026-03-10T10:19:29.958 INFO:tasks.workunit.client.1.vm05.stdout:6/241: unlink dd/fe 0 2026-03-10T10:19:29.959 INFO:tasks.workunit.client.1.vm05.stdout:9/266: creat d0/d1/d13/d26/f58 x:0 0 0 2026-03-10T10:19:29.962 INFO:tasks.workunit.client.1.vm05.stdout:9/267: write d0/d1/d13/d26/f4f [3872459,62196] 0 2026-03-10T10:19:29.962 INFO:tasks.workunit.client.1.vm05.stdout:6/242: dwrite dd/d1b/f40 [0,4194304] 0 2026-03-10T10:19:29.964 INFO:tasks.workunit.client.0.vm02.stdout:6/336: dwrite d0/f43 [0,4194304] 0 2026-03-10T10:19:29.967 INFO:tasks.workunit.client.1.vm05.stdout:9/268: symlink d0/df/d11/l59 0 2026-03-10T10:19:29.968 INFO:tasks.workunit.client.1.vm05.stdout:9/269: write d0/d1/d16/f3d [576536,103219] 0 2026-03-10T10:19:29.968 INFO:tasks.workunit.client.1.vm05.stdout:6/243: symlink dd/d1b/l51 0 2026-03-10T10:19:29.972 INFO:tasks.workunit.client.0.vm02.stdout:6/337: unlink d0/f1c 0 2026-03-10T10:19:29.972 INFO:tasks.workunit.client.1.vm05.stdout:2/313: dread db/d12/f1d [0,4194304] 0 2026-03-10T10:19:29.972 INFO:tasks.workunit.client.0.vm02.stdout:3/314: write d1/d6/f49 [1573142,23910] 0 2026-03-10T10:19:29.973 INFO:tasks.workunit.client.0.vm02.stdout:6/338: truncate d0/d8/d29/d2f/d4b/f26 5073091 0 2026-03-10T10:19:29.973 INFO:tasks.workunit.client.0.vm02.stdout:3/315: chown d1/d8/d21/f5e 337661992 1 2026-03-10T10:19:29.974 INFO:tasks.workunit.client.0.vm02.stdout:6/339: read - d0/d8/d29/d6d/d32/f70 zero size 2026-03-10T10:19:29.974 INFO:tasks.workunit.client.1.vm05.stdout:9/270: dwrite d0/d1/d13/de/d21/f53 [0,4194304] 0 
2026-03-10T10:19:29.977 INFO:tasks.workunit.client.1.vm05.stdout:9/271: truncate d0/df/f3b 1218742 0 2026-03-10T10:19:29.977 INFO:tasks.workunit.client.1.vm05.stdout:3/354: write dd/d15/d24/d2c/f38 [348538,121569] 0 2026-03-10T10:19:29.979 INFO:tasks.workunit.client.1.vm05.stdout:6/244: chown dd/d36/d3f/d12/d24 67139 1 2026-03-10T10:19:29.980 INFO:tasks.workunit.client.1.vm05.stdout:6/245: chown dd/d36/d3f/d12/l1a 0 1 2026-03-10T10:19:29.983 INFO:tasks.workunit.client.1.vm05.stdout:8/228: dwrite d7/f8 [0,4194304] 0 2026-03-10T10:19:29.986 INFO:tasks.workunit.client.0.vm02.stdout:3/316: dwrite d1/d8/d21/f2f [0,4194304] 0 2026-03-10T10:19:29.986 INFO:tasks.workunit.client.1.vm05.stdout:8/229: dread d7/d14/d24/f34 [0,4194304] 0 2026-03-10T10:19:29.993 INFO:tasks.workunit.client.0.vm02.stdout:6/340: symlink d0/d8/d29/d6d/d32/d60/l72 0 2026-03-10T10:19:29.994 INFO:tasks.workunit.client.1.vm05.stdout:1/278: getdents d4/d37/d4e 0 2026-03-10T10:19:30.010 INFO:tasks.workunit.client.1.vm05.stdout:5/344: creat da/db/d26/d35/f74 x:0 0 0 2026-03-10T10:19:30.013 INFO:tasks.workunit.client.1.vm05.stdout:9/272: unlink d0/df/d11/f2d 0 2026-03-10T10:19:30.014 INFO:tasks.workunit.client.1.vm05.stdout:9/273: truncate d0/df/d11/f52 167952 0 2026-03-10T10:19:30.026 INFO:tasks.workunit.client.1.vm05.stdout:3/355: unlink dd/d20/d56/d5e/c6c 0 2026-03-10T10:19:30.026 INFO:tasks.workunit.client.1.vm05.stdout:3/356: dread dd/f52 [0,4194304] 0 2026-03-10T10:19:30.026 INFO:tasks.workunit.client.0.vm02.stdout:6/341: rmdir d0/d8/d29/d2f 39 2026-03-10T10:19:30.026 INFO:tasks.workunit.client.0.vm02.stdout:6/342: chown d0/f5d 23 1 2026-03-10T10:19:30.026 INFO:tasks.workunit.client.0.vm02.stdout:6/343: fdatasync d0/d8/d9/f6a 0 2026-03-10T10:19:30.027 INFO:tasks.workunit.client.0.vm02.stdout:3/317: read d1/f12 [2078250,32329] 0 2026-03-10T10:19:30.042 INFO:tasks.workunit.client.0.vm02.stdout:9/305: fsync da/d3c/d4c/f60 0 2026-03-10T10:19:30.052 INFO:tasks.workunit.client.0.vm02.stdout:3/318: dread 
d1/d8/fb [0,4194304] 0 2026-03-10T10:19:30.052 INFO:tasks.workunit.client.0.vm02.stdout:3/319: chown d1/d6/c23 7913 1 2026-03-10T10:19:30.055 INFO:tasks.workunit.client.0.vm02.stdout:6/344: link d0/d8/f5a d0/d8/d29/d6d/d32/d60/f73 0 2026-03-10T10:19:30.055 INFO:tasks.workunit.client.0.vm02.stdout:6/345: chown d0 520590 1 2026-03-10T10:19:30.056 INFO:tasks.workunit.client.0.vm02.stdout:6/346: dread - d0/d8/d9/f54 zero size 2026-03-10T10:19:30.056 INFO:tasks.workunit.client.0.vm02.stdout:6/347: write d0/d8/d9/f13 [38493,54185] 0 2026-03-10T10:19:30.075 INFO:tasks.workunit.client.1.vm05.stdout:9/274: fsync d0/f28 0 2026-03-10T10:19:30.087 INFO:tasks.workunit.client.1.vm05.stdout:3/357: symlink dd/d39/d66/l80 0 2026-03-10T10:19:30.089 INFO:tasks.workunit.client.0.vm02.stdout:9/306: sync 2026-03-10T10:19:30.093 INFO:tasks.workunit.client.1.vm05.stdout:9/275: unlink d0/d1/l10 0 2026-03-10T10:19:30.097 INFO:tasks.workunit.client.0.vm02.stdout:9/307: creat da/d3c/d4c/d56/f61 x:0 0 0 2026-03-10T10:19:30.099 INFO:tasks.workunit.client.1.vm05.stdout:3/358: mknod dd/d39/c81 0 2026-03-10T10:19:30.104 INFO:tasks.workunit.client.0.vm02.stdout:9/308: mknod da/d3c/d4c/d38/c62 0 2026-03-10T10:19:30.104 INFO:tasks.workunit.client.0.vm02.stdout:9/309: symlink da/d3c/d4c/d38/d4a/l63 0 2026-03-10T10:19:30.104 INFO:tasks.workunit.client.1.vm05.stdout:3/359: readlink dd/l58 0 2026-03-10T10:19:30.104 INFO:tasks.workunit.client.1.vm05.stdout:3/360: chown dd/d39/c5b 0 1 2026-03-10T10:19:30.104 INFO:tasks.workunit.client.1.vm05.stdout:3/361: write dd/d15/d24/f42 [794595,56334] 0 2026-03-10T10:19:30.109 INFO:tasks.workunit.client.1.vm05.stdout:5/345: getdents da/db/d28/d6e 0 2026-03-10T10:19:30.111 INFO:tasks.workunit.client.1.vm05.stdout:9/276: mknod d0/d1/d13/c5a 0 2026-03-10T10:19:30.113 INFO:tasks.workunit.client.0.vm02.stdout:9/310: truncate da/d3c/d4c/f31 748231 0 2026-03-10T10:19:30.116 INFO:tasks.workunit.client.1.vm05.stdout:9/277: write d0/d1/d13/de/f38 [3565483,1391] 0 
2026-03-10T10:19:30.120 INFO:tasks.workunit.client.1.vm05.stdout:6/246: fdatasync dd/d1b/f40 0 2026-03-10T10:19:30.120 INFO:tasks.workunit.client.1.vm05.stdout:5/346: mknod da/db/d26/d5c/d4b/c75 0 2026-03-10T10:19:30.121 INFO:tasks.workunit.client.1.vm05.stdout:3/362: dread fb [0,4194304] 0 2026-03-10T10:19:30.132 INFO:tasks.workunit.client.0.vm02.stdout:0/370: unlink d9/d18/d1a/c48 0 2026-03-10T10:19:30.132 INFO:tasks.workunit.client.1.vm05.stdout:9/278: dread d0/df/d11/f2c [0,4194304] 0 2026-03-10T10:19:30.133 INFO:tasks.workunit.client.1.vm05.stdout:9/279: read d0/d1/d13/de/d21/f34 [308749,32574] 0 2026-03-10T10:19:30.134 INFO:tasks.workunit.client.0.vm02.stdout:9/311: rename da/d3c/d4c/l58 to da/d3c/d4c/l64 0 2026-03-10T10:19:30.136 INFO:tasks.workunit.client.1.vm05.stdout:0/288: mkdir d1/d2/d5d 0 2026-03-10T10:19:30.137 INFO:tasks.workunit.client.1.vm05.stdout:3/363: dwrite dd/d15/d24/f42 [0,4194304] 0 2026-03-10T10:19:30.139 INFO:tasks.workunit.client.0.vm02.stdout:2/380: write d0/d1a/d49/f50 [60577,72024] 0 2026-03-10T10:19:30.139 INFO:tasks.workunit.client.0.vm02.stdout:2/381: fsync d0/f2c 0 2026-03-10T10:19:30.140 INFO:tasks.workunit.client.0.vm02.stdout:2/382: dread d0/f70 [0,4194304] 0 2026-03-10T10:19:30.141 INFO:tasks.workunit.client.1.vm05.stdout:5/347: sync 2026-03-10T10:19:30.141 INFO:tasks.workunit.client.0.vm02.stdout:2/383: write d0/d1a/d24/f6e [3404181,119598] 0 2026-03-10T10:19:30.148 INFO:tasks.workunit.client.1.vm05.stdout:0/289: dwrite d1/d2/d9/d31/d13/d17/f57 [0,4194304] 0 2026-03-10T10:19:30.149 INFO:tasks.workunit.client.1.vm05.stdout:5/348: dread da/db/fd [0,4194304] 0 2026-03-10T10:19:30.153 INFO:tasks.workunit.client.1.vm05.stdout:3/364: dwrite dd/d15/d24/d2c/d3b/f67 [0,4194304] 0 2026-03-10T10:19:30.159 INFO:tasks.workunit.client.0.vm02.stdout:9/312: creat da/f65 x:0 0 0 2026-03-10T10:19:30.160 INFO:tasks.workunit.client.1.vm05.stdout:5/349: dread da/db/d26/d35/f2a [0,4194304] 0 2026-03-10T10:19:30.160 
INFO:tasks.workunit.client.0.vm02.stdout:2/384: creat d0/d10/d69/f7f x:0 0 0 2026-03-10T10:19:30.169 INFO:tasks.workunit.client.1.vm05.stdout:6/247: link l7 dd/d36/d3f/d12/d44/d2a/d3d/d3e/l52 0 2026-03-10T10:19:30.174 INFO:tasks.workunit.client.0.vm02.stdout:4/451: dwrite d1/d32/d3e/f67 [0,4194304] 0 2026-03-10T10:19:30.174 INFO:tasks.workunit.client.0.vm02.stdout:9/313: symlink da/d3c/d53/l66 0 2026-03-10T10:19:30.174 INFO:tasks.workunit.client.0.vm02.stdout:9/314: chown da/ff 41826532 1 2026-03-10T10:19:30.174 INFO:tasks.workunit.client.0.vm02.stdout:9/315: chown da/d3c 469 1 2026-03-10T10:19:30.175 INFO:tasks.workunit.client.1.vm05.stdout:9/280: creat d0/d1/d13/de/f5b x:0 0 0 2026-03-10T10:19:30.178 INFO:tasks.workunit.client.0.vm02.stdout:9/316: dwrite da/d3c/d4c/f23 [0,4194304] 0 2026-03-10T10:19:30.188 INFO:tasks.workunit.client.0.vm02.stdout:0/371: rename d9/d18/d1a/d22/d24/d25 to d9/d34/d3d/d7b 0 2026-03-10T10:19:30.189 INFO:tasks.workunit.client.0.vm02.stdout:1/323: dwrite d4/da/d1a/d22/f32 [0,4194304] 0 2026-03-10T10:19:30.190 INFO:tasks.workunit.client.0.vm02.stdout:1/324: write d4/da/d1a/f19 [4311529,25902] 0 2026-03-10T10:19:30.199 INFO:tasks.workunit.client.1.vm05.stdout:2/314: rmdir db/d1c/d40 39 2026-03-10T10:19:30.201 INFO:tasks.workunit.client.1.vm05.stdout:7/360: rename d5/d26/f5e to d5/f6c 0 2026-03-10T10:19:30.203 INFO:tasks.workunit.client.1.vm05.stdout:7/361: fsync d5/dd/f62 0 2026-03-10T10:19:30.203 INFO:tasks.workunit.client.0.vm02.stdout:9/317: mknod da/d3c/d4c/d38/c67 0 2026-03-10T10:19:30.210 INFO:tasks.workunit.client.0.vm02.stdout:7/335: write d1/f15 [254146,68223] 0 2026-03-10T10:19:30.214 INFO:tasks.workunit.client.0.vm02.stdout:4/452: mknod d1/d41/d5e/d78/c97 0 2026-03-10T10:19:30.215 INFO:tasks.workunit.client.1.vm05.stdout:5/350: write da/db/f1d [2778341,86673] 0 2026-03-10T10:19:30.215 INFO:tasks.workunit.client.0.vm02.stdout:9/318: stat da/d3c/d4c/d2c/c39 0 2026-03-10T10:19:30.216 INFO:tasks.workunit.client.0.vm02.stdout:0/372: 
rename d9/d18/d1a/c68 to d9/d18/d1a/d43/d74/c7c 0 2026-03-10T10:19:30.218 INFO:tasks.workunit.client.0.vm02.stdout:7/336: fdatasync d1/dc/d16/d28/d2d/f3d 0 2026-03-10T10:19:30.220 INFO:tasks.workunit.client.1.vm05.stdout:5/351: truncate da/f5e 116227 0 2026-03-10T10:19:30.222 INFO:tasks.workunit.client.1.vm05.stdout:6/248: link dd/d36/d3f/d12/d44/f46 dd/d36/d3f/d12/d44/d2a/d3d/f53 0 2026-03-10T10:19:30.254 INFO:tasks.workunit.client.1.vm05.stdout:4/259: link d1/d31/f41 d1/d31/dc/f53 0 2026-03-10T10:19:30.254 INFO:tasks.workunit.client.1.vm05.stdout:2/315: creat db/d1c/d40/f5f x:0 0 0 2026-03-10T10:19:30.254 INFO:tasks.workunit.client.1.vm05.stdout:2/316: truncate db/d1c/f3d 380405 0 2026-03-10T10:19:30.254 INFO:tasks.workunit.client.1.vm05.stdout:2/317: write db/d2d/f48 [1037766,110271] 0 2026-03-10T10:19:30.254 INFO:tasks.workunit.client.1.vm05.stdout:1/279: rename d4/d39/l51 to d4/d3d/l55 0 2026-03-10T10:19:30.254 INFO:tasks.workunit.client.1.vm05.stdout:1/280: dread d4/df/d1c/f38 [0,4194304] 0 2026-03-10T10:19:30.254 INFO:tasks.workunit.client.1.vm05.stdout:6/249: symlink dd/d1b/l54 0 2026-03-10T10:19:30.254 INFO:tasks.workunit.client.1.vm05.stdout:2/318: creat db/d28/f60 x:0 0 0 2026-03-10T10:19:30.254 INFO:tasks.workunit.client.1.vm05.stdout:0/290: rename d1/d2/d9/d31/d13/d17/f4a to d1/d2/d9/d31/d13/d2f/f5e 0 2026-03-10T10:19:30.254 INFO:tasks.workunit.client.1.vm05.stdout:6/250: dwrite dd/d36/d3f/d12/f35 [0,4194304] 0 2026-03-10T10:19:30.254 INFO:tasks.workunit.client.0.vm02.stdout:4/453: creat d1/d41/d5e/d78/d1a/f98 x:0 0 0 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:7/337: mknod d1/dc/d16/d28/d2d/d36/c65 0 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:4/454: mknod d1/d41/d5e/d78/d1a/c99 0 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:0/373: mkdir d9/d18/d1a/d22/d24/d79/d7d 0 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:0/374: readlink d9/d34/d3d/l6e 0 2026-03-10T10:19:30.255 
INFO:tasks.workunit.client.0.vm02.stdout:0/375: chown d9/f28 6021 1 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:1/325: getdents d4/d2c/d53 0 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:4/455: symlink d1/d41/d5e/d78/d7f/l9a 0 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:4/456: fsync d1/d41/d5e/d78/d55/f7c 0 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:4/457: dread d1/d41/d5e/d78/f34 [0,4194304] 0 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:4/458: chown d1/d32/f69 27649090 1 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:0/376: unlink d9/d18/d1a/d43/f45 0 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:0/377: dwrite d9/d34/d3d/f4e [0,4194304] 0 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:0/378: chown d9/d34/d3d/d67 81 1 2026-03-10T10:19:30.255 INFO:tasks.workunit.client.0.vm02.stdout:0/379: chown d9/d18/d1a/d22/f3f 2 1 2026-03-10T10:19:30.256 INFO:tasks.workunit.client.1.vm05.stdout:5/352: mknod da/db/d28/d32/c76 0 2026-03-10T10:19:30.257 INFO:tasks.workunit.client.1.vm05.stdout:6/251: dwrite dd/d36/d3f/d12/f35 [0,4194304] 0 2026-03-10T10:19:30.267 INFO:tasks.workunit.client.0.vm02.stdout:4/459: dread d1/d32/f46 [0,4194304] 0 2026-03-10T10:19:30.269 INFO:tasks.workunit.client.0.vm02.stdout:4/460: creat d1/d52/d53/f9b x:0 0 0 2026-03-10T10:19:30.271 INFO:tasks.workunit.client.1.vm05.stdout:7/362: getdents d5/d1d/d20/d2d/d5d 0 2026-03-10T10:19:30.271 INFO:tasks.workunit.client.0.vm02.stdout:7/338: dread d1/f34 [0,4194304] 0 2026-03-10T10:19:30.272 INFO:tasks.workunit.client.1.vm05.stdout:7/363: truncate d5/d1d/d20/d2d/d5d/f67 580829 0 2026-03-10T10:19:30.274 INFO:tasks.workunit.client.1.vm05.stdout:1/281: unlink d4/d3d/c4b 0 2026-03-10T10:19:30.277 INFO:tasks.workunit.client.1.vm05.stdout:5/353: mkdir da/db/d26/d5c/d4b/d77 0 2026-03-10T10:19:30.278 INFO:tasks.workunit.client.0.vm02.stdout:7/339: creat 
d1/dc/d16/d28/d2d/d36/f66 x:0 0 0 2026-03-10T10:19:30.278 INFO:tasks.workunit.client.1.vm05.stdout:1/282: dwrite d4/df/d1c/f23 [0,4194304] 0 2026-03-10T10:19:30.280 INFO:tasks.workunit.client.1.vm05.stdout:2/319: truncate db/f24 3590567 0 2026-03-10T10:19:30.288 INFO:tasks.workunit.client.0.vm02.stdout:7/340: mkdir d1/dc/d16/d28/d2d/d36/d67 0 2026-03-10T10:19:30.288 INFO:tasks.workunit.client.1.vm05.stdout:6/252: rmdir dd/d36/d3f/d12/d44/d2a 39 2026-03-10T10:19:30.291 INFO:tasks.workunit.client.1.vm05.stdout:4/260: rename d1/d31/dc/d40/c43 to d1/d31/c54 0 2026-03-10T10:19:30.294 INFO:tasks.workunit.client.1.vm05.stdout:0/291: creat d1/d2/d5d/f5f x:0 0 0 2026-03-10T10:19:30.295 INFO:tasks.workunit.client.0.vm02.stdout:7/341: symlink d1/dc/d10/d38/l68 0 2026-03-10T10:19:30.296 INFO:tasks.workunit.client.0.vm02.stdout:7/342: readlink d1/dc/d10/d38/l3a 0 2026-03-10T10:19:30.297 INFO:tasks.workunit.client.1.vm05.stdout:0/292: dread d1/d2/d9/d31/d12/d20/f2e [0,4194304] 0 2026-03-10T10:19:30.297 INFO:tasks.workunit.client.1.vm05.stdout:7/364: write d5/dd/f28 [117746,40691] 0 2026-03-10T10:19:30.298 INFO:tasks.workunit.client.0.vm02.stdout:4/461: getdents d1/d41/d5e/d78/d55 0 2026-03-10T10:19:30.301 INFO:tasks.workunit.client.0.vm02.stdout:4/462: dwrite d1/d10/db/f16 [4194304,4194304] 0 2026-03-10T10:19:30.310 INFO:tasks.workunit.client.1.vm05.stdout:1/283: write d4/d39/f3a [2200667,22694] 0 2026-03-10T10:19:30.310 INFO:tasks.workunit.client.0.vm02.stdout:4/463: chown d1/d41/d5e/f87 784484 1 2026-03-10T10:19:30.312 INFO:tasks.workunit.client.1.vm05.stdout:2/320: mkdir db/d61 0 2026-03-10T10:19:30.313 INFO:tasks.workunit.client.1.vm05.stdout:2/321: write db/d12/f1a [294323,57702] 0 2026-03-10T10:19:30.319 INFO:tasks.workunit.client.1.vm05.stdout:0/293: readlink d1/d2/d9/d31/d13/l1f 0 2026-03-10T10:19:30.319 INFO:tasks.workunit.client.1.vm05.stdout:0/294: readlink d1/d2/d9/d31/d12/d41/l4d 0 2026-03-10T10:19:30.321 INFO:tasks.workunit.client.0.vm02.stdout:4/464: rename 
d1/d32/d3e/f67 to d1/f9c 0 2026-03-10T10:19:30.324 INFO:tasks.workunit.client.0.vm02.stdout:4/465: read - d1/d41/d5e/d78/d1a/f4d zero size 2026-03-10T10:19:30.326 INFO:tasks.workunit.client.1.vm05.stdout:1/284: symlink d4/df/d1c/l56 0 2026-03-10T10:19:30.327 INFO:tasks.workunit.client.1.vm05.stdout:1/285: read d4/d39/f3a [897326,104143] 0 2026-03-10T10:19:30.328 INFO:tasks.workunit.client.1.vm05.stdout:2/322: rmdir db/d4e 39 2026-03-10T10:19:30.329 INFO:tasks.workunit.client.1.vm05.stdout:2/323: write db/d1c/f54 [297584,129330] 0 2026-03-10T10:19:30.331 INFO:tasks.workunit.client.1.vm05.stdout:6/253: rename dd/d36/d3f/d12/d44/c2b to dd/d36/c55 0 2026-03-10T10:19:30.367 INFO:tasks.workunit.client.0.vm02.stdout:4/466: getdents d1/d41/d5e/d78/d1a/d49 0 2026-03-10T10:19:30.367 INFO:tasks.workunit.client.0.vm02.stdout:4/467: dwrite d1/f94 [0,4194304] 0 2026-03-10T10:19:30.367 INFO:tasks.workunit.client.0.vm02.stdout:4/468: dread - d1/d52/f77 zero size 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:0/295: creat d1/d2/d9/d31/d13/d15/d4e/f60 x:0 0 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:7/365: mkdir d5/d1d/d29/d60/d6d 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:7/366: dread - d5/d1d/d29/f5c zero size 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:5/354: creat da/f78 x:0 0 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:7/367: dread d5/d26/f41 [0,4194304] 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:0/296: dwrite d1/d2/d9/d31/d12/f5b [0,4194304] 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:5/355: dread - da/db/d26/d5c/f6b zero size 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:2/324: mkdir db/d1c/d40/d62 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:2/325: readlink db/l10 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:0/297: mknod d1/d2/d39/c61 0 
2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:0/298: fsync d1/d2/d9/d31/d13/f4c 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:6/254: creat dd/d36/d3f/d12/f56 x:0 0 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:2/326: symlink db/l63 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:6/255: fsync dd/f14 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:0/299: fsync d1/d2/d9/f40 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:7/368: link d5/c7 d5/d1d/c6e 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:0/300: creat d1/d2/d9/d31/d13/d15/f62 x:0 0 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:2/327: symlink db/d1c/d40/d62/l64 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:0/301: fdatasync d1/d2/f21 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:7/369: read - d5/dd/f2f zero size 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:2/328: creat db/d2d/f65 x:0 0 0 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:2/329: chown db/d2d/f52 1310 1 2026-03-10T10:19:30.368 INFO:tasks.workunit.client.1.vm05.stdout:2/330: stat db/d1c/d40/f50 0 2026-03-10T10:19:30.369 INFO:tasks.workunit.client.1.vm05.stdout:7/370: mkdir d5/d1d/d20/d35/d6f 0 2026-03-10T10:19:30.369 INFO:tasks.workunit.client.1.vm05.stdout:7/371: truncate d5/d1d/d20/d35/f36 839039 0 2026-03-10T10:19:30.369 INFO:tasks.workunit.client.1.vm05.stdout:2/331: dwrite db/d2d/f52 [0,4194304] 0 2026-03-10T10:19:30.369 INFO:tasks.workunit.client.1.vm05.stdout:2/332: fsync db/d1c/f1f 0 2026-03-10T10:19:30.369 INFO:tasks.workunit.client.1.vm05.stdout:0/302: dwrite d1/d2/d9/d31/d13/d17/f56 [0,4194304] 0 2026-03-10T10:19:30.369 INFO:tasks.workunit.client.1.vm05.stdout:2/333: mknod db/d12/c66 0 2026-03-10T10:19:30.369 INFO:tasks.workunit.client.1.vm05.stdout:7/372: link d5/d26/f41 d5/d1d/d20/d3b/f70 0 
2026-03-10T10:19:30.372 INFO:tasks.workunit.client.1.vm05.stdout:0/303: rmdir d1/d2/d9/d31/d12/d20 39 2026-03-10T10:19:30.372 INFO:tasks.workunit.client.1.vm05.stdout:2/334: rmdir db/d28/d4f 39 2026-03-10T10:19:30.372 INFO:tasks.workunit.client.1.vm05.stdout:7/373: fdatasync d5/d1d/d20/d35/f47 0 2026-03-10T10:19:30.373 INFO:tasks.workunit.client.1.vm05.stdout:0/304: dread - d1/d2/d9/d31/d13/f4c zero size 2026-03-10T10:19:30.380 INFO:tasks.workunit.client.1.vm05.stdout:7/374: mknod d5/d1d/d29/d60/d6d/c71 0 2026-03-10T10:19:30.381 INFO:tasks.workunit.client.1.vm05.stdout:7/375: symlink d5/l72 0 2026-03-10T10:19:30.382 INFO:tasks.workunit.client.1.vm05.stdout:7/376: rmdir d5/d1d/d20/d3b 39 2026-03-10T10:19:30.384 INFO:tasks.workunit.client.1.vm05.stdout:7/377: creat d5/dd/f73 x:0 0 0 2026-03-10T10:19:30.520 INFO:tasks.workunit.client.0.vm02.stdout:9/319: sync 2026-03-10T10:19:30.522 INFO:tasks.workunit.client.0.vm02.stdout:9/320: creat da/d3c/d4c/d2c/d34/f68 x:0 0 0 2026-03-10T10:19:30.531 INFO:tasks.workunit.client.0.vm02.stdout:9/321: symlink da/d3c/d4c/d2c/d34/d35/l69 0 2026-03-10T10:19:30.531 INFO:tasks.workunit.client.0.vm02.stdout:9/322: link da/d3c/d4c/d2c/d34/f4d da/d3c/d53/f6a 0 2026-03-10T10:19:30.532 INFO:tasks.workunit.client.1.vm05.stdout:6/256: sync 2026-03-10T10:19:30.533 INFO:tasks.workunit.client.0.vm02.stdout:9/323: dwrite da/f28 [0,4194304] 0 2026-03-10T10:19:30.533 INFO:tasks.workunit.client.0.vm02.stdout:9/324: dread - da/d3c/d4c/d38/f47 zero size 2026-03-10T10:19:30.534 INFO:tasks.workunit.client.1.vm05.stdout:6/257: symlink dd/d36/d3f/d12/d44/d30/l57 0 2026-03-10T10:19:30.535 INFO:tasks.workunit.client.1.vm05.stdout:6/258: truncate f3 5131169 0 2026-03-10T10:19:30.536 INFO:tasks.workunit.client.1.vm05.stdout:7/378: sync 2026-03-10T10:19:30.538 INFO:tasks.workunit.client.1.vm05.stdout:6/259: dread - dd/d36/d3f/d12/d44/d2a/d3d/f53 zero size 2026-03-10T10:19:30.538 INFO:tasks.workunit.client.1.vm05.stdout:7/379: dread d5/d17/f4f [0,4194304] 0 
2026-03-10T10:19:30.539 INFO:tasks.workunit.client.1.vm05.stdout:7/380: truncate d5/d17/f19 342452 0 2026-03-10T10:19:30.540 INFO:tasks.workunit.client.1.vm05.stdout:6/260: mkdir dd/d36/d3f/d12/d58 0 2026-03-10T10:19:30.541 INFO:tasks.workunit.client.1.vm05.stdout:7/381: creat d5/d17/f74 x:0 0 0 2026-03-10T10:19:30.541 INFO:tasks.workunit.client.1.vm05.stdout:6/261: chown dd/d36/d3f/d12/d24/d28 4349419 1 2026-03-10T10:19:30.541 INFO:tasks.workunit.client.1.vm05.stdout:7/382: stat d5/d17/f40 0 2026-03-10T10:19:30.542 INFO:tasks.workunit.client.1.vm05.stdout:7/383: creat d5/d1d/d20/d2d/d5d/f75 x:0 0 0 2026-03-10T10:19:30.545 INFO:tasks.workunit.client.1.vm05.stdout:6/262: dwrite fb [0,4194304] 0 2026-03-10T10:19:30.548 INFO:tasks.workunit.client.1.vm05.stdout:0/305: dread d1/d2/d9/f40 [0,4194304] 0 2026-03-10T10:19:30.567 INFO:tasks.workunit.client.1.vm05.stdout:0/306: rename d1/d2 to d1/d2/d9/d31/d54/d63 22 2026-03-10T10:19:30.569 INFO:tasks.workunit.client.1.vm05.stdout:0/307: fsync d1/d2/d9/d31/d12/d20/f2e 0 2026-03-10T10:19:30.571 INFO:tasks.workunit.client.1.vm05.stdout:0/308: getdents d1/d2/d9/d31/d13 0 2026-03-10T10:19:30.623 INFO:tasks.workunit.client.1.vm05.stdout:9/281: dread d0/df/f3b [0,4194304] 0 2026-03-10T10:19:30.630 INFO:tasks.workunit.client.1.vm05.stdout:9/282: link d0/d1/d13/d26/f4f d0/d1/d16/f5c 0 2026-03-10T10:19:30.632 INFO:tasks.workunit.client.0.vm02.stdout:8/378: unlink d1/d1c/c51 0 2026-03-10T10:19:30.634 INFO:tasks.workunit.client.1.vm05.stdout:9/283: unlink d0/d1/d13/de/d21/f34 0 2026-03-10T10:19:30.635 INFO:tasks.workunit.client.0.vm02.stdout:8/379: creat d1/d1c/d23/f75 x:0 0 0 2026-03-10T10:19:30.636 INFO:tasks.workunit.client.1.vm05.stdout:9/284: chown d0/d1/d13/d26/l51 5201955 1 2026-03-10T10:19:30.636 INFO:tasks.workunit.client.0.vm02.stdout:8/380: truncate d1/d1c/d24/d35/f44 316166 0 2026-03-10T10:19:30.638 INFO:tasks.workunit.client.0.vm02.stdout:8/381: unlink d1/d1c/d23/d25/f2b 0 2026-03-10T10:19:30.648 
INFO:tasks.workunit.client.1.vm05.stdout:9/285: mknod d0/d1/d13/c5d 0 2026-03-10T10:19:30.648 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:30 vm05.local ceph-mon[59051]: pgmap v154: 65 pgs: 65 active+clean; 1.4 GiB data, 5.3 GiB used, 115 GiB / 120 GiB avail; 20 MiB/s rd, 114 MiB/s wr, 184 op/s 2026-03-10T10:19:30.648 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:30 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:30.648 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:30 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.coparq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:19:30.648 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:30 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:19:30.648 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:30 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:30.648 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:30 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:30.649 INFO:tasks.workunit.client.0.vm02.stdout:8/382: rename d1/d1c/d43/f53 to d1/d1c/d23/d25/f76 0 2026-03-10T10:19:30.649 INFO:tasks.workunit.client.0.vm02.stdout:8/383: chown d1/d1c/d23/d25/f3d 78404910 1 2026-03-10T10:19:30.649 INFO:tasks.workunit.client.1.vm05.stdout:9/286: fdatasync d0/d1/d13/d26/f4f 0 2026-03-10T10:19:30.649 INFO:tasks.workunit.client.1.vm05.stdout:9/287: truncate d0/d1/d16/f40 255771 0 2026-03-10T10:19:30.649 INFO:tasks.workunit.client.1.vm05.stdout:9/288: dwrite 
d0/df/d11/f50 [0,4194304] 0 2026-03-10T10:19:30.655 INFO:tasks.workunit.client.1.vm05.stdout:9/289: unlink d0/d1/d16/f18 0 2026-03-10T10:19:30.675 INFO:tasks.workunit.client.0.vm02.stdout:4/469: dread d1/d10/db/f15 [0,4194304] 0 2026-03-10T10:19:30.680 INFO:tasks.workunit.client.0.vm02.stdout:4/470: creat d1/f9d x:0 0 0 2026-03-10T10:19:30.681 INFO:tasks.workunit.client.0.vm02.stdout:4/471: truncate d1/d52/d53/f66 519480 0 2026-03-10T10:19:30.682 INFO:tasks.workunit.client.0.vm02.stdout:4/472: mkdir d1/d41/d5e/d78/d55/d9e 0 2026-03-10T10:19:30.683 INFO:tasks.workunit.client.0.vm02.stdout:4/473: write d1/d52/f77 [401387,112825] 0 2026-03-10T10:19:30.687 INFO:tasks.workunit.client.0.vm02.stdout:4/474: dwrite d1/d32/f69 [0,4194304] 0 2026-03-10T10:19:30.688 INFO:tasks.workunit.client.0.vm02.stdout:4/475: dread - d1/d41/d5e/d78/d7f/f8e zero size 2026-03-10T10:19:30.745 INFO:tasks.workunit.client.0.vm02.stdout:5/513: write d1/db/f2f [1829022,61924] 0 2026-03-10T10:19:30.753 INFO:tasks.workunit.client.0.vm02.stdout:5/514: dwrite d1/db/f15 [0,4194304] 0 2026-03-10T10:19:30.754 INFO:tasks.workunit.client.0.vm02.stdout:5/515: chown d1/db/d11/d13/d28 269297015 1 2026-03-10T10:19:30.754 INFO:tasks.workunit.client.0.vm02.stdout:5/516: chown d1/db/d11/d13/f1c 747673 1 2026-03-10T10:19:30.760 INFO:tasks.workunit.client.1.vm05.stdout:5/356: getdents da/db/d26/d35 0 2026-03-10T10:19:30.762 INFO:tasks.workunit.client.0.vm02.stdout:5/517: creat d1/db/d11/d84/d40/fb3 x:0 0 0 2026-03-10T10:19:30.763 INFO:tasks.workunit.client.0.vm02.stdout:5/518: write d1/db/d11/d84/fb2 [120436,123979] 0 2026-03-10T10:19:30.768 INFO:tasks.workunit.client.0.vm02.stdout:5/519: creat d1/db/d11/d16/d79/d85/d93/fb4 x:0 0 0 2026-03-10T10:19:30.777 INFO:tasks.workunit.client.1.vm05.stdout:8/230: dwrite d7/f9 [4194304,4194304] 0 2026-03-10T10:19:30.777 INFO:tasks.workunit.client.1.vm05.stdout:8/231: symlink d7/d2f/l46 0 2026-03-10T10:19:30.778 INFO:tasks.workunit.client.0.vm02.stdout:5/520: creat 
d1/db/d11/d16/d48/fb5 x:0 0 0 2026-03-10T10:19:30.778 INFO:tasks.workunit.client.0.vm02.stdout:5/521: stat d1/d9c/fa9 0 2026-03-10T10:19:30.778 INFO:tasks.workunit.client.0.vm02.stdout:3/320: truncate d1/d6/f48 3158716 0 2026-03-10T10:19:30.778 INFO:tasks.workunit.client.0.vm02.stdout:3/321: creat d1/d20/d52/f6b x:0 0 0 2026-03-10T10:19:30.778 INFO:tasks.workunit.client.0.vm02.stdout:3/322: write d1/f54 [1603342,68037] 0 2026-03-10T10:19:30.778 INFO:tasks.workunit.client.0.vm02.stdout:3/323: truncate d1/d6/f49 2194043 0 2026-03-10T10:19:30.780 INFO:tasks.workunit.client.0.vm02.stdout:6/348: truncate d0/d8/d29/d2f/d4b/f17 2332724 0 2026-03-10T10:19:30.783 INFO:tasks.workunit.client.0.vm02.stdout:3/324: truncate d1/f28 984297 0 2026-03-10T10:19:30.784 INFO:tasks.workunit.client.0.vm02.stdout:3/325: truncate d1/d20/d52/f6b 683278 0 2026-03-10T10:19:30.784 INFO:tasks.workunit.client.0.vm02.stdout:3/326: write d1/d6/f43 [1030118,115770] 0 2026-03-10T10:19:30.787 INFO:tasks.workunit.client.1.vm05.stdout:8/232: getdents d7/d14/d3a 0 2026-03-10T10:19:30.791 INFO:tasks.workunit.client.1.vm05.stdout:8/233: dwrite d7/d14/d15/f2e [4194304,4194304] 0 2026-03-10T10:19:30.796 INFO:tasks.workunit.client.0.vm02.stdout:6/349: dwrite d0/d8/d29/d2f/f38 [0,4194304] 0 2026-03-10T10:19:30.796 INFO:tasks.workunit.client.0.vm02.stdout:6/350: dwrite d0/d3a/f40 [0,4194304] 0 2026-03-10T10:19:30.810 INFO:tasks.workunit.client.0.vm02.stdout:3/327: dread d1/f25 [0,4194304] 0 2026-03-10T10:19:30.815 INFO:tasks.workunit.client.1.vm05.stdout:8/234: mknod d7/d14/d15/d3b/c47 0 2026-03-10T10:19:30.816 INFO:tasks.workunit.client.1.vm05.stdout:8/235: truncate f6 2885222 0 2026-03-10T10:19:30.818 INFO:tasks.workunit.client.0.vm02.stdout:3/328: fsync d1/f14 0 2026-03-10T10:19:30.819 INFO:tasks.workunit.client.0.vm02.stdout:6/351: symlink d0/d8/l74 0 2026-03-10T10:19:30.822 INFO:tasks.workunit.client.0.vm02.stdout:6/352: dread d0/d8/d9/f4f [0,4194304] 0 2026-03-10T10:19:30.822 
INFO:tasks.workunit.client.0.vm02.stdout:3/329: creat d1/d20/d52/f6c x:0 0 0 2026-03-10T10:19:30.823 INFO:tasks.workunit.client.0.vm02.stdout:3/330: dread - d1/d6/f63 zero size 2026-03-10T10:19:30.825 INFO:tasks.workunit.client.0.vm02.stdout:6/353: creat d0/d8/d29/d6d/d32/f75 x:0 0 0 2026-03-10T10:19:30.834 INFO:tasks.workunit.client.0.vm02.stdout:6/354: rmdir d0/d8/d9 39 2026-03-10T10:19:30.836 INFO:tasks.workunit.client.0.vm02.stdout:2/385: truncate d0/d1a/d49/d5e/f63 1508101 0 2026-03-10T10:19:30.836 INFO:tasks.workunit.client.0.vm02.stdout:3/331: mknod d1/d8/c6d 0 2026-03-10T10:19:30.839 INFO:tasks.workunit.client.0.vm02.stdout:2/386: mkdir d0/d1a/d24/d80 0 2026-03-10T10:19:30.840 INFO:tasks.workunit.client.0.vm02.stdout:3/332: symlink d1/d8/d44/l6e 0 2026-03-10T10:19:30.842 INFO:tasks.workunit.client.1.vm05.stdout:3/365: dwrite dd/d15/d24/d2c/f32 [4194304,4194304] 0 2026-03-10T10:19:30.860 INFO:tasks.workunit.client.1.vm05.stdout:3/366: truncate dd/d15/f6a 4976524 0 2026-03-10T10:19:30.860 INFO:tasks.workunit.client.0.vm02.stdout:2/387: rmdir d0/d1a/d49 39 2026-03-10T10:19:30.860 INFO:tasks.workunit.client.0.vm02.stdout:3/333: dwrite d1/d8/d21/f47 [0,4194304] 0 2026-03-10T10:19:30.861 INFO:tasks.workunit.client.0.vm02.stdout:3/334: dread d1/d20/f38 [0,4194304] 0 2026-03-10T10:19:30.861 INFO:tasks.workunit.client.0.vm02.stdout:3/335: creat d1/d20/d52/f6f x:0 0 0 2026-03-10T10:19:30.861 INFO:tasks.workunit.client.0.vm02.stdout:6/355: getdents d0/d8/d29 0 2026-03-10T10:19:30.862 INFO:tasks.workunit.client.0.vm02.stdout:2/388: mkdir d0/d10/d81 0 2026-03-10T10:19:30.865 INFO:tasks.workunit.client.0.vm02.stdout:6/356: symlink d0/d8/d29/d6d/l76 0 2026-03-10T10:19:30.873 INFO:tasks.workunit.client.1.vm05.stdout:8/236: sync 2026-03-10T10:19:30.875 INFO:tasks.workunit.client.1.vm05.stdout:8/237: mknod d7/d14/d24/c48 0 2026-03-10T10:19:30.877 INFO:tasks.workunit.client.1.vm05.stdout:8/238: mkdir d7/d14/d3a/d49 0 2026-03-10T10:19:30.891 
INFO:tasks.workunit.client.1.vm05.stdout:8/239: sync 2026-03-10T10:19:30.895 INFO:tasks.workunit.client.1.vm05.stdout:8/240: mknod d7/d14/d24/d3f/c4a 0 2026-03-10T10:19:30.897 INFO:tasks.workunit.client.1.vm05.stdout:8/241: link d7/d14/d24/f34 d7/d2f/f4b 0 2026-03-10T10:19:30.899 INFO:tasks.workunit.client.1.vm05.stdout:8/242: dread d7/fd [0,4194304] 0 2026-03-10T10:19:30.909 INFO:tasks.workunit.client.1.vm05.stdout:8/243: dread d7/d14/d15/f39 [4194304,4194304] 0 2026-03-10T10:19:30.910 INFO:tasks.workunit.client.1.vm05.stdout:3/367: dread dd/d15/f23 [0,4194304] 0 2026-03-10T10:19:30.911 INFO:tasks.workunit.client.1.vm05.stdout:8/244: write d7/f11 [3442994,6641] 0 2026-03-10T10:19:30.912 INFO:tasks.workunit.client.1.vm05.stdout:3/368: unlink dd/l10 0 2026-03-10T10:19:30.916 INFO:tasks.workunit.client.1.vm05.stdout:8/245: dwrite d7/d14/d15/f39 [0,4194304] 0 2026-03-10T10:19:30.922 INFO:tasks.workunit.client.0.vm02.stdout:1/326: dwrite d4/f21 [0,4194304] 0 2026-03-10T10:19:30.932 INFO:tasks.workunit.client.1.vm05.stdout:8/246: creat d7/d14/f4c x:0 0 0 2026-03-10T10:19:30.934 INFO:tasks.workunit.client.1.vm05.stdout:3/369: rmdir dd/d15/d24/d2c/d3b/d7c 0 2026-03-10T10:19:30.934 INFO:tasks.workunit.client.0.vm02.stdout:1/327: mknod d4/da/d1a/d22/c6b 0 2026-03-10T10:19:30.935 INFO:tasks.workunit.client.1.vm05.stdout:8/247: symlink d7/d14/d3a/l4d 0 2026-03-10T10:19:30.935 INFO:tasks.workunit.client.0.vm02.stdout:1/328: write d4/da/d27/d38/f4e [3569612,64150] 0 2026-03-10T10:19:30.936 INFO:tasks.workunit.client.0.vm02.stdout:1/329: fsync d4/da/d1a/d22/f32 0 2026-03-10T10:19:30.940 INFO:tasks.workunit.client.0.vm02.stdout:1/330: creat d4/d2c/d53/f6c x:0 0 0 2026-03-10T10:19:30.953 INFO:tasks.workunit.client.1.vm05.stdout:3/370: getdents dd/d20 0 2026-03-10T10:19:30.953 INFO:tasks.workunit.client.1.vm05.stdout:3/371: truncate dd/d15/f23 29033 0 2026-03-10T10:19:30.953 INFO:tasks.workunit.client.1.vm05.stdout:3/372: readlink dd/l1e 0 2026-03-10T10:19:30.953 
INFO:tasks.workunit.client.1.vm05.stdout:3/373: readlink dd/lf 0 2026-03-10T10:19:30.953 INFO:tasks.workunit.client.1.vm05.stdout:3/374: mknod dd/d15/c82 0 2026-03-10T10:19:30.953 INFO:tasks.workunit.client.0.vm02.stdout:1/331: rename d4/da/c10 to d4/da/d27/d38/d3c/c6d 0 2026-03-10T10:19:30.953 INFO:tasks.workunit.client.0.vm02.stdout:1/332: write d4/da/d27/f66 [171870,117947] 0 2026-03-10T10:19:30.953 INFO:tasks.workunit.client.0.vm02.stdout:1/333: fsync d4/f5 0 2026-03-10T10:19:30.953 INFO:tasks.workunit.client.0.vm02.stdout:1/334: fdatasync d4/da/d1a/f1c 0 2026-03-10T10:19:30.953 INFO:tasks.workunit.client.0.vm02.stdout:1/335: write d4/d2c/d53/f69 [362398,50082] 0 2026-03-10T10:19:30.954 INFO:tasks.workunit.client.0.vm02.stdout:1/336: creat d4/da/d1a/d47/d65/f6e x:0 0 0 2026-03-10T10:19:30.954 INFO:tasks.workunit.client.0.vm02.stdout:1/337: write d4/ff [9288590,121861] 0 2026-03-10T10:19:30.961 INFO:tasks.workunit.client.0.vm02.stdout:0/380: write d9/f6c [2271240,73281] 0 2026-03-10T10:19:30.965 INFO:tasks.workunit.client.0.vm02.stdout:0/381: unlink d9/d18/d1a/d46/c50 0 2026-03-10T10:19:31.026 INFO:tasks.workunit.client.0.vm02.stdout:7/343: write d1/dc/ff [601326,128395] 0 2026-03-10T10:19:31.028 INFO:tasks.workunit.client.1.vm05.stdout:4/261: mknod d1/d31/dc/d40/d45/c55 0 2026-03-10T10:19:31.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:30 vm02.local ceph-mon[50200]: pgmap v154: 65 pgs: 65 active+clean; 1.4 GiB data, 5.3 GiB used, 115 GiB / 120 GiB avail; 20 MiB/s rd, 114 MiB/s wr, 184 op/s 2026-03-10T10:19:31.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:30 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:31.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:30 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.coparq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", 
"allow *"]}]: dispatch 2026-03-10T10:19:31.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:30 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:19:31.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:30 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:31.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:30 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:31.030 INFO:tasks.workunit.client.1.vm05.stdout:4/262: symlink d1/d31/dc/d40/l56 0 2026-03-10T10:19:31.037 INFO:tasks.workunit.client.1.vm05.stdout:1/286: write d4/dd/f21 [569711,88890] 0 2026-03-10T10:19:31.042 INFO:tasks.workunit.client.1.vm05.stdout:1/287: rename d4/d39/f3b to d4/d3d/f57 0 2026-03-10T10:19:31.047 INFO:tasks.workunit.client.1.vm05.stdout:1/288: dwrite d4/d39/f3a [0,4194304] 0 2026-03-10T10:19:31.051 INFO:tasks.workunit.client.1.vm05.stdout:4/263: dread d1/d31/f2d [0,4194304] 0 2026-03-10T10:19:31.054 INFO:tasks.workunit.client.1.vm05.stdout:1/289: chown d4/l22 99399 1 2026-03-10T10:19:31.063 INFO:tasks.workunit.client.0.vm02.stdout:9/325: write da/d3c/d4c/f29 [1579930,94411] 0 2026-03-10T10:19:31.063 INFO:tasks.workunit.client.1.vm05.stdout:1/290: dread d4/d39/f3a [0,4194304] 0 2026-03-10T10:19:31.066 INFO:tasks.workunit.client.0.vm02.stdout:9/326: unlink da/d3c/d4c/l1c 0 2026-03-10T10:19:31.068 INFO:tasks.workunit.client.0.vm02.stdout:9/327: truncate da/ff 4696933 0 2026-03-10T10:19:31.078 INFO:tasks.workunit.client.1.vm05.stdout:4/264: dwrite d1/d31/dc/d40/d45/f50 [0,4194304] 0 2026-03-10T10:19:31.079 INFO:tasks.workunit.client.1.vm05.stdout:6/263: truncate f2 4674021 0 2026-03-10T10:19:31.079 
INFO:tasks.workunit.client.0.vm02.stdout:9/328: symlink da/d3c/d4c/l6b 0 2026-03-10T10:19:31.079 INFO:tasks.workunit.client.0.vm02.stdout:9/329: mknod da/d3c/d4c/d2c/d34/c6c 0 2026-03-10T10:19:31.079 INFO:tasks.workunit.client.1.vm05.stdout:0/309: dwrite d1/d2/fc [0,4194304] 0 2026-03-10T10:19:31.085 INFO:tasks.workunit.client.1.vm05.stdout:4/265: chown d1/d31/dc/f2a 21 1 2026-03-10T10:19:31.086 INFO:tasks.workunit.client.0.vm02.stdout:7/344: sync 2026-03-10T10:19:31.089 INFO:tasks.workunit.client.1.vm05.stdout:4/266: write d1/d3/f5 [5602622,114535] 0 2026-03-10T10:19:31.096 INFO:tasks.workunit.client.1.vm05.stdout:2/335: dwrite f1 [0,4194304] 0 2026-03-10T10:19:31.100 INFO:tasks.workunit.client.0.vm02.stdout:8/384: dwrite d1/d1c/d23/d3e/f5a [0,4194304] 0 2026-03-10T10:19:31.100 INFO:tasks.workunit.client.1.vm05.stdout:0/310: rename d1/d2/d9/d31/d13/d2f/f5e to d1/d2/d39/d3d/f64 0 2026-03-10T10:19:31.112 INFO:tasks.workunit.client.1.vm05.stdout:4/267: creat d1/d31/dc/d40/d45/f57 x:0 0 0 2026-03-10T10:19:31.114 INFO:tasks.workunit.client.1.vm05.stdout:0/311: symlink d1/d2/d9/d31/d13/d2f/l65 0 2026-03-10T10:19:31.114 INFO:tasks.workunit.client.1.vm05.stdout:4/268: write d1/d31/dc/f1f [5464704,128467] 0 2026-03-10T10:19:31.115 INFO:tasks.workunit.client.1.vm05.stdout:4/269: chown d1/d3/c22 97 1 2026-03-10T10:19:31.116 INFO:tasks.workunit.client.1.vm05.stdout:2/336: mkdir db/d61/d67 0 2026-03-10T10:19:31.141 INFO:tasks.workunit.client.1.vm05.stdout:1/291: dread d4/d20/f2c [4194304,4194304] 0 2026-03-10T10:19:31.146 INFO:tasks.workunit.client.1.vm05.stdout:0/312: dwrite d1/d2/d39/d3d/f64 [0,4194304] 0 2026-03-10T10:19:31.146 INFO:tasks.workunit.client.1.vm05.stdout:2/337: truncate db/d12/f1d 1551887 0 2026-03-10T10:19:31.149 INFO:tasks.workunit.client.1.vm05.stdout:1/292: symlink d4/l58 0 2026-03-10T10:19:31.159 INFO:tasks.workunit.client.1.vm05.stdout:1/293: chown d4/ca 876980940 1 2026-03-10T10:19:31.162 INFO:tasks.workunit.client.1.vm05.stdout:0/313: symlink 
d1/d2/d9/d50/l66 0 2026-03-10T10:19:31.162 INFO:tasks.workunit.client.1.vm05.stdout:1/294: symlink d4/df/d1c/l59 0 2026-03-10T10:19:31.166 INFO:tasks.workunit.client.1.vm05.stdout:0/314: write d1/d2/d39/d3d/f64 [3004941,6987] 0 2026-03-10T10:19:31.173 INFO:tasks.workunit.client.0.vm02.stdout:9/330: truncate da/d3c/d4c/d2c/f32 977132 0 2026-03-10T10:19:31.179 INFO:tasks.workunit.client.0.vm02.stdout:9/331: symlink da/d3c/d4c/l6d 0 2026-03-10T10:19:31.179 INFO:tasks.workunit.client.1.vm05.stdout:9/290: dwrite d0/d1/d13/f8 [0,4194304] 0 2026-03-10T10:19:31.179 INFO:tasks.workunit.client.1.vm05.stdout:1/295: dwrite d4/d39/f54 [0,4194304] 0 2026-03-10T10:19:31.179 INFO:tasks.workunit.client.0.vm02.stdout:9/332: dread da/f5c [0,4194304] 0 2026-03-10T10:19:31.181 INFO:tasks.workunit.client.0.vm02.stdout:9/333: chown da/d3c/d4c/d38/f47 133 1 2026-03-10T10:19:31.182 INFO:tasks.workunit.client.0.vm02.stdout:4/476: rmdir d1/d52 39 2026-03-10T10:19:31.191 INFO:tasks.workunit.client.1.vm05.stdout:0/315: mknod d1/d2/d9/d31/d54/c67 0 2026-03-10T10:19:31.192 INFO:tasks.workunit.client.1.vm05.stdout:5/357: truncate da/f41 3278129 0 2026-03-10T10:19:31.192 INFO:tasks.workunit.client.0.vm02.stdout:9/334: mknod da/d3c/d4c/d2c/d34/d35/c6e 0 2026-03-10T10:19:31.195 INFO:tasks.workunit.client.0.vm02.stdout:5/522: truncate d1/db/f1e 2485226 0 2026-03-10T10:19:31.204 INFO:tasks.workunit.client.1.vm05.stdout:1/296: dwrite d4/d39/d3e/f4d [0,4194304] 0 2026-03-10T10:19:31.204 INFO:tasks.workunit.client.0.vm02.stdout:9/335: dread da/d3c/d4c/f3b [0,4194304] 0 2026-03-10T10:19:31.205 INFO:tasks.workunit.client.1.vm05.stdout:0/316: chown d1/d2/d39/l47 47068 1 2026-03-10T10:19:31.214 INFO:tasks.workunit.client.0.vm02.stdout:3/336: getdents d1/d20/d52 0 2026-03-10T10:19:31.216 INFO:tasks.workunit.client.0.vm02.stdout:2/389: write d0/d1a/d49/f4f [3190343,33090] 0 2026-03-10T10:19:31.217 INFO:tasks.workunit.client.0.vm02.stdout:6/357: dwrite d0/d8/d9/f14 [0,4194304] 0 2026-03-10T10:19:31.226 
INFO:tasks.workunit.client.1.vm05.stdout:0/317: mkdir d1/d2/d5d/d68 0 2026-03-10T10:19:31.229 INFO:tasks.workunit.client.1.vm05.stdout:9/291: rename d0/d1/d13/c4b to d0/d1/c5e 0 2026-03-10T10:19:31.231 INFO:tasks.workunit.client.0.vm02.stdout:5/523: link d1/db/d11/d84/d40/d4f/d5f/f6b d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fb6 0 2026-03-10T10:19:31.232 INFO:tasks.workunit.client.0.vm02.stdout:5/524: dread - d1/db/d11/d84/d40/d4f/d5f/d6d/d71/f80 zero size 2026-03-10T10:19:31.237 INFO:tasks.workunit.client.0.vm02.stdout:4/477: rmdir d1/d41/d5e/d78/d55/d9e 0 2026-03-10T10:19:31.238 INFO:tasks.workunit.client.1.vm05.stdout:1/297: rename d4/dd/l1a to d4/d37/l5a 0 2026-03-10T10:19:31.240 INFO:tasks.workunit.client.1.vm05.stdout:8/248: write d7/f2b [423141,24271] 0 2026-03-10T10:19:31.248 INFO:tasks.workunit.client.0.vm02.stdout:1/338: truncate d4/f26 1950805 0 2026-03-10T10:19:31.266 INFO:tasks.workunit.client.0.vm02.stdout:0/382: truncate f2 2452753 0 2026-03-10T10:19:31.266 INFO:tasks.workunit.client.0.vm02.stdout:6/358: creat d0/d8/d29/d2f/f77 x:0 0 0 2026-03-10T10:19:31.266 INFO:tasks.workunit.client.0.vm02.stdout:5/525: mknod d1/db/d11/d84/cb7 0 2026-03-10T10:19:31.266 INFO:tasks.workunit.client.1.vm05.stdout:3/375: dwrite dd/d15/d1f/f53 [0,4194304] 0 2026-03-10T10:19:31.266 INFO:tasks.workunit.client.1.vm05.stdout:3/376: fsync dd/d15/d24/d2c/d3b/f67 0 2026-03-10T10:19:31.266 INFO:tasks.workunit.client.1.vm05.stdout:1/298: symlink d4/d37/d4e/l5b 0 2026-03-10T10:19:31.266 INFO:tasks.workunit.client.1.vm05.stdout:0/318: creat d1/d2/d39/f69 x:0 0 0 2026-03-10T10:19:31.266 INFO:tasks.workunit.client.1.vm05.stdout:0/319: write d1/d2/d9/d31/d12/d20/f37 [746457,22789] 0 2026-03-10T10:19:31.266 INFO:tasks.workunit.client.1.vm05.stdout:0/320: readlink d1/d2/d9/d31/d13/d17/l2b 0 2026-03-10T10:19:31.268 INFO:tasks.workunit.client.0.vm02.stdout:5/526: dread d1/db/f15 [0,4194304] 0 2026-03-10T10:19:31.277 INFO:tasks.workunit.client.1.vm05.stdout:3/377: creat dd/d15/d24/d2c/d3b/f83 x:0 
0 0 2026-03-10T10:19:31.281 INFO:tasks.workunit.client.0.vm02.stdout:2/390: mknod d0/d10/c82 0 2026-03-10T10:19:31.281 INFO:tasks.workunit.client.1.vm05.stdout:7/384: dwrite d5/d1d/d20/d2d/f58 [0,4194304] 0 2026-03-10T10:19:31.283 INFO:tasks.workunit.client.1.vm05.stdout:3/378: dread dd/fe [4194304,4194304] 0 2026-03-10T10:19:31.283 INFO:tasks.workunit.client.1.vm05.stdout:8/249: sync 2026-03-10T10:19:31.289 INFO:tasks.workunit.client.1.vm05.stdout:1/299: rename d4/d37/d4e/c52 to d4/df/d1c/c5c 0 2026-03-10T10:19:31.291 INFO:tasks.workunit.client.0.vm02.stdout:1/339: unlink d4/d1b/l2f 0 2026-03-10T10:19:31.293 INFO:tasks.workunit.client.0.vm02.stdout:0/383: rmdir d9/d18/d1a/d22/d24 39 2026-03-10T10:19:31.296 INFO:tasks.workunit.client.0.vm02.stdout:6/359: creat d0/d8/d29/d2f/d50/f78 x:0 0 0 2026-03-10T10:19:31.300 INFO:tasks.workunit.client.0.vm02.stdout:9/336: creat da/f6f x:0 0 0 2026-03-10T10:19:31.306 INFO:tasks.workunit.client.0.vm02.stdout:5/527: creat d1/db/d11/d84/d40/d4f/d5f/d6d/fb8 x:0 0 0 2026-03-10T10:19:31.309 INFO:tasks.workunit.client.0.vm02.stdout:2/391: mknod d0/d1a/d24/c83 0 2026-03-10T10:19:31.319 INFO:tasks.workunit.client.0.vm02.stdout:2/392: chown d0/d1a/d24/c3b 13747063 1 2026-03-10T10:19:31.319 INFO:tasks.workunit.client.0.vm02.stdout:2/393: chown d0/f70 984 1 2026-03-10T10:19:31.319 INFO:tasks.workunit.client.1.vm05.stdout:7/385: dwrite d5/d17/f40 [4194304,4194304] 0 2026-03-10T10:19:31.319 INFO:tasks.workunit.client.1.vm05.stdout:7/386: write d5/d1d/f31 [1553475,58268] 0 2026-03-10T10:19:31.319 INFO:tasks.workunit.client.1.vm05.stdout:1/300: fdatasync d4/d37/f41 0 2026-03-10T10:19:31.319 INFO:tasks.workunit.client.0.vm02.stdout:1/340: creat d4/d1b/f6f x:0 0 0 2026-03-10T10:19:31.320 INFO:tasks.workunit.client.1.vm05.stdout:0/321: mknod d1/d2/d9/d31/d12/d41/c6a 0 2026-03-10T10:19:31.323 INFO:tasks.workunit.client.0.vm02.stdout:6/360: creat d0/d8/d9/f79 x:0 0 0 2026-03-10T10:19:31.325 INFO:tasks.workunit.client.0.vm02.stdout:9/337: mkdir 
da/d3c/d4c/d38/d4a/d70 0 2026-03-10T10:19:31.333 INFO:tasks.workunit.client.0.vm02.stdout:5/528: chown d1/db/d11/d84/c4d 17563 1 2026-03-10T10:19:31.338 INFO:tasks.workunit.client.1.vm05.stdout:1/301: write d4/d39/f3a [3644101,62730] 0 2026-03-10T10:19:31.339 INFO:tasks.workunit.client.1.vm05.stdout:0/322: write d1/d2/d9/d31/d13/d17/f5a [1048305,6500] 0 2026-03-10T10:19:31.343 INFO:tasks.workunit.client.1.vm05.stdout:3/379: creat dd/d15/f84 x:0 0 0 2026-03-10T10:19:31.344 INFO:tasks.workunit.client.0.vm02.stdout:6/361: rename d0/d3a to d0/d8/d9/d7a 0 2026-03-10T10:19:31.347 INFO:tasks.workunit.client.1.vm05.stdout:6/264: dread f3 [0,4194304] 0 2026-03-10T10:19:31.347 INFO:tasks.workunit.client.0.vm02.stdout:9/338: unlink da/d3c/d4c/d2c/d34/d35/l48 0 2026-03-10T10:19:31.351 INFO:tasks.workunit.client.0.vm02.stdout:2/394: symlink d0/d71/l84 0 2026-03-10T10:19:31.355 INFO:tasks.workunit.client.1.vm05.stdout:4/270: write d1/d31/f41 [715779,78762] 0 2026-03-10T10:19:31.362 INFO:tasks.workunit.client.1.vm05.stdout:3/380: fdatasync dd/d20/f50 0 2026-03-10T10:19:31.366 INFO:tasks.workunit.client.0.vm02.stdout:0/384: creat d9/d18/d1a/f7e x:0 0 0 2026-03-10T10:19:31.366 INFO:tasks.workunit.client.1.vm05.stdout:8/250: dread d7/f1c [0,4194304] 0 2026-03-10T10:19:31.366 INFO:tasks.workunit.client.1.vm05.stdout:3/381: dread - dd/d15/d24/d2c/d3b/f77 zero size 2026-03-10T10:19:31.366 INFO:tasks.workunit.client.1.vm05.stdout:3/382: dread - dd/d39/f51 zero size 2026-03-10T10:19:31.366 INFO:tasks.workunit.client.1.vm05.stdout:3/383: read dd/d39/d66/f6e [167945,51895] 0 2026-03-10T10:19:31.366 INFO:tasks.workunit.client.1.vm05.stdout:3/384: chown dd/l17 667903 1 2026-03-10T10:19:31.366 INFO:tasks.workunit.client.1.vm05.stdout:1/302: sync 2026-03-10T10:19:31.366 INFO:tasks.workunit.client.1.vm05.stdout:7/387: creat d5/f76 x:0 0 0 2026-03-10T10:19:31.369 INFO:tasks.workunit.client.1.vm05.stdout:6/265: fdatasync f2 0 2026-03-10T10:19:31.371 INFO:tasks.workunit.client.1.vm05.stdout:6/266: 
dread f3 [0,4194304] 0 2026-03-10T10:19:31.372 INFO:tasks.workunit.client.1.vm05.stdout:0/323: truncate d1/d2/d9/d31/d54/f24 3225304 0 2026-03-10T10:19:31.376 INFO:tasks.workunit.client.1.vm05.stdout:5/358: write da/db/d26/d5c/f33 [670321,54863] 0 2026-03-10T10:19:31.378 INFO:tasks.workunit.client.0.vm02.stdout:5/529: rename d1/db/d11/d16/d79/lac to d1/db/d11/d62/d67/lb9 0 2026-03-10T10:19:31.380 INFO:tasks.workunit.client.1.vm05.stdout:4/271: symlink d1/d31/dc/d40/d45/l58 0 2026-03-10T10:19:31.386 INFO:tasks.workunit.client.1.vm05.stdout:9/292: write d0/d1/d13/f22 [6319895,87979] 0 2026-03-10T10:19:31.386 INFO:tasks.workunit.client.0.vm02.stdout:3/337: write d1/d6/f42 [421902,127427] 0 2026-03-10T10:19:31.389 INFO:tasks.workunit.client.1.vm05.stdout:9/293: dwrite d0/d1/d13/d26/f4e [0,4194304] 0 2026-03-10T10:19:31.391 INFO:tasks.workunit.client.1.vm05.stdout:9/294: chown d0/d1/d13/de/d21/c42 27 1 2026-03-10T10:19:31.391 INFO:tasks.workunit.client.0.vm02.stdout:8/385: dwrite d1/d1c/f1e [0,4194304] 0 2026-03-10T10:19:31.393 INFO:tasks.workunit.client.1.vm05.stdout:1/303: fdatasync d4/d39/d3e/f4d 0 2026-03-10T10:19:31.394 INFO:tasks.workunit.client.1.vm05.stdout:1/304: chown d4/d3d/c4f 60556211 1 2026-03-10T10:19:31.395 INFO:tasks.workunit.client.0.vm02.stdout:9/339: dread da/ff [0,4194304] 0 2026-03-10T10:19:31.409 INFO:tasks.workunit.client.1.vm05.stdout:8/251: readlink d7/l18 0 2026-03-10T10:19:31.416 INFO:tasks.workunit.client.0.vm02.stdout:7/345: dread d1/f5 [0,4194304] 0 2026-03-10T10:19:31.421 INFO:tasks.workunit.client.0.vm02.stdout:7/346: chown d1/dc/d10/f27 6210528 1 2026-03-10T10:19:31.421 INFO:tasks.workunit.client.0.vm02.stdout:1/341: rmdir d4/da/d57 0 2026-03-10T10:19:31.423 INFO:tasks.workunit.client.0.vm02.stdout:6/362: fdatasync d0/f20 0 2026-03-10T10:19:31.426 INFO:tasks.workunit.client.1.vm05.stdout:6/267: stat dd/d36/d3f/d12/d44/d30/c39 0 2026-03-10T10:19:31.426 INFO:tasks.workunit.client.0.vm02.stdout:4/478: write d1/d10/db/f43 [354893,68271] 0 
2026-03-10T10:19:31.426 INFO:tasks.workunit.client.0.vm02.stdout:4/479: chown d1/d10/f30 107 1 2026-03-10T10:19:31.436 INFO:tasks.workunit.client.1.vm05.stdout:2/338: chown db/f19 106 1 2026-03-10T10:19:31.436 INFO:tasks.workunit.client.0.vm02.stdout:5/530: symlink d1/db/d11/d84/d40/d4f/d5f/d6d/lba 0 2026-03-10T10:19:31.436 INFO:tasks.workunit.client.1.vm05.stdout:5/359: read da/db/d28/f56 [715213,97148] 0 2026-03-10T10:19:31.436 INFO:tasks.workunit.client.0.vm02.stdout:5/531: readlink d1/db/d11/l34 0 2026-03-10T10:19:31.437 INFO:tasks.workunit.client.0.vm02.stdout:5/532: stat d1/c8b 0 2026-03-10T10:19:31.437 INFO:tasks.workunit.client.1.vm05.stdout:2/339: chown db/d2d/l3e 4 1 2026-03-10T10:19:31.441 INFO:tasks.workunit.client.1.vm05.stdout:4/272: rename d1/d31/f41 to d1/d31/d4b/f59 0 2026-03-10T10:19:31.443 INFO:tasks.workunit.client.1.vm05.stdout:6/268: sync 2026-03-10T10:19:31.445 INFO:tasks.workunit.client.1.vm05.stdout:5/360: read da/db/f1d [688904,119012] 0 2026-03-10T10:19:31.445 INFO:tasks.workunit.client.0.vm02.stdout:1/342: dread d4/da/d1a/f19 [0,4194304] 0 2026-03-10T10:19:31.447 INFO:tasks.workunit.client.1.vm05.stdout:6/269: dread f3 [0,4194304] 0 2026-03-10T10:19:31.449 INFO:tasks.workunit.client.0.vm02.stdout:3/338: symlink d1/d8/d21/l70 0 2026-03-10T10:19:31.449 INFO:tasks.workunit.client.1.vm05.stdout:9/295: symlink d0/df/d11/l5f 0 2026-03-10T10:19:31.451 INFO:tasks.workunit.client.1.vm05.stdout:1/305: creat d4/df/d1c/f5d x:0 0 0 2026-03-10T10:19:31.465 INFO:tasks.workunit.client.1.vm05.stdout:3/385: symlink dd/d15/d24/d74/l85 0 2026-03-10T10:19:31.467 INFO:tasks.workunit.client.1.vm05.stdout:7/388: mkdir d5/d1d/d20/d77 0 2026-03-10T10:19:31.471 INFO:tasks.workunit.client.1.vm05.stdout:3/386: dwrite dd/d15/d24/d2c/f60 [0,4194304] 0 2026-03-10T10:19:31.472 INFO:tasks.workunit.client.1.vm05.stdout:3/387: chown dd/d15/d24/f63 1 1 2026-03-10T10:19:31.489 INFO:tasks.workunit.client.0.vm02.stdout:4/480: symlink d1/d41/d5e/d78/d1a/l9f 0 
2026-03-10T10:19:31.492 INFO:tasks.workunit.client.0.vm02.stdout:5/533: mknod d1/db/d11/d16/d79/cbb 0 2026-03-10T10:19:31.492 INFO:tasks.workunit.client.0.vm02.stdout:5/534: write d1/f7f [138296,33126] 0 2026-03-10T10:19:31.497 INFO:tasks.workunit.client.0.vm02.stdout:1/343: mkdir d4/da/d27/d38/d3c/d70 0 2026-03-10T10:19:31.500 INFO:tasks.workunit.client.1.vm05.stdout:5/361: creat da/db/d28/d32/f79 x:0 0 0 2026-03-10T10:19:31.501 INFO:tasks.workunit.client.0.vm02.stdout:1/344: read d4/da/d1a/f1c [1038693,5783] 0 2026-03-10T10:19:31.502 INFO:tasks.workunit.client.0.vm02.stdout:3/339: mknod d1/d6/c71 0 2026-03-10T10:19:31.507 INFO:tasks.workunit.client.1.vm05.stdout:5/362: sync 2026-03-10T10:19:31.507 INFO:tasks.workunit.client.0.vm02.stdout:8/386: dwrite d1/d1c/d24/d35/f4f [0,4194304] 0 2026-03-10T10:19:31.519 INFO:tasks.workunit.client.1.vm05.stdout:9/296: rename d0/d1/d13/de/d21/c42 to d0/d1/d4c/c60 0 2026-03-10T10:19:31.521 INFO:tasks.workunit.client.1.vm05.stdout:8/252: truncate d7/d2f/f4b 655150 0 2026-03-10T10:19:31.522 INFO:tasks.workunit.client.1.vm05.stdout:1/306: dread d4/df/d1c/f38 [0,4194304] 0 2026-03-10T10:19:31.524 INFO:tasks.workunit.client.1.vm05.stdout:7/389: creat d5/d1d/d20/d35/f78 x:0 0 0 2026-03-10T10:19:31.525 INFO:tasks.workunit.client.0.vm02.stdout:6/363: unlink d0/d8/d9/f79 0 2026-03-10T10:19:31.526 INFO:tasks.workunit.client.0.vm02.stdout:6/364: chown d0/d8/d9/f4f 177976 1 2026-03-10T10:19:31.526 INFO:tasks.workunit.client.0.vm02.stdout:6/365: readlink d0/d8/d29/d2f/d4b/l68 0 2026-03-10T10:19:31.532 INFO:tasks.workunit.client.0.vm02.stdout:5/535: truncate d1/f68 598935 0 2026-03-10T10:19:31.535 INFO:tasks.workunit.client.1.vm05.stdout:1/307: sync 2026-03-10T10:19:31.540 INFO:tasks.workunit.client.1.vm05.stdout:4/273: symlink d1/d31/dc/d40/l5a 0 2026-03-10T10:19:31.543 INFO:tasks.workunit.client.0.vm02.stdout:2/395: dwrite d0/f44 [0,4194304] 0 2026-03-10T10:19:31.544 INFO:tasks.workunit.client.1.vm05.stdout:6/270: mkdir dd/d36/d3f/d12/d59 0 
2026-03-10T10:19:31.545 INFO:tasks.workunit.client.0.vm02.stdout:2/396: readlink d0/d1a/l3e 0 2026-03-10T10:19:31.547 INFO:tasks.workunit.client.0.vm02.stdout:3/340: symlink d1/d8/d44/l72 0 2026-03-10T10:19:31.549 INFO:tasks.workunit.client.1.vm05.stdout:5/363: mkdir da/db/d26/d35/d7a 0 2026-03-10T10:19:31.555 INFO:tasks.workunit.client.0.vm02.stdout:9/340: write da/ff [5596178,25310] 0 2026-03-10T10:19:31.555 INFO:tasks.workunit.client.0.vm02.stdout:7/347: write d1/f34 [2235163,119377] 0 2026-03-10T10:19:31.556 INFO:tasks.workunit.client.1.vm05.stdout:0/324: dwrite d1/d2/d9/d31/d54/f4 [0,4194304] 0 2026-03-10T10:19:31.559 INFO:tasks.workunit.client.0.vm02.stdout:8/387: dread - d1/d1c/d23/d25/f4c zero size 2026-03-10T10:19:31.563 INFO:tasks.workunit.client.0.vm02.stdout:7/348: dwrite d1/f17 [0,4194304] 0 2026-03-10T10:19:31.564 INFO:tasks.workunit.client.0.vm02.stdout:9/341: dwrite da/d3c/d4c/d2c/d34/f57 [0,4194304] 0 2026-03-10T10:19:31.564 INFO:tasks.workunit.client.1.vm05.stdout:1/308: dread d4/dd/f15 [8388608,4194304] 0 2026-03-10T10:19:31.568 INFO:tasks.workunit.client.0.vm02.stdout:7/349: dread - d1/dc/d16/d28/d2d/d36/f66 zero size 2026-03-10T10:19:31.573 INFO:tasks.workunit.client.0.vm02.stdout:4/481: rmdir d1/d41 39 2026-03-10T10:19:31.573 INFO:tasks.workunit.client.0.vm02.stdout:4/482: dread - d1/d32/f95 zero size 2026-03-10T10:19:31.578 INFO:tasks.workunit.client.1.vm05.stdout:0/325: dread d1/d2/d9/d31/d13/d17/f5a [0,4194304] 0 2026-03-10T10:19:31.583 INFO:tasks.workunit.client.1.vm05.stdout:9/297: mknod d0/d1/d13/d26/c61 0 2026-03-10T10:19:31.586 INFO:tasks.workunit.client.1.vm05.stdout:7/390: symlink d5/d1d/d29/d60/l79 0 2026-03-10T10:19:31.607 INFO:tasks.workunit.client.0.vm02.stdout:6/366: dread d0/d8/f64 [0,4194304] 0 2026-03-10T10:19:31.607 INFO:tasks.workunit.client.0.vm02.stdout:2/397: rmdir d0/d1a/d49 39 2026-03-10T10:19:31.607 INFO:tasks.workunit.client.0.vm02.stdout:4/483: fsync d1/d41/d5e/d78/d37/f2e 0 2026-03-10T10:19:31.607 
INFO:tasks.workunit.client.0.vm02.stdout:4/484: truncate d1/d41/d5e/d78/d1a/f98 72977 0 2026-03-10T10:19:31.607 INFO:tasks.workunit.client.0.vm02.stdout:6/367: rmdir d0/d8/d29/d2f 39 2026-03-10T10:19:31.607 INFO:tasks.workunit.client.0.vm02.stdout:0/385: dwrite f2 [0,4194304] 0 2026-03-10T10:19:31.607 INFO:tasks.workunit.client.1.vm05.stdout:3/388: write dd/d39/f6f [1683040,58520] 0 2026-03-10T10:19:31.607 INFO:tasks.workunit.client.1.vm05.stdout:4/274: rename d1/d31/l4d to d1/d31/l5b 0 2026-03-10T10:19:31.607 INFO:tasks.workunit.client.1.vm05.stdout:4/275: stat d1/d3 0 2026-03-10T10:19:31.607 INFO:tasks.workunit.client.1.vm05.stdout:6/271: rmdir dd/d36/d3f/d12/d44/d2a/d3d 39 2026-03-10T10:19:31.607 INFO:tasks.workunit.client.1.vm05.stdout:6/272: write dd/f14 [1157139,16767] 0 2026-03-10T10:19:31.607 INFO:tasks.workunit.client.1.vm05.stdout:5/364: write da/db/d26/d35/f31 [567316,33711] 0 2026-03-10T10:19:31.607 INFO:tasks.workunit.client.1.vm05.stdout:6/273: dread dd/f14 [0,4194304] 0 2026-03-10T10:19:31.612 INFO:tasks.workunit.client.0.vm02.stdout:5/536: sync 2026-03-10T10:19:31.612 INFO:tasks.workunit.client.0.vm02.stdout:3/341: sync 2026-03-10T10:19:31.612 INFO:tasks.workunit.client.0.vm02.stdout:5/537: write d1/db/d11/d16/d79/d85/fb0 [180914,111190] 0 2026-03-10T10:19:31.614 INFO:tasks.workunit.client.0.vm02.stdout:1/345: rename d4/d1b/f34 to d4/da/f71 0 2026-03-10T10:19:31.614 INFO:tasks.workunit.client.1.vm05.stdout:1/309: symlink d4/d37/l5e 0 2026-03-10T10:19:31.615 INFO:tasks.workunit.client.1.vm05.stdout:1/310: fdatasync d4/f46 0 2026-03-10T10:19:31.616 INFO:tasks.workunit.client.1.vm05.stdout:1/311: dread d4/d37/f41 [0,4194304] 0 2026-03-10T10:19:31.616 INFO:tasks.workunit.client.0.vm02.stdout:9/342: sync 2026-03-10T10:19:31.617 INFO:tasks.workunit.client.1.vm05.stdout:1/312: chown d4/d20/f2d 36457 1 2026-03-10T10:19:31.617 INFO:tasks.workunit.client.0.vm02.stdout:8/388: mknod d1/d1c/d23/c77 0 2026-03-10T10:19:31.617 
INFO:tasks.workunit.client.1.vm05.stdout:1/313: fsync d4/df/d1c/f23 0 2026-03-10T10:19:31.618 INFO:tasks.workunit.client.0.vm02.stdout:9/343: chown da/d3c/d4c/d38/l5b 6898 1 2026-03-10T10:19:31.618 INFO:tasks.workunit.client.0.vm02.stdout:8/389: chown d1/f40 1165 1 2026-03-10T10:19:31.635 INFO:tasks.workunit.client.1.vm05.stdout:1/314: write d4/d37/f41 [1442259,128949] 0 2026-03-10T10:19:31.635 INFO:tasks.workunit.client.1.vm05.stdout:1/315: dread - d4/df/d1c/f5d zero size 2026-03-10T10:19:31.635 INFO:tasks.workunit.client.1.vm05.stdout:7/391: rename d5/d1d/d29/d60/d6d to d5/d1d/d20/d2d/d5d/d7a 0 2026-03-10T10:19:31.636 INFO:tasks.workunit.client.0.vm02.stdout:8/390: truncate d1/d1c/d24/d35/f6e 4644207 0 2026-03-10T10:19:31.636 INFO:tasks.workunit.client.0.vm02.stdout:0/386: readlink d9/d18/d1a/l5c 0 2026-03-10T10:19:31.636 INFO:tasks.workunit.client.0.vm02.stdout:0/387: write d9/d34/d3d/d65/f7a [558805,102810] 0 2026-03-10T10:19:31.636 INFO:tasks.workunit.client.0.vm02.stdout:7/350: rename d1/fd to d1/dc/f69 0 2026-03-10T10:19:31.636 INFO:tasks.workunit.client.0.vm02.stdout:1/346: symlink d4/d2c/l72 0 2026-03-10T10:19:31.636 INFO:tasks.workunit.client.0.vm02.stdout:8/391: unlink d1/d1c/d23/c77 0 2026-03-10T10:19:31.639 INFO:tasks.workunit.client.0.vm02.stdout:5/538: truncate d1/db/d11/d13/f4e 466182 0 2026-03-10T10:19:31.642 INFO:tasks.workunit.client.0.vm02.stdout:4/485: rename d1/d41/c89 to d1/d41/d5e/d78/d7f/ca0 0 2026-03-10T10:19:31.659 INFO:tasks.workunit.client.0.vm02.stdout:9/344: rename da/c11 to da/d3c/d4c/d38/d4a/c71 0 2026-03-10T10:19:31.659 INFO:tasks.workunit.client.0.vm02.stdout:6/368: unlink d0/d8/d29/d6d/d32/l48 0 2026-03-10T10:19:31.660 INFO:tasks.workunit.client.1.vm05.stdout:9/298: mkdir d0/d1/d13/d62 0 2026-03-10T10:19:31.660 INFO:tasks.workunit.client.1.vm05.stdout:9/299: chown d0/df/d11/f2c 59744 1 2026-03-10T10:19:31.660 INFO:tasks.workunit.client.1.vm05.stdout:9/300: stat d0/df 0 2026-03-10T10:19:31.660 
INFO:tasks.workunit.client.1.vm05.stdout:9/301: write d0/df/d11/f50 [1121473,92733] 0 2026-03-10T10:19:31.660 INFO:tasks.workunit.client.1.vm05.stdout:7/392: creat d5/d1d/d20/d2d/d5d/d7a/f7b x:0 0 0 2026-03-10T10:19:31.660 INFO:tasks.workunit.client.1.vm05.stdout:9/302: fdatasync d0/d1/d13/de/d21/f53 0 2026-03-10T10:19:31.660 INFO:tasks.workunit.client.1.vm05.stdout:7/393: chown d5/d1d/d20/d2d/c4b 11797 1 2026-03-10T10:19:31.660 INFO:tasks.workunit.client.1.vm05.stdout:3/389: creat dd/d15/d69/f86 x:0 0 0 2026-03-10T10:19:31.660 INFO:tasks.workunit.client.1.vm05.stdout:4/276: mknod d1/d31/dc/c5c 0 2026-03-10T10:19:31.667 INFO:tasks.workunit.client.1.vm05.stdout:1/316: symlink d4/l5f 0 2026-03-10T10:19:31.670 INFO:tasks.workunit.client.0.vm02.stdout:7/351: rename d1/dc/d16/d28/d2d/c41 to d1/dc/d44/c6a 0 2026-03-10T10:19:31.672 INFO:tasks.workunit.client.0.vm02.stdout:2/398: dread d0/fe [0,4194304] 0 2026-03-10T10:19:31.674 INFO:tasks.workunit.client.1.vm05.stdout:8/253: getdents d7 0 2026-03-10T10:19:31.675 INFO:tasks.workunit.client.0.vm02.stdout:9/345: truncate da/f1b 417794 0 2026-03-10T10:19:31.678 INFO:tasks.workunit.client.1.vm05.stdout:8/254: dwrite d7/d14/d15/f39 [4194304,4194304] 0 2026-03-10T10:19:31.678 INFO:tasks.workunit.client.1.vm05.stdout:8/255: stat d7/d14/d24/f35 0 2026-03-10T10:19:31.683 INFO:tasks.workunit.client.0.vm02.stdout:8/392: sync 2026-03-10T10:19:31.684 INFO:tasks.workunit.client.0.vm02.stdout:8/393: chown d1/d1c/d24/d35/f6e 155861687 1 2026-03-10T10:19:31.686 INFO:tasks.workunit.client.1.vm05.stdout:4/277: dread d1/d31/dc/f3a [0,4194304] 0 2026-03-10T10:19:31.691 INFO:tasks.workunit.client.0.vm02.stdout:5/539: dread d1/f10 [0,4194304] 0 2026-03-10T10:19:31.695 INFO:tasks.workunit.client.1.vm05.stdout:3/390: dread dd/d15/d24/f2f [0,4194304] 0 2026-03-10T10:19:31.707 INFO:tasks.workunit.client.0.vm02.stdout:3/342: dwrite d1/d8/d21/f3c [0,4194304] 0 2026-03-10T10:19:31.711 INFO:tasks.workunit.client.1.vm05.stdout:7/394: dread 
d5/d1d/d20/d3b/f70 [0,4194304] 0 2026-03-10T10:19:31.712 INFO:tasks.workunit.client.1.vm05.stdout:7/395: chown d5/d1d/d29/l48 907940 1 2026-03-10T10:19:31.712 INFO:tasks.workunit.client.1.vm05.stdout:7/396: write d5/d26/f2c [2559544,18840] 0 2026-03-10T10:19:31.713 INFO:tasks.workunit.client.1.vm05.stdout:7/397: read d5/d17/f4f [436155,98221] 0 2026-03-10T10:19:31.715 INFO:tasks.workunit.client.1.vm05.stdout:7/398: write d5/d1d/d20/d2d/f4c [809154,38133] 0 2026-03-10T10:19:31.725 INFO:tasks.workunit.client.1.vm05.stdout:1/317: stat d4/l27 0 2026-03-10T10:19:31.726 INFO:tasks.workunit.client.1.vm05.stdout:1/318: read d4/df/d1c/f38 [4070269,70312] 0 2026-03-10T10:19:31.728 INFO:tasks.workunit.client.0.vm02.stdout:4/486: write d1/d10/db/f24 [252685,12688] 0 2026-03-10T10:19:31.730 INFO:tasks.workunit.client.0.vm02.stdout:0/388: truncate d9/d18/d1a/d22/d24/f4f 3818519 0 2026-03-10T10:19:31.731 INFO:tasks.workunit.client.1.vm05.stdout:5/365: truncate da/db/d26/d5c/f33 433748 0 2026-03-10T10:19:31.732 INFO:tasks.workunit.client.1.vm05.stdout:6/274: truncate dd/d36/d3f/d12/d24/f4e 2086149 0 2026-03-10T10:19:31.736 INFO:tasks.workunit.client.1.vm05.stdout:8/256: link d7/d14/d15/f1f d7/d14/f4e 0 2026-03-10T10:19:31.737 INFO:tasks.workunit.client.1.vm05.stdout:8/257: truncate d7/d14/f4c 411940 0 2026-03-10T10:19:31.737 INFO:tasks.workunit.client.1.vm05.stdout:3/391: symlink dd/d39/d5f/l87 0 2026-03-10T10:19:31.741 INFO:tasks.workunit.client.1.vm05.stdout:1/319: rename d4/df/d1c/f5d to d4/dd/f60 0 2026-03-10T10:19:31.760 INFO:tasks.workunit.client.1.vm05.stdout:8/258: mkdir d7/d14/d24/d3f/d4f 0 2026-03-10T10:19:31.760 INFO:tasks.workunit.client.0.vm02.stdout:6/369: symlink d0/d8/d29/d6d/d32/d60/d6f/l7b 0 2026-03-10T10:19:31.761 INFO:tasks.workunit.client.1.vm05.stdout:2/340: write db/d12/f1d [1277913,95452] 0 2026-03-10T10:19:31.762 INFO:tasks.workunit.client.1.vm05.stdout:2/341: chown db/d28/f3f 0 1 2026-03-10T10:19:31.762 INFO:tasks.workunit.client.1.vm05.stdout:3/392: 
mkdir dd/d15/d24/d74/d88 0 2026-03-10T10:19:31.763 INFO:tasks.workunit.client.1.vm05.stdout:9/303: write d0/df/f3b [747481,105903] 0 2026-03-10T10:19:31.763 INFO:tasks.workunit.client.1.vm05.stdout:4/278: creat d1/f5d x:0 0 0 2026-03-10T10:19:31.764 INFO:tasks.workunit.client.1.vm05.stdout:0/326: write d1/d2/d9/d31/d54/f24 [1464995,63716] 0 2026-03-10T10:19:31.764 INFO:tasks.workunit.client.0.vm02.stdout:2/399: mknod d0/d1a/d24/c85 0 2026-03-10T10:19:31.769 INFO:tasks.workunit.client.1.vm05.stdout:1/320: mknod d4/d39/d3e/c61 0 2026-03-10T10:19:31.770 INFO:tasks.workunit.client.1.vm05.stdout:0/327: dwrite d1/d2/d9/d31/d12/d20/f2e [0,4194304] 0 2026-03-10T10:19:31.777 INFO:tasks.workunit.client.0.vm02.stdout:5/540: dwrite d1/db/d11/d84/d40/f7a [0,4194304] 0 2026-03-10T10:19:31.777 INFO:tasks.workunit.client.0.vm02.stdout:5/541: chown d1/db/f2f 29 1 2026-03-10T10:19:31.784 INFO:tasks.workunit.client.0.vm02.stdout:3/343: unlink d1/d8/d44/f5f 0 2026-03-10T10:19:31.784 INFO:tasks.workunit.client.0.vm02.stdout:3/344: chown d1/d6/f63 83145 1 2026-03-10T10:19:31.785 INFO:tasks.workunit.client.0.vm02.stdout:3/345: dread - d1/f6a zero size 2026-03-10T10:19:31.785 INFO:tasks.workunit.client.0.vm02.stdout:3/346: fsync d1/d20/d52/f6c 0 2026-03-10T10:19:31.789 INFO:tasks.workunit.client.1.vm05.stdout:2/342: unlink db/f14 0 2026-03-10T10:19:31.798 INFO:tasks.workunit.client.0.vm02.stdout:4/487: dread d1/d32/d3e/f7d [0,4194304] 0 2026-03-10T10:19:31.802 INFO:tasks.workunit.client.1.vm05.stdout:6/275: dwrite dd/d1b/f1d [0,4194304] 0 2026-03-10T10:19:31.805 INFO:tasks.workunit.client.1.vm05.stdout:2/343: sync 2026-03-10T10:19:31.810 INFO:tasks.workunit.client.0.vm02.stdout:9/346: dwrite da/d3c/d53/f6a [0,4194304] 0 2026-03-10T10:19:31.813 INFO:tasks.workunit.client.1.vm05.stdout:4/279: fsync d1/f5d 0 2026-03-10T10:19:31.822 INFO:tasks.workunit.client.1.vm05.stdout:5/366: creat da/db/f7b x:0 0 0 2026-03-10T10:19:31.822 INFO:tasks.workunit.client.0.vm02.stdout:8/394: truncate 
d1/d1c/d43/d5b/f60 959656 0 2026-03-10T10:19:31.823 INFO:tasks.workunit.client.1.vm05.stdout:5/367: chown da/c27 218098 1 2026-03-10T10:19:31.823 INFO:tasks.workunit.client.1.vm05.stdout:0/328: truncate d1/d2/d9/d31/d13/d17/f5a 674823 0 2026-03-10T10:19:31.823 INFO:tasks.workunit.client.1.vm05.stdout:7/399: getdents d5/d1d/d20/d2d/d5d/d7a 0 2026-03-10T10:19:31.823 INFO:tasks.workunit.client.1.vm05.stdout:7/400: chown d5/d1d/d20/d2d/f30 986 1 2026-03-10T10:19:31.823 INFO:tasks.workunit.client.1.vm05.stdout:7/401: dread - d5/d1d/d20/d35/f47 zero size 2026-03-10T10:19:31.823 INFO:tasks.workunit.client.1.vm05.stdout:3/393: mkdir dd/d15/d24/d2c/d6d/d89 0 2026-03-10T10:19:31.823 INFO:tasks.workunit.client.1.vm05.stdout:3/394: write dd/fe [2676125,40798] 0 2026-03-10T10:19:31.824 INFO:tasks.workunit.client.0.vm02.stdout:3/347: rename d1/d8/d21/d66 to d1/d8/d21/d73 0 2026-03-10T10:19:31.825 INFO:tasks.workunit.client.1.vm05.stdout:6/276: write dd/d1b/f40 [3142358,41751] 0 2026-03-10T10:19:31.826 INFO:tasks.workunit.client.0.vm02.stdout:0/389: unlink d9/d18/d1a/c27 0 2026-03-10T10:19:31.827 INFO:tasks.workunit.client.0.vm02.stdout:0/390: read d9/f28 [232334,24909] 0 2026-03-10T10:19:31.827 INFO:tasks.workunit.client.0.vm02.stdout:0/391: write d9/d34/d3d/f58 [830827,101399] 0 2026-03-10T10:19:31.829 INFO:tasks.workunit.client.1.vm05.stdout:4/280: write d1/d31/dc/f53 [604584,71593] 0 2026-03-10T10:19:31.829 INFO:tasks.workunit.client.1.vm05.stdout:6/277: dwrite dd/d36/d3f/d12/f4f [0,4194304] 0 2026-03-10T10:19:31.842 INFO:tasks.workunit.client.0.vm02.stdout:1/347: rename d4/d2c/d53/f69 to d4/da/f73 0 2026-03-10T10:19:31.842 INFO:tasks.workunit.client.1.vm05.stdout:5/368: sync 2026-03-10T10:19:31.842 INFO:tasks.workunit.client.1.vm05.stdout:0/329: sync 2026-03-10T10:19:31.846 INFO:tasks.workunit.client.0.vm02.stdout:7/352: link d1/dc/f3 d1/f6b 0 2026-03-10T10:19:31.847 INFO:tasks.workunit.client.1.vm05.stdout:3/395: creat dd/d15/d24/f8a x:0 0 0 2026-03-10T10:19:31.848 
INFO:tasks.workunit.client.1.vm05.stdout:2/344: creat db/d28/d4f/f68 x:0 0 0 2026-03-10T10:19:31.849 INFO:tasks.workunit.client.0.vm02.stdout:8/395: stat d1/d1c/f20 0 2026-03-10T10:19:31.850 INFO:tasks.workunit.client.0.vm02.stdout:8/396: chown d1/d1c/d24/d35/f4f 1832 1 2026-03-10T10:19:31.850 INFO:tasks.workunit.client.0.vm02.stdout:8/397: truncate d1/f73 552689 0 2026-03-10T10:19:31.853 INFO:tasks.workunit.client.1.vm05.stdout:5/369: creat da/db/d26/d70/f7c x:0 0 0 2026-03-10T10:19:31.853 INFO:tasks.workunit.client.1.vm05.stdout:9/304: getdents d0/df/d11 0 2026-03-10T10:19:31.853 INFO:tasks.workunit.client.0.vm02.stdout:0/392: unlink d9/d18/d1a/d22/d24/c4b 0 2026-03-10T10:19:31.855 INFO:tasks.workunit.client.1.vm05.stdout:3/396: readlink dd/d15/d1f/l22 0 2026-03-10T10:19:31.856 INFO:tasks.workunit.client.0.vm02.stdout:7/353: symlink d1/d1b/d49/l6c 0 2026-03-10T10:19:31.857 INFO:tasks.workunit.client.1.vm05.stdout:5/370: rmdir da/db/d26/d5c 39 2026-03-10T10:19:31.857 INFO:tasks.workunit.client.1.vm05.stdout:6/278: dwrite f3 [0,4194304] 0 2026-03-10T10:19:31.858 INFO:tasks.workunit.client.1.vm05.stdout:9/305: mkdir d0/d1/d4c/d63 0 2026-03-10T10:19:31.859 INFO:tasks.workunit.client.0.vm02.stdout:0/393: rmdir d9 39 2026-03-10T10:19:31.861 INFO:tasks.workunit.client.1.vm05.stdout:3/397: chown dd/d15/c25 1023 1 2026-03-10T10:19:31.861 INFO:tasks.workunit.client.1.vm05.stdout:0/330: creat d1/d2/d9/d31/d54/f6b x:0 0 0 2026-03-10T10:19:31.862 INFO:tasks.workunit.client.1.vm05.stdout:3/398: dread - dd/d15/d1f/f75 zero size 2026-03-10T10:19:31.862 INFO:tasks.workunit.client.1.vm05.stdout:0/331: chown d1/d2/d9/d31/d54/l58 1153669 1 2026-03-10T10:19:31.867 INFO:tasks.workunit.client.0.vm02.stdout:7/354: creat d1/dc/d16/f6d x:0 0 0 2026-03-10T10:19:31.871 INFO:tasks.workunit.client.0.vm02.stdout:5/542: getdents d1 0 2026-03-10T10:19:31.874 INFO:tasks.workunit.client.0.vm02.stdout:5/543: dwrite d1/db/d11/d13/f1c [0,4194304] 0 2026-03-10T10:19:31.882 
INFO:tasks.workunit.client.1.vm05.stdout:5/371: creat da/db/d26/d35/f7d x:0 0 0 2026-03-10T10:19:31.883 INFO:tasks.workunit.client.1.vm05.stdout:2/345: creat db/d1c/f69 x:0 0 0 2026-03-10T10:19:31.883 INFO:tasks.workunit.client.0.vm02.stdout:0/394: stat d9/d34/d3d/d7b/c33 0 2026-03-10T10:19:31.884 INFO:tasks.workunit.client.1.vm05.stdout:5/372: truncate da/db/d28/d32/f69 201293 0 2026-03-10T10:19:31.886 INFO:tasks.workunit.client.0.vm02.stdout:5/544: write d1/db/d11/d84/d40/f7a [1647111,3608] 0 2026-03-10T10:19:31.890 INFO:tasks.workunit.client.0.vm02.stdout:6/370: write d0/d8/d9/f4f [556733,74270] 0 2026-03-10T10:19:31.892 INFO:tasks.workunit.client.1.vm05.stdout:1/321: write d4/dd/f15 [11701368,2896] 0 2026-03-10T10:19:31.895 INFO:tasks.workunit.client.0.vm02.stdout:2/400: write d0/d1a/f33 [5063772,66386] 0 2026-03-10T10:19:31.897 INFO:tasks.workunit.client.0.vm02.stdout:1/348: link d4/fe d4/d2c/d53/f74 0 2026-03-10T10:19:31.898 INFO:tasks.workunit.client.1.vm05.stdout:0/332: creat d1/d2/d9/f6c x:0 0 0 2026-03-10T10:19:31.899 INFO:tasks.workunit.client.1.vm05.stdout:0/333: write d1/d2/d5d/f5f [325779,55839] 0 2026-03-10T10:19:31.899 INFO:tasks.workunit.client.1.vm05.stdout:0/334: fsync d1/d2/d9/d31/d13/d2f/d49/f4f 0 2026-03-10T10:19:31.900 INFO:tasks.workunit.client.1.vm05.stdout:6/279: creat dd/d36/d3f/d12/d58/f5a x:0 0 0 2026-03-10T10:19:31.901 INFO:tasks.workunit.client.0.vm02.stdout:8/398: creat d1/d1c/d43/f78 x:0 0 0 2026-03-10T10:19:31.903 INFO:tasks.workunit.client.0.vm02.stdout:3/348: getdents d1/d6 0 2026-03-10T10:19:31.905 INFO:tasks.workunit.client.0.vm02.stdout:3/349: dread - d1/d20/f64 zero size 2026-03-10T10:19:31.906 INFO:tasks.workunit.client.0.vm02.stdout:0/395: mkdir d9/d18/d1a/d43/d74/d7f 0 2026-03-10T10:19:31.911 INFO:tasks.workunit.client.1.vm05.stdout:8/259: truncate d7/f21 3283751 0 2026-03-10T10:19:31.914 INFO:tasks.workunit.client.1.vm05.stdout:8/260: dwrite d7/d14/d15/f39 [0,4194304] 0 2026-03-10T10:19:31.914 
INFO:tasks.workunit.client.0.vm02.stdout:5/545: truncate d1/db/d11/d13/d28/f31 2039326 0 2026-03-10T10:19:31.915 INFO:tasks.workunit.client.0.vm02.stdout:4/488: truncate d1/d10/f8 2507229 0 2026-03-10T10:19:31.918 INFO:tasks.workunit.client.1.vm05.stdout:7/402: dwrite d5/d1d/d20/d3b/f70 [0,4194304] 0 2026-03-10T10:19:31.931 INFO:tasks.workunit.client.1.vm05.stdout:4/281: dwrite d1/d3/f10 [0,4194304] 0 2026-03-10T10:19:31.932 INFO:tasks.workunit.client.0.vm02.stdout:9/347: truncate da/ff 3231200 0 2026-03-10T10:19:31.935 INFO:tasks.workunit.client.0.vm02.stdout:6/371: unlink d0/f28 0 2026-03-10T10:19:31.937 INFO:tasks.workunit.client.1.vm05.stdout:0/335: creat d1/d2/d9/d31/d12/d41/f6d x:0 0 0 2026-03-10T10:19:31.938 INFO:tasks.workunit.client.0.vm02.stdout:2/401: symlink d0/d10/d69/l86 0 2026-03-10T10:19:31.940 INFO:tasks.workunit.client.0.vm02.stdout:1/349: fsync d4/d2c/f43 0 2026-03-10T10:19:31.941 INFO:tasks.workunit.client.1.vm05.stdout:0/336: dwrite d1/d2/d9/d31/d54/f27 [0,4194304] 0 2026-03-10T10:19:31.943 INFO:tasks.workunit.client.1.vm05.stdout:2/346: rename db/d4e/c53 to db/d1c/d40/c6a 0 2026-03-10T10:19:31.944 INFO:tasks.workunit.client.0.vm02.stdout:8/399: truncate d1/f68 1035175 0 2026-03-10T10:19:31.945 INFO:tasks.workunit.client.0.vm02.stdout:8/400: chown d1/d2/c48 2326 1 2026-03-10T10:19:31.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:31 vm05.local ceph-mon[59051]: Upgrade: Updating mgr.vm05.coparq 2026-03-10T10:19:31.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:31 vm05.local ceph-mon[59051]: Deploying daemon mgr.vm05.coparq on vm05 2026-03-10T10:19:31.956 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:31 vm05.local ceph-mon[59051]: pgmap v155: 65 pgs: 65 active+clean; 1.4 GiB data, 5.5 GiB used, 115 GiB / 120 GiB avail; 20 MiB/s rd, 103 MiB/s wr, 179 op/s 2026-03-10T10:19:31.962 INFO:tasks.workunit.client.1.vm05.stdout:7/403: creat d5/d1d/f7c x:0 0 0 2026-03-10T10:19:31.964 
INFO:tasks.workunit.client.0.vm02.stdout:4/489: symlink d1/d41/d5e/d78/d1a/la1 0 2026-03-10T10:19:31.966 INFO:tasks.workunit.client.1.vm05.stdout:0/337: sync 2026-03-10T10:19:31.966 INFO:tasks.workunit.client.0.vm02.stdout:4/490: dwrite d1/d41/d5e/d78/d1a/f98 [0,4194304] 0 2026-03-10T10:19:31.967 INFO:tasks.workunit.client.0.vm02.stdout:4/491: chown d1/d41/d7e 1517 1 2026-03-10T10:19:31.970 INFO:tasks.workunit.client.0.vm02.stdout:9/348: creat da/d3c/f72 x:0 0 0 2026-03-10T10:19:31.970 INFO:tasks.workunit.client.1.vm05.stdout:0/338: write d1/d2/d9/d31/f36 [644695,15585] 0 2026-03-10T10:19:31.971 INFO:tasks.workunit.client.1.vm05.stdout:0/339: truncate d1/d2/d9/d31/d12/f1e 943429 0 2026-03-10T10:19:31.974 INFO:tasks.workunit.client.1.vm05.stdout:4/282: symlink d1/d31/dc/d40/d45/l5e 0 2026-03-10T10:19:31.984 INFO:tasks.workunit.client.0.vm02.stdout:7/355: write d1/dc/d16/d28/d2d/f2f [2993539,80496] 0 2026-03-10T10:19:31.984 INFO:tasks.workunit.client.0.vm02.stdout:6/372: creat d0/d8/d29/d6d/d32/d60/d6f/f7c x:0 0 0 2026-03-10T10:19:31.984 INFO:tasks.workunit.client.1.vm05.stdout:9/306: write d0/f28 [905071,80205] 0 2026-03-10T10:19:31.984 INFO:tasks.workunit.client.1.vm05.stdout:8/261: rename d7/f12 to d7/d14/d3a/f50 0 2026-03-10T10:19:31.994 INFO:tasks.workunit.client.1.vm05.stdout:2/347: read db/d28/f35 [1082991,37987] 0 2026-03-10T10:19:31.994 INFO:tasks.workunit.client.1.vm05.stdout:7/404: creat d5/d1d/f7d x:0 0 0 2026-03-10T10:19:31.994 INFO:tasks.workunit.client.1.vm05.stdout:5/373: creat da/db/d26/f7e x:0 0 0 2026-03-10T10:19:31.997 INFO:tasks.workunit.client.1.vm05.stdout:0/340: unlink d1/d2/d9/fd 0 2026-03-10T10:19:31.999 INFO:tasks.workunit.client.1.vm05.stdout:4/283: unlink d1/d31/dc/d40/l56 0 2026-03-10T10:19:32.003 INFO:tasks.workunit.client.1.vm05.stdout:9/307: creat d0/df/d11/f64 x:0 0 0 2026-03-10T10:19:32.008 INFO:tasks.workunit.client.0.vm02.stdout:4/492: truncate d1/d41/d5e/d78/d1a/d49/f5c 869895 0 2026-03-10T10:19:32.011 
INFO:tasks.workunit.client.1.vm05.stdout:2/348: rmdir db/d28 39 2026-03-10T10:19:32.011 INFO:tasks.workunit.client.1.vm05.stdout:9/308: dwrite d0/d1/d13/d26/f58 [0,4194304] 0 2026-03-10T10:19:32.013 INFO:tasks.workunit.client.0.vm02.stdout:7/356: dread - d1/dc/d16/d28/d2d/f4c zero size 2026-03-10T10:19:32.013 INFO:tasks.workunit.client.0.vm02.stdout:7/357: chown d1/dc/d44 0 1 2026-03-10T10:19:32.014 INFO:tasks.workunit.client.0.vm02.stdout:7/358: dread - d1/dc/d16/f6d zero size 2026-03-10T10:19:32.014 INFO:tasks.workunit.client.0.vm02.stdout:7/359: chown d1/dc/d44/c5a 18003961 1 2026-03-10T10:19:32.018 INFO:tasks.workunit.client.0.vm02.stdout:6/373: readlink d0/d8/d29/l37 0 2026-03-10T10:19:32.019 INFO:tasks.workunit.client.1.vm05.stdout:9/309: dread d0/df/d11/f52 [0,4194304] 0 2026-03-10T10:19:32.020 INFO:tasks.workunit.client.1.vm05.stdout:9/310: write d0/d1/d16/f5c [3730469,105359] 0 2026-03-10T10:19:32.022 INFO:tasks.workunit.client.1.vm05.stdout:5/374: fdatasync da/db/d26/f4c 0 2026-03-10T10:19:32.022 INFO:tasks.workunit.client.0.vm02.stdout:2/402: symlink d0/d1a/d49/l87 0 2026-03-10T10:19:32.022 INFO:tasks.workunit.client.1.vm05.stdout:9/311: readlink d0/df/l54 0 2026-03-10T10:19:32.029 INFO:tasks.workunit.client.0.vm02.stdout:0/396: rmdir d9/d18/d1a/d43/d49/d6b 0 2026-03-10T10:19:32.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:31 vm02.local ceph-mon[50200]: Upgrade: Updating mgr.vm05.coparq 2026-03-10T10:19:32.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:31 vm02.local ceph-mon[50200]: Deploying daemon mgr.vm05.coparq on vm05 2026-03-10T10:19:32.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:31 vm02.local ceph-mon[50200]: pgmap v155: 65 pgs: 65 active+clean; 1.4 GiB data, 5.5 GiB used, 115 GiB / 120 GiB avail; 20 MiB/s rd, 103 MiB/s wr, 179 op/s 2026-03-10T10:19:32.033 INFO:tasks.workunit.client.0.vm02.stdout:4/493: symlink d1/d41/d5e/d78/d1a/d49/la2 0 2026-03-10T10:19:32.045 
INFO:tasks.workunit.client.1.vm05.stdout:6/280: chown dd/d36/d3f/d12/d24/f4e 5616 1 2026-03-10T10:19:32.045 INFO:tasks.workunit.client.1.vm05.stdout:6/281: truncate f2 4877273 0 2026-03-10T10:19:32.045 INFO:tasks.workunit.client.1.vm05.stdout:3/399: write dd/d15/f23 [1048873,102165] 0 2026-03-10T10:19:32.045 INFO:tasks.workunit.client.0.vm02.stdout:9/349: unlink l2 0 2026-03-10T10:19:32.045 INFO:tasks.workunit.client.0.vm02.stdout:7/360: chown d1/d1b/d49/l57 299768538 1 2026-03-10T10:19:32.045 INFO:tasks.workunit.client.0.vm02.stdout:7/361: dwrite d1/dc/d16/d28/d2d/f2f [0,4194304] 0 2026-03-10T10:19:32.045 INFO:tasks.workunit.client.0.vm02.stdout:7/362: read - d1/dc/d16/d28/d2d/d36/f66 zero size 2026-03-10T10:19:32.061 INFO:tasks.workunit.client.0.vm02.stdout:4/494: write d1/d10/db/f15 [5001589,58648] 0 2026-03-10T10:19:32.062 INFO:tasks.workunit.client.0.vm02.stdout:4/495: write d1/d41/d5e/d78/d37/f14 [3903740,120930] 0 2026-03-10T10:19:32.069 INFO:tasks.workunit.client.0.vm02.stdout:7/363: symlink d1/dc/d16/l6e 0 2026-03-10T10:19:32.072 INFO:tasks.workunit.client.0.vm02.stdout:3/350: write d1/d6/f48 [343831,18488] 0 2026-03-10T10:19:32.075 INFO:tasks.workunit.client.1.vm05.stdout:1/322: truncate d4/d39/f54 3513220 0 2026-03-10T10:19:32.079 INFO:tasks.workunit.client.0.vm02.stdout:5/546: dwrite d1/db/d11/d13/d28/d37/f3c [0,4194304] 0 2026-03-10T10:19:32.082 INFO:tasks.workunit.client.0.vm02.stdout:9/350: rmdir da 39 2026-03-10T10:19:32.091 INFO:tasks.workunit.client.0.vm02.stdout:4/496: read d1/f1d [488104,60516] 0 2026-03-10T10:19:32.095 INFO:tasks.workunit.client.0.vm02.stdout:2/403: creat d0/f88 x:0 0 0 2026-03-10T10:19:32.097 INFO:tasks.workunit.client.1.vm05.stdout:7/405: mknod d5/d1d/d20/d35/d6f/c7e 0 2026-03-10T10:19:32.097 INFO:tasks.workunit.client.1.vm05.stdout:7/406: chown d5/d1d/d20 30454 1 2026-03-10T10:19:32.098 INFO:tasks.workunit.client.1.vm05.stdout:7/407: chown d5/dd/f62 11 1 2026-03-10T10:19:32.098 INFO:tasks.workunit.client.1.vm05.stdout:7/408: 
readlink d5/dd/l10 0 2026-03-10T10:19:32.127 INFO:tasks.workunit.client.1.vm05.stdout:7/409: dwrite d5/d1d/d20/d35/f47 [0,4194304] 0 2026-03-10T10:19:32.127 INFO:tasks.workunit.client.1.vm05.stdout:6/282: fdatasync dd/d36/d3f/f22 0 2026-03-10T10:19:32.127 INFO:tasks.workunit.client.1.vm05.stdout:6/283: stat dd/d36/d3f/d12/d24/c49 0 2026-03-10T10:19:32.127 INFO:tasks.workunit.client.0.vm02.stdout:4/497: mkdir d1/d32/da3 0 2026-03-10T10:19:32.127 INFO:tasks.workunit.client.0.vm02.stdout:1/350: write d4/d2c/d53/f74 [4378017,24388] 0 2026-03-10T10:19:32.127 INFO:tasks.workunit.client.0.vm02.stdout:1/351: fdatasync d4/da/d27/f35 0 2026-03-10T10:19:32.127 INFO:tasks.workunit.client.0.vm02.stdout:1/352: chown d4/da/d1a/c61 166 1 2026-03-10T10:19:32.127 INFO:tasks.workunit.client.0.vm02.stdout:0/397: rename d9/d18/d1a/d43 to d9/d18/d1a/d22/d24/d80 0 2026-03-10T10:19:32.128 INFO:tasks.workunit.client.1.vm05.stdout:3/400: rmdir dd/d20/d56 39 2026-03-10T10:19:32.131 INFO:tasks.workunit.client.0.vm02.stdout:5/547: fdatasync d1/db/d11/d13/d28/f31 0 2026-03-10T10:19:32.132 INFO:tasks.workunit.client.1.vm05.stdout:3/401: dwrite dd/d15/f23 [0,4194304] 0 2026-03-10T10:19:32.135 INFO:tasks.workunit.client.1.vm05.stdout:2/349: symlink db/d28/d4f/l6b 0 2026-03-10T10:19:32.138 INFO:tasks.workunit.client.0.vm02.stdout:1/353: dread d4/da/d1a/d22/f62 [0,4194304] 0 2026-03-10T10:19:32.142 INFO:tasks.workunit.client.1.vm05.stdout:7/410: mkdir d5/d1d/d20/d35/d6f/d7f 0 2026-03-10T10:19:32.142 INFO:tasks.workunit.client.1.vm05.stdout:0/341: rename d1/d2/d5d/d68 to d1/d2/d39/d6e 0 2026-03-10T10:19:32.143 INFO:tasks.workunit.client.1.vm05.stdout:7/411: chown d5/c6 603851506 1 2026-03-10T10:19:32.143 INFO:tasks.workunit.client.1.vm05.stdout:7/412: chown d5/dd/l10 27 1 2026-03-10T10:19:32.143 INFO:tasks.workunit.client.0.vm02.stdout:7/364: rename d1/dc/d44/c6a to d1/dc/d16/d28/d2d/d36/d67/c6f 0 2026-03-10T10:19:32.145 INFO:tasks.workunit.client.0.vm02.stdout:1/354: creat d4/d2c/d53/f75 x:0 0 0 
2026-03-10T10:19:32.145 INFO:tasks.workunit.client.0.vm02.stdout:1/355: rename d4 to d4/d2c/d53/d76 22 2026-03-10T10:19:32.146 INFO:tasks.workunit.client.0.vm02.stdout:9/351: rmdir da/d3c/d4c/d50 0 2026-03-10T10:19:32.150 INFO:tasks.workunit.client.0.vm02.stdout:4/498: rmdir d1/d41/d5e/d78/d37/d8d 0 2026-03-10T10:19:32.152 INFO:tasks.workunit.client.1.vm05.stdout:7/413: dread d5/f22 [0,4194304] 0 2026-03-10T10:19:32.156 INFO:tasks.workunit.client.0.vm02.stdout:1/356: creat d4/d2c/f77 x:0 0 0 2026-03-10T10:19:32.156 INFO:tasks.workunit.client.1.vm05.stdout:8/262: dwrite d7/d14/d15/d3b/f43 [0,4194304] 0 2026-03-10T10:19:32.156 INFO:tasks.workunit.client.0.vm02.stdout:4/499: write d1/d10/db/f20 [2103250,65364] 0 2026-03-10T10:19:32.157 INFO:tasks.workunit.client.1.vm05.stdout:2/350: mkdir db/d4e/d6c 0 2026-03-10T10:19:32.157 INFO:tasks.workunit.client.1.vm05.stdout:2/351: chown db/d4e 0 1 2026-03-10T10:19:32.157 INFO:tasks.workunit.client.0.vm02.stdout:3/351: sync 2026-03-10T10:19:32.161 INFO:tasks.workunit.client.0.vm02.stdout:5/548: sync 2026-03-10T10:19:32.161 INFO:tasks.workunit.client.0.vm02.stdout:1/357: sync 2026-03-10T10:19:32.161 INFO:tasks.workunit.client.1.vm05.stdout:2/352: dwrite db/d1c/f69 [0,4194304] 0 2026-03-10T10:19:32.162 INFO:tasks.workunit.client.0.vm02.stdout:5/549: read - d1/db/d11/d84/d40/d4f/d5f/d6d/d71/f80 zero size 2026-03-10T10:19:32.167 INFO:tasks.workunit.client.1.vm05.stdout:2/353: dwrite db/d2d/f47 [0,4194304] 0 2026-03-10T10:19:32.168 INFO:tasks.workunit.client.1.vm05.stdout:2/354: dread - db/d28/f60 zero size 2026-03-10T10:19:32.173 INFO:tasks.workunit.client.0.vm02.stdout:0/398: getdents d9/d18/d1a/d46 0 2026-03-10T10:19:32.175 INFO:tasks.workunit.client.0.vm02.stdout:9/352: link da/f15 da/d3c/d53/f73 0 2026-03-10T10:19:32.180 INFO:tasks.workunit.client.0.vm02.stdout:9/353: dread - da/d3c/d4c/d38/d4a/f54 zero size 2026-03-10T10:19:32.181 INFO:tasks.workunit.client.0.vm02.stdout:3/352: symlink d1/d8/d44/l74 0 2026-03-10T10:19:32.183 
INFO:tasks.workunit.client.0.vm02.stdout:1/358: rename d4/da/d27/d38/d3c/d70 to d4/da/d1a/d47/d78 0 2026-03-10T10:19:32.184 INFO:tasks.workunit.client.1.vm05.stdout:3/402: mknod dd/d20/d56/c8b 0 2026-03-10T10:19:32.184 INFO:tasks.workunit.client.1.vm05.stdout:3/403: stat dd/d39/d66/f7e 0 2026-03-10T10:19:32.185 INFO:tasks.workunit.client.1.vm05.stdout:3/404: readlink dd/d15/d24/d74/l85 0 2026-03-10T10:19:32.185 INFO:tasks.workunit.client.1.vm05.stdout:3/405: write dd/d15/d24/d2c/d3b/f77 [398468,5536] 0 2026-03-10T10:19:32.186 INFO:tasks.workunit.client.1.vm05.stdout:3/406: write dd/d15/d24/f42 [162928,51330] 0 2026-03-10T10:19:32.187 INFO:tasks.workunit.client.1.vm05.stdout:3/407: rename dd to dd/d15/d1f/d8c 22 2026-03-10T10:19:32.187 INFO:tasks.workunit.client.0.vm02.stdout:5/550: mknod d1/cbc 0 2026-03-10T10:19:32.188 INFO:tasks.workunit.client.0.vm02.stdout:6/374: write d0/f20 [273826,117615] 0 2026-03-10T10:19:32.192 INFO:tasks.workunit.client.0.vm02.stdout:6/375: dwrite d0/f20 [0,4194304] 0 2026-03-10T10:19:32.197 INFO:tasks.workunit.client.0.vm02.stdout:6/376: chown d0/d8/d29/l37 537278 1 2026-03-10T10:19:32.212 INFO:tasks.workunit.client.0.vm02.stdout:9/354: truncate da/f1f 858975 0 2026-03-10T10:19:32.213 INFO:tasks.workunit.client.1.vm05.stdout:2/355: chown db/d1c/d40/c41 8 1 2026-03-10T10:19:32.214 INFO:tasks.workunit.client.1.vm05.stdout:1/323: getdents d4/dd 0 2026-03-10T10:19:32.215 INFO:tasks.workunit.client.1.vm05.stdout:1/324: chown d4/dd/l19 53107 1 2026-03-10T10:19:32.215 INFO:tasks.workunit.client.0.vm02.stdout:1/359: unlink d4/da/d27/d38/f3f 0 2026-03-10T10:19:32.216 INFO:tasks.workunit.client.0.vm02.stdout:1/360: read - d4/da/d27/f6a zero size 2026-03-10T10:19:32.216 INFO:tasks.workunit.client.0.vm02.stdout:1/361: fsync d4/d2c/d53/f58 0 2026-03-10T10:19:32.217 INFO:tasks.workunit.client.0.vm02.stdout:1/362: write d4/f21 [4822950,41384] 0 2026-03-10T10:19:32.218 INFO:tasks.workunit.client.0.vm02.stdout:1/363: chown d4/d2c/d53/f75 2 1 
2026-03-10T10:19:32.228 INFO:tasks.workunit.client.1.vm05.stdout:3/408: rename dd/d15/c25 to dd/d15/d4c/c8d 0 2026-03-10T10:19:32.230 INFO:tasks.workunit.client.0.vm02.stdout:1/364: dread d4/da/d27/d38/f4e [0,4194304] 0 2026-03-10T10:19:32.233 INFO:tasks.workunit.client.0.vm02.stdout:7/365: getdents d1/dc/d10/d38 0 2026-03-10T10:19:32.238 INFO:tasks.workunit.client.1.vm05.stdout:7/414: dread d5/d1d/f31 [0,4194304] 0 2026-03-10T10:19:32.238 INFO:tasks.workunit.client.1.vm05.stdout:7/415: stat d5/d17/f74 0 2026-03-10T10:19:32.242 INFO:tasks.workunit.client.0.vm02.stdout:2/404: dwrite d0/f72 [0,4194304] 0 2026-03-10T10:19:32.244 INFO:tasks.workunit.client.0.vm02.stdout:6/377: creat d0/d8/d9/d7a/f7d x:0 0 0 2026-03-10T10:19:32.247 INFO:tasks.workunit.client.1.vm05.stdout:1/325: fsync d4/d3d/f57 0 2026-03-10T10:19:32.249 INFO:tasks.workunit.client.1.vm05.stdout:4/284: truncate d1/d3/f5 6281077 0 2026-03-10T10:19:32.249 INFO:tasks.workunit.client.0.vm02.stdout:0/399: getdents d9/d34/d3d/d7b 0 2026-03-10T10:19:32.250 INFO:tasks.workunit.client.1.vm05.stdout:9/312: dwrite d0/d1/f9 [4194304,4194304] 0 2026-03-10T10:19:32.253 INFO:tasks.workunit.client.1.vm05.stdout:2/356: mkdir db/d4e/d6c/d6d 0 2026-03-10T10:19:32.253 INFO:tasks.workunit.client.1.vm05.stdout:2/357: stat db/d28/f3f 0 2026-03-10T10:19:32.259 INFO:tasks.workunit.client.1.vm05.stdout:2/358: dwrite db/d28/f30 [4194304,4194304] 0 2026-03-10T10:19:32.259 INFO:tasks.workunit.client.1.vm05.stdout:2/359: chown db/d28/l38 0 1 2026-03-10T10:19:32.260 INFO:tasks.workunit.client.1.vm05.stdout:7/416: mkdir d5/d1d/d20/d2d/d80 0 2026-03-10T10:19:32.261 INFO:tasks.workunit.client.1.vm05.stdout:1/326: rename d4/dd/f21 to d4/d37/d4e/f62 0 2026-03-10T10:19:32.272 INFO:tasks.workunit.client.1.vm05.stdout:5/375: write da/db/d26/d5c/f33 [1433018,126791] 0 2026-03-10T10:19:32.272 INFO:tasks.workunit.client.1.vm05.stdout:0/342: fsync d1/d2/d9/d31/d13/d17/f56 0 2026-03-10T10:19:32.273 INFO:tasks.workunit.client.1.vm05.stdout:5/376: 
write f9 [192683,90563] 0 2026-03-10T10:19:32.277 INFO:tasks.workunit.client.0.vm02.stdout:7/366: creat d1/dc/d16/d28/d2d/d36/d67/f70 x:0 0 0 2026-03-10T10:19:32.282 INFO:tasks.workunit.client.1.vm05.stdout:7/417: rmdir d5/d1d/d20/d2d 39 2026-03-10T10:19:32.282 INFO:tasks.workunit.client.0.vm02.stdout:7/367: dwrite d1/dc/d16/d28/d2d/d36/f5c [0,4194304] 0 2026-03-10T10:19:32.296 INFO:tasks.workunit.client.0.vm02.stdout:7/368: dwrite d1/dc/ff [0,4194304] 0 2026-03-10T10:19:32.296 INFO:tasks.workunit.client.0.vm02.stdout:0/400: mkdir d9/d18/d1a/d22/d24/d80/d57/d81 0 2026-03-10T10:19:32.296 INFO:tasks.workunit.client.0.vm02.stdout:4/500: write d1/d10/db/f16 [1925542,108528] 0 2026-03-10T10:19:32.297 INFO:tasks.workunit.client.1.vm05.stdout:1/327: rename d4/dd/f1f to d4/df/d1c/f63 0 2026-03-10T10:19:32.299 INFO:tasks.workunit.client.1.vm05.stdout:1/328: write d4/d20/f2d [1652229,122807] 0 2026-03-10T10:19:32.308 INFO:tasks.workunit.client.0.vm02.stdout:1/365: rename d4/da/d1a/f3d to d4/da/d1a/d5b/f79 0 2026-03-10T10:19:32.310 INFO:tasks.workunit.client.1.vm05.stdout:0/343: write d1/d2/d9/d31/d13/d2f/d49/f5c [846074,88298] 0 2026-03-10T10:19:32.311 INFO:tasks.workunit.client.1.vm05.stdout:0/344: write d1/d2/d9/d31/d12/f5b [332773,22243] 0 2026-03-10T10:19:32.313 INFO:tasks.workunit.client.1.vm05.stdout:2/360: creat db/d61/d67/f6e x:0 0 0 2026-03-10T10:19:32.318 INFO:tasks.workunit.client.1.vm05.stdout:1/329: rename d4/d37/f41 to d4/dd/f64 0 2026-03-10T10:19:32.328 INFO:tasks.workunit.client.1.vm05.stdout:5/377: mknod da/db/d28/c7f 0 2026-03-10T10:19:32.328 INFO:tasks.workunit.client.1.vm05.stdout:1/330: dwrite d4/d39/f3a [0,4194304] 0 2026-03-10T10:19:32.328 INFO:tasks.workunit.client.0.vm02.stdout:4/501: unlink d1/d41/d5e/d78/f31 0 2026-03-10T10:19:32.328 INFO:tasks.workunit.client.0.vm02.stdout:3/353: link d1/d8/d21/l24 d1/d8/d21/d73/l75 0 2026-03-10T10:19:32.328 INFO:tasks.workunit.client.0.vm02.stdout:3/354: dread d1/d8/fb [0,4194304] 0 2026-03-10T10:19:32.328 
INFO:tasks.workunit.client.0.vm02.stdout:3/355: dwrite d1/d6/f43 [0,4194304] 0 2026-03-10T10:19:32.336 INFO:tasks.workunit.client.1.vm05.stdout:4/285: creat d1/d3/f5f x:0 0 0 2026-03-10T10:19:32.338 INFO:tasks.workunit.client.1.vm05.stdout:8/263: dread d7/f2b [0,4194304] 0 2026-03-10T10:19:32.340 INFO:tasks.workunit.client.0.vm02.stdout:2/405: rename d0/fe to d0/d1a/d24/d80/f89 0 2026-03-10T10:19:32.342 INFO:tasks.workunit.client.1.vm05.stdout:6/284: truncate dd/d36/d3f/d12/d44/f2f 48651 0 2026-03-10T10:19:32.351 INFO:tasks.workunit.client.0.vm02.stdout:1/366: truncate d4/d2c/f43 1032130 0 2026-03-10T10:19:32.354 INFO:tasks.workunit.client.0.vm02.stdout:3/356: creat d1/d20/d52/f76 x:0 0 0 2026-03-10T10:19:32.357 INFO:tasks.workunit.client.0.vm02.stdout:2/406: fdatasync d0/d10/f5f 0 2026-03-10T10:19:32.359 INFO:tasks.workunit.client.0.vm02.stdout:8/401: dread d1/d1c/d23/f3b [0,4194304] 0 2026-03-10T10:19:32.361 INFO:tasks.workunit.client.0.vm02.stdout:2/407: dwrite d0/d1a/f53 [4194304,4194304] 0 2026-03-10T10:19:32.361 INFO:tasks.workunit.client.1.vm05.stdout:5/378: fsync da/db/fd 0 2026-03-10T10:19:32.362 INFO:tasks.workunit.client.0.vm02.stdout:2/408: chown d0/d1a/d49/f4f 420 1 2026-03-10T10:19:32.368 INFO:tasks.workunit.client.0.vm02.stdout:2/409: dwrite d0/d1a/f31 [4194304,4194304] 0 2026-03-10T10:19:32.378 INFO:tasks.workunit.client.1.vm05.stdout:3/409: write dd/f41 [988292,125917] 0 2026-03-10T10:19:32.378 INFO:tasks.workunit.client.1.vm05.stdout:6/285: rmdir dd/d36/d3f/d12/d44/d30 39 2026-03-10T10:19:32.379 INFO:tasks.workunit.client.0.vm02.stdout:4/502: mknod d1/d52/d53/ca4 0 2026-03-10T10:19:32.380 INFO:tasks.workunit.client.1.vm05.stdout:0/345: creat d1/d2/d39/d6e/f6f x:0 0 0 2026-03-10T10:19:32.382 INFO:tasks.workunit.client.1.vm05.stdout:0/346: chown d1/d2/d9/d31/d13/d15/l2c 3 1 2026-03-10T10:19:32.383 INFO:tasks.workunit.client.1.vm05.stdout:1/331: creat d4/df/d1c/d53/f65 x:0 0 0 2026-03-10T10:19:32.384 INFO:tasks.workunit.client.1.vm05.stdout:3/410: 
dwrite dd/d15/d24/d2c/f38 [0,4194304] 0 2026-03-10T10:19:32.386 INFO:tasks.workunit.client.1.vm05.stdout:8/264: link d7/d14/d15/f3c d7/d14/d15/f51 0 2026-03-10T10:19:32.387 INFO:tasks.workunit.client.1.vm05.stdout:6/286: rename dd/l2e to dd/d36/d3f/d12/d59/l5b 0 2026-03-10T10:19:32.387 INFO:tasks.workunit.client.0.vm02.stdout:2/410: mkdir d0/d1a/d49/d5e/d8a 0 2026-03-10T10:19:32.388 INFO:tasks.workunit.client.1.vm05.stdout:6/287: chown dd/d36/d3f/d12/c33 2349 1 2026-03-10T10:19:32.392 INFO:tasks.workunit.client.1.vm05.stdout:0/347: dwrite d1/d2/d9/d31/d13/d17/f1b [4194304,4194304] 0 2026-03-10T10:19:32.399 INFO:tasks.workunit.client.0.vm02.stdout:7/369: dread d1/dc/f26 [0,4194304] 0 2026-03-10T10:19:32.409 INFO:tasks.workunit.client.0.vm02.stdout:9/355: creat da/d3c/d4c/d38/d4a/d70/f74 x:0 0 0 2026-03-10T10:19:32.413 INFO:tasks.workunit.client.0.vm02.stdout:0/401: dread d9/d34/d3d/f41 [0,4194304] 0 2026-03-10T10:19:32.413 INFO:tasks.workunit.client.1.vm05.stdout:3/411: rmdir dd/d20 39 2026-03-10T10:19:32.413 INFO:tasks.workunit.client.0.vm02.stdout:2/411: stat d0/l43 0 2026-03-10T10:19:32.416 INFO:tasks.workunit.client.0.vm02.stdout:3/357: creat d1/f77 x:0 0 0 2026-03-10T10:19:32.418 INFO:tasks.workunit.client.0.vm02.stdout:7/370: mknod d1/dc/d60/c71 0 2026-03-10T10:19:32.418 INFO:tasks.workunit.client.1.vm05.stdout:9/313: dread d0/d1/f4a [0,4194304] 0 2026-03-10T10:19:32.418 INFO:tasks.workunit.client.1.vm05.stdout:8/265: chown d7/f21 0 1 2026-03-10T10:19:32.421 INFO:tasks.workunit.client.1.vm05.stdout:9/314: dwrite d0/d1/d16/f40 [0,4194304] 0 2026-03-10T10:19:32.421 INFO:tasks.workunit.client.0.vm02.stdout:7/371: dwrite d1/d1b/d49/f4b [4194304,4194304] 0 2026-03-10T10:19:32.426 INFO:tasks.workunit.client.1.vm05.stdout:3/412: mkdir dd/d15/d24/d8e 0 2026-03-10T10:19:32.430 INFO:tasks.workunit.client.1.vm05.stdout:5/379: dread da/f2e [0,4194304] 0 2026-03-10T10:19:32.431 INFO:tasks.workunit.client.1.vm05.stdout:5/380: write da/db/d26/f64 [849834,89579] 0 
2026-03-10T10:19:32.431 INFO:tasks.workunit.client.0.vm02.stdout:0/402: read d9/d34/d3d/d65/f7a [283945,67106] 0 2026-03-10T10:19:32.438 INFO:tasks.workunit.client.1.vm05.stdout:1/332: truncate d4/d3d/f57 205797 0 2026-03-10T10:19:32.443 INFO:tasks.workunit.client.0.vm02.stdout:7/372: unlink d1/f34 0 2026-03-10T10:19:32.445 INFO:tasks.workunit.client.1.vm05.stdout:8/266: symlink d7/d14/d24/d3f/l52 0 2026-03-10T10:19:32.446 INFO:tasks.workunit.client.0.vm02.stdout:7/373: dread d1/f17 [0,4194304] 0 2026-03-10T10:19:32.448 INFO:tasks.workunit.client.0.vm02.stdout:3/358: fsync d1/d6/f43 0 2026-03-10T10:19:32.455 INFO:tasks.workunit.client.1.vm05.stdout:0/348: symlink d1/d2/d5d/l70 0 2026-03-10T10:19:32.456 INFO:tasks.workunit.client.1.vm05.stdout:0/349: readlink d1/d2/d9/d31/d13/d15/l59 0 2026-03-10T10:19:32.456 INFO:tasks.workunit.client.0.vm02.stdout:9/356: mkdir da/d3c/d4c/d75 0 2026-03-10T10:19:32.457 INFO:tasks.workunit.client.1.vm05.stdout:0/350: write d1/d2/d9/d31/f36 [1936166,11973] 0 2026-03-10T10:19:32.459 INFO:tasks.workunit.client.1.vm05.stdout:3/413: symlink dd/d15/d24/d2c/d3b/l8f 0 2026-03-10T10:19:32.461 INFO:tasks.workunit.client.1.vm05.stdout:1/333: readlink d4/l42 0 2026-03-10T10:19:32.461 INFO:tasks.workunit.client.0.vm02.stdout:9/357: mknod da/d3c/d4c/d2c/d34/d35/c76 0 2026-03-10T10:19:32.462 INFO:tasks.workunit.client.1.vm05.stdout:1/334: read d4/f46 [101829,15004] 0 2026-03-10T10:19:32.463 INFO:tasks.workunit.client.1.vm05.stdout:0/351: creat d1/d2/d9/d31/d12/d20/f71 x:0 0 0 2026-03-10T10:19:32.464 INFO:tasks.workunit.client.1.vm05.stdout:3/414: chown dd/d15/d1f/c4f 211142 1 2026-03-10T10:19:32.464 INFO:tasks.workunit.client.1.vm05.stdout:0/352: stat d1/d2/d9/d31/d12 0 2026-03-10T10:19:32.464 INFO:tasks.workunit.client.1.vm05.stdout:8/267: mknod d7/d14/d24/d3f/d4f/c53 0 2026-03-10T10:19:32.466 INFO:tasks.workunit.client.1.vm05.stdout:1/335: mkdir d4/df/d1c/d53/d66 0 2026-03-10T10:19:32.466 INFO:tasks.workunit.client.1.vm05.stdout:1/336: readlink 
d4/df/d1c/l56 0 2026-03-10T10:19:32.489 INFO:tasks.workunit.client.1.vm05.stdout:1/337: creat d4/d39/f67 x:0 0 0 2026-03-10T10:19:32.490 INFO:tasks.workunit.client.1.vm05.stdout:1/338: dread d4/df/d1c/f38 [0,4194304] 0 2026-03-10T10:19:32.490 INFO:tasks.workunit.client.1.vm05.stdout:1/339: dwrite d4/df/d1c/f2a [4194304,4194304] 0 2026-03-10T10:19:32.490 INFO:tasks.workunit.client.1.vm05.stdout:8/268: creat d7/d14/d3a/d49/f54 x:0 0 0 2026-03-10T10:19:32.491 INFO:tasks.workunit.client.1.vm05.stdout:0/353: dread d1/d2/d9/d31/d13/f3e [0,4194304] 0 2026-03-10T10:19:32.494 INFO:tasks.workunit.client.1.vm05.stdout:0/354: dread d1/d2/d9/d31/d54/f4 [0,4194304] 0 2026-03-10T10:19:32.501 INFO:tasks.workunit.client.0.vm02.stdout:3/359: sync 2026-03-10T10:19:32.501 INFO:tasks.workunit.client.0.vm02.stdout:3/360: stat d1/d8/d21/l70 0 2026-03-10T10:19:32.503 INFO:tasks.workunit.client.1.vm05.stdout:0/355: getdents d1/d2/d9 0 2026-03-10T10:19:32.504 INFO:tasks.workunit.client.0.vm02.stdout:3/361: mkdir d1/d8/d21/d73/d78 0 2026-03-10T10:19:32.506 INFO:tasks.workunit.client.1.vm05.stdout:0/356: dwrite d1/d2/d39/d3d/f64 [0,4194304] 0 2026-03-10T10:19:32.507 INFO:tasks.workunit.client.0.vm02.stdout:3/362: mkdir d1/d8/d21/d73/d78/d79 0 2026-03-10T10:19:32.507 INFO:tasks.workunit.client.0.vm02.stdout:3/363: chown d1/d6/f63 35701502 1 2026-03-10T10:19:32.508 INFO:tasks.workunit.client.0.vm02.stdout:4/503: dread d1/d41/d5e/d78/d1a/d49/f5c [0,4194304] 0 2026-03-10T10:19:32.509 INFO:tasks.workunit.client.0.vm02.stdout:3/364: fsync d1/d8/d21/f5e 0 2026-03-10T10:19:32.510 INFO:tasks.workunit.client.1.vm05.stdout:0/357: dread d1/d2/d9/d31/d13/d17/f56 [0,4194304] 0 2026-03-10T10:19:32.517 INFO:tasks.workunit.client.0.vm02.stdout:3/365: mknod d1/d20/d52/c7a 0 2026-03-10T10:19:32.517 INFO:tasks.workunit.client.0.vm02.stdout:6/378: truncate d0/f4c 8072843 0 2026-03-10T10:19:32.519 INFO:tasks.workunit.client.1.vm05.stdout:0/358: read d1/d2/d9/f32 [326587,92116] 0 2026-03-10T10:19:32.520 
INFO:tasks.workunit.client.0.vm02.stdout:6/379: mkdir d0/d8/d29/d2f/d50/d7e 0 2026-03-10T10:19:32.521 INFO:tasks.workunit.client.0.vm02.stdout:6/380: write d0/d8/d29/d2f/d4b/f53 [4474508,90836] 0 2026-03-10T10:19:32.521 INFO:tasks.workunit.client.0.vm02.stdout:6/381: fsync d0/d8/d29/d2f/f77 0 2026-03-10T10:19:32.524 INFO:tasks.workunit.client.1.vm05.stdout:0/359: creat d1/d2/d39/d3d/f72 x:0 0 0 2026-03-10T10:19:32.525 INFO:tasks.workunit.client.0.vm02.stdout:6/382: mkdir d0/d7f 0 2026-03-10T10:19:32.526 INFO:tasks.workunit.client.1.vm05.stdout:0/360: write d1/d2/d9/f32 [4193977,124873] 0 2026-03-10T10:19:32.528 INFO:tasks.workunit.client.0.vm02.stdout:6/383: link d0/d8/c3f d0/d8/d29/d6d/d32/d60/d6f/c80 0 2026-03-10T10:19:32.530 INFO:tasks.workunit.client.1.vm05.stdout:0/361: dwrite d1/d2/d39/d3d/f72 [0,4194304] 0 2026-03-10T10:19:32.531 INFO:tasks.workunit.client.0.vm02.stdout:6/384: rename d0/d8/c51 to d0/d8/c81 0 2026-03-10T10:19:32.543 INFO:tasks.workunit.client.0.vm02.stdout:6/385: creat d0/d8/d9/f82 x:0 0 0 2026-03-10T10:19:32.547 INFO:tasks.workunit.client.1.vm05.stdout:0/362: link d1/d2/d9/d31/d12/f1e d1/d2/d9/d31/d13/f73 0 2026-03-10T10:19:32.552 INFO:tasks.workunit.client.1.vm05.stdout:1/340: fdatasync d4/d39/f3a 0 2026-03-10T10:19:32.552 INFO:tasks.workunit.client.1.vm05.stdout:1/341: chown d4/df/d1c/f38 373272 1 2026-03-10T10:19:32.557 INFO:tasks.workunit.client.0.vm02.stdout:4/504: dread d1/d41/d5e/d78/f4 [0,4194304] 0 2026-03-10T10:19:32.558 INFO:tasks.workunit.client.0.vm02.stdout:4/505: truncate d1/d41/d5e/d78/d44/f90 208742 0 2026-03-10T10:19:32.561 INFO:tasks.workunit.client.0.vm02.stdout:4/506: dwrite d1/d10/db/f24 [0,4194304] 0 2026-03-10T10:19:32.579 INFO:tasks.workunit.client.0.vm02.stdout:4/507: rmdir d1/d41/d5e/d78/d37 39 2026-03-10T10:19:32.579 INFO:tasks.workunit.client.0.vm02.stdout:4/508: write d1/d75/f85 [565915,120225] 0 2026-03-10T10:19:32.579 INFO:tasks.workunit.client.0.vm02.stdout:4/509: read - d1/d41/d5e/d78/d1a/d49/f7a zero size 
2026-03-10T10:19:32.579 INFO:tasks.workunit.client.0.vm02.stdout:4/510: link d1/d32/l36 d1/d52/d53/la5 0 2026-03-10T10:19:32.579 INFO:tasks.workunit.client.1.vm05.stdout:0/363: symlink d1/d2/d9/d31/d54/l74 0 2026-03-10T10:19:32.579 INFO:tasks.workunit.client.1.vm05.stdout:2/361: fdatasync db/d28/f30 0 2026-03-10T10:19:32.579 INFO:tasks.workunit.client.1.vm05.stdout:2/362: write db/d28/d4f/f68 [891394,118410] 0 2026-03-10T10:19:32.579 INFO:tasks.workunit.client.1.vm05.stdout:2/363: stat db/d4e/d6c 0 2026-03-10T10:19:32.579 INFO:tasks.workunit.client.1.vm05.stdout:2/364: dread db/d1c/f3d [0,4194304] 0 2026-03-10T10:19:32.579 INFO:tasks.workunit.client.1.vm05.stdout:2/365: write db/d1c/f69 [3810851,18657] 0 2026-03-10T10:19:32.579 INFO:tasks.workunit.client.1.vm05.stdout:0/364: write d1/d2/d9/d31/d54/f4 [1713818,53150] 0 2026-03-10T10:19:32.587 INFO:tasks.workunit.client.1.vm05.stdout:1/342: dread d4/d39/d3e/f3f [0,4194304] 0 2026-03-10T10:19:32.592 INFO:tasks.workunit.client.1.vm05.stdout:1/343: mknod d4/c68 0 2026-03-10T10:19:32.595 INFO:tasks.workunit.client.1.vm05.stdout:1/344: read d4/d20/f2c [68170,69514] 0 2026-03-10T10:19:32.598 INFO:tasks.workunit.client.1.vm05.stdout:1/345: symlink d4/df/d1c/d53/d66/l69 0 2026-03-10T10:19:32.603 INFO:tasks.workunit.client.1.vm05.stdout:1/346: chown d4/d39/d3e/c40 1 1 2026-03-10T10:19:32.603 INFO:tasks.workunit.client.1.vm05.stdout:1/347: write d4/f36 [4191503,47916] 0 2026-03-10T10:19:32.628 INFO:tasks.workunit.client.1.vm05.stdout:7/418: truncate d5/d26/f41 2565110 0 2026-03-10T10:19:32.634 INFO:tasks.workunit.client.0.vm02.stdout:2/412: dwrite d0/d10/f5f [8388608,4194304] 0 2026-03-10T10:19:32.637 INFO:tasks.workunit.client.0.vm02.stdout:1/367: write d4/da/d1a/f1c [86928,112001] 0 2026-03-10T10:19:32.641 INFO:tasks.workunit.client.1.vm05.stdout:4/286: truncate d1/d31/f36 3096885 0 2026-03-10T10:19:32.650 INFO:tasks.workunit.client.1.vm05.stdout:7/419: creat d5/d1d/d20/d35/d6f/f81 x:0 0 0 2026-03-10T10:19:32.652 
INFO:tasks.workunit.client.1.vm05.stdout:8/269: fsync d7/d14/d3a/f50 0 2026-03-10T10:19:32.656 INFO:tasks.workunit.client.0.vm02.stdout:5/551: dwrite d1/f68 [0,4194304] 0 2026-03-10T10:19:32.660 INFO:tasks.workunit.client.1.vm05.stdout:4/287: fsync d1/d31/dc/f25 0 2026-03-10T10:19:32.661 INFO:tasks.workunit.client.1.vm05.stdout:9/315: dread d0/f7 [0,4194304] 0 2026-03-10T10:19:32.667 INFO:tasks.workunit.client.1.vm05.stdout:7/420: mkdir d5/d1d/d20/d35/d6f/d82 0 2026-03-10T10:19:32.673 INFO:tasks.workunit.client.0.vm02.stdout:8/402: dwrite d1/f65 [0,4194304] 0 2026-03-10T10:19:32.687 INFO:tasks.workunit.client.1.vm05.stdout:9/316: rename d0/d1/d13/d26/l48 to d0/d1/d16/l65 0 2026-03-10T10:19:32.692 INFO:tasks.workunit.client.0.vm02.stdout:7/374: rmdir d1 39 2026-03-10T10:19:32.695 INFO:tasks.workunit.client.1.vm05.stdout:5/381: dread da/db/d26/f64 [0,4194304] 0 2026-03-10T10:19:32.696 INFO:tasks.workunit.client.1.vm05.stdout:9/317: unlink d0/df/l54 0 2026-03-10T10:19:32.696 INFO:tasks.workunit.client.1.vm05.stdout:9/318: fsync d0/d1/d16/f5c 0 2026-03-10T10:19:32.697 INFO:tasks.workunit.client.1.vm05.stdout:6/288: write dd/d36/d3f/d12/d44/d2a/d3d/f53 [508379,10734] 0 2026-03-10T10:19:32.698 INFO:tasks.workunit.client.1.vm05.stdout:6/289: stat dd/d36/d3f/d12/d44/d2a/d3d/d3e 0 2026-03-10T10:19:32.698 INFO:tasks.workunit.client.1.vm05.stdout:7/421: symlink d5/d17/d66/l83 0 2026-03-10T10:19:32.699 INFO:tasks.workunit.client.1.vm05.stdout:6/290: stat dd/d36/d3f/d12/d24/d28 0 2026-03-10T10:19:32.699 INFO:tasks.workunit.client.0.vm02.stdout:0/403: dwrite d9/d34/d3d/d7b/f3a [0,4194304] 0 2026-03-10T10:19:32.701 INFO:tasks.workunit.client.1.vm05.stdout:8/270: dread d7/d14/d24/f34 [0,4194304] 0 2026-03-10T10:19:32.703 INFO:tasks.workunit.client.1.vm05.stdout:6/291: dwrite dd/d36/d3f/d12/d58/f5a [0,4194304] 0 2026-03-10T10:19:32.713 INFO:tasks.workunit.client.1.vm05.stdout:6/292: dwrite dd/f29 [4194304,4194304] 0 2026-03-10T10:19:32.717 
INFO:tasks.workunit.client.0.vm02.stdout:9/358: dwrite da/f1f [0,4194304] 0 2026-03-10T10:19:32.727 INFO:tasks.workunit.client.1.vm05.stdout:7/422: mknod d5/dd/c84 0 2026-03-10T10:19:32.728 INFO:tasks.workunit.client.1.vm05.stdout:7/423: chown d5/d1d/d20 4008870 1 2026-03-10T10:19:32.730 INFO:tasks.workunit.client.1.vm05.stdout:3/415: dwrite dd/d15/f1c [0,4194304] 0 2026-03-10T10:19:32.735 INFO:tasks.workunit.client.0.vm02.stdout:3/366: write d1/d8/f46 [1878529,81515] 0 2026-03-10T10:19:32.736 INFO:tasks.workunit.client.0.vm02.stdout:6/386: truncate d0/d8/d9/f13 2901110 0 2026-03-10T10:19:32.737 INFO:tasks.workunit.client.0.vm02.stdout:6/387: dread - d0/d8/d29/d6d/d32/f75 zero size 2026-03-10T10:19:32.744 INFO:tasks.workunit.client.1.vm05.stdout:7/424: dwrite d5/d26/f4d [0,4194304] 0 2026-03-10T10:19:32.745 INFO:tasks.workunit.client.0.vm02.stdout:1/368: creat d4/f7a x:0 0 0 2026-03-10T10:19:32.745 INFO:tasks.workunit.client.1.vm05.stdout:8/271: readlink d7/l28 0 2026-03-10T10:19:32.746 INFO:tasks.workunit.client.1.vm05.stdout:4/288: getdents d1/d31/dc 0 2026-03-10T10:19:32.746 INFO:tasks.workunit.client.0.vm02.stdout:8/403: rename d1/d1c/d43/f78 to d1/d1c/d43/d5b/f79 0 2026-03-10T10:19:32.747 INFO:tasks.workunit.client.0.vm02.stdout:5/552: rename d1/db to d1/db/d11/d84/dbd 22 2026-03-10T10:19:32.748 INFO:tasks.workunit.client.0.vm02.stdout:5/553: dread - d1/db/d11/d16/d48/f5b zero size 2026-03-10T10:19:32.750 INFO:tasks.workunit.client.0.vm02.stdout:3/367: creat d1/d20/f7b x:0 0 0 2026-03-10T10:19:32.751 INFO:tasks.workunit.client.1.vm05.stdout:5/382: mknod da/c80 0 2026-03-10T10:19:32.752 INFO:tasks.workunit.client.0.vm02.stdout:1/369: symlink d4/d2c/d53/l7b 0 2026-03-10T10:19:32.753 INFO:tasks.workunit.client.0.vm02.stdout:1/370: stat d4/da/f73 0 2026-03-10T10:19:32.754 INFO:tasks.workunit.client.0.vm02.stdout:0/404: dread d9/d18/f2a [0,4194304] 0 2026-03-10T10:19:32.756 INFO:tasks.workunit.client.0.vm02.stdout:8/404: fdatasync d1/d1c/f3f 0 
2026-03-10T10:19:32.758 INFO:tasks.workunit.client.0.vm02.stdout:9/359: rename da/f30 to da/d3c/d4c/d56/f77 0
2026-03-10T10:19:32.760 INFO:tasks.workunit.client.0.vm02.stdout:1/371: mknod d4/da/d27/c7c 0
2026-03-10T10:19:32.762 INFO:tasks.workunit.client.0.vm02.stdout:0/405: fdatasync d9/f28 0
2026-03-10T10:19:32.763 INFO:tasks.workunit.client.1.vm05.stdout:5/383: mknod da/db/d26/d35/c81 0
2026-03-10T10:19:32.765 INFO:tasks.workunit.client.0.vm02.stdout:8/405: chown d1/d1c/l2c 92421 1
2026-03-10T10:19:32.766 INFO:tasks.workunit.client.0.vm02.stdout:9/360: symlink da/d3c/d4c/d38/l78 0
2026-03-10T10:19:32.767 INFO:tasks.workunit.client.0.vm02.stdout:1/372: symlink d4/d1b/l7d 0
2026-03-10T10:19:32.771 INFO:tasks.workunit.client.1.vm05.stdout:5/384: rename da/db/f1d to da/db/d26/d70/f82 0
2026-03-10T10:19:32.772 INFO:tasks.workunit.client.0.vm02.stdout:8/406: readlink d1/d1c/d23/d25/l32 0
2026-03-10T10:19:32.772 INFO:tasks.workunit.client.0.vm02.stdout:0/406: dwrite d9/d18/f6a [0,4194304] 0
2026-03-10T10:19:32.775 INFO:tasks.workunit.client.1.vm05.stdout:2/366: truncate db/d1c/f69 3130766 0
2026-03-10T10:19:32.778 INFO:tasks.workunit.client.1.vm05.stdout:5/385: dwrite da/db/d28/d32/f79 [0,4194304] 0
2026-03-10T10:19:32.779 INFO:tasks.workunit.client.0.vm02.stdout:4/511: dwrite d1/d10/db/f91 [4194304,4194304] 0
2026-03-10T10:19:32.780 INFO:tasks.workunit.client.0.vm02.stdout:4/512: stat d1/d41/d5e/d78/d1a/f98 0
2026-03-10T10:19:32.784 INFO:tasks.workunit.client.0.vm02.stdout:5/554: sync
2026-03-10T10:19:32.787 INFO:tasks.workunit.client.0.vm02.stdout:1/373: creat d4/da/d1a/d47/d65/f7e x:0 0 0
2026-03-10T10:19:32.788 INFO:tasks.workunit.client.0.vm02.stdout:1/374: chown d4/da/d27/d38/d3c 4 1
2026-03-10T10:19:32.795 INFO:tasks.workunit.client.0.vm02.stdout:0/407: symlink d9/d34/d3d/d7b/l82 0
2026-03-10T10:19:32.796 INFO:tasks.workunit.client.1.vm05.stdout:2/367: truncate db/f4a 6390 0
2026-03-10T10:19:32.796 INFO:tasks.workunit.client.0.vm02.stdout:0/408: read d9/d34/d3d/d65/f7a [550530,106437] 0
2026-03-10T10:19:32.799 INFO:tasks.workunit.client.0.vm02.stdout:4/513: creat d1/d32/d3e/fa6 x:0 0 0
2026-03-10T10:19:32.804 INFO:tasks.workunit.client.1.vm05.stdout:3/416: link dd/d15/d24/d2c/c4b dd/d15/d24/d2c/d6d/c90 0
2026-03-10T10:19:32.804 INFO:tasks.workunit.client.1.vm05.stdout:4/289: getdents d1/d31/d4b 0
2026-03-10T10:19:32.804 INFO:tasks.workunit.client.0.vm02.stdout:4/514: chown d1/d52/d53/c54 29 1
2026-03-10T10:19:32.804 INFO:tasks.workunit.client.0.vm02.stdout:5/555: symlink d1/db/d11/d1a/lbe 0
2026-03-10T10:19:32.804 INFO:tasks.workunit.client.0.vm02.stdout:5/556: readlink d1/db/d11/d84/d40/d4f/l7e 0
2026-03-10T10:19:32.804 INFO:tasks.workunit.client.0.vm02.stdout:1/375: mknod d4/d4a/c7f 0
2026-03-10T10:19:32.804 INFO:tasks.workunit.client.0.vm02.stdout:1/376: readlink d4/d1b/l2e 0
2026-03-10T10:19:32.804 INFO:tasks.workunit.client.0.vm02.stdout:5/557: dwrite d1/db/f2f [0,4194304] 0
2026-03-10T10:19:32.807 INFO:tasks.workunit.client.1.vm05.stdout:0/365: link d1/d2/d9/d31/d13/d15/l45 d1/d2/d39/d3d/l75 0
2026-03-10T10:19:32.807 INFO:tasks.workunit.client.1.vm05.stdout:0/366: chown d1/d2/d9/d31/d12 189 1
2026-03-10T10:19:32.808 INFO:tasks.workunit.client.1.vm05.stdout:0/367: chown d1/d2/d9/f1d 11 1
2026-03-10T10:19:32.822 INFO:tasks.workunit.client.0.vm02.stdout:0/409: rmdir d9/d18/d1a/d22/d24/d51 39
2026-03-10T10:19:32.822 INFO:tasks.workunit.client.0.vm02.stdout:4/515: symlink d1/d32/d3e/la7 0
2026-03-10T10:19:32.824 INFO:tasks.workunit.client.0.vm02.stdout:1/377: mkdir d4/da/d27/d38/d80 0
2026-03-10T10:19:32.824 INFO:tasks.workunit.client.1.vm05.stdout:2/368: creat db/d28/d4f/d59/f6f x:0 0 0
2026-03-10T10:19:32.824 INFO:tasks.workunit.client.0.vm02.stdout:5/558: creat d1/db/d11/d62/fbf x:0 0 0
2026-03-10T10:19:32.824 INFO:tasks.workunit.client.0.vm02.stdout:8/407: creat d1/d1c/d43/f7a x:0 0 0
2026-03-10T10:19:32.825 INFO:tasks.workunit.client.0.vm02.stdout:5/559: fsync d1/db/d11/f3e 0
2026-03-10T10:19:32.829 INFO:tasks.workunit.client.1.vm05.stdout:3/417: symlink dd/d15/d24/d74/d88/l91 0
2026-03-10T10:19:32.829 INFO:tasks.workunit.client.0.vm02.stdout:0/410: dwrite d9/f6c [0,4194304] 0
2026-03-10T10:19:32.829 INFO:tasks.workunit.client.1.vm05.stdout:3/418: read dd/d39/f6f [4953057,33742] 0
2026-03-10T10:19:32.842 INFO:tasks.workunit.client.1.vm05.stdout:2/369: rename db/d1c/f54 to db/d1c/d40/f70 0
2026-03-10T10:19:32.842 INFO:tasks.workunit.client.1.vm05.stdout:2/370: chown db/d1c/d40/l43 23882 1
2026-03-10T10:19:32.844 INFO:tasks.workunit.client.1.vm05.stdout:0/368: mknod d1/d2/d9/d31/d13/c76 0
2026-03-10T10:19:32.845 INFO:tasks.workunit.client.0.vm02.stdout:5/560: dread - d1/db/d11/f7d zero size
2026-03-10T10:19:32.853 INFO:tasks.workunit.client.0.vm02.stdout:8/408: mknod d1/d1c/d23/c7b 0
2026-03-10T10:19:32.855 INFO:tasks.workunit.client.0.vm02.stdout:5/561: rmdir d1/db/d11/d16/d79/d85/d93 39
2026-03-10T10:19:32.857 INFO:tasks.workunit.client.0.vm02.stdout:5/562: dread d1/f68 [0,4194304] 0
2026-03-10T10:19:32.861 INFO:tasks.workunit.client.0.vm02.stdout:5/563: dread d1/db/d11/d13/f1c [0,4194304] 0
2026-03-10T10:19:32.862 INFO:tasks.workunit.client.0.vm02.stdout:1/378: creat d4/f81 x:0 0 0
2026-03-10T10:19:32.865 INFO:tasks.workunit.client.0.vm02.stdout:8/409: chown d1/d1c/c11 1258 1
2026-03-10T10:19:32.868 INFO:tasks.workunit.client.1.vm05.stdout:3/419: mknod dd/d15/d24/d2c/c92 0
2026-03-10T10:19:32.868 INFO:tasks.workunit.client.1.vm05.stdout:3/420: chown dd/f41 93493599 1
2026-03-10T10:19:32.868 INFO:tasks.workunit.client.1.vm05.stdout:2/371: truncate db/d2d/f5d 2698480 0
2026-03-10T10:19:32.868 INFO:tasks.workunit.client.1.vm05.stdout:0/369: mknod d1/d2/d9/d31/d12/c77 0
2026-03-10T10:19:32.868 INFO:tasks.workunit.client.1.vm05.stdout:0/370: dread - d1/d2/d9/f6c zero size
2026-03-10T10:19:32.868 INFO:tasks.workunit.client.1.vm05.stdout:3/421: unlink dd/d15/d1f/c4f 0
2026-03-10T10:19:32.876 INFO:tasks.workunit.client.0.vm02.stdout:1/379: unlink d4/f18 0
2026-03-10T10:19:32.878 INFO:tasks.workunit.client.0.vm02.stdout:0/411: getdents d9/d34/d3d 0
2026-03-10T10:19:32.890 INFO:tasks.workunit.client.0.vm02.stdout:2/413: write d0/d1a/d24/d80/f89 [3400885,24237] 0
2026-03-10T10:19:32.890 INFO:tasks.workunit.client.1.vm05.stdout:0/371: dwrite d1/d2/d9/d31/d13/d17/f56 [0,4194304] 0
2026-03-10T10:19:32.890 INFO:tasks.workunit.client.1.vm05.stdout:0/372: chown d1/c1a 51 1
2026-03-10T10:19:32.890 INFO:tasks.workunit.client.1.vm05.stdout:0/373: write d1/d2/d39/d3d/f44 [874151,101441] 0
2026-03-10T10:19:32.890 INFO:tasks.workunit.client.1.vm05.stdout:3/422: symlink dd/d39/d5f/l93 0
2026-03-10T10:19:32.894 INFO:tasks.workunit.client.0.vm02.stdout:5/564: link d1/db/d11/d13/d28/d37/d3d/f49 d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fc0 0
2026-03-10T10:19:32.895 INFO:tasks.workunit.client.0.vm02.stdout:5/565: write d1/db/d11/f3e [4785731,65988] 0
2026-03-10T10:19:32.896 INFO:tasks.workunit.client.0.vm02.stdout:5/566: write d1/db/d11/d84/d40/fb3 [999107,71369] 0
2026-03-10T10:19:32.899 INFO:tasks.workunit.client.0.vm02.stdout:5/567: creat d1/db/d11/d16/d48/fc1 x:0 0 0
2026-03-10T10:19:32.909 INFO:tasks.workunit.client.1.vm05.stdout:0/374: mknod d1/d2/d9/d31/d13/d55/c78 0
2026-03-10T10:19:32.909 INFO:tasks.workunit.client.1.vm05.stdout:3/423: mkdir dd/d20/d94 0
2026-03-10T10:19:32.909 INFO:tasks.workunit.client.1.vm05.stdout:3/424: mkdir dd/d15/d1f/d95 0
2026-03-10T10:19:32.909 INFO:tasks.workunit.client.1.vm05.stdout:0/375: rename d1/d2/d39/c61 to d1/d2/d9/d31/d13/d2f/c79 0
2026-03-10T10:19:32.917 INFO:tasks.workunit.client.0.vm02.stdout:1/380: read d4/d2c/f43 [162140,15278] 0
2026-03-10T10:19:32.921 INFO:tasks.workunit.client.0.vm02.stdout:1/381: link d4/d2c/d53/c59 d4/da/d27/c82 0
2026-03-10T10:19:32.921 INFO:tasks.workunit.client.0.vm02.stdout:1/382: write d4/fe [3048896,94585] 0
2026-03-10T10:19:32.922 INFO:tasks.workunit.client.0.vm02.stdout:1/383: dread - d4/d2c/d53/f75 zero size
2026-03-10T10:19:32.922 INFO:tasks.workunit.client.0.vm02.stdout:1/384: stat d4/da/d27/d38/l42 0
2026-03-10T10:19:32.923 INFO:tasks.workunit.client.0.vm02.stdout:1/385: mknod d4/d2c/d53/c83 0
2026-03-10T10:19:32.925 INFO:tasks.workunit.client.0.vm02.stdout:1/386: symlink d4/da/d27/d38/d3c/l84 0
2026-03-10T10:19:32.925 INFO:tasks.workunit.client.0.vm02.stdout:1/387: stat d4/da/c50 0
2026-03-10T10:19:32.926 INFO:tasks.workunit.client.0.vm02.stdout:1/388: unlink d4/l67 0
2026-03-10T10:19:32.949 INFO:tasks.workunit.client.0.vm02.stdout:1/389: dread d4/f8 [0,4194304] 0
2026-03-10T10:19:32.950 INFO:tasks.workunit.client.0.vm02.stdout:1/390: truncate d4/da/d27/f66 1139977 0
2026-03-10T10:19:32.954 INFO:tasks.workunit.client.1.vm05.stdout:1/348: truncate d4/d3d/f57 1228807 0
2026-03-10T10:19:32.955 INFO:tasks.workunit.client.0.vm02.stdout:1/391: fsync d4/d1b/f44 0
2026-03-10T10:19:32.958 INFO:tasks.workunit.client.1.vm05.stdout:1/349: dwrite d4/d20/f49 [0,4194304] 0
2026-03-10T10:19:32.964 INFO:tasks.workunit.client.1.vm05.stdout:1/350: fsync d4/dd/f64 0
2026-03-10T10:19:32.964 INFO:tasks.workunit.client.1.vm05.stdout:1/351: chown d4/d37/d4e 936999 1
2026-03-10T10:19:32.966 INFO:tasks.workunit.client.1.vm05.stdout:1/352: dread d4/f36 [0,4194304] 0
2026-03-10T10:19:32.966 INFO:tasks.workunit.client.1.vm05.stdout:1/353: chown d4/d37/d4e/l5b 1 1
2026-03-10T10:19:32.974 INFO:tasks.workunit.client.1.vm05.stdout:1/354: getdents d4/df/d1c/d53 0
2026-03-10T10:19:32.976 INFO:tasks.workunit.client.1.vm05.stdout:1/355: truncate d4/d39/d3e/f3f 3732384 0
2026-03-10T10:19:32.977 INFO:tasks.workunit.client.1.vm05.stdout:1/356: mknod d4/d39/d3e/c6a 0
2026-03-10T10:19:32.978 INFO:tasks.workunit.client.1.vm05.stdout:1/357: creat d4/df/d1c/d53/f6b x:0 0 0
2026-03-10T10:19:32.979 INFO:tasks.workunit.client.1.vm05.stdout:1/358: mknod d4/df/c6c 0
2026-03-10T10:19:32.998 INFO:tasks.workunit.client.0.vm02.stdout:1/392: sync
2026-03-10T10:19:32.999 INFO:tasks.workunit.client.0.vm02.stdout:1/393: dread - d4/da/d1a/d47/d65/f7e zero size
2026-03-10T10:19:33.000 INFO:tasks.workunit.client.0.vm02.stdout:1/394: dread - d4/f81 zero size
2026-03-10T10:19:33.002 INFO:tasks.workunit.client.0.vm02.stdout:1/395: mkdir d4/d2c/d85 0
2026-03-10T10:19:33.005 INFO:tasks.workunit.client.0.vm02.stdout:1/396: creat d4/f86 x:0 0 0
2026-03-10T10:19:33.006 INFO:tasks.workunit.client.0.vm02.stdout:1/397: stat d4/da/c31 0
2026-03-10T10:19:33.015 INFO:tasks.workunit.client.1.vm05.stdout:9/319: dwrite d0/f45 [0,4194304] 0
2026-03-10T10:19:33.015 INFO:tasks.workunit.client.0.vm02.stdout:1/398: getdents d4/da/d1a/d47/d78 0
2026-03-10T10:19:33.015 INFO:tasks.workunit.client.0.vm02.stdout:1/399: chown d4/d2c/d53/f75 3 1
2026-03-10T10:19:33.015 INFO:tasks.workunit.client.0.vm02.stdout:1/400: symlink d4/d2c/d53/l87 0
2026-03-10T10:19:33.015 INFO:tasks.workunit.client.0.vm02.stdout:1/401: dread - d4/da/d1a/d22/f49 zero size
2026-03-10T10:19:33.016 INFO:tasks.workunit.client.0.vm02.stdout:1/402: unlink d4/f86 0
2026-03-10T10:19:33.025 INFO:tasks.workunit.client.0.vm02.stdout:1/403: rmdir d4/d2c/d85 0
2026-03-10T10:19:33.029 INFO:tasks.workunit.client.0.vm02.stdout:1/404: truncate d4/f8 4035938 0
2026-03-10T10:19:33.032 INFO:tasks.workunit.client.0.vm02.stdout:1/405: mkdir d4/da/d1a/d47/d88 0
2026-03-10T10:19:33.044 INFO:tasks.workunit.client.1.vm05.stdout:9/320: creat d0/d1/d13/d26/f66 x:0 0 0
2026-03-10T10:19:33.044 INFO:tasks.workunit.client.1.vm05.stdout:9/321: stat d0/d1/d13 0
2026-03-10T10:19:33.047 INFO:tasks.workunit.client.1.vm05.stdout:9/322: link d0/df/d11/c3e d0/d1/c67 0
2026-03-10T10:19:33.047 INFO:tasks.workunit.client.1.vm05.stdout:9/323: fsync d0/d1/d16/f5c 0
2026-03-10T10:19:33.048 INFO:tasks.workunit.client.1.vm05.stdout:1/359: sync
2026-03-10T10:19:33.049 INFO:tasks.workunit.client.1.vm05.stdout:1/360: readlink d4/l5f 0
2026-03-10T10:19:33.050 INFO:tasks.workunit.client.1.vm05.stdout:1/361: unlink d4/dd/f15 0
2026-03-10T10:19:33.051 INFO:tasks.workunit.client.1.vm05.stdout:9/324: dwrite d0/d1/d13/d26/f4f [0,4194304] 0
2026-03-10T10:19:33.051 INFO:tasks.workunit.client.1.vm05.stdout:1/362: chown d4/d20/f2c 1 1
2026-03-10T10:19:33.055 INFO:tasks.workunit.client.1.vm05.stdout:1/363: symlink d4/df/d1c/l6d 0
2026-03-10T10:19:33.057 INFO:tasks.workunit.client.1.vm05.stdout:9/325: creat d0/d1/d13/d55/f68 x:0 0 0
2026-03-10T10:19:33.057 INFO:tasks.workunit.client.1.vm05.stdout:1/364: mkdir d4/d3d/d6e 0
2026-03-10T10:19:33.058 INFO:tasks.workunit.client.1.vm05.stdout:9/326: symlink d0/d1/d13/d62/l69 0
2026-03-10T10:19:33.060 INFO:tasks.workunit.client.1.vm05.stdout:1/365: dread d4/f36 [0,4194304] 0
2026-03-10T10:19:33.062 INFO:tasks.workunit.client.1.vm05.stdout:1/366: truncate d4/df/d1c/d53/f65 452571 0
2026-03-10T10:19:33.066 INFO:tasks.workunit.client.1.vm05.stdout:9/327: fsync d0/d1/fb 0
2026-03-10T10:19:33.068 INFO:tasks.workunit.client.0.vm02.stdout:3/368: truncate d1/d8/d21/f2f 411551 0
2026-03-10T10:19:33.068 INFO:tasks.workunit.client.1.vm05.stdout:7/425: write d5/dd/f1a [4954383,12218] 0
2026-03-10T10:19:33.068 INFO:tasks.workunit.client.1.vm05.stdout:8/272: write d7/d14/d15/f1f [1303884,107662] 0
2026-03-10T10:19:33.070 INFO:tasks.workunit.client.1.vm05.stdout:8/273: write d7/d14/d3a/d49/f54 [74426,62205] 0
2026-03-10T10:19:33.072 INFO:tasks.workunit.client.1.vm05.stdout:7/426: fdatasync d5/d1d/f32 0
2026-03-10T10:19:33.073 INFO:tasks.workunit.client.1.vm05.stdout:8/274: write d7/f11 [1196690,78106] 0
2026-03-10T10:19:33.073 INFO:tasks.workunit.client.1.vm05.stdout:8/275: stat d7/d14 0
2026-03-10T10:19:33.076 INFO:tasks.workunit.client.1.vm05.stdout:8/276: dwrite d7/d14/f4e [4194304,4194304] 0
2026-03-10T10:19:33.077 INFO:tasks.workunit.client.1.vm05.stdout:7/427: mkdir d5/d17/d85 0
2026-03-10T10:19:33.079 INFO:tasks.workunit.client.1.vm05.stdout:8/277: creat d7/d14/f55 x:0 0 0
2026-03-10T10:19:33.079 INFO:tasks.workunit.client.1.vm05.stdout:9/328: link d0/df/d11/l5f d0/d1/d13/d26/l6a 0
2026-03-10T10:19:33.083 INFO:tasks.workunit.client.0.vm02.stdout:3/369: dread d1/d8/d21/f4d [0,4194304] 0
2026-03-10T10:19:33.086 INFO:tasks.workunit.client.1.vm05.stdout:1/367: getdents d4/d39 0
2026-03-10T10:19:33.086 INFO:tasks.workunit.client.1.vm05.stdout:8/278: truncate d7/d14/d3a/f50 920072 0
2026-03-10T10:19:33.088 INFO:tasks.workunit.client.1.vm05.stdout:9/329: unlink d0/d1/d13/f22 0
2026-03-10T10:19:33.090 INFO:tasks.workunit.client.1.vm05.stdout:9/330: readlink d0/d1/d13/d26/l6a 0
2026-03-10T10:19:33.103 INFO:tasks.workunit.client.0.vm02.stdout:3/370: dread d1/f3 [0,4194304] 0
2026-03-10T10:19:33.106 INFO:tasks.workunit.client.0.vm02.stdout:3/371: dread d1/f3 [0,4194304] 0
2026-03-10T10:19:33.121 INFO:tasks.workunit.client.1.vm05.stdout:6/293: write dd/d36/d3f/d12/d44/d2a/d3d/d48/f4b [138281,107901] 0
2026-03-10T10:19:33.122 INFO:tasks.workunit.client.0.vm02.stdout:3/372: rename d1/d20/f22 to d1/d8/f7c 0
2026-03-10T10:19:33.126 INFO:tasks.workunit.client.0.vm02.stdout:9/361: write da/d3c/d4c/d38/d4a/f59 [1438576,22299] 0
2026-03-10T10:19:33.138 INFO:tasks.workunit.client.0.vm02.stdout:4/516: dwrite d1/d41/d5e/d78/f34 [0,4194304] 0
2026-03-10T10:19:33.143 INFO:tasks.workunit.client.1.vm05.stdout:8/279: dread d7/d14/f4c [0,4194304] 0
2026-03-10T10:19:33.144 INFO:tasks.workunit.client.0.vm02.stdout:4/517: truncate d1/d41/d5e/d78/d1a/f8c 42267 0
2026-03-10T10:19:33.145 INFO:tasks.workunit.client.1.vm05.stdout:4/290: dwrite d1/d31/dc/f3a [4194304,4194304] 0
2026-03-10T10:19:33.146 INFO:tasks.workunit.client.1.vm05.stdout:5/386: dwrite da/f41 [0,4194304] 0
2026-03-10T10:19:33.150 INFO:tasks.workunit.client.1.vm05.stdout:4/291: write d1/d31/dc/f33 [4447331,99919] 0
2026-03-10T10:19:33.174 INFO:tasks.workunit.client.0.vm02.stdout:9/362: truncate da/d3c/d4c/d56/f77 471071 0
2026-03-10T10:19:33.175 INFO:tasks.workunit.client.0.vm02.stdout:9/363: write da/f1f [4370528,106769] 0
2026-03-10T10:19:33.176 INFO:tasks.workunit.client.0.vm02.stdout:9/364: stat da/d3c/d4c/f26 0
2026-03-10T10:19:33.180 INFO:tasks.workunit.client.0.vm02.stdout:4/518: symlink d1/d41/d5e/d78/d7f/d82/la8 0
2026-03-10T10:19:33.180 INFO:tasks.workunit.client.1.vm05.stdout:2/372: dwrite db/f24 [0,4194304] 0
2026-03-10T10:19:33.187 INFO:tasks.workunit.client.0.vm02.stdout:5/568: rmdir d1/db/d11/d84/d40 39
2026-03-10T10:19:33.192 INFO:tasks.workunit.client.1.vm05.stdout:5/387: unlink da/db/d26/d35/d73/l67 0
2026-03-10T10:19:33.192 INFO:tasks.workunit.client.1.vm05.stdout:5/388: readlink da/db/d26/d35/d38/l55 0
2026-03-10T10:19:33.192 INFO:tasks.workunit.client.0.vm02.stdout:8/410: dwrite d1/d1c/d24/d35/f44 [0,4194304] 0
2026-03-10T10:19:33.193 INFO:tasks.workunit.client.1.vm05.stdout:5/389: write f9 [4914886,41904] 0
2026-03-10T10:19:33.195 INFO:tasks.workunit.client.1.vm05.stdout:5/390: read - da/f78 zero size
2026-03-10T10:19:33.200 INFO:tasks.workunit.client.0.vm02.stdout:0/412: dwrite d9/d18/d1a/d22/d24/d80/d49/f53 [0,4194304] 0
2026-03-10T10:19:33.218 INFO:tasks.workunit.client.0.vm02.stdout:2/414: truncate d0/d1a/d24/f6e 3556175 0
2026-03-10T10:19:33.219 INFO:tasks.workunit.client.0.vm02.stdout:0/413: dread d9/d18/f6a [0,4194304] 0
2026-03-10T10:19:33.223 INFO:tasks.workunit.client.0.vm02.stdout:7/375: dread d1/f6b [4194304,4194304] 0
2026-03-10T10:19:33.232 INFO:tasks.workunit.client.0.vm02.stdout:0/414: readlink d9/d34/l5a 0
2026-03-10T10:19:33.232 INFO:tasks.workunit.client.0.vm02.stdout:0/415: readlink d9/d34/l5a 0
2026-03-10T10:19:33.232 INFO:tasks.workunit.client.0.vm02.stdout:7/376: write d1/dc/d16/d28/d2d/d36/f66 [152275,77257] 0
2026-03-10T10:19:33.239 INFO:tasks.workunit.client.1.vm05.stdout:6/294: dread fb [0,4194304] 0
2026-03-10T10:19:33.249 INFO:tasks.workunit.client.0.vm02.stdout:9/365: symlink da/l79 0
2026-03-10T10:19:33.249 INFO:tasks.workunit.client.1.vm05.stdout:6/295: chown dd/d36/d3f/f22 888586849 1
2026-03-10T10:19:33.249 INFO:tasks.workunit.client.1.vm05.stdout:0/376: write d1/f38 [4321894,98460] 0
2026-03-10T10:19:33.249 INFO:tasks.workunit.client.1.vm05.stdout:3/425: truncate dd/d15/d1f/f53 3060134 0
2026-03-10T10:19:33.270 INFO:tasks.workunit.client.0.vm02.stdout:4/519: dread d1/d41/d5e/d78/d37/f48 [0,4194304] 0
2026-03-10T10:19:33.270 INFO:tasks.workunit.client.1.vm05.stdout:2/373: creat db/d2d/d5e/f71 x:0 0 0
2026-03-10T10:19:33.273 INFO:tasks.workunit.client.0.vm02.stdout:4/520: truncate d1/d41/d5e/d78/d1a/f8c 155344 0
2026-03-10T10:19:33.280 INFO:tasks.workunit.client.1.vm05.stdout:1/368: dwrite d4/d3d/f57 [0,4194304] 0
2026-03-10T10:19:33.286 INFO:tasks.workunit.client.1.vm05.stdout:1/369: dread d4/f46 [0,4194304] 0
2026-03-10T10:19:33.293 INFO:tasks.workunit.client.0.vm02.stdout:0/416: sync
2026-03-10T10:19:33.293 INFO:tasks.workunit.client.0.vm02.stdout:7/377: creat d1/d1b/f72 x:0 0 0
2026-03-10T10:19:33.293 INFO:tasks.workunit.client.0.vm02.stdout:7/378: chown d1/dc/d16/d28/c46 23934282 1
2026-03-10T10:19:33.300 INFO:tasks.workunit.client.0.vm02.stdout:9/366: mknod da/d3c/d4c/d2c/d34/d35/c7a 0
2026-03-10T10:19:33.304 INFO:tasks.workunit.client.0.vm02.stdout:8/411: mkdir d1/d1c/d43/d6a/d7c 0
2026-03-10T10:19:33.304 INFO:tasks.workunit.client.0.vm02.stdout:8/412: truncate d1/d1c/d23/d25/f5d 845997 0
2026-03-10T10:19:33.308 INFO:tasks.workunit.client.1.vm05.stdout:4/292: creat d1/d3/f60 x:0 0 0
2026-03-10T10:19:33.311 INFO:tasks.workunit.client.1.vm05.stdout:8/280: getdents d7/d14/d24/d3f 0
2026-03-10T10:19:33.313 INFO:tasks.workunit.client.0.vm02.stdout:1/406: dread d4/f8 [0,4194304] 0
2026-03-10T10:19:33.314 INFO:tasks.workunit.client.0.vm02.stdout:1/407: fsync d4/d2c/d53/f6c 0
2026-03-10T10:19:33.314 INFO:tasks.workunit.client.0.vm02.stdout:1/408: readlink d4/d1b/l2e 0
2026-03-10T10:19:33.320 INFO:tasks.workunit.client.0.vm02.stdout:7/379: readlink d1/d1b/l54 0
2026-03-10T10:19:33.320 INFO:tasks.workunit.client.1.vm05.stdout:1/370: symlink d4/d39/d3e/l6f 0
2026-03-10T10:19:33.320 INFO:tasks.workunit.client.1.vm05.stdout:1/371: stat d4/df/c6c 0
2026-03-10T10:19:33.320 INFO:tasks.workunit.client.1.vm05.stdout:1/372: dread - d4/df/d1c/d53/f6b zero size
2026-03-10T10:19:33.321 INFO:tasks.workunit.client.0.vm02.stdout:0/417: sync
2026-03-10T10:19:33.322 INFO:tasks.workunit.client.0.vm02.stdout:7/380: dwrite d1/d1b/f61 [4194304,4194304] 0
2026-03-10T10:19:33.322 INFO:tasks.workunit.client.0.vm02.stdout:6/388: read d0/f4c [926111,48017] 0
2026-03-10T10:19:33.323 INFO:tasks.workunit.client.0.vm02.stdout:6/389: read - d0/d8/d9/d7a/f7d zero size
2026-03-10T10:19:33.326 INFO:tasks.workunit.client.0.vm02.stdout:6/390: dread - d0/d8/d29/d2f/d50/f78 zero size
2026-03-10T10:19:33.331 INFO:tasks.workunit.client.0.vm02.stdout:0/418: dread d9/d34/d3d/f58 [0,4194304] 0
2026-03-10T10:19:33.331 INFO:tasks.workunit.client.0.vm02.stdout:6/391: dwrite d0/d8/d9/f54 [0,4194304] 0
2026-03-10T10:19:33.333 INFO:tasks.workunit.client.0.vm02.stdout:0/419: readlink d9/d34/d3d/l6e 0
2026-03-10T10:19:33.337 INFO:tasks.workunit.client.0.vm02.stdout:9/367: mknod da/d3c/d53/c7b 0
2026-03-10T10:19:33.343 INFO:tasks.workunit.client.1.vm05.stdout:9/331: truncate d0/d1/d13/f27 1294278 0
2026-03-10T10:19:33.344 INFO:tasks.workunit.client.1.vm05.stdout:9/332: chown d0/d1/d13/d26/f66 955 1
2026-03-10T10:19:33.353 INFO:tasks.workunit.client.1.vm05.stdout:5/391: link da/db/d26/d5c/d4b/f4e da/db/d26/d5c/d4b/f83 0
2026-03-10T10:19:33.365 INFO:tasks.workunit.client.0.vm02.stdout:4/521: mknod d1/d41/d5e/d78/ca9 0
2026-03-10T10:19:33.366 INFO:tasks.workunit.client.0.vm02.stdout:2/415: creat d0/d10/f8b x:0 0 0
2026-03-10T10:19:33.366 INFO:tasks.workunit.client.1.vm05.stdout:9/333: dwrite d0/d1/d13/d26/f66 [0,4194304] 0
2026-03-10T10:19:33.366 INFO:tasks.workunit.client.1.vm05.stdout:2/374: chown db/d12/c16 15780555 1
2026-03-10T10:19:33.366 INFO:tasks.workunit.client.1.vm05.stdout:4/293: unlink d1/d3/c22 0
2026-03-10T10:19:33.366 INFO:tasks.workunit.client.1.vm05.stdout:8/281: symlink d7/d14/d3a/d49/l56 0
2026-03-10T10:19:33.368 INFO:tasks.workunit.client.1.vm05.stdout:4/294: dwrite f0 [0,4194304] 0
2026-03-10T10:19:33.370 INFO:tasks.workunit.client.1.vm05.stdout:7/428: rmdir d5 39
2026-03-10T10:19:33.372 INFO:tasks.workunit.client.1.vm05.stdout:1/373: mkdir d4/d20/d70 0
2026-03-10T10:19:33.380 INFO:tasks.workunit.client.1.vm05.stdout:0/377: creat d1/d2/d9/d31/d13/f7a x:0 0 0
2026-03-10T10:19:33.387 INFO:tasks.workunit.client.1.vm05.stdout:5/392: symlink da/db/d26/d5c/d4b/l84 0
2026-03-10T10:19:33.388 INFO:tasks.workunit.client.0.vm02.stdout:0/420: read - d9/d18/d1a/d46/d5d/f66 zero size
2026-03-10T10:19:33.388 INFO:tasks.workunit.client.0.vm02.stdout:0/421: chown d9/d18/d1a/l5c 707 1
2026-03-10T10:19:33.389 INFO:tasks.workunit.client.0.vm02.stdout:0/422: read d9/d18/f2a [65730,122455] 0
2026-03-10T10:19:33.392 INFO:tasks.workunit.client.0.vm02.stdout:9/368: mkdir da/d3c/d4c/d38/d7c 0
2026-03-10T10:19:33.396 INFO:tasks.workunit.client.0.vm02.stdout:9/369: chown da/d3c 1087904 1
2026-03-10T10:19:33.396 INFO:tasks.workunit.client.0.vm02.stdout:3/373: write d1/d8/f3f [51804,112505] 0
2026-03-10T10:19:33.396 INFO:tasks.workunit.client.0.vm02.stdout:9/370: chown da/d3c/d4c/f27 57667458 1
2026-03-10T10:19:33.396 INFO:tasks.workunit.client.0.vm02.stdout:9/371: fsync da/d3c/d4c/d2c/d34/f4d 0
2026-03-10T10:19:33.396 INFO:tasks.workunit.client.0.vm02.stdout:9/372: chown da/d3c/d4c/d38/d4a/d70 1 1
2026-03-10T10:19:33.396 INFO:tasks.workunit.client.0.vm02.stdout:4/522: mknod d1/d41/d5e/d78/d7f/caa 0
2026-03-10T10:19:33.403 INFO:tasks.workunit.client.0.vm02.stdout:2/416: rmdir d0/d10/d69 39
2026-03-10T10:19:33.412 INFO:tasks.workunit.client.0.vm02.stdout:2/417: dwrite d0/d10/f5f [0,4194304] 0
2026-03-10T10:19:33.413 INFO:tasks.workunit.client.0.vm02.stdout:5/569: dwrite d1/db/d11/d16/d79/d85/f9f [0,4194304] 0
2026-03-10T10:19:33.424 INFO:tasks.workunit.client.1.vm05.stdout:6/296: getdents dd/d36 0
2026-03-10T10:19:33.429 INFO:tasks.workunit.client.0.vm02.stdout:6/392: read d0/d8/d29/d2f/f38 [573801,11095] 0
2026-03-10T10:19:33.432 INFO:tasks.workunit.client.1.vm05.stdout:3/426: getdents dd/d15/d24 0
2026-03-10T10:19:33.434 INFO:tasks.workunit.client.0.vm02.stdout:2/418: sync
2026-03-10T10:19:33.435 INFO:tasks.workunit.client.1.vm05.stdout:4/295: symlink d1/d31/d4b/l61 0
2026-03-10T10:19:33.456 INFO:tasks.workunit.client.1.vm05.stdout:1/374: mknod d4/d39/d3e/c71 0
2026-03-10T10:19:33.459 INFO:tasks.workunit.client.1.vm05.stdout:0/378: creat d1/d2/d39/d3d/f7b x:0 0 0
2026-03-10T10:19:33.461 INFO:tasks.workunit.client.1.vm05.stdout:0/379: dwrite d1/d2/d9/d31/d54/f16 [4194304,4194304] 0
2026-03-10T10:19:33.464 INFO:tasks.workunit.client.0.vm02.stdout:3/374: dread d1/f50 [0,4194304] 0
2026-03-10T10:19:33.479 INFO:tasks.workunit.client.1.vm05.stdout:0/380: dread d1/d2/d9/d31/d12/f1e [0,4194304] 0
2026-03-10T10:19:33.480 INFO:tasks.workunit.client.1.vm05.stdout:0/381: fdatasync d1/d2/d9/d31/d12/d20/f2e 0
2026-03-10T10:19:33.481 INFO:tasks.workunit.client.0.vm02.stdout:8/413: dread d1/d2/f36 [0,4194304] 0
2026-03-10T10:19:33.510 INFO:tasks.workunit.client.0.vm02.stdout:9/373: rmdir da/d3c 39
2026-03-10T10:19:33.518 INFO:tasks.workunit.client.1.vm05.stdout:8/282: mkdir d7/d2f/d57 0
2026-03-10T10:19:33.524 INFO:tasks.workunit.client.0.vm02.stdout:2/419: dread d0/d1a/f25 [0,4194304] 0
2026-03-10T10:19:33.529 INFO:tasks.workunit.client.0.vm02.stdout:1/409: dwrite d4/da/d1a/d22/f49 [0,4194304] 0
2026-03-10T10:19:33.532 INFO:tasks.workunit.client.0.vm02.stdout:1/410: chown d4/da/d1a/d47/d78 26 1
2026-03-10T10:19:33.536 INFO:tasks.workunit.client.1.vm05.stdout:6/297: dread dd/d36/d3f/d12/d44/f2f [0,4194304] 0
2026-03-10T10:19:33.537 INFO:tasks.workunit.client.1.vm05.stdout:7/429: rename d5/d1d/d29/d3e/f42 to d5/d1d/d20/d35/d6f/d82/f86 0
2026-03-10T10:19:33.538 INFO:tasks.workunit.client.0.vm02.stdout:5/570: write d1/db/d11/d84/f8a [1038181,41498] 0
2026-03-10T10:19:33.541 INFO:tasks.workunit.client.0.vm02.stdout:6/393: rmdir d0/d8/d29 39
2026-03-10T10:19:33.545 INFO:tasks.workunit.client.1.vm05.stdout:1/375: mknod d4/d37/d4e/c72 0
2026-03-10T10:19:33.549 INFO:tasks.workunit.client.1.vm05.stdout:1/376: dread d4/d3d/f57 [0,4194304] 0
2026-03-10T10:19:33.554 INFO:tasks.workunit.client.1.vm05.stdout:9/334: creat d0/d1/d13/f6b x:0 0 0
2026-03-10T10:19:33.557 INFO:tasks.workunit.client.0.vm02.stdout:4/523: mknod d1/d10/d88/cab 0
2026-03-10T10:19:33.562 INFO:tasks.workunit.client.1.vm05.stdout:8/283: truncate d7/f21 493713 0
2026-03-10T10:19:33.562 INFO:tasks.workunit.client.0.vm02.stdout:1/411: symlink d4/d2c/d53/l89 0
2026-03-10T10:19:33.563 INFO:tasks.workunit.client.0.vm02.stdout:2/420: read d0/d1a/d49/d5e/f63 [269691,30334] 0
2026-03-10T10:19:33.563 INFO:tasks.workunit.client.1.vm05.stdout:8/284: chown d7/d2f/f45 51790530 1
2026-03-10T10:19:33.565 INFO:tasks.workunit.client.0.vm02.stdout:5/571: symlink d1/db/d11/d1a/lc2 0
2026-03-10T10:19:33.568 INFO:tasks.workunit.client.0.vm02.stdout:7/381: getdents d1/dc 0
2026-03-10T10:19:33.569 INFO:tasks.workunit.client.1.vm05.stdout:6/298: unlink dd/l3a 0
2026-03-10T10:19:33.569 INFO:tasks.workunit.client.0.vm02.stdout:9/374: symlink da/d3c/d4c/d56/l7d 0
2026-03-10T10:19:33.571 INFO:tasks.workunit.client.1.vm05.stdout:5/393: creat da/db/f85 x:0 0 0
2026-03-10T10:19:33.571 INFO:tasks.workunit.client.0.vm02.stdout:6/394: rmdir d0/d8/d29/d6d 39
2026-03-10T10:19:33.572 INFO:tasks.workunit.client.1.vm05.stdout:1/377: unlink d4/d39/d3e/c61 0
2026-03-10T10:19:33.575 INFO:tasks.workunit.client.0.vm02.stdout:1/412: rename d4/da/d27/d38/d3c/c6d to d4/d2c/d53/c8a 0
2026-03-10T10:19:33.580 INFO:tasks.workunit.client.0.vm02.stdout:1/413: write d4/fe [2259039,34964] 0
2026-03-10T10:19:33.580 INFO:tasks.workunit.client.0.vm02.stdout:1/414: readlink d4/d1b/l7d 0
2026-03-10T10:19:33.580 INFO:tasks.workunit.client.0.vm02.stdout:4/524: link d1/d41/d5e/d78/c97 d1/d41/d5e/d78/d1a/cac 0
2026-03-10T10:19:33.580 INFO:tasks.workunit.client.1.vm05.stdout:0/382: truncate d1/d2/d9/f40 2472411 0
2026-03-10T10:19:33.580 INFO:tasks.workunit.client.1.vm05.stdout:2/375: getdents db/d28/d4f 0
2026-03-10T10:19:33.580 INFO:tasks.workunit.client.1.vm05.stdout:8/285: mknod d7/d2f/c58 0
2026-03-10T10:19:33.580 INFO:tasks.workunit.client.1.vm05.stdout:7/430: symlink d5/d1d/d20/l87 0
2026-03-10T10:19:33.584 INFO:tasks.workunit.client.0.vm02.stdout:6/395: mknod d0/c83 0
2026-03-10T10:19:33.593 INFO:tasks.workunit.client.0.vm02.stdout:4/525: creat d1/d41/d5e/d78/d1a/fad x:0 0 0
2026-03-10T10:19:33.597 INFO:tasks.workunit.client.0.vm02.stdout:5/572: getdents d1/d9c 0
2026-03-10T10:19:33.597 INFO:tasks.workunit.client.1.vm05.stdout:6/299: dwrite dd/d36/d3f/f22 [0,4194304] 0
2026-03-10T10:19:33.597 INFO:tasks.workunit.client.1.vm05.stdout:1/378: write d4/d37/d4e/f62 [614974,47559] 0
2026-03-10T10:19:33.597 INFO:tasks.workunit.client.1.vm05.stdout:0/383: chown d1/d2/c42 33 1
2026-03-10T10:19:33.597 INFO:tasks.workunit.client.1.vm05.stdout:8/286: fsync d7/f2b 0
2026-03-10T10:19:33.597 INFO:tasks.workunit.client.1.vm05.stdout:0/384: dread d1/d2/d9/d31/d13/f3e [0,4194304] 0
2026-03-10T10:19:33.599 INFO:tasks.workunit.client.0.vm02.stdout:4/526: link d1/d41/d5e/d78/f34 d1/d32/d3e/fae 0
2026-03-10T10:19:33.600 INFO:tasks.workunit.client.0.vm02.stdout:2/421: sync
2026-03-10T10:19:33.600 INFO:tasks.workunit.client.0.vm02.stdout:7/382: sync
2026-03-10T10:19:33.612 INFO:tasks.workunit.client.1.vm05.stdout:6/300: dread f2 [0,4194304] 0
2026-03-10T10:19:33.612 INFO:tasks.workunit.client.1.vm05.stdout:9/335: dread d0/df/d11/f50 [0,4194304] 0
2026-03-10T10:19:33.617 INFO:tasks.workunit.client.1.vm05.stdout:4/296: sync
2026-03-10T10:19:33.619 INFO:tasks.workunit.client.1.vm05.stdout:4/297: readlink d1/d31/l49 0
2026-03-10T10:19:33.622 INFO:tasks.workunit.client.0.vm02.stdout:7/383: fsync d1/dc/f26 0
2026-03-10T10:19:33.622 INFO:tasks.workunit.client.0.vm02.stdout:7/384: fdatasync d1/dc/d16/f1f 0
2026-03-10T10:19:33.623 INFO:tasks.workunit.client.1.vm05.stdout:0/385: read d1/d2/f21 [484393,3081] 0
2026-03-10T10:19:33.632 INFO:tasks.workunit.client.0.vm02.stdout:2/422: rmdir d0/d1a/d24 39
2026-03-10T10:19:33.633 INFO:tasks.workunit.client.0.vm02.stdout:2/423: chown d0/d10/c2b 1918 1
2026-03-10T10:19:33.634 INFO:tasks.workunit.client.0.vm02.stdout:2/424: chown d0/d1a/l3e 27409388 1
2026-03-10T10:19:33.638 INFO:tasks.workunit.client.0.vm02.stdout:2/425: dread d0/d1a/f31 [4194304,4194304] 0
2026-03-10T10:19:33.650 INFO:tasks.workunit.client.0.vm02.stdout:2/426: dread - d0/d1a/d49/f78 zero size
2026-03-10T10:19:33.651 INFO:tasks.workunit.client.0.vm02.stdout:7/385: unlink d1/dc/d16/d28/l31 0
2026-03-10T10:19:33.660 INFO:tasks.workunit.client.0.vm02.stdout:0/423: write d9/d18/f1e [4576247,53772] 0
2026-03-10T10:19:33.670 INFO:tasks.workunit.client.0.vm02.stdout:4/527: creat d1/d10/faf x:0 0 0
2026-03-10T10:19:33.670 INFO:tasks.workunit.client.0.vm02.stdout:7/386: creat d1/dc/d16/d28/f73 x:0 0 0
2026-03-10T10:19:33.672 INFO:tasks.workunit.client.0.vm02.stdout:0/424: rename d9/d18/d1a/d3c to d9/d18/d1a/d3c/d83 22
2026-03-10T10:19:33.682 INFO:tasks.workunit.client.0.vm02.stdout:5/573: dread d1/db/d11/d84/d40/f66 [0,4194304] 0
2026-03-10T10:19:33.691 INFO:tasks.workunit.client.1.vm05.stdout:1/379: creat d4/df/f73 x:0 0 0
2026-03-10T10:19:33.692 INFO:tasks.workunit.client.1.vm05.stdout:2/376: link db/d12/l46 db/d61/l72 0
2026-03-10T10:19:33.692 INFO:tasks.workunit.client.0.vm02.stdout:8/414: write d1/f1b [1496700,47342] 0
2026-03-10T10:19:33.692 INFO:tasks.workunit.client.1.vm05.stdout:8/287: creat d7/f59 x:0 0 0
2026-03-10T10:19:33.697 INFO:tasks.workunit.client.1.vm05.stdout:1/380: read d4/df/d1c/f23 [2599885,95030] 0
2026-03-10T10:19:33.700 INFO:tasks.workunit.client.0.vm02.stdout:3/375: truncate d1/f12 3284667 0
2026-03-10T10:19:33.711 INFO:tasks.workunit.client.1.vm05.stdout:6/301: fdatasync dd/d36/d3f/f22 0
2026-03-10T10:19:33.712 INFO:tasks.workunit.client.0.vm02.stdout:1/415: dread d4/d2c/d53/f74 [0,4194304] 0
2026-03-10T10:19:33.713 INFO:tasks.workunit.client.1.vm05.stdout:0/386: mkdir d1/d2/d9/d31/d54/d7c 0
2026-03-10T10:19:33.717 INFO:tasks.workunit.client.0.vm02.stdout:9/375: dwrite da/d3c/d4c/d2c/f32 [0,4194304] 0
2026-03-10T10:19:33.726 INFO:tasks.workunit.client.1.vm05.stdout:2/377: creat db/d1c/d40/f73 x:0 0 0
2026-03-10T10:19:33.726 INFO:tasks.workunit.client.1.vm05.stdout:5/394: write da/db/f1e [801008,40710] 0
2026-03-10T10:19:33.727 INFO:tasks.workunit.client.1.vm05.stdout:2/378: truncate db/d1c/d40/f73 736760 0
2026-03-10T10:19:33.729 INFO:tasks.workunit.client.0.vm02.stdout:6/396: truncate d0/d8/d29/d2f/f38 3450848 0
2026-03-10T10:19:33.733 INFO:tasks.workunit.client.0.vm02.stdout:6/397: read - d0/d8/f5a zero size
2026-03-10T10:19:33.736 INFO:tasks.workunit.client.1.vm05.stdout:1/381: rmdir d4/df/d1c/d53/d66 39
2026-03-10T10:19:33.740 INFO:tasks.workunit.client.1.vm05.stdout:5/395: read f9 [2953386,18857] 0
2026-03-10T10:19:33.750 INFO:tasks.workunit.client.1.vm05.stdout:0/387: fdatasync d1/d2/d9/d31/d13/f73 0
2026-03-10T10:19:33.754 INFO:tasks.workunit.client.1.vm05.stdout:0/388: write d1/d2/d9/d31/d13/d2f/d49/f5c [174173,65875] 0
2026-03-10T10:19:33.764 INFO:tasks.workunit.client.1.vm05.stdout:8/288: unlink d7/d2f/f45 0
2026-03-10T10:19:33.764 INFO:tasks.workunit.client.1.vm05.stdout:9/336: write d0/d1/d13/de/f46 [68515,60722] 0
2026-03-10T10:19:33.765 INFO:tasks.workunit.client.1.vm05.stdout:5/396: rename da/db/d26/d35/d73/c59 to da/db/d26/d35/d38/c86 0
2026-03-10T10:19:33.766 INFO:tasks.workunit.client.1.vm05.stdout:6/302: mknod dd/d36/d3f/d12/d44/d2a/c5c 0
2026-03-10T10:19:33.769 INFO:tasks.workunit.client.1.vm05.stdout:2/379: mkdir db/d12/d74 0
2026-03-10T10:19:33.772 INFO:tasks.workunit.client.0.vm02.stdout:0/425: creat d9/d34/d3d/d65/f84 x:0 0 0
2026-03-10T10:19:33.773 INFO:tasks.workunit.client.0.vm02.stdout:4/528: dwrite d1/d41/d5e/d78/d37/f14 [0,4194304] 0
2026-03-10T10:19:33.777 INFO:tasks.workunit.client.0.vm02.stdout:5/574: rename d1/db/d11/d62/f8f to d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fc3 0
2026-03-10T10:19:33.777 INFO:tasks.workunit.client.0.vm02.stdout:8/415: creat d1/f7d x:0 0 0
2026-03-10T10:19:33.779 INFO:tasks.workunit.client.0.vm02.stdout:3/376: mkdir d1/d8/d21/d7d 0
2026-03-10T10:19:33.780 INFO:tasks.workunit.client.1.vm05.stdout:9/337: readlink d0/d1/d16/l47 0
2026-03-10T10:19:33.781 INFO:tasks.workunit.client.0.vm02.stdout:1/416: unlink d4/da/d1a/d47/d65/f7e 0
2026-03-10T10:19:33.785 INFO:tasks.workunit.client.1.vm05.stdout:9/338: readlink d0/d1/d16/l47 0
2026-03-10T10:19:33.786 INFO:tasks.workunit.client.1.vm05.stdout:5/397: rename da/db/d28/d32/f71 to da/db/d26/d35/d7a/f87 0
2026-03-10T10:19:33.786 INFO:tasks.workunit.client.1.vm05.stdout:1/382: creat d4/d3d/d6e/f74 x:0 0 0
2026-03-10T10:19:33.788 INFO:tasks.workunit.client.1.vm05.stdout:1/383: write d4/d20/f31 [4169302,4277] 0
2026-03-10T10:19:33.791 INFO:tasks.workunit.client.1.vm05.stdout:2/380: creat db/d28/d4f/f75 x:0 0 0
2026-03-10T10:19:33.797 INFO:tasks.workunit.client.1.vm05.stdout:0/389: symlink d1/d2/d9/d31/d13/d15/l7d 0
2026-03-10T10:19:33.800 INFO:tasks.workunit.client.1.vm05.stdout:6/303: dwrite f3 [0,4194304] 0
2026-03-10T10:19:33.810 INFO:tasks.workunit.client.0.vm02.stdout:9/376: dread da/d3c/d4c/f49 [0,4194304] 0
2026-03-10T10:19:33.812 INFO:tasks.workunit.client.1.vm05.stdout:4/298: link d1/d31/dc/f53 d1/d3/f62 0
2026-03-10T10:19:33.821 INFO:tasks.workunit.client.0.vm02.stdout:4/529: symlink d1/d52/lb0 0
2026-03-10T10:19:33.838 INFO:tasks.workunit.client.1.vm05.stdout:1/384: rename d4/l58 to d4/d3d/d6e/l75 0
2026-03-10T10:19:33.839 INFO:tasks.workunit.client.1.vm05.stdout:1/385: readlink d4/df/d1c/l6d 0
2026-03-10T10:19:33.840 INFO:tasks.workunit.client.0.vm02.stdout:3/377: creat d1/d8/d21/d73/f7e x:0 0 0
2026-03-10T10:19:33.842 INFO:tasks.workunit.client.1.vm05.stdout:2/381: rmdir db/d12 39
2026-03-10T10:19:33.843 INFO:tasks.workunit.client.0.vm02.stdout:1/417: symlink d4/da/d1a/d47/d65/l8b 0
2026-03-10T10:19:33.843 INFO:tasks.workunit.client.1.vm05.stdout:2/382: chown db/d61/d67 2 1
2026-03-10T10:19:33.844 INFO:tasks.workunit.client.1.vm05.stdout:6/304: mknod dd/d36/d3f/d12/d59/c5d 0
2026-03-10T10:19:33.855 INFO:tasks.workunit.client.1.vm05.stdout:0/390: dwrite d1/d2/d9/d31/d13/f4c [0,4194304] 0
2026-03-10T10:19:33.867 INFO:tasks.workunit.client.1.vm05.stdout:4/299: readlink d1/d3/l7 0
2026-03-10T10:19:33.868 INFO:tasks.workunit.client.1.vm05.stdout:0/391: sync
2026-03-10T10:19:33.869 INFO:tasks.workunit.client.1.vm05.stdout:0/392: readlink d1/d2/d9/d31/d13/d15/l59 0
2026-03-10T10:19:33.869 INFO:tasks.workunit.client.1.vm05.stdout:3/427: dread dd/f65 [0,4194304] 0
2026-03-10T10:19:33.870 INFO:tasks.workunit.client.1.vm05.stdout:0/393: write d1/d2/d39/d6e/f6f [232274,30679] 0
2026-03-10T10:19:33.877 INFO:tasks.workunit.client.0.vm02.stdout:0/426: creat d9/d18/d1a/d22/d24/d79/d7d/f85 x:0 0 0
2026-03-10T10:19:33.878 INFO:tasks.workunit.client.0.vm02.stdout:0/427: chown d9/d34/d3d/d65/f84 1577185931 1
2026-03-10T10:19:33.880 INFO:tasks.workunit.client.1.vm05.stdout:9/339: rename d0/d1/d13/de/d21/l29 to d0/d1/d13/de/d21/l6c 0
2026-03-10T10:19:33.881 INFO:tasks.workunit.client.1.vm05.stdout:2/383: mknod db/d61/c76 0
2026-03-10T10:19:33.884 INFO:tasks.workunit.client.0.vm02.stdout:3/378: rmdir d1/d8/d44 39
2026-03-10T10:19:33.888 INFO:tasks.workunit.client.0.vm02.stdout:1/418: mknod d4/da/d1a/d5b/c8c 0
2026-03-10T10:19:33.890 INFO:tasks.workunit.client.0.vm02.stdout:3/379: read d1/d8/d21/f4a [76851,71424] 0
2026-03-10T10:19:33.891 INFO:tasks.workunit.client.1.vm05.stdout:5/398: rename da/db/d26/c36 to da/db/d26/c88 0
2026-03-10T10:19:33.894 INFO:tasks.workunit.client.0.vm02.stdout:7/387: getdents d1/dc/d16/d28/d2d 0
2026-03-10T10:19:33.894 INFO:tasks.workunit.client.1.vm05.stdout:6/305: read dd/d36/d3f/d12/f20 [3146148,129492] 0
2026-03-10T10:19:33.894 INFO:tasks.workunit.client.1.vm05.stdout:0/394: mknod d1/d2/d9/d31/d13/d17/c7e 0
2026-03-10T10:19:33.895 INFO:tasks.workunit.client.0.vm02.stdout:7/388: write d1/d1b/f72 [423183,103468] 0
2026-03-10T10:19:33.897 INFO:tasks.workunit.client.1.vm05.stdout:3/428: truncate dd/d15/d24/d2c/f3e 4386252 0
2026-03-10T10:19:33.898 INFO:tasks.workunit.client.1.vm05.stdout:0/395: write d1/d2/d9/d31/d13/d17/f56 [17113,76332] 0
2026-03-10T10:19:33.898 INFO:tasks.workunit.client.0.vm02.stdout:4/530: creat d1/d41/d5e/d78/d1a/d49/d81/fb1 x:0 0 0
2026-03-10T10:19:33.898 INFO:tasks.workunit.client.0.vm02.stdout:4/531: stat d1/d41/d5e/d78/d1a/d49/f5c 0
2026-03-10T10:19:33.899 INFO:tasks.workunit.client.1.vm05.stdout:6/306: write dd/d36/d3f/d12/d44/d2a/d3d/f53 [1071171,14442] 0
2026-03-10T10:19:33.900 INFO:tasks.workunit.client.1.vm05.stdout:6/307: fdatasync dd/d36/d3f/d12/f4f 0
2026-03-10T10:19:33.904 INFO:tasks.workunit.client.0.vm02.stdout:2/427: write d0/f2d [470747,110037] 0
2026-03-10T10:19:33.906 INFO:tasks.workunit.client.1.vm05.stdout:0/396: dread d1/d2/d39/d3d/f44 [0,4194304] 0
2026-03-10T10:19:33.910 INFO:tasks.workunit.client.1.vm05.stdout:2/384: rmdir db/d28 39
2026-03-10T10:19:33.910 INFO:tasks.workunit.client.0.vm02.stdout:3/380: mknod d1/d20/c7f 0
2026-03-10T10:19:33.912 INFO:tasks.workunit.client.0.vm02.stdout:7/389: fdatasync d1/dc/f25 0
2026-03-10T10:19:33.913 INFO:tasks.workunit.client.0.vm02.stdout:7/390: readlink d1/dc/d16/d28/l2a 0
2026-03-10T10:19:33.923 INFO:tasks.workunit.client.0.vm02.stdout:8/416: link d1/d2/f36 d1/d1c/d43/f7e 0
2026-03-10T10:19:33.923 INFO:tasks.workunit.client.0.vm02.stdout:4/532: dread d1/d41/d5e/d78/d44/f59 [0,4194304] 0
2026-03-10T10:19:33.929 INFO:tasks.workunit.client.0.vm02.stdout:1/419: dread d4/da/d1a/d22/f32 [0,4194304] 0
2026-03-10T10:19:33.929 INFO:tasks.workunit.client.1.vm05.stdout:3/429: unlink dd/d39/d66/l80 0
2026-03-10T10:19:33.938 INFO:tasks.workunit.client.0.vm02.stdout:6/398: write d0/d8/d29/d6d/d32/d60/f73 [537592,9981] 0
2026-03-10T10:19:33.944 INFO:tasks.workunit.client.0.vm02.stdout:1/420: dwrite d4/da/d1a/f1c
[0,4194304] 0 2026-03-10T10:19:33.946 INFO:tasks.workunit.client.0.vm02.stdout:1/421: readlink d4/d2c/l72 0 2026-03-10T10:19:33.948 INFO:tasks.workunit.client.0.vm02.stdout:4/533: mkdir d1/d10/d88/db2 0 2026-03-10T10:19:33.949 INFO:tasks.workunit.client.0.vm02.stdout:1/422: chown d4/l41 6 1 2026-03-10T10:19:33.951 INFO:tasks.workunit.client.1.vm05.stdout:8/289: dwrite d7/d14/d24/f34 [0,4194304] 0 2026-03-10T10:19:33.960 INFO:tasks.workunit.client.0.vm02.stdout:8/417: mknod d1/d1c/d24/c7f 0 2026-03-10T10:19:33.962 INFO:tasks.workunit.client.1.vm05.stdout:5/399: creat da/db/d28/d6e/f89 x:0 0 0 2026-03-10T10:19:33.963 INFO:tasks.workunit.client.1.vm05.stdout:5/400: write da/db/d28/d32/f69 [790895,103531] 0 2026-03-10T10:19:33.967 INFO:tasks.workunit.client.0.vm02.stdout:6/399: fsync d0/d8/d29/d2f/f67 0 2026-03-10T10:19:33.967 INFO:tasks.workunit.client.1.vm05.stdout:3/430: creat dd/d39/f96 x:0 0 0 2026-03-10T10:19:33.970 INFO:tasks.workunit.client.0.vm02.stdout:1/423: write d4/da/f73 [946277,126692] 0 2026-03-10T10:19:33.972 INFO:tasks.workunit.client.0.vm02.stdout:4/534: dwrite d1/d41/d5e/d78/f4 [0,4194304] 0 2026-03-10T10:19:33.987 INFO:tasks.workunit.client.0.vm02.stdout:6/400: dread d0/d8/f5a [0,4194304] 0 2026-03-10T10:19:33.991 INFO:tasks.workunit.client.0.vm02.stdout:3/381: dread d1/d8/d21/f4a [0,4194304] 0 2026-03-10T10:19:33.994 INFO:tasks.workunit.client.0.vm02.stdout:3/382: readlink d1/d8/l2d 0 2026-03-10T10:19:34.002 INFO:tasks.workunit.client.0.vm02.stdout:8/418: creat d1/f80 x:0 0 0 2026-03-10T10:19:34.010 INFO:tasks.workunit.client.0.vm02.stdout:4/535: fsync d1/d32/d3e/f7d 0 2026-03-10T10:19:34.017 INFO:tasks.workunit.client.0.vm02.stdout:6/401: rename d0/d8/f45 to d0/d8/d9/f84 0 2026-03-10T10:19:34.018 INFO:tasks.workunit.client.1.vm05.stdout:4/300: mkdir d1/d31/dc/d40/d63 0 2026-03-10T10:19:34.018 INFO:tasks.workunit.client.1.vm05.stdout:7/431: dread d5/d1d/d20/d35/d6f/d82/f86 [0,4194304] 0 2026-03-10T10:19:34.018 
INFO:tasks.workunit.client.1.vm05.stdout:3/431: unlink dd/d15/d1f/f75 0 2026-03-10T10:19:34.018 INFO:tasks.workunit.client.1.vm05.stdout:9/340: link d0/d1/d16/f36 d0/d1/f6d 0 2026-03-10T10:19:34.018 INFO:tasks.workunit.client.1.vm05.stdout:6/308: fsync dd/d36/d3f/d12/d44/f46 0 2026-03-10T10:19:34.018 INFO:tasks.workunit.client.1.vm05.stdout:0/397: creat d1/d2/d9/d31/d54/f7f x:0 0 0 2026-03-10T10:19:34.018 INFO:tasks.workunit.client.0.vm02.stdout:6/402: chown d0/d8/d29/d2f/d50/c5f 192070611 1 2026-03-10T10:19:34.018 INFO:tasks.workunit.client.0.vm02.stdout:8/419: chown d1/d1c/f72 28 1 2026-03-10T10:19:34.018 INFO:tasks.workunit.client.0.vm02.stdout:1/424: symlink d4/l8d 0 2026-03-10T10:19:34.018 INFO:tasks.workunit.client.0.vm02.stdout:8/420: stat d1/d1c/d43/f7a 0 2026-03-10T10:19:34.018 INFO:tasks.workunit.client.0.vm02.stdout:8/421: chown d1/d1c/d24/c55 9969 1 2026-03-10T10:19:34.021 INFO:tasks.workunit.client.1.vm05.stdout:8/290: truncate d7/d14/d3a/f50 1835255 0 2026-03-10T10:19:34.022 INFO:tasks.workunit.client.0.vm02.stdout:1/425: dwrite d4/da/d27/d38/f3b [0,4194304] 0 2026-03-10T10:19:34.026 INFO:tasks.workunit.client.0.vm02.stdout:6/403: mknod d0/d8/d29/d52/c85 0 2026-03-10T10:19:34.026 INFO:tasks.workunit.client.1.vm05.stdout:2/385: rmdir db/d4e/d55 0 2026-03-10T10:19:34.027 INFO:tasks.workunit.client.1.vm05.stdout:5/401: mkdir da/db/d28/d8a 0 2026-03-10T10:19:34.028 INFO:tasks.workunit.client.1.vm05.stdout:5/402: dread - da/db/d26/d35/d7a/f87 zero size 2026-03-10T10:19:34.028 INFO:tasks.workunit.client.0.vm02.stdout:3/383: rename d1/f28 to d1/f80 0 2026-03-10T10:19:34.031 INFO:tasks.workunit.client.1.vm05.stdout:7/432: sync 2026-03-10T10:19:34.032 INFO:tasks.workunit.client.1.vm05.stdout:6/309: dwrite dd/d36/d3f/d12/d44/d2a/d3d/f53 [0,4194304] 0 2026-03-10T10:19:34.041 INFO:tasks.workunit.client.0.vm02.stdout:1/426: getdents d4/da/d1a/d22 0 2026-03-10T10:19:34.042 INFO:tasks.workunit.client.1.vm05.stdout:9/341: mkdir d0/d1/d16/d6e 0 2026-03-10T10:19:34.043 
INFO:tasks.workunit.client.1.vm05.stdout:8/291: rename d7/d14/d24/d3f/c4a to d7/d2f/d57/c5a 0 2026-03-10T10:19:34.044 INFO:tasks.workunit.client.0.vm02.stdout:3/384: link d1/d20/f41 d1/f81 0 2026-03-10T10:19:34.044 INFO:tasks.workunit.client.0.vm02.stdout:5/575: write d1/db/d11/d16/d48/f5b [290873,53915] 0 2026-03-10T10:19:34.046 INFO:tasks.workunit.client.1.vm05.stdout:4/301: mkdir d1/d64 0 2026-03-10T10:19:34.047 INFO:tasks.workunit.client.0.vm02.stdout:1/427: truncate d4/d1b/f44 1772071 0 2026-03-10T10:19:34.051 INFO:tasks.workunit.client.1.vm05.stdout:7/433: rmdir d5/d17/d66 39 2026-03-10T10:19:34.051 INFO:tasks.workunit.client.1.vm05.stdout:2/386: stat db/d28/d4f/d59/f6f 0 2026-03-10T10:19:34.051 INFO:tasks.workunit.client.1.vm05.stdout:5/403: symlink da/db/d26/d70/l8b 0 2026-03-10T10:19:34.051 INFO:tasks.workunit.client.0.vm02.stdout:1/428: dread - d4/d2c/d53/f75 zero size 2026-03-10T10:19:34.051 INFO:tasks.workunit.client.0.vm02.stdout:3/385: truncate d1/f3 2164120 0 2026-03-10T10:19:34.051 INFO:tasks.workunit.client.0.vm02.stdout:3/386: write d1/d20/f7b [928983,49066] 0 2026-03-10T10:19:34.052 INFO:tasks.workunit.client.1.vm05.stdout:0/398: symlink d1/d2/d9/d31/d13/l80 0 2026-03-10T10:19:34.055 INFO:tasks.workunit.client.1.vm05.stdout:9/342: symlink d0/df/d11/l6f 0 2026-03-10T10:19:34.055 INFO:tasks.workunit.client.1.vm05.stdout:9/343: chown d0/d1/d57 2139 1 2026-03-10T10:19:34.056 INFO:tasks.workunit.client.1.vm05.stdout:8/292: dread - d7/d14/f33 zero size 2026-03-10T10:19:34.059 INFO:tasks.workunit.client.1.vm05.stdout:9/344: sync 2026-03-10T10:19:34.062 INFO:tasks.workunit.client.1.vm05.stdout:6/310: rename dd/d36/d3f/d12/d24/c49 to dd/d36/d3f/d12/d44/d30/c5e 0 2026-03-10T10:19:34.062 INFO:tasks.workunit.client.1.vm05.stdout:0/399: creat d1/d2/d9/d31/d12/d20/f81 x:0 0 0 2026-03-10T10:19:34.064 INFO:tasks.workunit.client.1.vm05.stdout:7/434: rename d5/l72 to d5/d17/d66/l88 0 2026-03-10T10:19:34.067 INFO:tasks.workunit.client.1.vm05.stdout:6/311: dread f2 
[0,4194304] 0 2026-03-10T10:19:34.068 INFO:tasks.workunit.client.1.vm05.stdout:0/400: creat d1/d2/d39/d3d/f82 x:0 0 0 2026-03-10T10:19:34.070 INFO:tasks.workunit.client.1.vm05.stdout:5/404: dwrite da/db/d26/d35/d38/f6c [0,4194304] 0 2026-03-10T10:19:34.074 INFO:tasks.workunit.client.1.vm05.stdout:7/435: rename d5/c14 to d5/d1d/d20/d35/d6f/d82/c89 0 2026-03-10T10:19:34.075 INFO:tasks.workunit.client.1.vm05.stdout:6/312: write fb [4143471,44455] 0 2026-03-10T10:19:34.075 INFO:tasks.workunit.client.1.vm05.stdout:0/401: write d1/d2/d9/d31/d13/d17/f56 [2831904,82619] 0 2026-03-10T10:19:34.077 INFO:tasks.workunit.client.1.vm05.stdout:6/313: read dd/d36/d3f/d12/d58/f5a [129926,119100] 0 2026-03-10T10:19:34.082 INFO:tasks.workunit.client.1.vm05.stdout:0/402: symlink d1/d2/d39/d3d/l83 0 2026-03-10T10:19:34.086 INFO:tasks.workunit.client.1.vm05.stdout:6/314: dwrite dd/d1b/f40 [0,4194304] 0 2026-03-10T10:19:34.088 INFO:tasks.workunit.client.1.vm05.stdout:6/315: creat dd/d36/f5f x:0 0 0 2026-03-10T10:19:34.089 INFO:tasks.workunit.client.1.vm05.stdout:6/316: truncate dd/f14 5312669 0 2026-03-10T10:19:34.089 INFO:tasks.workunit.client.1.vm05.stdout:0/403: creat d1/d2/d9/d31/f84 x:0 0 0 2026-03-10T10:19:34.096 INFO:tasks.workunit.client.1.vm05.stdout:6/317: dwrite fb [0,4194304] 0 2026-03-10T10:19:34.096 INFO:tasks.workunit.client.1.vm05.stdout:0/404: dwrite d1/d2/d9/d31/d13/f7a [0,4194304] 0 2026-03-10T10:19:34.109 INFO:tasks.workunit.client.1.vm05.stdout:9/345: dread d0/d1/d13/f8 [0,4194304] 0 2026-03-10T10:19:34.113 INFO:tasks.workunit.client.1.vm05.stdout:0/405: mknod d1/d2/d9/d31/d13/d2f/d49/c85 0 2026-03-10T10:19:34.114 INFO:tasks.workunit.client.1.vm05.stdout:6/318: mknod dd/d36/d3f/d12/d44/d30/d4a/c60 0 2026-03-10T10:19:34.115 INFO:tasks.workunit.client.1.vm05.stdout:9/346: mkdir d0/d70 0 2026-03-10T10:19:34.116 INFO:tasks.workunit.client.1.vm05.stdout:9/347: read d0/d1/d13/de/f46 [108222,84998] 0 2026-03-10T10:19:34.117 INFO:tasks.workunit.client.1.vm05.stdout:9/348: 
write d0/d1/d13/f6b [858153,109972] 0 2026-03-10T10:19:34.124 INFO:tasks.workunit.client.1.vm05.stdout:0/406: creat d1/d2/d9/d31/d54/f86 x:0 0 0 2026-03-10T10:19:34.124 INFO:tasks.workunit.client.1.vm05.stdout:0/407: readlink d1/d2/d9/d31/d12/d20/l43 0 2026-03-10T10:19:34.125 INFO:tasks.workunit.client.1.vm05.stdout:6/319: creat dd/d36/d3f/f61 x:0 0 0 2026-03-10T10:19:34.126 INFO:tasks.workunit.client.1.vm05.stdout:9/349: dwrite d0/df/d11/f50 [0,4194304] 0 2026-03-10T10:19:34.127 INFO:tasks.workunit.client.1.vm05.stdout:0/408: chown d1/d2/d39/d3d/f72 1 1 2026-03-10T10:19:34.129 INFO:tasks.workunit.client.1.vm05.stdout:6/320: mknod dd/d36/c62 0 2026-03-10T10:19:34.134 INFO:tasks.workunit.client.1.vm05.stdout:9/350: creat d0/d1/d13/de/d21/f71 x:0 0 0 2026-03-10T10:19:34.134 INFO:tasks.workunit.client.1.vm05.stdout:0/409: dwrite d1/d2/d39/d3d/f82 [0,4194304] 0 2026-03-10T10:19:34.135 INFO:tasks.workunit.client.1.vm05.stdout:0/410: rename d1 to d1/d2/d9/d87 22 2026-03-10T10:19:34.141 INFO:tasks.workunit.client.1.vm05.stdout:6/321: dwrite fb [0,4194304] 0 2026-03-10T10:19:34.142 INFO:tasks.workunit.client.1.vm05.stdout:0/411: link d1/d2/d39/d3d/f44 d1/d2/d9/d31/d13/d2f/f88 0 2026-03-10T10:19:34.142 INFO:tasks.workunit.client.1.vm05.stdout:9/351: getdents d0/d1/d16/d6e 0 2026-03-10T10:19:34.145 INFO:tasks.workunit.client.1.vm05.stdout:0/412: write d1/d2/d9/d31/d13/f7a [4149855,2966] 0 2026-03-10T10:19:34.146 INFO:tasks.workunit.client.1.vm05.stdout:6/322: write fb [4212697,2677] 0 2026-03-10T10:19:34.149 INFO:tasks.workunit.client.1.vm05.stdout:0/413: dwrite d1/d2/d9/d31/d54/f4 [0,4194304] 0 2026-03-10T10:19:34.157 INFO:tasks.workunit.client.1.vm05.stdout:9/352: dwrite d0/d1/d16/f3d [0,4194304] 0 2026-03-10T10:19:34.158 INFO:tasks.workunit.client.1.vm05.stdout:6/323: fsync dd/d36/d3f/f1e 0 2026-03-10T10:19:34.160 INFO:tasks.workunit.client.1.vm05.stdout:0/414: creat d1/d2/d9/d31/d13/d15/d4e/f89 x:0 0 0 2026-03-10T10:19:34.161 
INFO:tasks.workunit.client.1.vm05.stdout:0/415: chown d1/d2/d9/d31 58247974 1 2026-03-10T10:19:34.161 INFO:tasks.workunit.client.1.vm05.stdout:9/353: creat d0/d1/d16/f72 x:0 0 0 2026-03-10T10:19:34.162 INFO:tasks.workunit.client.1.vm05.stdout:9/354: readlink d0/d1/d13/d26/l37 0 2026-03-10T10:19:34.162 INFO:tasks.workunit.client.1.vm05.stdout:9/355: chown d0/d1/d16/l1c 3357 1 2026-03-10T10:19:34.165 INFO:tasks.workunit.client.1.vm05.stdout:6/324: dread dd/d1b/f40 [0,4194304] 0 2026-03-10T10:19:34.176 INFO:tasks.workunit.client.1.vm05.stdout:6/325: mkdir dd/d36/d3f/d12/d44/d63 0 2026-03-10T10:19:34.185 INFO:tasks.workunit.client.1.vm05.stdout:6/326: chown dd/d36/d3f/c31 881723 1 2026-03-10T10:19:34.189 INFO:tasks.workunit.client.1.vm05.stdout:6/327: creat dd/d36/d3f/d12/d44/d2a/d3d/d3e/f64 x:0 0 0 2026-03-10T10:19:34.191 INFO:tasks.workunit.client.1.vm05.stdout:6/328: write dd/f14 [4634906,32978] 0 2026-03-10T10:19:34.194 INFO:tasks.workunit.client.1.vm05.stdout:6/329: dread dd/d36/d3f/d12/d44/f2f [0,4194304] 0 2026-03-10T10:19:34.196 INFO:tasks.workunit.client.1.vm05.stdout:6/330: chown dd/d36/d3f/d12/d44/l38 3 1 2026-03-10T10:19:34.202 INFO:tasks.workunit.client.1.vm05.stdout:6/331: getdents dd/d36/d3f/d12/d44/d2a 0 2026-03-10T10:19:34.203 INFO:tasks.workunit.client.1.vm05.stdout:6/332: readlink dd/d1b/l54 0 2026-03-10T10:19:34.203 INFO:tasks.workunit.client.0.vm02.stdout:9/377: write da/d3c/d4c/f41 [2444943,125130] 0 2026-03-10T10:19:34.207 INFO:tasks.workunit.client.0.vm02.stdout:9/378: read - da/d3c/d4c/d38/f45 zero size 2026-03-10T10:19:34.210 INFO:tasks.workunit.client.0.vm02.stdout:9/379: dread da/f1f [0,4194304] 0 2026-03-10T10:19:34.241 INFO:tasks.workunit.client.0.vm02.stdout:2/428: dread d0/d1a/d49/f54 [0,4194304] 0 2026-03-10T10:19:34.244 INFO:tasks.workunit.client.0.vm02.stdout:2/429: fdatasync d0/f4d 0 2026-03-10T10:19:34.244 INFO:tasks.workunit.client.1.vm05.stdout:1/386: dwrite d4/d39/f54 [0,4194304] 0 2026-03-10T10:19:34.248 
INFO:tasks.workunit.client.0.vm02.stdout:2/430: mkdir d0/d8c 0 2026-03-10T10:19:34.251 INFO:tasks.workunit.client.0.vm02.stdout:0/428: write d9/f28 [1204965,12922] 0 2026-03-10T10:19:34.252 INFO:tasks.workunit.client.0.vm02.stdout:0/429: truncate d9/d34/d3d/d65/f84 709396 0 2026-03-10T10:19:34.252 INFO:tasks.workunit.client.1.vm05.stdout:1/387: dread d4/d3d/f57 [0,4194304] 0 2026-03-10T10:19:34.253 INFO:tasks.workunit.client.0.vm02.stdout:7/391: write d1/dc/d16/f48 [249713,4082] 0 2026-03-10T10:19:34.254 INFO:tasks.workunit.client.0.vm02.stdout:7/392: readlink d1/dc/d16/d28/l5e 0 2026-03-10T10:19:34.255 INFO:tasks.workunit.client.1.vm05.stdout:1/388: dwrite d4/d37/d4e/f62 [0,4194304] 0 2026-03-10T10:19:34.256 INFO:tasks.workunit.client.0.vm02.stdout:4/536: getdents d1/d10/d88 0 2026-03-10T10:19:34.256 INFO:tasks.workunit.client.0.vm02.stdout:4/537: readlink d1/l4e 0 2026-03-10T10:19:34.259 INFO:tasks.workunit.client.0.vm02.stdout:4/538: dread d1/d41/d5e/d78/d37/f14 [0,4194304] 0 2026-03-10T10:19:34.261 INFO:tasks.workunit.client.1.vm05.stdout:8/293: write d7/d14/d24/f34 [4792062,130787] 0 2026-03-10T10:19:34.265 INFO:tasks.workunit.client.0.vm02.stdout:7/393: symlink d1/dc/d10/d38/l74 0 2026-03-10T10:19:34.280 INFO:tasks.workunit.client.0.vm02.stdout:4/539: creat d1/d32/fb3 x:0 0 0 2026-03-10T10:19:34.280 INFO:tasks.workunit.client.1.vm05.stdout:1/389: mkdir d4/df/d76 0 2026-03-10T10:19:34.281 INFO:tasks.workunit.client.1.vm05.stdout:1/390: chown d4/l22 1043 1 2026-03-10T10:19:34.286 INFO:tasks.workunit.client.0.vm02.stdout:8/422: dwrite d1/d1c/d23/d25/f3d [0,4194304] 0 2026-03-10T10:19:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:34 vm05.local ceph-mon[59051]: pgmap v156: 65 pgs: 65 active+clean; 1.6 GiB data, 6.1 GiB used, 114 GiB / 120 GiB avail; 34 MiB/s rd, 139 MiB/s wr, 237 op/s 2026-03-10T10:19:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:34 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' 
entity='mgr.vm02.zmavgl' 2026-03-10T10:19:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:34 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:34 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:34.290 INFO:tasks.workunit.client.0.vm02.stdout:7/394: dread d1/dc/d16/d28/d2d/f2f [0,4194304] 0 2026-03-10T10:19:34.290 INFO:tasks.workunit.client.1.vm05.stdout:8/294: read d7/f9 [2162118,109515] 0 2026-03-10T10:19:34.291 INFO:tasks.workunit.client.0.vm02.stdout:7/395: readlink d1/dc/d16/d28/d2d/l45 0 2026-03-10T10:19:34.304 INFO:tasks.workunit.client.0.vm02.stdout:4/540: unlink d1/d10/db/f35 0 2026-03-10T10:19:34.307 INFO:tasks.workunit.client.0.vm02.stdout:6/404: write d0/d8/d29/d2f/f55 [120265,8257] 0 2026-03-10T10:19:34.314 INFO:tasks.workunit.client.0.vm02.stdout:7/396: dwrite d1/f6b [4194304,4194304] 0 2026-03-10T10:19:34.320 INFO:tasks.workunit.client.0.vm02.stdout:4/541: read d1/d52/d53/f66 [128775,56703] 0 2026-03-10T10:19:34.321 INFO:tasks.workunit.client.0.vm02.stdout:8/423: getdents d1/d1c/d43/d6a/d7c 0 2026-03-10T10:19:34.323 INFO:tasks.workunit.client.0.vm02.stdout:6/405: mknod d0/d7f/c86 0 2026-03-10T10:19:34.326 INFO:tasks.workunit.client.0.vm02.stdout:6/406: mkdir d0/d87 0 2026-03-10T10:19:34.342 INFO:tasks.workunit.client.0.vm02.stdout:4/542: dread d1/d41/d5e/d78/d44/f90 [0,4194304] 0 2026-03-10T10:19:34.342 INFO:tasks.workunit.client.0.vm02.stdout:4/543: rename d1/d41 to d1/d41/db4 22 2026-03-10T10:19:34.343 INFO:tasks.workunit.client.0.vm02.stdout:4/544: write d1/d41/d5e/d78/d1a/f93 [431324,94554] 0 2026-03-10T10:19:34.359 INFO:tasks.workunit.client.0.vm02.stdout:9/380: dread da/f15 [0,4194304] 0 2026-03-10T10:19:34.361 INFO:tasks.workunit.client.0.vm02.stdout:9/381: read da/ff [3103156,118488] 0 
2026-03-10T10:19:34.362 INFO:tasks.workunit.client.1.vm05.stdout:8/295: fsync d7/d14/f23 0 2026-03-10T10:19:34.366 INFO:tasks.workunit.client.1.vm05.stdout:8/296: rmdir d7/d2f/d57 39 2026-03-10T10:19:34.367 INFO:tasks.workunit.client.1.vm05.stdout:8/297: chown d7/d2f/l32 50 1 2026-03-10T10:19:34.368 INFO:tasks.workunit.client.0.vm02.stdout:9/382: fdatasync da/f25 0 2026-03-10T10:19:34.372 INFO:tasks.workunit.client.0.vm02.stdout:9/383: dwrite da/d3c/d4c/d56/f61 [0,4194304] 0 2026-03-10T10:19:34.383 INFO:tasks.workunit.client.0.vm02.stdout:9/384: unlink da/d3c/d4c/d2c/d34/d35/c4f 0 2026-03-10T10:19:34.384 INFO:tasks.workunit.client.0.vm02.stdout:9/385: dread da/d3c/d4c/f49 [0,4194304] 0 2026-03-10T10:19:34.384 INFO:tasks.workunit.client.0.vm02.stdout:9/386: dread - da/d3c/d4c/d38/f45 zero size 2026-03-10T10:19:34.386 INFO:tasks.workunit.client.1.vm05.stdout:8/298: read d7/f11 [6580883,48283] 0 2026-03-10T10:19:34.390 INFO:tasks.workunit.client.0.vm02.stdout:9/387: dwrite da/f28 [0,4194304] 0 2026-03-10T10:19:34.391 INFO:tasks.workunit.client.1.vm05.stdout:8/299: write d7/d14/f23 [1439996,68660] 0 2026-03-10T10:19:34.395 INFO:tasks.workunit.client.0.vm02.stdout:9/388: symlink da/d3c/d4c/d2c/l7e 0 2026-03-10T10:19:34.412 INFO:tasks.workunit.client.1.vm05.stdout:8/300: getdents d7 0 2026-03-10T10:19:34.412 INFO:tasks.workunit.client.0.vm02.stdout:9/389: dwrite da/d3c/d4c/d2c/d34/f57 [0,4194304] 0 2026-03-10T10:19:34.412 INFO:tasks.workunit.client.0.vm02.stdout:9/390: fsync da/d3c/d4c/f29 0 2026-03-10T10:19:34.412 INFO:tasks.workunit.client.0.vm02.stdout:9/391: dread da/f15 [0,4194304] 0 2026-03-10T10:19:34.412 INFO:tasks.workunit.client.0.vm02.stdout:9/392: creat da/d3c/d4c/d38/d4a/f7f x:0 0 0 2026-03-10T10:19:34.474 INFO:tasks.workunit.client.0.vm02.stdout:3/387: write d1/d8/d21/f2a [1013425,97933] 0 2026-03-10T10:19:34.474 INFO:tasks.workunit.client.1.vm05.stdout:3/432: write dd/d20/d56/f68 [1417302,116136] 0 2026-03-10T10:19:34.478 
INFO:tasks.workunit.client.1.vm05.stdout:2/387: write db/d28/f3f [231502,65504] 0 2026-03-10T10:19:34.478 INFO:tasks.workunit.client.0.vm02.stdout:1/429: dwrite d4/da/d1a/f40 [0,4194304] 0 2026-03-10T10:19:34.481 INFO:tasks.workunit.client.0.vm02.stdout:5/576: truncate d1/d6a/faa 3864305 0 2026-03-10T10:19:34.487 INFO:tasks.workunit.client.1.vm05.stdout:3/433: mknod dd/d15/d24/d8e/c97 0 2026-03-10T10:19:34.488 INFO:tasks.workunit.client.0.vm02.stdout:1/430: readlink d4/da/d27/d38/d3c/l5f 0 2026-03-10T10:19:34.491 INFO:tasks.workunit.client.0.vm02.stdout:1/431: dwrite d4/da/d27/f6a [0,4194304] 0 2026-03-10T10:19:34.492 INFO:tasks.workunit.client.0.vm02.stdout:1/432: write d4/f81 [625824,16859] 0 2026-03-10T10:19:34.504 INFO:tasks.workunit.client.1.vm05.stdout:2/388: link db/d28/f30 db/d61/d67/f77 0 2026-03-10T10:19:34.511 INFO:tasks.workunit.client.1.vm05.stdout:2/389: dwrite db/f24 [0,4194304] 0 2026-03-10T10:19:34.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:34 vm02.local ceph-mon[50200]: pgmap v156: 65 pgs: 65 active+clean; 1.6 GiB data, 6.1 GiB used, 114 GiB / 120 GiB avail; 34 MiB/s rd, 139 MiB/s wr, 237 op/s 2026-03-10T10:19:34.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:34 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:34.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:34 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:34.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:34 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:34.540 INFO:tasks.workunit.client.1.vm05.stdout:4/302: write d1/d31/dc/f2a [2808863,53691] 0 2026-03-10T10:19:34.545 INFO:tasks.workunit.client.0.vm02.stdout:5/577: symlink d1/db/d11/d13/d28/lc4 0 2026-03-10T10:19:34.547 
INFO:tasks.workunit.client.1.vm05.stdout:7/436: dwrite d5/d17/f52 [0,4194304] 0 2026-03-10T10:19:34.549 INFO:tasks.workunit.client.0.vm02.stdout:5/578: creat d1/db/d11/d16/d79/d85/d93/fc5 x:0 0 0 2026-03-10T10:19:34.551 INFO:tasks.workunit.client.0.vm02.stdout:5/579: chown d1/db/d11/d16/d79/c8d 2 1 2026-03-10T10:19:34.553 INFO:tasks.workunit.client.0.vm02.stdout:5/580: creat d1/db/d11/d1a/fc6 x:0 0 0 2026-03-10T10:19:34.554 INFO:tasks.workunit.client.1.vm05.stdout:4/303: mkdir d1/d3/d65 0 2026-03-10T10:19:34.556 INFO:tasks.workunit.client.1.vm05.stdout:3/434: dread dd/d15/d24/d2c/f60 [0,4194304] 0 2026-03-10T10:19:34.558 INFO:tasks.workunit.client.1.vm05.stdout:0/416: write d1/d2/d9/d31/d12/f1e [1405665,109663] 0 2026-03-10T10:19:34.558 INFO:tasks.workunit.client.1.vm05.stdout:4/304: creat d1/d31/dc/d40/d45/f66 x:0 0 0 2026-03-10T10:19:34.560 INFO:tasks.workunit.client.0.vm02.stdout:2/431: write d0/d10/f1f [2628864,104847] 0 2026-03-10T10:19:34.563 INFO:tasks.workunit.client.1.vm05.stdout:0/417: truncate d1/d2/d9/d31/d13/d17/f57 4355925 0 2026-03-10T10:19:34.577 INFO:tasks.workunit.client.1.vm05.stdout:9/356: dwrite d0/d1/fb [0,4194304] 0 2026-03-10T10:19:34.577 INFO:tasks.workunit.client.1.vm05.stdout:6/333: truncate dd/f29 5415434 0 2026-03-10T10:19:34.577 INFO:tasks.workunit.client.1.vm05.stdout:7/437: link d5/d1d/d20/c21 d5/d1d/d20/d35/d6f/c8a 0 2026-03-10T10:19:34.577 INFO:tasks.workunit.client.1.vm05.stdout:1/391: truncate d4/d39/f54 3484510 0 2026-03-10T10:19:34.577 INFO:tasks.workunit.client.0.vm02.stdout:0/430: write d9/d18/d1a/d22/d24/f26 [2556191,106718] 0 2026-03-10T10:19:34.577 INFO:tasks.workunit.client.0.vm02.stdout:0/431: chown d9/d18/d1a/d22 195940 1 2026-03-10T10:19:34.577 INFO:tasks.workunit.client.0.vm02.stdout:0/432: chown d9/d34/d3d/d65 7574 1 2026-03-10T10:19:34.577 INFO:tasks.workunit.client.0.vm02.stdout:0/433: dread d9/d34/d3d/f58 [0,4194304] 0 2026-03-10T10:19:34.577 INFO:tasks.workunit.client.0.vm02.stdout:0/434: fdatasync 
d9/d34/d3d/d7b/f3a 0 2026-03-10T10:19:34.577 INFO:tasks.workunit.client.0.vm02.stdout:0/435: stat d9/d18/d1a/d22/f3f 0 2026-03-10T10:19:34.577 INFO:tasks.workunit.client.0.vm02.stdout:0/436: chown d9/d34/d3d/d67/l75 2999466 1 2026-03-10T10:19:34.578 INFO:tasks.workunit.client.1.vm05.stdout:3/435: dwrite dd/d15/d24/d2c/d3b/f77 [0,4194304] 0 2026-03-10T10:19:34.583 INFO:tasks.workunit.client.0.vm02.stdout:0/437: dread d9/d18/f1e [0,4194304] 0 2026-03-10T10:19:34.588 INFO:tasks.workunit.client.1.vm05.stdout:5/405: dwrite da/db/f3b [0,4194304] 0 2026-03-10T10:19:34.592 INFO:tasks.workunit.client.1.vm05.stdout:4/305: dread d1/d31/dc/f2e [0,4194304] 0 2026-03-10T10:19:34.595 INFO:tasks.workunit.client.0.vm02.stdout:8/424: dwrite d1/f6d [0,4194304] 0 2026-03-10T10:19:34.602 INFO:tasks.workunit.client.1.vm05.stdout:7/438: creat d5/d17/d66/f8b x:0 0 0 2026-03-10T10:19:34.602 INFO:tasks.workunit.client.0.vm02.stdout:7/397: truncate d1/dc/d16/d28/d2d/d36/f5c 3595235 0 2026-03-10T10:19:34.602 INFO:tasks.workunit.client.0.vm02.stdout:6/407: write d0/f21 [4073575,86546] 0 2026-03-10T10:19:34.602 INFO:tasks.workunit.client.0.vm02.stdout:6/408: readlink d0/d8/d29/d2f/l65 0 2026-03-10T10:19:34.611 INFO:tasks.workunit.client.0.vm02.stdout:4/545: dwrite d1/d32/d3e/f42 [4194304,4194304] 0 2026-03-10T10:19:34.611 INFO:tasks.workunit.client.1.vm05.stdout:6/334: dread dd/d1b/f1d [0,4194304] 0 2026-03-10T10:19:34.615 INFO:tasks.workunit.client.1.vm05.stdout:1/392: creat d4/d3d/f77 x:0 0 0 2026-03-10T10:19:34.618 INFO:tasks.workunit.client.1.vm05.stdout:3/436: truncate dd/d15/d24/f63 53608 0 2026-03-10T10:19:34.618 INFO:tasks.workunit.client.0.vm02.stdout:6/409: symlink d0/d8/d9/d7a/l88 0 2026-03-10T10:19:34.631 INFO:tasks.workunit.client.0.vm02.stdout:4/546: creat d1/d10/fb5 x:0 0 0 2026-03-10T10:19:34.644 INFO:tasks.workunit.client.0.vm02.stdout:4/547: dread - d1/d52/d53/f83 zero size 2026-03-10T10:19:34.644 INFO:tasks.workunit.client.0.vm02.stdout:4/548: mknod d1/d41/d5e/d78/d7f/d82/cb6 
0 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.1.vm05.stdout:4/306: rmdir d1/d31 39 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.1.vm05.stdout:7/439: write d5/d1d/d20/d35/d6f/d82/f86 [4313855,111476] 0 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.1.vm05.stdout:7/440: dread - d5/dd/f73 zero size 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.1.vm05.stdout:8/301: dwrite d7/f21 [0,4194304] 0 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.1.vm05.stdout:6/335: creat dd/d36/d3f/d12/d58/f65 x:0 0 0 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.1.vm05.stdout:4/307: readlink d1/d31/l4c 0 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.1.vm05.stdout:6/336: read dd/f14 [1773659,56612] 0 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.1.vm05.stdout:9/357: getdents d0/d1/d13/de 0 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.1.vm05.stdout:4/308: write d1/d31/dc/f1f [1222479,126248] 0 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.1.vm05.stdout:5/406: link da/db/d28/c7f da/db/c8c 0 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.1.vm05.stdout:6/337: symlink dd/d36/d3f/d12/d58/l66 0 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.1.vm05.stdout:6/338: write dd/d36/d3f/d12/f56 [20667,6811] 0 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.0.vm02.stdout:2/432: dread d0/d1a/d49/d5e/f68 [0,4194304] 0 2026-03-10T10:19:34.645 INFO:tasks.workunit.client.0.vm02.stdout:9/393: write da/d3c/d4c/f3b [598016,88299] 0 2026-03-10T10:19:34.649 INFO:tasks.workunit.client.1.vm05.stdout:6/339: dread dd/d1b/f1d [0,4194304] 0 2026-03-10T10:19:34.669 INFO:tasks.workunit.client.0.vm02.stdout:2/433: symlink d0/d1a/d24/d80/l8d 0 2026-03-10T10:19:34.669 INFO:tasks.workunit.client.0.vm02.stdout:2/434: getdents d0/d1a 0 2026-03-10T10:19:34.669 INFO:tasks.workunit.client.0.vm02.stdout:2/435: fsync d0/f36 0 2026-03-10T10:19:34.669 INFO:tasks.workunit.client.1.vm05.stdout:6/340: fsync dd/d1b/f40 0 2026-03-10T10:19:34.669 
INFO:tasks.workunit.client.1.vm05.stdout:5/407: creat da/db/d28/f8d x:0 0 0 2026-03-10T10:19:34.670 INFO:tasks.workunit.client.1.vm05.stdout:5/408: dwrite da/db/f1e [0,4194304] 0 2026-03-10T10:19:34.671 INFO:tasks.workunit.client.0.vm02.stdout:1/433: sync 2026-03-10T10:19:34.672 INFO:tasks.workunit.client.0.vm02.stdout:0/438: sync 2026-03-10T10:19:34.672 INFO:tasks.workunit.client.0.vm02.stdout:0/439: chown d9/d34 22 1 2026-03-10T10:19:34.675 INFO:tasks.workunit.client.0.vm02.stdout:2/436: dwrite d0/f72 [4194304,4194304] 0 2026-03-10T10:19:34.677 INFO:tasks.workunit.client.1.vm05.stdout:4/309: sync 2026-03-10T10:19:34.677 INFO:tasks.workunit.client.1.vm05.stdout:7/441: sync 2026-03-10T10:19:34.678 INFO:tasks.workunit.client.1.vm05.stdout:4/310: readlink d1/d3/l7 0 2026-03-10T10:19:34.678 INFO:tasks.workunit.client.1.vm05.stdout:7/442: write d5/d17/d66/f8b [213169,28182] 0 2026-03-10T10:19:34.690 INFO:tasks.workunit.client.1.vm05.stdout:5/409: rename da/db/d26/d35/d38/f5b to da/db/d26/d35/d7a/f8e 0 2026-03-10T10:19:34.690 INFO:tasks.workunit.client.1.vm05.stdout:7/443: write d5/d1d/d20/d2d/f58 [3081571,17549] 0 2026-03-10T10:19:34.692 INFO:tasks.workunit.client.1.vm05.stdout:5/410: symlink da/db/d26/d5c/d4b/l8f 0 2026-03-10T10:19:34.692 INFO:tasks.workunit.client.0.vm02.stdout:2/437: symlink d0/d1a/d49/d5e/d8a/l8e 0 2026-03-10T10:19:34.694 INFO:tasks.workunit.client.1.vm05.stdout:7/444: rmdir d5/d1d/d20/d2d/d5d/d7a 39 2026-03-10T10:19:34.695 INFO:tasks.workunit.client.1.vm05.stdout:7/445: readlink d5/d17/d66/l88 0 2026-03-10T10:19:34.695 INFO:tasks.workunit.client.1.vm05.stdout:5/411: write da/db/d26/d70/f7c [20091,82539] 0 2026-03-10T10:19:34.708 INFO:tasks.workunit.client.1.vm05.stdout:5/412: rename da/db/d26/d5c/d4b/l84 to da/db/d26/d70/l90 0 2026-03-10T10:19:34.728 INFO:tasks.workunit.client.1.vm05.stdout:5/413: dwrite da/db/d28/d32/f79 [0,4194304] 0 2026-03-10T10:19:34.858 INFO:tasks.workunit.client.0.vm02.stdout:4/549: mknod d1/d52/cb7 0 
2026-03-10T10:19:34.858 INFO:tasks.workunit.client.0.vm02.stdout:3/388: dread d1/fe [0,4194304] 0 2026-03-10T10:19:34.862 INFO:tasks.workunit.client.0.vm02.stdout:4/550: dwrite d1/d41/d5e/d78/d1a/f93 [0,4194304] 0 2026-03-10T10:19:34.865 INFO:tasks.workunit.client.0.vm02.stdout:3/389: truncate d1/d8/d21/f4d 75861 0 2026-03-10T10:19:34.867 INFO:tasks.workunit.client.0.vm02.stdout:3/390: readlink d1/d8/d21/d73/l75 0 2026-03-10T10:19:34.868 INFO:tasks.workunit.client.0.vm02.stdout:3/391: chown d1/d8/f3d 2203997 1 2026-03-10T10:19:34.872 INFO:tasks.workunit.client.0.vm02.stdout:7/398: link d1/d1b/f43 d1/dc/d44/f75 0 2026-03-10T10:19:34.873 INFO:tasks.workunit.client.0.vm02.stdout:3/392: stat d1/d8/d44/c4f 0 2026-03-10T10:19:34.875 INFO:tasks.workunit.client.0.vm02.stdout:7/399: creat d1/dc/d16/d28/d2d/d36/d67/f76 x:0 0 0 2026-03-10T10:19:34.877 INFO:tasks.workunit.client.0.vm02.stdout:3/393: creat d1/d8/d21/d73/f82 x:0 0 0 2026-03-10T10:19:34.879 INFO:tasks.workunit.client.0.vm02.stdout:7/400: creat d1/dc/d16/d28/d2d/d36/f77 x:0 0 0 2026-03-10T10:19:34.886 INFO:tasks.workunit.client.0.vm02.stdout:7/401: fsync d1/dc/ff 0 2026-03-10T10:19:34.886 INFO:tasks.workunit.client.0.vm02.stdout:7/402: truncate d1/dc/d16/f1f 5434925 0 2026-03-10T10:19:34.886 INFO:tasks.workunit.client.0.vm02.stdout:3/394: unlink d1/d8/d21/d73/l75 0 2026-03-10T10:19:34.886 INFO:tasks.workunit.client.0.vm02.stdout:3/395: symlink d1/d8/d21/d7d/l83 0 2026-03-10T10:19:34.886 INFO:tasks.workunit.client.0.vm02.stdout:3/396: mkdir d1/d8/d21/d73/d78/d84 0 2026-03-10T10:19:34.886 INFO:tasks.workunit.client.0.vm02.stdout:3/397: chown d1/d8/f46 54442248 1 2026-03-10T10:19:34.888 INFO:tasks.workunit.client.0.vm02.stdout:8/425: read d1/d2/f67 [3732666,123471] 0 2026-03-10T10:19:34.889 INFO:tasks.workunit.client.0.vm02.stdout:3/398: dread d1/f25 [0,4194304] 0 2026-03-10T10:19:34.889 INFO:tasks.workunit.client.0.vm02.stdout:8/426: chown d1/d1c/f20 62 1 2026-03-10T10:19:34.893 
INFO:tasks.workunit.client.0.vm02.stdout:3/399: dwrite d1/d58/f60 [4194304,4194304] 0 2026-03-10T10:19:34.903 INFO:tasks.workunit.client.0.vm02.stdout:8/427: rename d1/d1c/f3f to d1/d1c/d24/d35/d56/f81 0 2026-03-10T10:19:34.925 INFO:tasks.workunit.client.1.vm05.stdout:0/418: dwrite d1/d2/d9/d31/d13/d2f/f88 [0,4194304] 0 2026-03-10T10:19:34.935 INFO:tasks.workunit.client.1.vm05.stdout:0/419: mkdir d1/d2/d9/d31/d13/d15/d4e/d8a 0 2026-03-10T10:19:34.940 INFO:tasks.workunit.client.1.vm05.stdout:0/420: fsync d1/d2/d9/d31/f84 0 2026-03-10T10:19:34.950 INFO:tasks.workunit.client.1.vm05.stdout:0/421: dwrite d1/d2/d9/f32 [0,4194304] 0 2026-03-10T10:19:34.956 INFO:tasks.workunit.client.1.vm05.stdout:0/422: chown d1/d2/d9/d31/d12/d20/l53 400201 1 2026-03-10T10:19:34.962 INFO:tasks.workunit.client.1.vm05.stdout:0/423: creat d1/d2/d9/d31/d13/d55/f8b x:0 0 0 2026-03-10T10:19:34.963 INFO:tasks.workunit.client.1.vm05.stdout:0/424: chown d1/d2/d9/d31/d12 15416 1 2026-03-10T10:19:34.969 INFO:tasks.workunit.client.1.vm05.stdout:0/425: truncate d1/d2/d9/d31/d12/d20/f71 317082 0 2026-03-10T10:19:34.996 INFO:tasks.workunit.client.0.vm02.stdout:6/410: dwrite d0/f4c [0,4194304] 0 2026-03-10T10:19:34.998 INFO:tasks.workunit.client.0.vm02.stdout:6/411: readlink d0/d8/l22 0 2026-03-10T10:19:34.998 INFO:tasks.workunit.client.0.vm02.stdout:6/412: dread - d0/d8/d29/d6d/d32/f70 zero size 2026-03-10T10:19:35.000 INFO:tasks.workunit.client.0.vm02.stdout:9/394: read da/d3c/d53/f6a [268344,119126] 0 2026-03-10T10:19:35.002 INFO:tasks.workunit.client.0.vm02.stdout:6/413: dread d0/d8/d9/f54 [0,4194304] 0 2026-03-10T10:19:35.005 INFO:tasks.workunit.client.1.vm05.stdout:8/302: write f2 [3423015,74050] 0 2026-03-10T10:19:35.013 INFO:tasks.workunit.client.0.vm02.stdout:6/414: unlink d0/d8/d9/d7a/c6c 0 2026-03-10T10:19:35.013 INFO:tasks.workunit.client.1.vm05.stdout:8/303: chown d7/d14/d24/d3f/l52 7 1 2026-03-10T10:19:35.013 INFO:tasks.workunit.client.0.vm02.stdout:6/415: write d0/d8/d9/f82 [991221,76178] 
0 2026-03-10T10:19:35.016 INFO:tasks.workunit.client.0.vm02.stdout:9/395: link da/d3c/d4c/d38/d4a/c51 da/d3c/d4c/d75/c80 0 2026-03-10T10:19:35.023 INFO:tasks.workunit.client.1.vm05.stdout:8/304: sync 2026-03-10T10:19:35.025 INFO:tasks.workunit.client.1.vm05.stdout:8/305: stat d7/f9 0 2026-03-10T10:19:35.031 INFO:tasks.workunit.client.1.vm05.stdout:8/306: write d7/f8 [4265218,119440] 0 2026-03-10T10:19:35.031 INFO:tasks.workunit.client.1.vm05.stdout:9/358: dwrite d0/d1/f6d [0,4194304] 0 2026-03-10T10:19:35.031 INFO:tasks.workunit.client.1.vm05.stdout:8/307: chown d7/d14/d3a/d49/f54 18681 1 2026-03-10T10:19:35.059 INFO:tasks.workunit.client.0.vm02.stdout:9/396: sync 2026-03-10T10:19:35.059 INFO:tasks.workunit.client.0.vm02.stdout:9/397: chown da/d3c/d4c/d75 170966529 1 2026-03-10T10:19:35.100 INFO:tasks.workunit.client.1.vm05.stdout:7/446: write d5/d1d/f56 [102086,82237] 0 2026-03-10T10:19:35.102 INFO:tasks.workunit.client.1.vm05.stdout:4/311: dwrite d1/d31/f13 [0,4194304] 0 2026-03-10T10:19:35.103 INFO:tasks.workunit.client.0.vm02.stdout:1/434: unlink d4/da/d1a/c61 0 2026-03-10T10:19:35.104 INFO:tasks.workunit.client.1.vm05.stdout:5/414: dwrite da/db/d26/d35/d38/f51 [0,4194304] 0 2026-03-10T10:19:35.105 INFO:tasks.workunit.client.1.vm05.stdout:7/447: dread - d5/d1d/d20/d2d/d5d/d7a/f7b zero size 2026-03-10T10:19:35.110 INFO:tasks.workunit.client.1.vm05.stdout:2/390: mknod db/d28/c78 0 2026-03-10T10:19:35.111 INFO:tasks.workunit.client.1.vm05.stdout:2/391: write db/f24 [1871314,76416] 0 2026-03-10T10:19:35.119 INFO:tasks.workunit.client.1.vm05.stdout:5/415: read - da/db/d26/d5c/d4b/f6a zero size 2026-03-10T10:19:35.122 INFO:tasks.workunit.client.1.vm05.stdout:2/392: mknod db/d28/d4f/d59/c79 0 2026-03-10T10:19:35.122 INFO:tasks.workunit.client.1.vm05.stdout:2/393: chown db/d4e/d6c 7 1 2026-03-10T10:19:35.124 INFO:tasks.workunit.client.1.vm05.stdout:2/394: mknod db/d2d/d5e/c7a 0 2026-03-10T10:19:35.125 INFO:tasks.workunit.client.1.vm05.stdout:7/448: getdents d5/dd 0 
2026-03-10T10:19:35.126 INFO:tasks.workunit.client.1.vm05.stdout:2/395: read - db/d12/f3c zero size 2026-03-10T10:19:35.128 INFO:tasks.workunit.client.1.vm05.stdout:7/449: dread d5/d1d/d20/d2d/d5d/f67 [0,4194304] 0 2026-03-10T10:19:35.129 INFO:tasks.workunit.client.1.vm05.stdout:2/396: symlink db/d4e/l7b 0 2026-03-10T10:19:35.129 INFO:tasks.workunit.client.1.vm05.stdout:7/450: stat d5/d1d/d20/d2d/f58 0 2026-03-10T10:19:35.130 INFO:tasks.workunit.client.1.vm05.stdout:7/451: fdatasync d5/d1d/d20/d35/f47 0 2026-03-10T10:19:35.130 INFO:tasks.workunit.client.1.vm05.stdout:7/452: chown d5/d1d/d20/d2d/d80 1 1 2026-03-10T10:19:35.163 INFO:tasks.workunit.client.1.vm05.stdout:2/397: dread db/d28/f35 [0,4194304] 0 2026-03-10T10:19:35.164 INFO:tasks.workunit.client.1.vm05.stdout:2/398: write db/d28/d4f/f68 [1612538,108451] 0 2026-03-10T10:19:35.169 INFO:tasks.workunit.client.1.vm05.stdout:5/416: dread f5 [0,4194304] 0 2026-03-10T10:19:35.172 INFO:tasks.workunit.client.1.vm05.stdout:2/399: getdents db/d1c 0 2026-03-10T10:19:35.177 INFO:tasks.workunit.client.1.vm05.stdout:2/400: dread db/d1c/f3d [0,4194304] 0 2026-03-10T10:19:35.179 INFO:tasks.workunit.client.1.vm05.stdout:2/401: dread - db/d28/d4f/f75 zero size 2026-03-10T10:19:35.180 INFO:tasks.workunit.client.0.vm02.stdout:0/440: mknod d9/d34/c86 0 2026-03-10T10:19:35.183 INFO:tasks.workunit.client.1.vm05.stdout:2/402: creat db/d28/d4f/d59/f7c x:0 0 0 2026-03-10T10:19:35.185 INFO:tasks.workunit.client.1.vm05.stdout:2/403: creat db/d28/f7d x:0 0 0 2026-03-10T10:19:35.186 INFO:tasks.workunit.client.0.vm02.stdout:0/441: symlink d9/l87 0 2026-03-10T10:19:35.189 INFO:tasks.workunit.client.0.vm02.stdout:0/442: creat d9/d18/d1a/f88 x:0 0 0 2026-03-10T10:19:35.192 INFO:tasks.workunit.client.0.vm02.stdout:0/443: fdatasync d9/d18/d1a/d22/d24/d80/d49/f5e 0 2026-03-10T10:19:35.197 INFO:tasks.workunit.client.1.vm05.stdout:2/404: getdents db 0 2026-03-10T10:19:35.198 INFO:tasks.workunit.client.0.vm02.stdout:5/581: mknod 
d1/db/d11/d16/d79/cc7 0 2026-03-10T10:19:35.200 INFO:tasks.workunit.client.0.vm02.stdout:0/444: mkdir d9/d34/d3d/d65/d89 0 2026-03-10T10:19:35.202 INFO:tasks.workunit.client.0.vm02.stdout:5/582: symlink d1/db/d11/d62/d67/lc8 0 2026-03-10T10:19:35.203 INFO:tasks.workunit.client.0.vm02.stdout:0/445: fsync d9/d18/f2a 0 2026-03-10T10:19:35.204 INFO:tasks.workunit.client.0.vm02.stdout:5/583: unlink d1/db/d11/d84/c36 0 2026-03-10T10:19:35.205 INFO:tasks.workunit.client.0.vm02.stdout:5/584: fsync d1/db/d11/d13/d28/d37/f3c 0 2026-03-10T10:19:35.206 INFO:tasks.workunit.client.0.vm02.stdout:5/585: chown d1/db/d11/d13/f21 31148 1 2026-03-10T10:19:35.214 INFO:tasks.workunit.client.0.vm02.stdout:5/586: truncate d1/db/d11/f47 731127 0 2026-03-10T10:19:35.215 INFO:tasks.workunit.client.0.vm02.stdout:4/551: write d1/d41/d5e/d78/d7f/f74 [767216,61383] 0 2026-03-10T10:19:35.217 INFO:tasks.workunit.client.0.vm02.stdout:5/587: write d1/db/d11/d16/d79/d85/f94 [376731,51081] 0 2026-03-10T10:19:35.217 INFO:tasks.workunit.client.0.vm02.stdout:4/552: mknod d1/d52/d53/cb8 0 2026-03-10T10:19:35.221 INFO:tasks.workunit.client.0.vm02.stdout:4/553: dwrite d1/d10/fb5 [0,4194304] 0 2026-03-10T10:19:35.229 INFO:tasks.workunit.client.0.vm02.stdout:0/446: dread f2 [0,4194304] 0 2026-03-10T10:19:35.235 INFO:tasks.workunit.client.0.vm02.stdout:5/588: truncate d1/db/d11/d84/d40/f66 3581439 0 2026-03-10T10:19:35.241 INFO:tasks.workunit.client.0.vm02.stdout:0/447: creat d9/d34/d3d/d7b/f8a x:0 0 0 2026-03-10T10:19:35.244 INFO:tasks.workunit.client.1.vm05.stdout:3/437: unlink dd/l1e 0 2026-03-10T10:19:35.250 INFO:tasks.workunit.client.0.vm02.stdout:5/589: write d1/db/d11/d62/d67/fa6 [462671,63993] 0 2026-03-10T10:19:35.253 INFO:tasks.workunit.client.0.vm02.stdout:0/448: creat d9/d18/d1a/d22/d24/d80/d49/f8b x:0 0 0 2026-03-10T10:19:35.267 INFO:tasks.workunit.client.1.vm05.stdout:3/438: getdents dd/d15/d24/d2c/d3b 0 2026-03-10T10:19:35.267 INFO:tasks.workunit.client.0.vm02.stdout:0/449: truncate 
d9/d18/d1a/d22/d24/f4f 1783213 0 2026-03-10T10:19:35.268 INFO:tasks.workunit.client.1.vm05.stdout:3/439: symlink dd/d39/l98 0 2026-03-10T10:19:35.268 INFO:tasks.workunit.client.1.vm05.stdout:3/440: readlink l7 0 2026-03-10T10:19:35.268 INFO:tasks.workunit.client.0.vm02.stdout:0/450: chown d9/d18/d1a/d22/d24/l77 682969 1 2026-03-10T10:19:35.269 INFO:tasks.workunit.client.1.vm05.stdout:3/441: fsync dd/d15/d69/f86 0 2026-03-10T10:19:35.269 INFO:tasks.workunit.client.1.vm05.stdout:3/442: dread - dd/d15/d69/f86 zero size 2026-03-10T10:19:35.270 INFO:tasks.workunit.client.0.vm02.stdout:0/451: symlink d9/d18/l8c 0 2026-03-10T10:19:35.275 INFO:tasks.workunit.client.0.vm02.stdout:0/452: mkdir d9/d34/d3d/d8d 0 2026-03-10T10:19:35.275 INFO:tasks.workunit.client.0.vm02.stdout:0/453: dread d9/d18/f2a [0,4194304] 0 2026-03-10T10:19:35.275 INFO:tasks.workunit.client.0.vm02.stdout:0/454: dwrite d9/f28 [0,4194304] 0 2026-03-10T10:19:35.278 INFO:tasks.workunit.client.0.vm02.stdout:0/455: write d9/d18/d1a/f88 [366059,88459] 0 2026-03-10T10:19:35.292 INFO:tasks.workunit.client.0.vm02.stdout:3/400: truncate d1/d6/f48 1898972 0 2026-03-10T10:19:35.293 INFO:tasks.workunit.client.0.vm02.stdout:3/401: truncate d1/d8/f3f 250623 0 2026-03-10T10:19:35.295 INFO:tasks.workunit.client.0.vm02.stdout:8/428: write d1/d1c/d23/d3e/f50 [784709,69545] 0 2026-03-10T10:19:35.298 INFO:tasks.workunit.client.1.vm05.stdout:6/341: rmdir dd/d36 39 2026-03-10T10:19:35.301 INFO:tasks.workunit.client.1.vm05.stdout:6/342: mknod dd/c67 0 2026-03-10T10:19:35.305 INFO:tasks.workunit.client.1.vm05.stdout:6/343: getdents dd/d36/d3f/d12/d24 0 2026-03-10T10:19:35.314 INFO:tasks.workunit.client.1.vm05.stdout:6/344: stat dd/d1b/f40 0 2026-03-10T10:19:35.314 INFO:tasks.workunit.client.1.vm05.stdout:6/345: symlink dd/d36/d3f/d12/d44/d30/d4a/l68 0 2026-03-10T10:19:35.314 INFO:tasks.workunit.client.1.vm05.stdout:6/346: creat dd/d36/f69 x:0 0 0 2026-03-10T10:19:35.314 INFO:tasks.workunit.client.1.vm05.stdout:6/347: readlink 
dd/d36/d3f/d12/l1a 0 2026-03-10T10:19:35.316 INFO:tasks.workunit.client.0.vm02.stdout:8/429: sync 2026-03-10T10:19:35.316 INFO:tasks.workunit.client.0.vm02.stdout:3/402: sync 2026-03-10T10:19:35.317 INFO:tasks.workunit.client.0.vm02.stdout:8/430: write d1/f7d [148857,13367] 0 2026-03-10T10:19:35.324 INFO:tasks.workunit.client.1.vm05.stdout:6/348: link dd/d36/d3f/d12/d44/d2a/d3d/d3e/l52 dd/d1b/l6a 0 2026-03-10T10:19:35.325 INFO:tasks.workunit.client.1.vm05.stdout:6/349: truncate dd/d36/d3f/d12/d58/f65 846431 0 2026-03-10T10:19:35.326 INFO:tasks.workunit.client.0.vm02.stdout:2/438: creat d0/f8f x:0 0 0 2026-03-10T10:19:35.327 INFO:tasks.workunit.client.0.vm02.stdout:2/439: readlink d0/d10/l37 0 2026-03-10T10:19:35.330 INFO:tasks.workunit.client.0.vm02.stdout:8/431: creat d1/d1c/d43/d6a/f82 x:0 0 0 2026-03-10T10:19:35.332 INFO:tasks.workunit.client.1.vm05.stdout:8/308: rename d7/d14/d24/f26 to d7/d14/f5b 0 2026-03-10T10:19:35.333 INFO:tasks.workunit.client.1.vm05.stdout:1/393: mknod d4/c78 0 2026-03-10T10:19:35.333 INFO:tasks.workunit.client.1.vm05.stdout:8/309: dread - d7/d14/f33 zero size 2026-03-10T10:19:35.334 INFO:tasks.workunit.client.1.vm05.stdout:8/310: chown d7/d14/d24/f2c 6294841 1 2026-03-10T10:19:35.334 INFO:tasks.workunit.client.1.vm05.stdout:8/311: dread - d7/d14/f40 zero size 2026-03-10T10:19:35.334 INFO:tasks.workunit.client.1.vm05.stdout:6/350: creat dd/d36/d3f/d12/d24/d28/d4d/f6b x:0 0 0 2026-03-10T10:19:35.336 INFO:tasks.workunit.client.1.vm05.stdout:7/453: rename d5/d1d/d20/d35/d6f to d5/d1d/d29/d3e/d8c 0 2026-03-10T10:19:35.338 INFO:tasks.workunit.client.0.vm02.stdout:6/416: rmdir d0/d7f 39 2026-03-10T10:19:35.341 INFO:tasks.workunit.client.0.vm02.stdout:2/440: unlink d0/c4e 0 2026-03-10T10:19:35.341 INFO:tasks.workunit.client.0.vm02.stdout:2/441: chown d0/d1a/d49 12061603 1 2026-03-10T10:19:35.342 INFO:tasks.workunit.client.1.vm05.stdout:0/426: truncate d1/d2/d39/d3d/f72 1622915 0 2026-03-10T10:19:35.351 
INFO:tasks.workunit.client.1.vm05.stdout:8/312: fdatasync d7/d14/d15/f39 0 2026-03-10T10:19:35.352 INFO:tasks.workunit.client.1.vm05.stdout:8/313: fsync d7/d2f/f4b 0 2026-03-10T10:19:35.352 INFO:tasks.workunit.client.1.vm05.stdout:8/314: chown d7/d2f/c58 2 1 2026-03-10T10:19:35.353 INFO:tasks.workunit.client.1.vm05.stdout:2/405: rename f1 to db/d28/d4f/d59/f7e 0 2026-03-10T10:19:35.356 INFO:tasks.workunit.client.1.vm05.stdout:0/427: sync 2026-03-10T10:19:35.356 INFO:tasks.workunit.client.1.vm05.stdout:6/351: mknod dd/c6c 0 2026-03-10T10:19:35.357 INFO:tasks.workunit.client.0.vm02.stdout:6/417: rmdir d0/d7f 39 2026-03-10T10:19:35.357 INFO:tasks.workunit.client.0.vm02.stdout:6/418: readlink d0/d8/d29/d2f/d4b/l2e 0 2026-03-10T10:19:35.358 INFO:tasks.workunit.client.0.vm02.stdout:6/419: readlink d0/d8/d9/le 0 2026-03-10T10:19:35.360 INFO:tasks.workunit.client.1.vm05.stdout:8/315: symlink d7/d14/d15/d3b/l5c 0 2026-03-10T10:19:35.364 INFO:tasks.workunit.client.1.vm05.stdout:3/443: rename f3 to dd/d15/d69/f99 0 2026-03-10T10:19:35.368 INFO:tasks.workunit.client.1.vm05.stdout:7/454: rename d5/f34 to d5/d1d/d20/d3b/f8d 0 2026-03-10T10:19:35.368 INFO:tasks.workunit.client.1.vm05.stdout:2/406: creat db/d28/f7f x:0 0 0 2026-03-10T10:19:35.368 INFO:tasks.workunit.client.1.vm05.stdout:6/352: link dd/d36/d3f/d12/d44/l47 dd/d36/d3f/d12/d44/d30/d4a/l6d 0 2026-03-10T10:19:35.369 INFO:tasks.workunit.client.1.vm05.stdout:3/444: mknod dd/d20/d56/d5e/c9a 0 2026-03-10T10:19:35.369 INFO:tasks.workunit.client.1.vm05.stdout:0/428: link d1/d2/d9/d31/d13/f3e d1/d2/d9/d31/f8c 0 2026-03-10T10:19:35.371 INFO:tasks.workunit.client.1.vm05.stdout:6/353: read f3 [993869,11514] 0 2026-03-10T10:19:35.371 INFO:tasks.workunit.client.1.vm05.stdout:8/316: dwrite d7/d14/d15/d3b/f43 [0,4194304] 0 2026-03-10T10:19:35.372 INFO:tasks.workunit.client.1.vm05.stdout:8/317: write f2 [1742322,2090] 0 2026-03-10T10:19:35.372 INFO:tasks.workunit.client.1.vm05.stdout:8/318: chown f6 41 1 2026-03-10T10:19:35.389 
INFO:tasks.workunit.client.1.vm05.stdout:0/429: rmdir d1/d2/d9/d31/d13/d15 39 2026-03-10T10:19:35.389 INFO:tasks.workunit.client.0.vm02.stdout:9/398: dwrite da/f15 [0,4194304] 0 2026-03-10T10:19:35.393 INFO:tasks.workunit.client.0.vm02.stdout:9/399: creat da/d3c/d4c/d2c/d34/f81 x:0 0 0 2026-03-10T10:19:35.397 INFO:tasks.workunit.client.1.vm05.stdout:3/445: creat dd/d15/d24/d2c/d6d/d89/f9b x:0 0 0 2026-03-10T10:19:35.398 INFO:tasks.workunit.client.1.vm05.stdout:6/354: chown dd/d36/d3f/d12/d44/d30/d4a/l6d 59 1 2026-03-10T10:19:35.402 INFO:tasks.workunit.client.1.vm05.stdout:0/430: write d1/d2/d9/d31/d12/f5b [1049753,113908] 0 2026-03-10T10:19:35.402 INFO:tasks.workunit.client.1.vm05.stdout:0/431: write d1/d2/d9/d31/d13/f4c [3326354,27576] 0 2026-03-10T10:19:35.403 INFO:tasks.workunit.client.0.vm02.stdout:3/403: rename d1/d20/f41 to d1/f85 0 2026-03-10T10:19:35.403 INFO:tasks.workunit.client.1.vm05.stdout:3/446: mkdir dd/d15/d24/d2c/d6d/d89/d9c 0 2026-03-10T10:19:35.403 INFO:tasks.workunit.client.0.vm02.stdout:3/404: chown d1/d8/d21/d73/c67 773 1 2026-03-10T10:19:35.403 INFO:tasks.workunit.client.1.vm05.stdout:9/359: dwrite d0/df/d11/f24 [0,4194304] 0 2026-03-10T10:19:35.405 INFO:tasks.workunit.client.0.vm02.stdout:3/405: truncate d1/d8/f3d 4560306 0 2026-03-10T10:19:35.410 INFO:tasks.workunit.client.1.vm05.stdout:9/360: fdatasync d0/f1e 0 2026-03-10T10:19:35.410 INFO:tasks.workunit.client.1.vm05.stdout:0/432: symlink d1/d2/d9/d31/d12/l8d 0 2026-03-10T10:19:35.415 INFO:tasks.workunit.client.1.vm05.stdout:0/433: mkdir d1/d2/d39/d6e/d8e 0 2026-03-10T10:19:35.423 INFO:tasks.workunit.client.1.vm05.stdout:0/434: creat d1/d2/d9/d31/d13/d2f/f8f x:0 0 0 2026-03-10T10:19:35.429 INFO:tasks.workunit.client.1.vm05.stdout:0/435: getdents d1/d2/d39/d3d 0 2026-03-10T10:19:35.438 INFO:tasks.workunit.client.1.vm05.stdout:0/436: chown d1/c1a 12572628 1 2026-03-10T10:19:35.438 INFO:tasks.workunit.client.1.vm05.stdout:0/437: dread - d1/d2/d9/f6c zero size 2026-03-10T10:19:35.438 
INFO:tasks.workunit.client.1.vm05.stdout:0/438: chown d1/d2/d9/d31/d13/d55/f8b 150695683 1 2026-03-10T10:19:35.446 INFO:tasks.workunit.client.1.vm05.stdout:0/439: sync 2026-03-10T10:19:35.446 INFO:tasks.workunit.client.1.vm05.stdout:0/440: chown d1/d2/f21 222517709 1 2026-03-10T10:19:35.455 INFO:tasks.workunit.client.1.vm05.stdout:0/441: mkdir d1/d2/d9/d31/d90 0 2026-03-10T10:19:35.463 INFO:tasks.workunit.client.1.vm05.stdout:0/442: symlink d1/d2/d39/d3d/l91 0 2026-03-10T10:19:35.465 INFO:tasks.workunit.client.0.vm02.stdout:1/435: write d4/ff [548159,92348] 0 2026-03-10T10:19:35.468 INFO:tasks.workunit.client.1.vm05.stdout:0/443: mknod d1/d2/d5d/c92 0 2026-03-10T10:19:35.469 INFO:tasks.workunit.client.0.vm02.stdout:1/436: rmdir d4/d2c/d53 39 2026-03-10T10:19:35.472 INFO:tasks.workunit.client.0.vm02.stdout:1/437: dread d4/da/d1a/f40 [0,4194304] 0 2026-03-10T10:19:35.476 INFO:tasks.workunit.client.1.vm05.stdout:0/444: creat d1/d2/d9/d50/f93 x:0 0 0 2026-03-10T10:19:35.479 INFO:tasks.workunit.client.1.vm05.stdout:4/312: dwrite d1/d31/dc/f21 [0,4194304] 0 2026-03-10T10:19:35.480 INFO:tasks.workunit.client.0.vm02.stdout:1/438: dwrite d4/fe [0,4194304] 0 2026-03-10T10:19:35.488 INFO:tasks.workunit.client.0.vm02.stdout:7/403: mknod d1/c78 0 2026-03-10T10:19:35.489 INFO:tasks.workunit.client.0.vm02.stdout:7/404: truncate d1/d1b/f72 860632 0 2026-03-10T10:19:35.503 INFO:tasks.workunit.client.0.vm02.stdout:4/554: write d1/d41/d5e/d78/d44/f59 [886332,96825] 0 2026-03-10T10:19:35.507 INFO:tasks.workunit.client.0.vm02.stdout:5/590: write d1/db/d11/d16/fa1 [430002,70500] 0 2026-03-10T10:19:35.514 INFO:tasks.workunit.client.1.vm05.stdout:5/417: dwrite da/db/d28/f44 [0,4194304] 0 2026-03-10T10:19:35.518 INFO:tasks.workunit.client.1.vm05.stdout:5/418: stat da/db/d26/d35/c4f 0 2026-03-10T10:19:35.522 INFO:tasks.workunit.client.0.vm02.stdout:0/456: write d9/d18/d1a/d22/d24/d80/f72 [754527,20504] 0 2026-03-10T10:19:35.526 INFO:tasks.workunit.client.1.vm05.stdout:5/419: stat da/db/c8c 
0 2026-03-10T10:19:35.528 INFO:tasks.workunit.client.0.vm02.stdout:4/555: fsync d1/d41/d5e/f87 0 2026-03-10T10:19:35.533 INFO:tasks.workunit.client.0.vm02.stdout:5/591: mkdir d1/db/d11/d13/dc9 0 2026-03-10T10:19:35.535 INFO:tasks.workunit.client.0.vm02.stdout:0/457: truncate d9/d34/d3d/f69 552450 0 2026-03-10T10:19:35.535 INFO:tasks.workunit.client.0.vm02.stdout:1/439: link d4/d2c/c37 d4/da/d27/c8e 0 2026-03-10T10:19:35.536 INFO:tasks.workunit.client.0.vm02.stdout:4/556: creat d1/d41/d5e/d78/d7f/fb9 x:0 0 0 2026-03-10T10:19:35.537 INFO:tasks.workunit.client.1.vm05.stdout:0/445: link d1/d2/d9/d31/d13/d2f/f33 d1/d2/d9/d50/f94 0 2026-03-10T10:19:35.540 INFO:tasks.workunit.client.0.vm02.stdout:1/440: creat d4/da/d27/d38/d3c/f8f x:0 0 0 2026-03-10T10:19:35.541 INFO:tasks.workunit.client.1.vm05.stdout:0/446: mkdir d1/d2/d39/d6e/d95 0 2026-03-10T10:19:35.542 INFO:tasks.workunit.client.0.vm02.stdout:4/557: rename d1/d41/d5e/d78/d1a/l2a to d1/d41/lba 0 2026-03-10T10:19:35.542 INFO:tasks.workunit.client.0.vm02.stdout:4/558: write d1/d32/f69 [3959074,103063] 0 2026-03-10T10:19:35.543 INFO:tasks.workunit.client.1.vm05.stdout:5/420: dwrite da/db/d26/f64 [0,4194304] 0 2026-03-10T10:19:35.543 INFO:tasks.workunit.client.0.vm02.stdout:0/458: mkdir d9/d18/d1a/d22/d24/d8e 0 2026-03-10T10:19:35.547 INFO:tasks.workunit.client.0.vm02.stdout:0/459: dwrite d9/d34/d3d/d7b/f3a [4194304,4194304] 0 2026-03-10T10:19:35.561 INFO:tasks.workunit.client.0.vm02.stdout:4/559: rmdir d1/d41/d5e/d78 39 2026-03-10T10:19:35.563 INFO:tasks.workunit.client.0.vm02.stdout:5/592: sync 2026-03-10T10:19:35.567 INFO:tasks.workunit.client.0.vm02.stdout:4/560: dread - d1/d10/f71 zero size 2026-03-10T10:19:35.567 INFO:tasks.workunit.client.0.vm02.stdout:4/561: chown d1/d32/d3e/fae 12631 1 2026-03-10T10:19:35.567 INFO:tasks.workunit.client.0.vm02.stdout:1/441: link d4/da/d27/d38/f4e d4/f90 0 2026-03-10T10:19:35.568 INFO:tasks.workunit.client.1.vm05.stdout:5/421: symlink da/db/d28/d6e/l91 0 2026-03-10T10:19:35.569 
INFO:tasks.workunit.client.0.vm02.stdout:4/562: readlink d1/d52/d53/la5 0 2026-03-10T10:19:35.571 INFO:tasks.workunit.client.1.vm05.stdout:0/447: symlink d1/d2/d9/d31/d54/l96 0 2026-03-10T10:19:35.584 INFO:tasks.workunit.client.1.vm05.stdout:5/422: creat da/db/d26/d5c/f92 x:0 0 0 2026-03-10T10:19:35.584 INFO:tasks.workunit.client.1.vm05.stdout:5/423: read - da/f78 zero size 2026-03-10T10:19:35.584 INFO:tasks.workunit.client.1.vm05.stdout:0/448: chown d1/d2/d9/d31/d13/d15/c35 7 1 2026-03-10T10:19:35.584 INFO:tasks.workunit.client.1.vm05.stdout:5/424: stat da/db/d26/d35/c81 0 2026-03-10T10:19:35.584 INFO:tasks.workunit.client.1.vm05.stdout:0/449: dread - d1/d2/d9/d31/d13/d2f/d49/f4f zero size 2026-03-10T10:19:35.584 INFO:tasks.workunit.client.1.vm05.stdout:5/425: mknod da/db/d26/d35/d38/c93 0 2026-03-10T10:19:35.584 INFO:tasks.workunit.client.1.vm05.stdout:0/450: mknod d1/d2/d9/d31/d54/d7c/c97 0 2026-03-10T10:19:35.585 INFO:tasks.workunit.client.0.vm02.stdout:5/593: sync 2026-03-10T10:19:35.585 INFO:tasks.workunit.client.0.vm02.stdout:4/563: sync 2026-03-10T10:19:35.588 INFO:tasks.workunit.client.0.vm02.stdout:5/594: rename d1/db/d11/f3e to d1/d6a/fca 0 2026-03-10T10:19:35.588 INFO:tasks.workunit.client.1.vm05.stdout:5/426: creat da/db/d26/d35/d38/f94 x:0 0 0 2026-03-10T10:19:35.589 INFO:tasks.workunit.client.0.vm02.stdout:4/564: creat d1/d52/d53/fbb x:0 0 0 2026-03-10T10:19:35.590 INFO:tasks.workunit.client.0.vm02.stdout:5/595: symlink d1/db/d11/d16/d79/lcb 0 2026-03-10T10:19:35.593 INFO:tasks.workunit.client.0.vm02.stdout:5/596: link d1/d6a/ca5 d1/db/d11/d16/ccc 0 2026-03-10T10:19:35.606 INFO:tasks.workunit.client.1.vm05.stdout:5/427: creat da/db/d26/d5c/d4b/f95 x:0 0 0 2026-03-10T10:19:35.606 INFO:tasks.workunit.client.0.vm02.stdout:5/597: write d1/f10 [2712279,32544] 0 2026-03-10T10:19:35.606 INFO:tasks.workunit.client.0.vm02.stdout:5/598: mknod d1/db/d11/d16/d79/ccd 0 2026-03-10T10:19:35.606 INFO:tasks.workunit.client.1.vm05.stdout:5/428: dread da/f41 
[0,4194304] 0 2026-03-10T10:19:35.615 INFO:tasks.workunit.client.1.vm05.stdout:5/429: unlink da/db/fd 0 2026-03-10T10:19:35.619 INFO:tasks.workunit.client.1.vm05.stdout:5/430: mkdir da/d96 0 2026-03-10T10:19:35.621 INFO:tasks.workunit.client.1.vm05.stdout:5/431: write da/db/f85 [539444,92034] 0 2026-03-10T10:19:35.622 INFO:tasks.workunit.client.0.vm02.stdout:5/599: sync 2026-03-10T10:19:35.622 INFO:tasks.workunit.client.1.vm05.stdout:5/432: write da/db/d26/d35/f7d [985034,22268] 0 2026-03-10T10:19:35.623 INFO:tasks.workunit.client.1.vm05.stdout:5/433: write da/f5e [951615,21492] 0 2026-03-10T10:19:35.643 INFO:tasks.workunit.client.1.vm05.stdout:6/355: rename dd/d36/d3f/d12/d24 to dd/d36/d3f/d12/d44/d30/d4a/d6e 0 2026-03-10T10:19:35.647 INFO:tasks.workunit.client.1.vm05.stdout:1/394: write d4/df/d1c/f38 [1579279,82343] 0 2026-03-10T10:19:35.654 INFO:tasks.workunit.client.0.vm02.stdout:7/405: dread d1/dc/d10/f27 [0,4194304] 0 2026-03-10T10:19:35.657 INFO:tasks.workunit.client.1.vm05.stdout:9/361: rename d0/d1/d13/f8 to d0/f73 0 2026-03-10T10:19:35.657 INFO:tasks.workunit.client.0.vm02.stdout:7/406: creat d1/dc/d60/f79 x:0 0 0 2026-03-10T10:19:35.657 INFO:tasks.workunit.client.0.vm02.stdout:8/432: write d1/d1c/f42 [1528757,26164] 0 2026-03-10T10:19:35.658 INFO:tasks.workunit.client.0.vm02.stdout:7/407: read - d1/dc/d55/f64 zero size 2026-03-10T10:19:35.658 INFO:tasks.workunit.client.0.vm02.stdout:7/408: chown d1/dc/d10/l1a 24584 1 2026-03-10T10:19:35.664 INFO:tasks.workunit.client.0.vm02.stdout:7/409: creat d1/dc/d16/f7a x:0 0 0 2026-03-10T10:19:35.664 INFO:tasks.workunit.client.1.vm05.stdout:1/395: mkdir d4/d79 0 2026-03-10T10:19:35.672 INFO:tasks.workunit.client.1.vm05.stdout:9/362: mkdir d0/df/d74 0 2026-03-10T10:19:35.676 INFO:tasks.workunit.client.1.vm05.stdout:9/363: write d0/d1/d13/de/f5b [305540,20115] 0 2026-03-10T10:19:35.680 INFO:tasks.workunit.client.1.vm05.stdout:4/313: rename d1/d3/f46 to d1/d31/dc/d40/f67 0 2026-03-10T10:19:35.681 
INFO:tasks.workunit.client.1.vm05.stdout:0/451: rename d1/d2/d39/d6e/f6f to d1/d2/d9/f98 0 2026-03-10T10:19:35.681 INFO:tasks.workunit.client.1.vm05.stdout:1/396: read d4/d39/d3e/f4d [2650373,95115] 0 2026-03-10T10:19:35.682 INFO:tasks.workunit.client.1.vm05.stdout:9/364: stat d0/d1/f9 0 2026-03-10T10:19:35.682 INFO:tasks.workunit.client.1.vm05.stdout:4/314: write d1/f5d [1035141,40675] 0 2026-03-10T10:19:35.688 INFO:tasks.workunit.client.1.vm05.stdout:0/452: write d1/d2/d9/d31/d13/f4c [3810885,111983] 0 2026-03-10T10:19:35.688 INFO:tasks.workunit.client.1.vm05.stdout:9/365: dread - d0/d1/d13/f3f zero size 2026-03-10T10:19:35.691 INFO:tasks.workunit.client.1.vm05.stdout:5/434: rename da/db/d26/d35/d7a to da/db/d28/d97 0 2026-03-10T10:19:35.700 INFO:tasks.workunit.client.1.vm05.stdout:6/356: rename dd/d36/d3f/d12/d44/d30/d4a/d6e/f4e to dd/d36/d3f/f6f 0 2026-03-10T10:19:35.701 INFO:tasks.workunit.client.1.vm05.stdout:4/315: rename d1/d31/dc/d40/c4e to d1/d31/dc/d40/c68 0 2026-03-10T10:19:35.709 INFO:tasks.workunit.client.0.vm02.stdout:2/442: write d0/d1a/f52 [2642278,43921] 0 2026-03-10T10:19:35.709 INFO:tasks.workunit.client.1.vm05.stdout:9/366: rename d0/d1/d13/f3f to d0/d1/f75 0 2026-03-10T10:19:35.709 INFO:tasks.workunit.client.1.vm05.stdout:6/357: dread dd/d36/d3f/d12/d44/f2f [0,4194304] 0 2026-03-10T10:19:35.712 INFO:tasks.workunit.client.0.vm02.stdout:2/443: readlink d0/l27 0 2026-03-10T10:19:35.713 INFO:tasks.workunit.client.0.vm02.stdout:2/444: write d0/d10/f1f [223792,31810] 0 2026-03-10T10:19:35.713 INFO:tasks.workunit.client.0.vm02.stdout:2/445: readlink d0/d1a/d24/l3a 0 2026-03-10T10:19:35.721 INFO:tasks.workunit.client.1.vm05.stdout:9/367: creat d0/d1/d13/de/d21/f76 x:0 0 0 2026-03-10T10:19:35.726 INFO:tasks.workunit.client.0.vm02.stdout:6/420: mknod d0/d8/d29/d6d/d32/d60/d6f/c89 0 2026-03-10T10:19:35.728 INFO:tasks.workunit.client.0.vm02.stdout:6/421: rename d0/d8/f56 to d0/d8/d9/f8a 0 2026-03-10T10:19:35.728 
INFO:tasks.workunit.client.1.vm05.stdout:6/358: unlink dd/d36/d3f/d12/l1c 0 2026-03-10T10:19:35.728 INFO:tasks.workunit.client.1.vm05.stdout:1/397: dread d4/dd/f64 [0,4194304] 0 2026-03-10T10:19:35.729 INFO:tasks.workunit.client.1.vm05.stdout:7/455: write d5/f6c [1035539,29162] 0 2026-03-10T10:19:35.729 INFO:tasks.workunit.client.1.vm05.stdout:4/316: creat d1/d31/dc/f69 x:0 0 0 2026-03-10T10:19:35.730 INFO:tasks.workunit.client.1.vm05.stdout:1/398: chown d4/d3d 79 1 2026-03-10T10:19:35.733 INFO:tasks.workunit.client.1.vm05.stdout:8/319: write d7/d14/f38 [1789423,81259] 0 2026-03-10T10:19:35.734 INFO:tasks.workunit.client.0.vm02.stdout:6/422: dwrite d0/d8/d29/d6d/d32/d60/d6f/f7c [0,4194304] 0 2026-03-10T10:19:35.735 INFO:tasks.workunit.client.1.vm05.stdout:2/407: write db/d2d/f5d [2774651,92593] 0 2026-03-10T10:19:35.740 INFO:tasks.workunit.client.1.vm05.stdout:9/368: mknod d0/d1/d13/c77 0 2026-03-10T10:19:35.742 INFO:tasks.workunit.client.0.vm02.stdout:9/400: write da/d3c/d4c/f23 [4294454,103636] 0 2026-03-10T10:19:35.755 INFO:tasks.workunit.client.1.vm05.stdout:7/456: dread d5/d1d/f46 [0,4194304] 0 2026-03-10T10:19:35.755 INFO:tasks.workunit.client.1.vm05.stdout:4/317: rmdir d1/d31/dc/d40 39 2026-03-10T10:19:35.756 INFO:tasks.workunit.client.1.vm05.stdout:7/457: write d5/dd/f62 [268256,116802] 0 2026-03-10T10:19:35.756 INFO:tasks.workunit.client.1.vm05.stdout:7/458: readlink l2 0 2026-03-10T10:19:35.757 INFO:tasks.workunit.client.0.vm02.stdout:9/401: mkdir da/d3c/d4c/d38/d82 0 2026-03-10T10:19:35.757 INFO:tasks.workunit.client.0.vm02.stdout:9/402: dread - da/d3c/d4c/d38/f45 zero size 2026-03-10T10:19:35.758 INFO:tasks.workunit.client.1.vm05.stdout:6/359: truncate dd/f29 3172780 0 2026-03-10T10:19:35.763 INFO:tasks.workunit.client.0.vm02.stdout:3/406: dwrite d1/f50 [0,4194304] 0 2026-03-10T10:19:35.768 INFO:tasks.workunit.client.1.vm05.stdout:3/447: dwrite f6 [4194304,4194304] 0 2026-03-10T10:19:35.778 INFO:tasks.workunit.client.1.vm05.stdout:7/459: dwrite 
d5/d1d/d20/d2d/f4c [0,4194304] 0 2026-03-10T10:19:35.778 INFO:tasks.workunit.client.1.vm05.stdout:7/460: chown d5/d26/f5a 10477 1 2026-03-10T10:19:35.779 INFO:tasks.workunit.client.1.vm05.stdout:7/461: write d5/d1d/d20/d35/f47 [4974291,36932] 0 2026-03-10T10:19:35.779 INFO:tasks.workunit.client.0.vm02.stdout:9/403: creat da/d3c/d4c/d2c/d34/f83 x:0 0 0 2026-03-10T10:19:35.784 INFO:tasks.workunit.client.0.vm02.stdout:9/404: dwrite da/d3c/d4c/f41 [0,4194304] 0 2026-03-10T10:19:35.792 INFO:tasks.workunit.client.0.vm02.stdout:3/407: getdents d1/d8/d21/d73 0 2026-03-10T10:19:35.800 INFO:tasks.workunit.client.0.vm02.stdout:9/405: dread da/d3c/d4c/f23 [0,4194304] 0 2026-03-10T10:19:35.800 INFO:tasks.workunit.client.0.vm02.stdout:9/406: stat da/f1f 0 2026-03-10T10:19:35.808 INFO:tasks.workunit.client.1.vm05.stdout:9/369: creat d0/d1/d4c/d63/f78 x:0 0 0 2026-03-10T10:19:35.819 INFO:tasks.workunit.client.1.vm05.stdout:9/370: rename d0/df/d11/f33 to d0/d70/f79 0 2026-03-10T10:19:35.819 INFO:tasks.workunit.client.1.vm05.stdout:8/320: getdents d7/d14/d24/d3f 0 2026-03-10T10:19:35.819 INFO:tasks.workunit.client.1.vm05.stdout:7/462: link d5/d1d/d20/d3b/l5b d5/dd/l8e 0 2026-03-10T10:19:35.819 INFO:tasks.workunit.client.1.vm05.stdout:7/463: chown d5/d1d/d29/d3e/d8c/d82 49 1 2026-03-10T10:19:35.821 INFO:tasks.workunit.client.1.vm05.stdout:8/321: rename d7/d14/d24/c2d to d7/d14/d3a/c5d 0 2026-03-10T10:19:35.822 INFO:tasks.workunit.client.1.vm05.stdout:7/464: fdatasync d5/d1d/d20/d2d/d5d/f67 0 2026-03-10T10:19:35.822 INFO:tasks.workunit.client.1.vm05.stdout:8/322: rmdir d7/d14/d15/d3b 39 2026-03-10T10:19:35.828 INFO:tasks.workunit.client.1.vm05.stdout:9/371: dread d0/df/d11/f24 [0,4194304] 0 2026-03-10T10:19:35.829 INFO:tasks.workunit.client.1.vm05.stdout:1/399: dread d4/d20/f31 [0,4194304] 0 2026-03-10T10:19:35.834 INFO:tasks.workunit.client.1.vm05.stdout:1/400: symlink d4/d37/d4e/l7a 0 2026-03-10T10:19:35.844 INFO:tasks.workunit.client.1.vm05.stdout:1/401: dwrite d4/d39/f54 
[0,4194304] 0 2026-03-10T10:19:35.853 INFO:tasks.workunit.client.1.vm05.stdout:1/402: getdents d4/df/d76 0 2026-03-10T10:19:35.853 INFO:tasks.workunit.client.1.vm05.stdout:1/403: fsync d4/df/d1c/f2a 0 2026-03-10T10:19:35.857 INFO:tasks.workunit.client.1.vm05.stdout:1/404: creat d4/d39/f7b x:0 0 0 2026-03-10T10:19:35.858 INFO:tasks.workunit.client.0.vm02.stdout:6/423: dread d0/f43 [0,4194304] 0 2026-03-10T10:19:35.862 INFO:tasks.workunit.client.1.vm05.stdout:1/405: dwrite d4/d20/f49 [0,4194304] 0 2026-03-10T10:19:35.873 INFO:tasks.workunit.client.1.vm05.stdout:1/406: creat d4/d3d/d6e/f7c x:0 0 0 2026-03-10T10:19:35.873 INFO:tasks.workunit.client.1.vm05.stdout:1/407: stat d4/d39/d3e/f3f 0 2026-03-10T10:19:35.880 INFO:tasks.workunit.client.1.vm05.stdout:1/408: creat d4/d39/d3e/f7d x:0 0 0 2026-03-10T10:19:35.883 INFO:tasks.workunit.client.1.vm05.stdout:1/409: creat d4/df/d76/f7e x:0 0 0 2026-03-10T10:19:35.890 INFO:tasks.workunit.client.1.vm05.stdout:1/410: truncate d4/d3d/d6e/f7c 1000673 0 2026-03-10T10:19:35.890 INFO:tasks.workunit.client.1.vm05.stdout:1/411: symlink d4/d39/d3e/l7f 0 2026-03-10T10:19:35.894 INFO:tasks.workunit.client.1.vm05.stdout:1/412: truncate d4/d39/d3e/f3f 4142285 0 2026-03-10T10:19:35.904 INFO:tasks.workunit.client.1.vm05.stdout:1/413: rename d4/d3d/d6e/l75 to d4/d3d/d6e/l80 0 2026-03-10T10:19:35.908 INFO:tasks.workunit.client.1.vm05.stdout:1/414: symlink d4/d3d/l81 0 2026-03-10T10:19:35.942 INFO:tasks.workunit.client.1.vm05.stdout:1/415: dwrite d4/d20/f31 [0,4194304] 0 2026-03-10T10:19:35.944 INFO:tasks.workunit.client.0.vm02.stdout:1/442: write d4/da/d1a/d22/f23 [12938,10990] 0 2026-03-10T10:19:35.950 INFO:tasks.workunit.client.0.vm02.stdout:1/443: dread d4/da/d1a/d22/f49 [0,4194304] 0 2026-03-10T10:19:35.950 INFO:tasks.workunit.client.0.vm02.stdout:4/565: dwrite d1/d52/d53/f83 [0,4194304] 0 2026-03-10T10:19:35.952 INFO:tasks.workunit.client.0.vm02.stdout:4/566: fdatasync d1/d32/d3e/f42 0 2026-03-10T10:19:35.955 
INFO:tasks.workunit.client.1.vm05.stdout:1/416: readlink d4/d39/d3e/l7f 0 2026-03-10T10:19:35.959 INFO:tasks.workunit.client.0.vm02.stdout:5/600: write d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fb6 [2484512,106008] 0 2026-03-10T10:19:35.960 INFO:tasks.workunit.client.1.vm05.stdout:5/435: getdents da 0 2026-03-10T10:19:35.963 INFO:tasks.workunit.client.1.vm05.stdout:1/417: dwrite d4/df/d76/f7e [0,4194304] 0 2026-03-10T10:19:35.965 INFO:tasks.workunit.client.0.vm02.stdout:1/444: dwrite d4/d2c/d53/f74 [0,4194304] 0 2026-03-10T10:19:35.969 INFO:tasks.workunit.client.0.vm02.stdout:4/567: readlink d1/d41/d5e/d78/l29 0 2026-03-10T10:19:35.976 INFO:tasks.workunit.client.0.vm02.stdout:8/433: write d1/d2/f36 [4737145,67444] 0 2026-03-10T10:19:35.976 INFO:tasks.workunit.client.1.vm05.stdout:1/418: write d4/d37/d4e/f62 [148241,75977] 0 2026-03-10T10:19:35.976 INFO:tasks.workunit.client.1.vm05.stdout:5/436: dwrite da/db/d28/d97/f87 [0,4194304] 0 2026-03-10T10:19:35.976 INFO:tasks.workunit.client.1.vm05.stdout:1/419: write d4/d3d/f77 [858924,63061] 0 2026-03-10T10:19:35.976 INFO:tasks.workunit.client.1.vm05.stdout:5/437: chown da/db/d26/d35/d38/f51 235 1 2026-03-10T10:19:35.985 INFO:tasks.workunit.client.0.vm02.stdout:7/410: dwrite d1/dc/d16/d28/d2d/d36/f5c [4194304,4194304] 0 2026-03-10T10:19:36.004 INFO:tasks.workunit.client.0.vm02.stdout:8/434: unlink d1/d1c/d24/c7f 0 2026-03-10T10:19:36.010 INFO:tasks.workunit.client.0.vm02.stdout:7/411: dwrite d1/f6b [0,4194304] 0 2026-03-10T10:19:36.013 INFO:tasks.workunit.client.0.vm02.stdout:5/601: mkdir d1/db/d11/d13/d28/d37/dce 0 2026-03-10T10:19:36.015 INFO:tasks.workunit.client.0.vm02.stdout:8/435: mkdir d1/d1c/d23/d3e/d83 0 2026-03-10T10:19:36.017 INFO:tasks.workunit.client.1.vm05.stdout:5/438: dread da/db/d26/d5c/d4b/f83 [0,4194304] 0 2026-03-10T10:19:36.019 INFO:tasks.workunit.client.0.vm02.stdout:8/436: dwrite d1/d1c/d23/d3e/f5a [0,4194304] 0 2026-03-10T10:19:36.022 INFO:tasks.workunit.client.0.vm02.stdout:7/412: symlink d1/dc/l7b 0 
2026-03-10T10:19:36.028 INFO:tasks.workunit.client.1.vm05.stdout:1/420: dread d4/df/d1c/f23 [0,4194304] 0 2026-03-10T10:19:36.038 INFO:tasks.workunit.client.0.vm02.stdout:5/602: mkdir d1/db/d11/d16/d48/dcf 0 2026-03-10T10:19:36.038 INFO:tasks.workunit.client.0.vm02.stdout:5/603: dwrite d1/db/d11/d13/d28/f31 [0,4194304] 0 2026-03-10T10:19:36.038 INFO:tasks.workunit.client.0.vm02.stdout:5/604: stat d1/db/d11/l34 0 2026-03-10T10:19:36.057 INFO:tasks.workunit.client.0.vm02.stdout:4/568: getdents d1/d32 0 2026-03-10T10:19:36.058 INFO:tasks.workunit.client.0.vm02.stdout:4/569: truncate d1/d10/db/f43 936043 0 2026-03-10T10:19:36.059 INFO:tasks.workunit.client.1.vm05.stdout:0/453: truncate d1/d2/d9/d31/d13/f4c 3172759 0 2026-03-10T10:19:36.062 INFO:tasks.workunit.client.0.vm02.stdout:5/605: fdatasync d1/db/d11/d13/f4e 0 2026-03-10T10:19:36.065 INFO:tasks.workunit.client.0.vm02.stdout:2/446: dwrite d0/d1a/d49/d5e/f63 [0,4194304] 0 2026-03-10T10:19:36.100 INFO:tasks.workunit.client.1.vm05.stdout:2/408: write db/d1c/d40/f50 [584937,49580] 0 2026-03-10T10:19:36.100 INFO:tasks.workunit.client.1.vm05.stdout:4/318: dwrite d1/d3/f12 [4194304,4194304] 0 2026-03-10T10:19:36.100 INFO:tasks.workunit.client.1.vm05.stdout:4/319: dwrite d1/d3/f5f [0,4194304] 0 2026-03-10T10:19:36.100 INFO:tasks.workunit.client.0.vm02.stdout:4/570: unlink d1/d10/db/c60 0 2026-03-10T10:19:36.100 INFO:tasks.workunit.client.0.vm02.stdout:8/437: rename d1/d1c/d24/d35/d56/c57 to d1/d1c/d43/c84 0 2026-03-10T10:19:36.100 INFO:tasks.workunit.client.0.vm02.stdout:8/438: creat d1/d1c/d24/d35/d56/f85 x:0 0 0 2026-03-10T10:19:36.100 INFO:tasks.workunit.client.0.vm02.stdout:2/447: getdents d0/d1a/d49 0 2026-03-10T10:19:36.101 INFO:tasks.workunit.client.1.vm05.stdout:3/448: dwrite dd/d20/d56/f7d [0,4194304] 0 2026-03-10T10:19:36.103 INFO:tasks.workunit.client.0.vm02.stdout:3/408: dwrite d1/d6/f1b [0,4194304] 0 2026-03-10T10:19:36.110 INFO:tasks.workunit.client.0.vm02.stdout:5/606: rename d1/db/d11/d13/f1c to 
d1/db/d11/d84/d40/fd0 0 2026-03-10T10:19:36.111 INFO:tasks.workunit.client.0.vm02.stdout:3/409: rmdir d1/d20/d52 39 2026-03-10T10:19:36.111 INFO:tasks.workunit.client.1.vm05.stdout:0/454: mkdir d1/d2/d9/d50/d99 0 2026-03-10T10:19:36.111 INFO:tasks.workunit.client.0.vm02.stdout:2/448: chown d0/c2a 491660980 1 2026-03-10T10:19:36.113 INFO:tasks.workunit.client.1.vm05.stdout:1/421: sync 2026-03-10T10:19:36.113 INFO:tasks.workunit.client.1.vm05.stdout:1/422: chown d4/d39/d3e/f4d 397810 1 2026-03-10T10:19:36.115 INFO:tasks.workunit.client.1.vm05.stdout:1/423: write d4/d39/f67 [772077,35696] 0 2026-03-10T10:19:36.115 INFO:tasks.workunit.client.1.vm05.stdout:1/424: truncate d4/d39/f7b 991986 0 2026-03-10T10:19:36.117 INFO:tasks.workunit.client.1.vm05.stdout:0/455: dread d1/d2/d39/d3d/f82 [0,4194304] 0 2026-03-10T10:19:36.118 INFO:tasks.workunit.client.1.vm05.stdout:0/456: chown d1/d2/d9/d31/d54/l58 135 1 2026-03-10T10:19:36.119 INFO:tasks.workunit.client.0.vm02.stdout:7/413: sync 2026-03-10T10:19:36.122 INFO:tasks.workunit.client.1.vm05.stdout:4/320: dwrite d1/d31/dc/f33 [0,4194304] 0 2026-03-10T10:19:36.137 INFO:tasks.workunit.client.1.vm05.stdout:3/449: symlink dd/d15/d69/l9d 0 2026-03-10T10:19:36.137 INFO:tasks.workunit.client.0.vm02.stdout:5/607: mknod d1/db/d11/cd1 0 2026-03-10T10:19:36.137 INFO:tasks.workunit.client.0.vm02.stdout:5/608: truncate d1/db/d11/d16/d48/fb5 1037963 0 2026-03-10T10:19:36.137 INFO:tasks.workunit.client.0.vm02.stdout:7/414: dread d1/dc/d16/d28/d2d/f2f [0,4194304] 0 2026-03-10T10:19:36.137 INFO:tasks.workunit.client.0.vm02.stdout:7/415: write d1/dc/d16/f48 [659117,11449] 0 2026-03-10T10:19:36.137 INFO:tasks.workunit.client.0.vm02.stdout:2/449: rename d0/l2 to d0/d1a/d49/d5e/d65/l90 0 2026-03-10T10:19:36.141 INFO:tasks.workunit.client.1.vm05.stdout:0/457: rename d1/d2/d9/d31/d90 to d1/d2/d9/d50/d9a 0 2026-03-10T10:19:36.143 INFO:tasks.workunit.client.0.vm02.stdout:7/416: truncate d1/dc/d16/d28/f4e 3080152 0 2026-03-10T10:19:36.143 
INFO:tasks.workunit.client.0.vm02.stdout:7/417: write d1/dc/d16/d28/d2d/d36/d67/f76 [39473,60250] 0 2026-03-10T10:19:36.145 INFO:tasks.workunit.client.0.vm02.stdout:5/609: mknod d1/db/d11/d13/dc9/cd2 0 2026-03-10T10:19:36.146 INFO:tasks.workunit.client.0.vm02.stdout:5/610: write d1/db/d11/d84/f8a [1727072,16856] 0 2026-03-10T10:19:36.147 INFO:tasks.workunit.client.0.vm02.stdout:7/418: mkdir d1/dc/d16/d28/d2d/d7c 0 2026-03-10T10:19:36.148 INFO:tasks.workunit.client.0.vm02.stdout:2/450: dread d0/d1a/d49/d5e/d65/f6c [0,4194304] 0 2026-03-10T10:19:36.150 INFO:tasks.workunit.client.0.vm02.stdout:8/439: dread d1/f40 [0,4194304] 0 2026-03-10T10:19:36.151 INFO:tasks.workunit.client.0.vm02.stdout:5/611: readlink d1/db/d11/d84/d40/d4f/l8e 0 2026-03-10T10:19:36.151 INFO:tasks.workunit.client.0.vm02.stdout:5/612: chown d1/db/d11/d16/d79/d85/f9f 457233039 1 2026-03-10T10:19:36.154 INFO:tasks.workunit.client.0.vm02.stdout:3/410: dread d1/d6/f49 [0,4194304] 0 2026-03-10T10:19:36.155 INFO:tasks.workunit.client.1.vm05.stdout:3/450: mkdir dd/d20/d9e 0 2026-03-10T10:19:36.155 INFO:tasks.workunit.client.1.vm05.stdout:0/458: symlink d1/d2/d39/d6e/d95/l9b 0 2026-03-10T10:19:36.157 INFO:tasks.workunit.client.0.vm02.stdout:8/440: mknod d1/d1c/d24/d35/d56/c86 0 2026-03-10T10:19:36.161 INFO:tasks.workunit.client.0.vm02.stdout:8/441: truncate d1/f40 3826364 0 2026-03-10T10:19:36.162 INFO:tasks.workunit.client.0.vm02.stdout:8/442: fsync d1/f1b 0 2026-03-10T10:19:36.163 INFO:tasks.workunit.client.0.vm02.stdout:3/411: mkdir d1/d8/d86 0 2026-03-10T10:19:36.164 INFO:tasks.workunit.client.0.vm02.stdout:3/412: readlink d1/d6/l69 0 2026-03-10T10:19:36.164 INFO:tasks.workunit.client.0.vm02.stdout:3/413: fdatasync d1/d20/f7b 0 2026-03-10T10:19:36.167 INFO:tasks.workunit.client.0.vm02.stdout:8/443: dwrite d1/d1c/d23/d25/f5d [0,4194304] 0 2026-03-10T10:19:36.180 INFO:tasks.workunit.client.1.vm05.stdout:1/425: dread d4/d39/d3e/f4d [0,4194304] 0 2026-03-10T10:19:36.180 
INFO:tasks.workunit.client.1.vm05.stdout:1/426: stat d4/df/d1c/f23 0 2026-03-10T10:19:36.180 INFO:tasks.workunit.client.0.vm02.stdout:8/444: write d1/d2/f36 [4061667,4364] 0 2026-03-10T10:19:36.181 INFO:tasks.workunit.client.0.vm02.stdout:3/414: creat d1/d8/d86/f87 x:0 0 0 2026-03-10T10:19:36.181 INFO:tasks.workunit.client.0.vm02.stdout:3/415: dwrite d1/d6/f49 [0,4194304] 0 2026-03-10T10:19:36.181 INFO:tasks.workunit.client.0.vm02.stdout:3/416: write d1/d8/d86/f87 [925836,88922] 0 2026-03-10T10:19:36.181 INFO:tasks.workunit.client.0.vm02.stdout:3/417: creat d1/d8/d21/f88 x:0 0 0 2026-03-10T10:19:36.181 INFO:tasks.workunit.client.0.vm02.stdout:3/418: write d1/f77 [626478,99642] 0 2026-03-10T10:19:36.181 INFO:tasks.workunit.client.0.vm02.stdout:3/419: write d1/d20/f7b [1761653,61494] 0 2026-03-10T10:19:36.206 INFO:tasks.workunit.client.0.vm02.stdout:9/407: write da/d3c/d4c/d2c/d34/f36 [758363,23677] 0 2026-03-10T10:19:36.207 INFO:tasks.workunit.client.1.vm05.stdout:6/360: truncate dd/d36/d3f/d12/d44/d2a/d3d/d48/f4b 2181969 0 2026-03-10T10:19:36.208 INFO:tasks.workunit.client.0.vm02.stdout:9/408: creat da/d3c/d4c/d38/f84 x:0 0 0 2026-03-10T10:19:36.209 INFO:tasks.workunit.client.0.vm02.stdout:9/409: write da/d3c/d4c/d2c/d34/f57 [2316797,46202] 0 2026-03-10T10:19:36.210 INFO:tasks.workunit.client.0.vm02.stdout:9/410: chown da/d3c/d4c/f3b 247 1 2026-03-10T10:19:36.214 INFO:tasks.workunit.client.1.vm05.stdout:9/372: truncate d0/d1/fb 2105380 0 2026-03-10T10:19:36.216 INFO:tasks.workunit.client.1.vm05.stdout:9/373: write d0/f73 [913421,95859] 0 2026-03-10T10:19:36.218 INFO:tasks.workunit.client.0.vm02.stdout:6/424: dwrite d0/d8/d9/f54 [4194304,4194304] 0 2026-03-10T10:19:36.218 INFO:tasks.workunit.client.1.vm05.stdout:7/465: dwrite d5/d26/f33 [0,4194304] 0 2026-03-10T10:19:36.219 INFO:tasks.workunit.client.1.vm05.stdout:9/374: fdatasync d0/df/f3b 0 2026-03-10T10:19:36.219 INFO:tasks.workunit.client.1.vm05.stdout:8/323: dwrite d7/d14/d15/f3c [0,4194304] 0 
2026-03-10T10:19:36.219 INFO:tasks.workunit.client.1.vm05.stdout:7/466: readlink d5/l1b 0 2026-03-10T10:19:36.229 INFO:tasks.workunit.client.0.vm02.stdout:5/613: sync 2026-03-10T10:19:36.229 INFO:tasks.workunit.client.0.vm02.stdout:3/420: sync 2026-03-10T10:19:36.231 INFO:tasks.workunit.client.1.vm05.stdout:6/361: creat dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f70 x:0 0 0 2026-03-10T10:19:36.231 INFO:tasks.workunit.client.0.vm02.stdout:6/425: write d0/d8/d29/d2f/f4e [4869546,13163] 0 2026-03-10T10:19:36.244 INFO:tasks.workunit.client.0.vm02.stdout:6/426: rename d0/d8/d29/d6d/d32/f70 to d0/d8/d29/d52/f8b 0 2026-03-10T10:19:36.252 INFO:tasks.workunit.client.0.vm02.stdout:5/614: creat d1/db/fd3 x:0 0 0 2026-03-10T10:19:36.252 INFO:tasks.workunit.client.0.vm02.stdout:3/421: getdents d1/d8/d21/d73/d78 0 2026-03-10T10:19:36.252 INFO:tasks.workunit.client.1.vm05.stdout:6/362: fsync f2 0 2026-03-10T10:19:36.253 INFO:tasks.workunit.client.1.vm05.stdout:6/363: creat dd/d36/f71 x:0 0 0 2026-03-10T10:19:36.253 INFO:tasks.workunit.client.1.vm05.stdout:6/364: fdatasync f3 0 2026-03-10T10:19:36.255 INFO:tasks.workunit.client.1.vm05.stdout:6/365: rename dd/d36/d3f/d12/d58/l66 to dd/d36/d3f/d12/d44/d30/d4a/l72 0 2026-03-10T10:19:36.264 INFO:tasks.workunit.client.1.vm05.stdout:6/366: dwrite fb [0,4194304] 0 2026-03-10T10:19:36.264 INFO:tasks.workunit.client.1.vm05.stdout:6/367: getdents dd/d36/d3f/d12 0 2026-03-10T10:19:36.264 INFO:tasks.workunit.client.1.vm05.stdout:6/368: stat dd/d36 0 2026-03-10T10:19:36.264 INFO:tasks.workunit.client.1.vm05.stdout:6/369: stat dd/d36/d3f/d12/f4f 0 2026-03-10T10:19:36.264 INFO:tasks.workunit.client.0.vm02.stdout:3/422: fdatasync d1/d20/d52/f6b 0 2026-03-10T10:19:36.264 INFO:tasks.workunit.client.0.vm02.stdout:3/423: mknod d1/d58/c89 0 2026-03-10T10:19:36.264 INFO:tasks.workunit.client.0.vm02.stdout:3/424: link d1/d6/c23 d1/d58/c8a 0 2026-03-10T10:19:36.264 INFO:tasks.workunit.client.0.vm02.stdout:3/425: mkdir d1/d6/d8b 0 2026-03-10T10:19:36.264 
INFO:tasks.workunit.client.0.vm02.stdout:3/426: dread d1/d8/d21/f47 [0,4194304] 0 2026-03-10T10:19:36.265 INFO:tasks.workunit.client.0.vm02.stdout:3/427: chown d1/d20/d52/f76 948833734 1 2026-03-10T10:19:36.265 INFO:tasks.workunit.client.0.vm02.stdout:3/428: dread - d1/d8/d21/d73/f82 zero size 2026-03-10T10:19:36.265 INFO:tasks.workunit.client.0.vm02.stdout:3/429: stat d1/d8/c1f 0 2026-03-10T10:19:36.266 INFO:tasks.workunit.client.0.vm02.stdout:3/430: dread - d1/d20/d52/f6c zero size 2026-03-10T10:19:36.266 INFO:tasks.workunit.client.1.vm05.stdout:6/370: creat dd/d36/d3f/d12/d44/d2a/d3d/d3e/f73 x:0 0 0 2026-03-10T10:19:36.268 INFO:tasks.workunit.client.1.vm05.stdout:6/371: mknod dd/d36/d3f/d12/d44/d30/d4a/c74 0 2026-03-10T10:19:36.282 INFO:tasks.workunit.client.1.vm05.stdout:6/372: creat dd/d36/d3f/d12/d44/d2a/d3d/d48/f75 x:0 0 0 2026-03-10T10:19:36.283 INFO:tasks.workunit.client.1.vm05.stdout:6/373: truncate dd/d36/d3f/d12/d44/f46 4267994 0 2026-03-10T10:19:36.285 INFO:tasks.workunit.client.1.vm05.stdout:6/374: truncate dd/d36/d3f/d12/f20 5147802 0 2026-03-10T10:19:36.285 INFO:tasks.workunit.client.1.vm05.stdout:6/375: fdatasync dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f70 0 2026-03-10T10:19:36.287 INFO:tasks.workunit.client.1.vm05.stdout:6/376: truncate dd/d36/d3f/d12/f35 4997149 0 2026-03-10T10:19:36.290 INFO:tasks.workunit.client.1.vm05.stdout:6/377: creat dd/d36/d3f/d12/d44/d2a/d3d/f76 x:0 0 0 2026-03-10T10:19:36.291 INFO:tasks.workunit.client.1.vm05.stdout:6/378: mkdir dd/d36/d3f/d12/d44/d2a/d77 0 2026-03-10T10:19:36.292 INFO:tasks.workunit.client.1.vm05.stdout:6/379: write dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f6b [644290,107830] 0 2026-03-10T10:19:36.293 INFO:tasks.workunit.client.1.vm05.stdout:6/380: fdatasync dd/d36/f71 0 2026-03-10T10:19:36.293 INFO:tasks.workunit.client.1.vm05.stdout:6/381: read fb [3949465,115952] 0 2026-03-10T10:19:36.340 INFO:tasks.workunit.client.0.vm02.stdout:8/445: fdatasync d1/d2/f36 0 2026-03-10T10:19:36.341 
INFO:tasks.workunit.client.0.vm02.stdout:8/446: creat d1/d1c/d43/d6a/f87 x:0 0 0 2026-03-10T10:19:36.347 INFO:tasks.workunit.client.0.vm02.stdout:8/447: unlink d1/d1c/d23/d25/c2f 0 2026-03-10T10:19:36.385 INFO:tasks.workunit.client.1.vm05.stdout:1/427: fsync d4/d39/d3e/f3f 0 2026-03-10T10:19:36.386 INFO:tasks.workunit.client.1.vm05.stdout:1/428: dread - d4/d39/d3e/f7d zero size 2026-03-10T10:19:36.388 INFO:tasks.workunit.client.1.vm05.stdout:1/429: mkdir d4/d37/d4e/d82 0 2026-03-10T10:19:36.390 INFO:tasks.workunit.client.1.vm05.stdout:1/430: rmdir d4/df/d1c/d53/d66 39 2026-03-10T10:19:36.392 INFO:tasks.workunit.client.1.vm05.stdout:1/431: dread - d4/d39/d3e/f7d zero size 2026-03-10T10:19:36.393 INFO:tasks.workunit.client.1.vm05.stdout:1/432: mkdir d4/d79/d83 0 2026-03-10T10:19:36.402 INFO:tasks.workunit.client.1.vm05.stdout:1/433: getdents d4/d39 0 2026-03-10T10:19:36.402 INFO:tasks.workunit.client.1.vm05.stdout:1/434: readlink d4/df/d1c/l59 0 2026-03-10T10:19:36.404 INFO:tasks.workunit.client.1.vm05.stdout:1/435: symlink d4/df/d76/l84 0 2026-03-10T10:19:36.405 INFO:tasks.workunit.client.1.vm05.stdout:1/436: fdatasync d4/d3d/f4c 0 2026-03-10T10:19:36.411 INFO:tasks.workunit.client.0.vm02.stdout:1/445: fdatasync d4/fe 0 2026-03-10T10:19:36.413 INFO:tasks.workunit.client.0.vm02.stdout:1/446: mkdir d4/d2c/d91 0 2026-03-10T10:19:36.415 INFO:tasks.workunit.client.1.vm05.stdout:1/437: dwrite d4/d39/f7b [0,4194304] 0 2026-03-10T10:19:36.427 INFO:tasks.workunit.client.0.vm02.stdout:2/451: dread d0/d10/f6a [0,4194304] 0 2026-03-10T10:19:36.430 INFO:tasks.workunit.client.1.vm05.stdout:1/438: dwrite d4/d39/f67 [0,4194304] 0 2026-03-10T10:19:36.436 INFO:tasks.workunit.client.1.vm05.stdout:1/439: rename d4/df/c24 to d4/d20/d70/c85 0 2026-03-10T10:19:36.440 INFO:tasks.workunit.client.1.vm05.stdout:1/440: stat d4/d20/c28 0 2026-03-10T10:19:36.441 INFO:tasks.workunit.client.1.vm05.stdout:1/441: write d4/d39/d3e/f4d [1459838,56646] 0 2026-03-10T10:19:36.445 
INFO:tasks.workunit.client.1.vm05.stdout:1/442: symlink d4/l86 0 2026-03-10T10:19:36.445 INFO:tasks.workunit.client.1.vm05.stdout:1/443: stat d4/d37/d4e/c72 0 2026-03-10T10:19:36.449 INFO:tasks.workunit.client.1.vm05.stdout:1/444: unlink d4/d39/f3a 0 2026-03-10T10:19:36.449 INFO:tasks.workunit.client.1.vm05.stdout:1/445: chown d4/df/d1c/f23 14 1 2026-03-10T10:19:36.451 INFO:tasks.workunit.client.1.vm05.stdout:1/446: unlink d4/d3d/c4f 0 2026-03-10T10:19:36.454 INFO:tasks.workunit.client.1.vm05.stdout:1/447: dwrite d4/f36 [0,4194304] 0 2026-03-10T10:19:36.455 INFO:tasks.workunit.client.1.vm05.stdout:1/448: dread - d4/d3d/f4c zero size 2026-03-10T10:19:36.457 INFO:tasks.workunit.client.1.vm05.stdout:1/449: symlink d4/d37/l87 0 2026-03-10T10:19:36.459 INFO:tasks.workunit.client.1.vm05.stdout:1/450: mkdir d4/d39/d88 0 2026-03-10T10:19:36.461 INFO:tasks.workunit.client.1.vm05.stdout:1/451: truncate d4/df/f73 153683 0 2026-03-10T10:19:36.473 INFO:tasks.workunit.client.1.vm05.stdout:1/452: dwrite d4/dd/f60 [0,4194304] 0 2026-03-10T10:19:36.478 INFO:tasks.workunit.client.1.vm05.stdout:1/453: readlink d4/d37/l5a 0 2026-03-10T10:19:36.485 INFO:tasks.workunit.client.1.vm05.stdout:1/454: dwrite d4/df/d1c/f38 [0,4194304] 0 2026-03-10T10:19:36.491 INFO:tasks.workunit.client.1.vm05.stdout:1/455: truncate d4/f46 1247565 0 2026-03-10T10:19:36.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:36 vm02.local ceph-mon[50200]: pgmap v157: 65 pgs: 65 active+clean; 1.7 GiB data, 6.2 GiB used, 114 GiB / 120 GiB avail; 32 MiB/s rd, 121 MiB/s wr, 232 op/s 2026-03-10T10:19:36.539 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:36 vm05.local ceph-mon[59051]: pgmap v157: 65 pgs: 65 active+clean; 1.7 GiB data, 6.2 GiB used, 114 GiB / 120 GiB avail; 32 MiB/s rd, 121 MiB/s wr, 232 op/s 2026-03-10T10:19:36.602 INFO:tasks.workunit.client.0.vm02.stdout:4/571: rmdir d1/d10 39 2026-03-10T10:19:36.602 INFO:tasks.workunit.client.0.vm02.stdout:4/572: chown d1/d41/d5e/d78/d1a/d49 8568 1 
2026-03-10T10:19:36.605 INFO:tasks.workunit.client.0.vm02.stdout:4/573: link d1/d41/d5e/d78/d1a/c26 d1/d41/d5e/d78/d37/cbc 0 2026-03-10T10:19:36.609 INFO:tasks.workunit.client.0.vm02.stdout:4/574: dwrite d1/d41/d5e/d78/d7f/f74 [0,4194304] 0 2026-03-10T10:19:36.622 INFO:tasks.workunit.client.0.vm02.stdout:4/575: creat d1/d52/fbd x:0 0 0 2026-03-10T10:19:36.622 INFO:tasks.workunit.client.0.vm02.stdout:4/576: chown d1/f9d 185594 1 2026-03-10T10:19:36.624 INFO:tasks.workunit.client.1.vm05.stdout:5/439: write da/db/f6d [945663,49136] 0 2026-03-10T10:19:36.627 INFO:tasks.workunit.client.0.vm02.stdout:4/577: dwrite d1/d41/d5e/d78/d1a/d49/f7a [0,4194304] 0 2026-03-10T10:19:36.641 INFO:tasks.workunit.client.0.vm02.stdout:4/578: link d1/d41/d5e/d78/d7f/ca0 d1/d32/da3/cbe 0 2026-03-10T10:19:36.644 INFO:tasks.workunit.client.0.vm02.stdout:4/579: link d1/d41/d5e/d78/d1a/d49/l72 d1/d41/d5e/d78/d55/lbf 0 2026-03-10T10:19:36.646 INFO:tasks.workunit.client.0.vm02.stdout:4/580: unlink d1/d41/d5e/d78/d1a/f98 0 2026-03-10T10:19:36.655 INFO:tasks.workunit.client.0.vm02.stdout:4/581: mkdir d1/d32/da3/dc0 0 2026-03-10T10:19:36.655 INFO:tasks.workunit.client.0.vm02.stdout:4/582: chown d1/d75 17 1 2026-03-10T10:19:36.655 INFO:tasks.workunit.client.0.vm02.stdout:4/583: dread - d1/f9d zero size 2026-03-10T10:19:36.657 INFO:tasks.workunit.client.0.vm02.stdout:4/584: dread d1/d10/db/f43 [0,4194304] 0 2026-03-10T10:19:36.663 INFO:tasks.workunit.client.1.vm05.stdout:2/409: truncate db/f26 1792201 0 2026-03-10T10:19:36.667 INFO:tasks.workunit.client.1.vm05.stdout:2/410: mkdir db/d1c/d40/d80 0 2026-03-10T10:19:36.668 INFO:tasks.workunit.client.1.vm05.stdout:5/440: sync 2026-03-10T10:19:36.671 INFO:tasks.workunit.client.1.vm05.stdout:5/441: dread - da/db/d26/d5c/f6b zero size 2026-03-10T10:19:36.671 INFO:tasks.workunit.client.1.vm05.stdout:5/442: symlink da/db/d26/d5c/l98 0 2026-03-10T10:19:36.671 INFO:tasks.workunit.client.1.vm05.stdout:2/411: rmdir db/d4e/d6c/d6d 0 2026-03-10T10:19:36.676 
INFO:tasks.workunit.client.1.vm05.stdout:2/412: dread - db/d28/d4f/f75 zero size 2026-03-10T10:19:36.676 INFO:tasks.workunit.client.1.vm05.stdout:5/443: fsync da/db/d26/d5c/f46 0 2026-03-10T10:19:36.676 INFO:tasks.workunit.client.1.vm05.stdout:2/413: mkdir db/d4e/d81 0 2026-03-10T10:19:36.676 INFO:tasks.workunit.client.1.vm05.stdout:2/414: chown db/d2d/l2f 94100723 1 2026-03-10T10:19:36.676 INFO:tasks.workunit.client.1.vm05.stdout:2/415: chown db/d1c/d40/f50 4567124 1 2026-03-10T10:19:36.676 INFO:tasks.workunit.client.1.vm05.stdout:5/444: chown da/db/l42 498186 1 2026-03-10T10:19:36.678 INFO:tasks.workunit.client.1.vm05.stdout:5/445: symlink da/d96/l99 0 2026-03-10T10:19:36.679 INFO:tasks.workunit.client.1.vm05.stdout:5/446: mkdir da/d9a 0 2026-03-10T10:19:36.681 INFO:tasks.workunit.client.1.vm05.stdout:5/447: chown da/db/d28/d32/f79 363076 1 2026-03-10T10:19:36.684 INFO:tasks.workunit.client.1.vm05.stdout:5/448: rmdir da/db/d26/d35/d38 39 2026-03-10T10:19:36.686 INFO:tasks.workunit.client.1.vm05.stdout:5/449: dread - da/db/d26/d35/f74 zero size 2026-03-10T10:19:36.704 INFO:tasks.workunit.client.1.vm05.stdout:2/416: dread db/d28/f3f [0,4194304] 0 2026-03-10T10:19:36.706 INFO:tasks.workunit.client.1.vm05.stdout:2/417: truncate db/d1c/f1f 4210240 0 2026-03-10T10:19:36.708 INFO:tasks.workunit.client.1.vm05.stdout:2/418: mknod db/d28/d4f/d59/c82 0 2026-03-10T10:19:36.722 INFO:tasks.workunit.client.1.vm05.stdout:4/321: dwrite d1/d31/f1a [0,4194304] 0 2026-03-10T10:19:36.722 INFO:tasks.workunit.client.0.vm02.stdout:7/419: truncate d1/dc/d16/f48 306143 0 2026-03-10T10:19:36.725 INFO:tasks.workunit.client.1.vm05.stdout:4/322: write d1/d31/dc/f69 [894555,9998] 0 2026-03-10T10:19:36.726 INFO:tasks.workunit.client.0.vm02.stdout:7/420: dwrite d1/dc/f25 [0,4194304] 0 2026-03-10T10:19:36.730 INFO:tasks.workunit.client.0.vm02.stdout:7/421: dread d1/dc/f25 [0,4194304] 0 2026-03-10T10:19:36.730 INFO:tasks.workunit.client.0.vm02.stdout:7/422: chown d1/dc/d10/d38/l56 8772041 1 
2026-03-10T10:19:36.733 INFO:tasks.workunit.client.0.vm02.stdout:7/423: creat d1/dc/d10/f7d x:0 0 0 2026-03-10T10:19:36.735 INFO:tasks.workunit.client.1.vm05.stdout:4/323: creat d1/d3/d65/f6a x:0 0 0 2026-03-10T10:19:36.738 INFO:tasks.workunit.client.1.vm05.stdout:4/324: mknod d1/d31/dc/c6b 0 2026-03-10T10:19:36.741 INFO:tasks.workunit.client.1.vm05.stdout:3/451: dwrite dd/fe [0,4194304] 0 2026-03-10T10:19:36.745 INFO:tasks.workunit.client.1.vm05.stdout:4/325: truncate d1/d3/f4a 356962 0 2026-03-10T10:19:36.752 INFO:tasks.workunit.client.1.vm05.stdout:3/452: link dd/d15/d24/l59 dd/d20/l9f 0 2026-03-10T10:19:36.753 INFO:tasks.workunit.client.1.vm05.stdout:3/453: write dd/d15/d24/f42 [1560822,84510] 0 2026-03-10T10:19:36.754 INFO:tasks.workunit.client.1.vm05.stdout:3/454: symlink dd/d15/d24/la0 0 2026-03-10T10:19:36.777 INFO:tasks.workunit.client.1.vm05.stdout:0/459: dwrite d1/d2/f21 [0,4194304] 0 2026-03-10T10:19:36.779 INFO:tasks.workunit.client.1.vm05.stdout:3/455: sync 2026-03-10T10:19:36.779 INFO:tasks.workunit.client.1.vm05.stdout:0/460: dread - d1/d2/d9/d31/d12/d20/f81 zero size 2026-03-10T10:19:36.785 INFO:tasks.workunit.client.1.vm05.stdout:3/456: rename dd/f65 to dd/d15/d24/d2c/d6d/fa1 0 2026-03-10T10:19:36.787 INFO:tasks.workunit.client.0.vm02.stdout:9/411: dwrite da/f1f [0,4194304] 0 2026-03-10T10:19:36.798 INFO:tasks.workunit.client.0.vm02.stdout:6/427: rename d0/d8/d29/d6d/d32 to d0/d8/d8c 0 2026-03-10T10:19:36.798 INFO:tasks.workunit.client.0.vm02.stdout:5/615: rename d1/db/d11/d16/d79/d85/d93/fc5 to d1/db/d11/d84/d95/fd4 0 2026-03-10T10:19:36.798 INFO:tasks.workunit.client.1.vm05.stdout:8/324: write d7/d14/f33 [1015357,113850] 0 2026-03-10T10:19:36.799 INFO:tasks.workunit.client.1.vm05.stdout:8/325: write d7/d14/f33 [764896,110188] 0 2026-03-10T10:19:36.799 INFO:tasks.workunit.client.1.vm05.stdout:8/326: dread - d7/d14/f55 zero size 2026-03-10T10:19:36.802 INFO:tasks.workunit.client.1.vm05.stdout:8/327: mknod d7/d14/d24/d3f/d4f/c5e 0 
2026-03-10T10:19:36.803 INFO:tasks.workunit.client.1.vm05.stdout:8/328: chown d7/d14/f4e 3250572 1 2026-03-10T10:19:36.803 INFO:tasks.workunit.client.1.vm05.stdout:7/467: dwrite d5/fe [0,4194304] 0 2026-03-10T10:19:36.805 INFO:tasks.workunit.client.0.vm02.stdout:3/431: dwrite d1/f1c [0,4194304] 0 2026-03-10T10:19:36.813 INFO:tasks.workunit.client.1.vm05.stdout:7/468: dwrite d5/d1d/f32 [0,4194304] 0 2026-03-10T10:19:36.818 INFO:tasks.workunit.client.0.vm02.stdout:8/448: write d1/d2/f27 [1521960,100012] 0 2026-03-10T10:19:36.818 INFO:tasks.workunit.client.1.vm05.stdout:8/329: creat d7/d14/d3a/f5f x:0 0 0 2026-03-10T10:19:36.824 INFO:tasks.workunit.client.0.vm02.stdout:0/460: dwrite d9/d18/d1a/d22/d24/f4f [0,4194304] 0 2026-03-10T10:19:36.825 INFO:tasks.workunit.client.0.vm02.stdout:0/461: dread d9/d18/f1e [4194304,4194304] 0 2026-03-10T10:19:36.832 INFO:tasks.workunit.client.1.vm05.stdout:7/469: dread d5/d26/f33 [0,4194304] 0 2026-03-10T10:19:36.835 INFO:tasks.workunit.client.0.vm02.stdout:9/412: link da/l24 da/d3c/d4c/d56/l85 0 2026-03-10T10:19:36.835 INFO:tasks.workunit.client.0.vm02.stdout:9/413: fsync da/d3c/d4c/d2c/d34/f4d 0 2026-03-10T10:19:36.836 INFO:tasks.workunit.client.0.vm02.stdout:9/414: chown da/d3c/d4c/f23 17278088 1 2026-03-10T10:19:36.837 INFO:tasks.workunit.client.0.vm02.stdout:9/415: write da/d3c/d4c/d38/d4a/f59 [1589720,113020] 0 2026-03-10T10:19:36.841 INFO:tasks.workunit.client.0.vm02.stdout:3/432: dread d1/d8/d21/f2f [0,4194304] 0 2026-03-10T10:19:36.844 INFO:tasks.workunit.client.0.vm02.stdout:2/452: dwrite d0/d10/f4b [0,4194304] 0 2026-03-10T10:19:36.850 INFO:tasks.workunit.client.0.vm02.stdout:1/447: rename d4/da/d1a/c36 to d4/da/d1a/d47/d88/c92 0 2026-03-10T10:19:36.855 INFO:tasks.workunit.client.0.vm02.stdout:5/616: symlink d1/db/d11/d16/ld5 0 2026-03-10T10:19:36.859 INFO:tasks.workunit.client.0.vm02.stdout:5/617: dwrite d1/db/d11/d84/d40/d4f/d5f/f6b [0,4194304] 0 2026-03-10T10:19:36.866 INFO:tasks.workunit.client.1.vm05.stdout:8/330: 
symlink d7/d2f/d57/l60 0 2026-03-10T10:19:36.870 INFO:tasks.workunit.client.0.vm02.stdout:3/433: symlink d1/d20/d52/l8c 0 2026-03-10T10:19:36.871 INFO:tasks.workunit.client.1.vm05.stdout:1/456: rmdir d4/d39 39 2026-03-10T10:19:36.871 INFO:tasks.workunit.client.1.vm05.stdout:7/470: symlink d5/d1d/d20/l8f 0 2026-03-10T10:19:36.873 INFO:tasks.workunit.client.1.vm05.stdout:8/331: read f6 [428704,10414] 0 2026-03-10T10:19:36.879 INFO:tasks.workunit.client.0.vm02.stdout:2/453: dread d0/d1a/d24/f6e [0,4194304] 0 2026-03-10T10:19:36.885 INFO:tasks.workunit.client.1.vm05.stdout:1/457: truncate d4/d39/f7b 4290505 0 2026-03-10T10:19:36.885 INFO:tasks.workunit.client.0.vm02.stdout:1/448: mkdir d4/da/d1a/d5b/d93 0 2026-03-10T10:19:36.886 INFO:tasks.workunit.client.1.vm05.stdout:7/471: mkdir d5/d1d/d29/d3e/d8c/d82/d90 0 2026-03-10T10:19:36.893 INFO:tasks.workunit.client.0.vm02.stdout:4/585: write d1/d52/f5a [2662172,12794] 0 2026-03-10T10:19:36.893 INFO:tasks.workunit.client.1.vm05.stdout:7/472: readlink d5/d17/l43 0 2026-03-10T10:19:36.894 INFO:tasks.workunit.client.1.vm05.stdout:8/332: creat d7/d14/d24/f61 x:0 0 0 2026-03-10T10:19:36.894 INFO:tasks.workunit.client.1.vm05.stdout:1/458: creat d4/d37/f89 x:0 0 0 2026-03-10T10:19:36.894 INFO:tasks.workunit.client.1.vm05.stdout:7/473: rmdir d5/d1d/d20/d35 39 2026-03-10T10:19:36.894 INFO:tasks.workunit.client.1.vm05.stdout:1/459: chown d4/d39/d3e/c47 25110 1 2026-03-10T10:19:36.894 INFO:tasks.workunit.client.0.vm02.stdout:1/449: rmdir d4/da 39 2026-03-10T10:19:36.895 INFO:tasks.workunit.client.1.vm05.stdout:1/460: rmdir d4/d37/d4e 39 2026-03-10T10:19:36.896 INFO:tasks.workunit.client.1.vm05.stdout:1/461: chown d4/df/c35 45 1 2026-03-10T10:19:36.897 INFO:tasks.workunit.client.0.vm02.stdout:0/462: rename d9/d18/d1a/d22/d24/d80/d57/c63 to d9/d18/c8f 0 2026-03-10T10:19:36.898 INFO:tasks.workunit.client.0.vm02.stdout:1/450: write d4/d1b/f6f [814030,6431] 0 2026-03-10T10:19:36.904 INFO:tasks.workunit.client.1.vm05.stdout:8/333: dread 
d7/d14/f22 [4194304,4194304] 0 2026-03-10T10:19:36.905 INFO:tasks.workunit.client.0.vm02.stdout:2/454: link d0/f72 d0/f91 0 2026-03-10T10:19:36.907 INFO:tasks.workunit.client.0.vm02.stdout:0/463: dread d9/d18/d1a/d22/d24/f2f [0,4194304] 0 2026-03-10T10:19:36.907 INFO:tasks.workunit.client.1.vm05.stdout:7/474: dread d5/d1d/d20/d35/f36 [0,4194304] 0 2026-03-10T10:19:36.908 INFO:tasks.workunit.client.1.vm05.stdout:7/475: write d5/f76 [196566,64906] 0 2026-03-10T10:19:36.908 INFO:tasks.workunit.client.1.vm05.stdout:1/462: mknod d4/d37/c8a 0 2026-03-10T10:19:36.909 INFO:tasks.workunit.client.1.vm05.stdout:1/463: dread - d4/d3d/f4c zero size 2026-03-10T10:19:36.909 INFO:tasks.workunit.client.1.vm05.stdout:8/334: mkdir d7/d14/d62 0 2026-03-10T10:19:36.910 INFO:tasks.workunit.client.0.vm02.stdout:5/618: getdents d1/db/d11/d84 0 2026-03-10T10:19:36.912 INFO:tasks.workunit.client.0.vm02.stdout:2/455: rename d0/d1a/d49/l87 to d0/d1a/d49/d5e/l92 0 2026-03-10T10:19:36.916 INFO:tasks.workunit.client.0.vm02.stdout:1/451: unlink d4/da/c2d 0 2026-03-10T10:19:36.918 INFO:tasks.workunit.client.0.vm02.stdout:0/464: dwrite d9/d34/d3d/d65/f7a [0,4194304] 0 2026-03-10T10:19:36.922 INFO:tasks.workunit.client.0.vm02.stdout:0/465: read d9/d18/d1a/d22/d24/f4f [2835436,118900] 0 2026-03-10T10:19:36.922 INFO:tasks.workunit.client.1.vm05.stdout:8/335: unlink d7/f8 0 2026-03-10T10:19:36.935 INFO:tasks.workunit.client.0.vm02.stdout:2/456: getdents d0/d71 0 2026-03-10T10:19:36.935 INFO:tasks.workunit.client.1.vm05.stdout:1/464: creat d4/d79/f8b x:0 0 0 2026-03-10T10:19:36.935 INFO:tasks.workunit.client.1.vm05.stdout:1/465: chown d4/dd/f60 300863 1 2026-03-10T10:19:36.935 INFO:tasks.workunit.client.1.vm05.stdout:7/476: mkdir d5/d1d/d20/d91 0 2026-03-10T10:19:36.943 INFO:tasks.workunit.client.1.vm05.stdout:7/477: fsync d5/d1d/d20/d2d/f3d 0 2026-03-10T10:19:36.944 INFO:tasks.workunit.client.0.vm02.stdout:9/416: fsync da/f5c 0 2026-03-10T10:19:36.949 INFO:tasks.workunit.client.0.vm02.stdout:0/466: 
link d9/d34/d3d/d65/f84 d9/d18/d1a/d22/d24/d80/f90 0 2026-03-10T10:19:36.956 INFO:tasks.workunit.client.1.vm05.stdout:1/466: dread d4/df/d1c/f2a [4194304,4194304] 0 2026-03-10T10:19:36.956 INFO:tasks.workunit.client.0.vm02.stdout:2/457: creat d0/d10/f93 x:0 0 0 2026-03-10T10:19:36.956 INFO:tasks.workunit.client.0.vm02.stdout:9/417: mknod da/d3c/d4c/d38/d82/c86 0 2026-03-10T10:19:36.956 INFO:tasks.workunit.client.0.vm02.stdout:2/458: symlink d0/d1a/d49/d5e/l94 0 2026-03-10T10:19:36.956 INFO:tasks.workunit.client.0.vm02.stdout:0/467: mkdir d9/d18/d1a/d22/d24/d8e/d91 0 2026-03-10T10:19:36.957 INFO:tasks.workunit.client.1.vm05.stdout:1/467: rename d4/df/d76/l84 to d4/d79/l8c 0 2026-03-10T10:19:36.957 INFO:tasks.workunit.client.1.vm05.stdout:7/478: creat d5/d26/f92 x:0 0 0 2026-03-10T10:19:36.958 INFO:tasks.workunit.client.0.vm02.stdout:0/468: creat d9/d18/d1a/d3c/f92 x:0 0 0 2026-03-10T10:19:36.958 INFO:tasks.workunit.client.1.vm05.stdout:7/479: chown d5/c6 1344 1 2026-03-10T10:19:36.959 INFO:tasks.workunit.client.0.vm02.stdout:2/459: symlink d0/d10/l95 0 2026-03-10T10:19:36.960 INFO:tasks.workunit.client.0.vm02.stdout:2/460: write d0/d1a/d49/d5e/f60 [1018173,73051] 0 2026-03-10T10:19:36.967 INFO:tasks.workunit.client.1.vm05.stdout:1/468: dwrite d4/df/d76/f7e [0,4194304] 0 2026-03-10T10:19:36.972 INFO:tasks.workunit.client.1.vm05.stdout:7/480: dwrite d5/d1d/d20/d2d/f4c [0,4194304] 0 2026-03-10T10:19:36.981 INFO:tasks.workunit.client.1.vm05.stdout:1/469: unlink d4/d3d/d6e/f74 0 2026-03-10T10:19:36.988 INFO:tasks.workunit.client.1.vm05.stdout:7/481: creat d5/d1d/d29/d3e/d8c/d7f/f93 x:0 0 0 2026-03-10T10:19:36.990 INFO:tasks.workunit.client.1.vm05.stdout:1/470: creat d4/d79/f8d x:0 0 0 2026-03-10T10:19:36.990 INFO:tasks.workunit.client.1.vm05.stdout:7/482: creat d5/d17/d66/f94 x:0 0 0 2026-03-10T10:19:36.990 INFO:tasks.workunit.client.1.vm05.stdout:7/483: fsync d5/d17/f4f 0 2026-03-10T10:19:36.990 INFO:tasks.workunit.client.1.vm05.stdout:1/471: symlink d4/df/d1c/d53/l8e 0 
2026-03-10T10:19:36.993 INFO:tasks.workunit.client.1.vm05.stdout:2/419: link db/d1c/f1f db/d1c/d40/d62/f83 0 2026-03-10T10:19:36.993 INFO:tasks.workunit.client.1.vm05.stdout:1/472: fsync d4/d39/f67 0 2026-03-10T10:19:36.995 INFO:tasks.workunit.client.1.vm05.stdout:5/450: dwrite da/db/d26/d35/d38/f48 [0,4194304] 0 2026-03-10T10:19:36.997 INFO:tasks.workunit.client.1.vm05.stdout:1/473: symlink d4/d3d/d6e/l8f 0 2026-03-10T10:19:36.998 INFO:tasks.workunit.client.1.vm05.stdout:1/474: readlink d4/d20/l48 0 2026-03-10T10:19:37.008 INFO:tasks.workunit.client.1.vm05.stdout:1/475: creat d4/d37/f90 x:0 0 0 2026-03-10T10:19:37.008 INFO:tasks.workunit.client.1.vm05.stdout:1/476: chown d4/df/d1c/d53 181646625 1 2026-03-10T10:19:37.011 INFO:tasks.workunit.client.1.vm05.stdout:1/477: truncate d4/d3d/f4c 464991 0 2026-03-10T10:19:37.019 INFO:tasks.workunit.client.1.vm05.stdout:2/420: dread db/d1c/d40/f50 [0,4194304] 0 2026-03-10T10:19:37.023 INFO:tasks.workunit.client.1.vm05.stdout:1/478: dwrite d4/d37/d4e/f62 [4194304,4194304] 0 2026-03-10T10:19:37.027 INFO:tasks.workunit.client.1.vm05.stdout:5/451: sync 2026-03-10T10:19:37.027 INFO:tasks.workunit.client.1.vm05.stdout:1/479: chown d4/df/f73 3821556 1 2026-03-10T10:19:37.028 INFO:tasks.workunit.client.1.vm05.stdout:5/452: chown da/db/f29 85414 1 2026-03-10T10:19:37.034 INFO:tasks.workunit.client.1.vm05.stdout:5/453: readlink da/l3d 0 2026-03-10T10:19:37.034 INFO:tasks.workunit.client.1.vm05.stdout:1/480: dwrite d4/d39/f67 [0,4194304] 0 2026-03-10T10:19:37.035 INFO:tasks.workunit.client.1.vm05.stdout:5/454: readlink da/d96/l99 0 2026-03-10T10:19:37.035 INFO:tasks.workunit.client.1.vm05.stdout:1/481: fsync d4/d20/f2d 0 2026-03-10T10:19:37.042 INFO:tasks.workunit.client.1.vm05.stdout:1/482: dread d4/d20/f31 [0,4194304] 0 2026-03-10T10:19:37.044 INFO:tasks.workunit.client.1.vm05.stdout:1/483: dread d4/df/d1c/f23 [0,4194304] 0 2026-03-10T10:19:37.046 INFO:tasks.workunit.client.1.vm05.stdout:1/484: dread - d4/df/d1c/d53/f6b zero size 
2026-03-10T10:19:37.047 INFO:tasks.workunit.client.0.vm02.stdout:7/424: dwrite d1/dc/d44/f75 [0,4194304] 0
2026-03-10T10:19:37.048 INFO:tasks.workunit.client.0.vm02.stdout:7/425: stat d1/dc/d10/d38/l74 0
2026-03-10T10:19:37.050 INFO:tasks.workunit.client.0.vm02.stdout:7/426: truncate d1/f15 4774493 0
2026-03-10T10:19:37.051 INFO:tasks.workunit.client.0.vm02.stdout:7/427: chown d1/dc/d16/d28/d2d/d7c 19 1
2026-03-10T10:19:37.061 INFO:tasks.workunit.client.1.vm05.stdout:4/326: dwrite d1/d31/dc/f53 [0,4194304] 0
2026-03-10T10:19:37.065 INFO:tasks.workunit.client.1.vm05.stdout:1/485: dwrite d4/df/d1c/f63 [4194304,4194304] 0
2026-03-10T10:19:37.082 INFO:tasks.workunit.client.1.vm05.stdout:5/455: rmdir da/db/d26/d5c/d4b/d77 0
2026-03-10T10:19:37.088 INFO:tasks.workunit.client.1.vm05.stdout:4/327: creat d1/d3/f6c x:0 0 0
2026-03-10T10:19:37.092 INFO:tasks.workunit.client.1.vm05.stdout:5/456: symlink da/d9a/l9b 0
2026-03-10T10:19:37.096 INFO:tasks.workunit.client.1.vm05.stdout:6/382: dwrite dd/d36/d3f/d12/d44/d2a/d3d/d48/f4b [0,4194304] 0
2026-03-10T10:19:37.102 INFO:tasks.workunit.client.1.vm05.stdout:5/457: truncate da/f10 9669359 0
2026-03-10T10:19:37.106 INFO:tasks.workunit.client.1.vm05.stdout:1/486: dread d4/d20/f2c [4194304,4194304] 0
2026-03-10T10:19:37.111 INFO:tasks.workunit.client.1.vm05.stdout:6/383: creat dd/d36/d3f/d12/d44/d63/f78 x:0 0 0
2026-03-10T10:19:37.115 INFO:tasks.workunit.client.1.vm05.stdout:5/458: link da/db/c8c da/db/d26/d5c/d4b/c9c 0
2026-03-10T10:19:37.116 INFO:tasks.workunit.client.1.vm05.stdout:6/384: dwrite dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f6b [0,4194304] 0
2026-03-10T10:19:37.127 INFO:tasks.workunit.client.1.vm05.stdout:6/385: symlink dd/l79 0
2026-03-10T10:19:37.128 INFO:tasks.workunit.client.1.vm05.stdout:6/386: read - dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f70 zero size
2026-03-10T10:19:37.132 INFO:tasks.workunit.client.1.vm05.stdout:1/487: sync
2026-03-10T10:19:37.134 INFO:tasks.workunit.client.1.vm05.stdout:6/387: link dd/d36/d3f/c31 dd/d36/d3f/d12/d44/d30/d4a/c7a 0
2026-03-10T10:19:37.134 INFO:tasks.workunit.client.1.vm05.stdout:1/488: chown d4/df/d76/f7e 0 1
2026-03-10T10:19:37.153 INFO:tasks.workunit.client.1.vm05.stdout:6/388: fdatasync f3 0
2026-03-10T10:19:37.154 INFO:tasks.workunit.client.1.vm05.stdout:9/375: write d0/d1/fb [2865989,84] 0
2026-03-10T10:19:37.154 INFO:tasks.workunit.client.1.vm05.stdout:1/489: write d4/d37/f89 [39932,75061] 0
2026-03-10T10:19:37.154 INFO:tasks.workunit.client.1.vm05.stdout:6/389: dread - dd/d36/d3f/d12/d44/d63/f78 zero size
2026-03-10T10:19:37.154 INFO:tasks.workunit.client.1.vm05.stdout:9/376: symlink d0/df/d11/l7a 0
2026-03-10T10:19:37.154 INFO:tasks.workunit.client.1.vm05.stdout:9/377: chown d0/d70/f79 14772 1
2026-03-10T10:19:37.154 INFO:tasks.workunit.client.1.vm05.stdout:0/461: dwrite d1/d2/d9/d31/d12/d20/f37 [0,4194304] 0
2026-03-10T10:19:37.154 INFO:tasks.workunit.client.1.vm05.stdout:6/390: fdatasync dd/d36/d3f/d12/d58/f5a 0
2026-03-10T10:19:37.159 INFO:tasks.workunit.client.1.vm05.stdout:9/378: dread d0/d1/d13/de/f5b [0,4194304] 0
2026-03-10T10:19:37.160 INFO:tasks.workunit.client.1.vm05.stdout:6/391: creat dd/d36/d3f/d12/d58/f7b x:0 0 0
2026-03-10T10:19:37.160 INFO:tasks.workunit.client.1.vm05.stdout:0/462: unlink d1/d2/d9/d31/d12/d20/l53 0
2026-03-10T10:19:37.160 INFO:tasks.workunit.client.1.vm05.stdout:1/490: dwrite d4/d20/f49 [0,4194304] 0
2026-03-10T10:19:37.163 INFO:tasks.workunit.client.1.vm05.stdout:6/392: unlink dd/d36/f5f 0
2026-03-10T10:19:37.173 INFO:tasks.workunit.client.1.vm05.stdout:9/379: creat d0/d1/f7b x:0 0 0
2026-03-10T10:19:37.173 INFO:tasks.workunit.client.1.vm05.stdout:1/491: dwrite d4/f46 [0,4194304] 0
2026-03-10T10:19:37.174 INFO:tasks.workunit.client.1.vm05.stdout:1/492: readlink d4/d39/d3e/l6f 0
2026-03-10T10:19:37.175 INFO:tasks.workunit.client.1.vm05.stdout:6/393: truncate dd/d36/d3f/d12/d44/f2f 542155 0
2026-03-10T10:19:37.175 INFO:tasks.workunit.client.1.vm05.stdout:0/463: creat d1/d2/d9/d31/d13/f9c x:0 0 0
2026-03-10T10:19:37.176 INFO:tasks.workunit.client.1.vm05.stdout:0/464: stat d1/d2/d9/d31/d13/d15/d4e/f89 0
2026-03-10T10:19:37.178 INFO:tasks.workunit.client.1.vm05.stdout:9/380: creat d0/d1/d13/d26/f7c x:0 0 0
2026-03-10T10:19:37.179 INFO:tasks.workunit.client.1.vm05.stdout:6/394: write dd/d36/d3f/d12/d44/d2a/d3d/d48/f75 [692653,45823] 0
2026-03-10T10:19:37.183 INFO:tasks.workunit.client.1.vm05.stdout:9/381: mkdir d0/d1/d13/d55/d7d 0
2026-03-10T10:19:37.183 INFO:tasks.workunit.client.1.vm05.stdout:6/395: truncate dd/d36/f71 648401 0
2026-03-10T10:19:37.184 INFO:tasks.workunit.client.1.vm05.stdout:6/396: readlink dd/d36/d3f/d12/l13 0
2026-03-10T10:19:37.187 INFO:tasks.workunit.client.1.vm05.stdout:9/382: write d0/f73 [3350617,122644] 0
2026-03-10T10:19:37.193 INFO:tasks.workunit.client.1.vm05.stdout:9/383: stat d0/d1/f6d 0
2026-03-10T10:19:37.193 INFO:tasks.workunit.client.1.vm05.stdout:1/493: dread d4/d20/f31 [0,4194304] 0
2026-03-10T10:19:37.193 INFO:tasks.workunit.client.1.vm05.stdout:9/384: link d0/f28 d0/d1/d4c/f7e 0
2026-03-10T10:19:37.193 INFO:tasks.workunit.client.1.vm05.stdout:9/385: chown d0/d1/d16/d6e 643608924 1
2026-03-10T10:19:37.193 INFO:tasks.workunit.client.1.vm05.stdout:1/494: read d4/d39/f54 [62294,83346] 0
2026-03-10T10:19:37.193 INFO:tasks.workunit.client.1.vm05.stdout:1/495: write d4/d39/d3e/f7d [1007969,72570] 0
2026-03-10T10:19:37.195 INFO:tasks.workunit.client.1.vm05.stdout:1/496: stat d4/ca 0
2026-03-10T10:19:37.198 INFO:tasks.workunit.client.1.vm05.stdout:9/386: link d0/d1/d13/c5a d0/d1/d13/de/c7f 0
2026-03-10T10:19:37.206 INFO:tasks.workunit.client.1.vm05.stdout:3/457: write fb [225049,64679] 0
2026-03-10T10:19:37.208 INFO:tasks.workunit.client.0.vm02.stdout:6/428: write d0/d8/d9/f84 [402001,78414] 0
2026-03-10T10:19:37.209 INFO:tasks.workunit.client.0.vm02.stdout:6/429: read d0/d8/d29/d6d/f3d [2225553,19658] 0
2026-03-10T10:19:37.213 INFO:tasks.workunit.client.1.vm05.stdout:9/387: sync
2026-03-10T10:19:37.213 INFO:tasks.workunit.client.1.vm05.stdout:9/388: chown d0/d1/f6d 113 1
2026-03-10T10:19:37.214 INFO:tasks.workunit.client.1.vm05.stdout:9/389: dread - d0/d1/d16/f72 zero size
2026-03-10T10:19:37.215 INFO:tasks.workunit.client.0.vm02.stdout:6/430: link d0/d8/d29/d2f/d4b/f26 d0/d8/d29/d2f/d4b/f8d 0
2026-03-10T10:19:37.220 INFO:tasks.workunit.client.0.vm02.stdout:6/431: creat d0/d8/d29/d2f/f8e x:0 0 0
2026-03-10T10:19:37.221 INFO:tasks.workunit.client.1.vm05.stdout:9/390: sync
2026-03-10T10:19:37.245 INFO:tasks.workunit.client.0.vm02.stdout:3/434: write d1/d8/f7c [5692370,106950] 0
2026-03-10T10:19:37.247 INFO:tasks.workunit.client.0.vm02.stdout:4/586: truncate d1/d10/db/f20 881580 0
2026-03-10T10:19:37.250 INFO:tasks.workunit.client.0.vm02.stdout:3/435: rename d1/d20/d52/c7a to d1/d6/d8b/c8d 0
2026-03-10T10:19:37.252 INFO:tasks.workunit.client.0.vm02.stdout:4/587: chown d1/c23 0 1
2026-03-10T10:19:37.252 INFO:tasks.workunit.client.0.vm02.stdout:4/588: chown d1/d10/f71 1283967771 1
2026-03-10T10:19:37.254 INFO:tasks.workunit.client.0.vm02.stdout:3/436: fsync d1/d6/f42 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/437: readlink d1/l65 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:5/619: dwrite d1/db/d11/d13/f4e [0,4194304] 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:1/452: write d4/d2c/f43 [911771,58681] 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:1/453: write d4/fe [2374363,11843] 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/438: unlink d1/l37 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/439: mkdir d1/d6/d8e 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/440: unlink d1/d6/f43 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/441: read d1/fe [606551,60338] 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/442: creat d1/d6/d8e/f8f x:0 0 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/443: fsync d1/d8/d21/f29 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/444: fdatasync d1/d8/d21/f47 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/445: rename d1/f80 to d1/f90 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/446: unlink d1/d20/c7f 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/447: truncate d1/d6/f48 2754486 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/448: creat d1/d58/f91 x:0 0 0
2026-03-10T10:19:37.290 INFO:tasks.workunit.client.0.vm02.stdout:3/449: getdents d1/d8/d21/d7d 0
2026-03-10T10:19:37.296 INFO:tasks.workunit.client.0.vm02.stdout:4/589: dread d1/f1d [4194304,4194304] 0
2026-03-10T10:19:37.298 INFO:tasks.workunit.client.0.vm02.stdout:4/590: rename d1/d52/f77 to d1/d41/d5e/d78/d7f/d82/fc1 0
2026-03-10T10:19:37.300 INFO:tasks.workunit.client.0.vm02.stdout:4/591: rename d1/f9c to d1/fc2 0
2026-03-10T10:19:37.302 INFO:tasks.workunit.client.0.vm02.stdout:4/592: creat d1/d41/fc3 x:0 0 0
2026-03-10T10:19:37.303 INFO:tasks.workunit.client.0.vm02.stdout:4/593: truncate d1/d41/d5e/d78/d1a/f4d 7744 0
2026-03-10T10:19:37.307 INFO:tasks.workunit.client.0.vm02.stdout:4/594: fsync d1/d52/d53/f79 0
2026-03-10T10:19:37.313 INFO:tasks.workunit.client.0.vm02.stdout:6/432: sync
2026-03-10T10:19:37.315 INFO:tasks.workunit.client.1.vm05.stdout:0/465: dread d1/d2/d9/d31/d54/f4 [0,4194304] 0
2026-03-10T10:19:37.316 INFO:tasks.workunit.client.1.vm05.stdout:0/466: readlink d1/d2/d39/d3d/l83 0
2026-03-10T10:19:37.317 INFO:tasks.workunit.client.1.vm05.stdout:0/467: write d1/d2/d9/d31/d13/f7a [3832324,89644] 0
2026-03-10T10:19:37.318 INFO:tasks.workunit.client.1.vm05.stdout:0/468: chown d1/d2/d9/d31/d54/f16 73348 1
2026-03-10T10:19:37.319 INFO:tasks.workunit.client.0.vm02.stdout:6/433: creat d0/d8/f8f x:0 0 0
2026-03-10T10:19:37.333 INFO:tasks.workunit.client.0.vm02.stdout:9/418: truncate da/d3c/d53/f73 64414 0
2026-03-10T10:19:37.336 INFO:tasks.workunit.client.0.vm02.stdout:0/469: dwrite d9/d18/d1a/f6f [0,4194304] 0
2026-03-10T10:19:37.338 INFO:tasks.workunit.client.0.vm02.stdout:0/470: truncate d9/d18/d1a/d3c/f92 500734 0
2026-03-10T10:19:37.342 INFO:tasks.workunit.client.0.vm02.stdout:2/461: dwrite d0/d1a/f4c [0,4194304] 0
2026-03-10T10:19:37.356 INFO:tasks.workunit.client.1.vm05.stdout:7/484: write d5/f13 [761064,86234] 0
2026-03-10T10:19:37.365 INFO:tasks.workunit.client.1.vm05.stdout:2/421: dwrite db/d28/f35 [0,4194304] 0
2026-03-10T10:19:37.373 INFO:tasks.workunit.client.1.vm05.stdout:1/497: dread d4/d3d/f4c [0,4194304] 0
2026-03-10T10:19:37.373 INFO:tasks.workunit.client.1.vm05.stdout:7/485: dread - d5/d1d/d29/f5c zero size
2026-03-10T10:19:37.374 INFO:tasks.workunit.client.0.vm02.stdout:0/471: write f2 [4049968,84737] 0
2026-03-10T10:19:37.380 INFO:tasks.workunit.client.1.vm05.stdout:1/498: dread d4/d20/f31 [0,4194304] 0
2026-03-10T10:19:37.384 INFO:tasks.workunit.client.1.vm05.stdout:1/499: symlink d4/d20/l91 0
2026-03-10T10:19:37.385 INFO:tasks.workunit.client.1.vm05.stdout:1/500: chown d4/d37/f90 1510577 1
2026-03-10T10:19:37.388 INFO:tasks.workunit.client.1.vm05.stdout:2/422: symlink db/d12/d74/l84 0
2026-03-10T10:19:37.388 INFO:tasks.workunit.client.1.vm05.stdout:1/501: unlink d4/df/d1c/c5c 0
2026-03-10T10:19:37.389 INFO:tasks.workunit.client.0.vm02.stdout:7/428: truncate d1/f6b 1213963 0
2026-03-10T10:19:37.390 INFO:tasks.workunit.client.1.vm05.stdout:1/502: mkdir d4/df/d1c/d92 0
2026-03-10T10:19:37.394 INFO:tasks.workunit.client.0.vm02.stdout:7/429: mkdir d1/dc/d44/d7e 0
2026-03-10T10:19:37.395 INFO:tasks.workunit.client.0.vm02.stdout:7/430: write d1/dc/d60/f79 [1006898,103742] 0
2026-03-10T10:19:37.397 INFO:tasks.workunit.client.1.vm05.stdout:1/503: dwrite d4/dd/f60 [4194304,4194304] 0
2026-03-10T10:19:37.405 INFO:tasks.workunit.client.1.vm05.stdout:4/328: truncate d1/f5d 600827 0
2026-03-10T10:19:37.405 INFO:tasks.workunit.client.1.vm05.stdout:4/329: chown d1/d3/d65 505475604 1
2026-03-10T10:19:37.408 INFO:tasks.workunit.client.1.vm05.stdout:6/397: fsync dd/d36/d3f/d12/d44/d63/f78 0
2026-03-10T10:19:37.409 INFO:tasks.workunit.client.1.vm05.stdout:6/398: chown dd/d1b/f40 46183934 1
2026-03-10T10:19:37.410 INFO:tasks.workunit.client.1.vm05.stdout:1/504: mknod d4/d3d/c93 0
2026-03-10T10:19:37.410 INFO:tasks.workunit.client.1.vm05.stdout:6/399: write dd/d36/d3f/d12/d44/d63/f78 [827446,47648] 0
2026-03-10T10:19:37.411 INFO:tasks.workunit.client.1.vm05.stdout:1/505: write d4/df/d1c/f2a [7292296,81167] 0
2026-03-10T10:19:37.414 INFO:tasks.workunit.client.1.vm05.stdout:5/459: write da/db/d26/d35/f1c [2161265,56020] 0
2026-03-10T10:19:37.416 INFO:tasks.workunit.client.1.vm05.stdout:6/400: creat dd/d36/d3f/d12/d44/d2a/d3d/d3e/f7c x:0 0 0
2026-03-10T10:19:37.417 INFO:tasks.workunit.client.1.vm05.stdout:1/506: unlink d4/l42 0
2026-03-10T10:19:37.420 INFO:tasks.workunit.client.1.vm05.stdout:5/460: fdatasync da/db/d26/d5c/f50 0
2026-03-10T10:19:37.421 INFO:tasks.workunit.client.1.vm05.stdout:6/401: mkdir dd/d36/d7d 0
2026-03-10T10:19:37.423 INFO:tasks.workunit.client.1.vm05.stdout:1/507: dwrite d4/d79/f8b [0,4194304] 0
2026-03-10T10:19:37.424 INFO:tasks.workunit.client.1.vm05.stdout:1/508: chown d4/d3d/d6e/f7c 10924 1
2026-03-10T10:19:37.429 INFO:tasks.workunit.client.1.vm05.stdout:6/402: dwrite dd/d36/d3f/d12/d58/f7b [0,4194304] 0
2026-03-10T10:19:37.430 INFO:tasks.workunit.client.0.vm02.stdout:1/454: write d4/f8 [2012514,13714] 0
2026-03-10T10:19:37.430 INFO:tasks.workunit.client.0.vm02.stdout:1/455: readlink d4/da/d27/d38/d3c/l55 0
2026-03-10T10:19:37.437 INFO:tasks.workunit.client.0.vm02.stdout:5/620: truncate d1/db/d11/d16/d79/d85/f94 859146 0
2026-03-10T10:19:37.439 INFO:tasks.workunit.client.1.vm05.stdout:6/403: readlink dd/d36/d3f/d12/l13 0
2026-03-10T10:19:37.440 INFO:tasks.workunit.client.1.vm05.stdout:6/404: rename dd/d36/d3f to dd/d36/d3f/d12/d44/d2a/d77/d7e 22
2026-03-10T10:19:37.445 INFO:tasks.workunit.client.1.vm05.stdout:3/458: dwrite dd/d20/f50 [0,4194304] 0
2026-03-10T10:19:37.448 INFO:tasks.workunit.client.0.vm02.stdout:5/621: dread d1/db/d11/d84/d40/d4f/d5f/d6d/d71/f80 [0,4194304] 0
2026-03-10T10:19:37.452 INFO:tasks.workunit.client.0.vm02.stdout:5/622: fsync d1/db/d11/d1a/f27 0
2026-03-10T10:19:37.454 INFO:tasks.workunit.client.1.vm05.stdout:9/391: dread - d0/d1/d13/de/d21/f71 zero size
2026-03-10T10:19:37.458 INFO:tasks.workunit.client.0.vm02.stdout:1/456: getdents d4/da/d1a/d47/d88 0
2026-03-10T10:19:37.458 INFO:tasks.workunit.client.0.vm02.stdout:1/457: chown d4/d2c 7076821 1
2026-03-10T10:19:37.459 INFO:tasks.workunit.client.0.vm02.stdout:1/458: rmdir d4/da/d1a/d47 39
2026-03-10T10:19:37.460 INFO:tasks.workunit.client.0.vm02.stdout:1/459: stat d4/da/d1a/f1c 0
2026-03-10T10:19:37.460 INFO:tasks.workunit.client.0.vm02.stdout:1/460: stat d4/d1b/l39 0
2026-03-10T10:19:37.463 INFO:tasks.workunit.client.1.vm05.stdout:6/405: mkdir dd/d36/d3f/d12/d44/d2a/d7f 0
2026-03-10T10:19:37.467 INFO:tasks.workunit.client.1.vm05.stdout:5/461: getdents da/db 0
2026-03-10T10:19:37.480 INFO:tasks.workunit.client.0.vm02.stdout:3/450: dwrite d1/d6/f53 [0,4194304] 0
2026-03-10T10:19:37.484 INFO:tasks.workunit.client.0.vm02.stdout:4/595: write d1/f6f [801573,39579] 0
2026-03-10T10:19:37.485 INFO:tasks.workunit.client.0.vm02.stdout:4/596: rename d1/d10/fb5 to d1/d32/fc4 0
2026-03-10T10:19:37.486 INFO:tasks.workunit.client.0.vm02.stdout:4/597: rmdir d1/d32/d3e 39
2026-03-10T10:19:37.487 INFO:tasks.workunit.client.0.vm02.stdout:4/598: write d1/d10/db/f16 [404409,56794] 0
2026-03-10T10:19:37.492 INFO:tasks.workunit.client.0.vm02.stdout:4/599: link d1/l4e d1/d32/da3/lc5 0
2026-03-10T10:19:37.494 INFO:tasks.workunit.client.0.vm02.stdout:4/600: chown d1/d41/d5e/d78/c97 1 1
2026-03-10T10:19:37.495 INFO:tasks.workunit.client.0.vm02.stdout:4/601: unlink d1/d52/f5a 0
2026-03-10T10:19:37.496 INFO:tasks.workunit.client.0.vm02.stdout:4/602: truncate d1/d52/d53/f70 2275268 0
2026-03-10T10:19:37.497 INFO:tasks.workunit.client.0.vm02.stdout:1/461: sync
2026-03-10T10:19:37.498 INFO:tasks.workunit.client.0.vm02.stdout:4/603: unlink d1/d41/d5e/f87 0
2026-03-10T10:19:37.500 INFO:tasks.workunit.client.0.vm02.stdout:4/604: mkdir d1/d41/d5e/d78/d1a/d49/d81/dc6 0
2026-03-10T10:19:37.501 INFO:tasks.workunit.client.0.vm02.stdout:1/462: rename d4/f90 to d4/da/d27/d38/d80/f94 0
2026-03-10T10:19:37.503 INFO:tasks.workunit.client.1.vm05.stdout:5/462: dread da/db/f1e [0,4194304] 0
2026-03-10T10:19:37.507 INFO:tasks.workunit.client.0.vm02.stdout:1/463: dread d4/d1b/f44 [0,4194304] 0
2026-03-10T10:19:37.509 INFO:tasks.workunit.client.0.vm02.stdout:1/464: mkdir d4/da/d27/d38/d80/d95 0
2026-03-10T10:19:37.511 INFO:tasks.workunit.client.0.vm02.stdout:1/465: rename d4/da/d1a/f19 to d4/da/d27/d38/d3c/f96 0
2026-03-10T10:19:37.512 INFO:tasks.workunit.client.0.vm02.stdout:1/466: rmdir d4/da/d1a/d47 39
2026-03-10T10:19:37.514 INFO:tasks.workunit.client.0.vm02.stdout:1/467: getdents d4/d2c/d53 0
2026-03-10T10:19:37.519 INFO:tasks.workunit.client.0.vm02.stdout:1/468: creat d4/d2c/d53/f97 x:0 0 0
2026-03-10T10:19:37.519 INFO:tasks.workunit.client.0.vm02.stdout:1/469: link d4/d2c/c60 d4/da/d1a/d47/d88/c98 0
2026-03-10T10:19:37.519 INFO:tasks.workunit.client.0.vm02.stdout:1/470: write d4/f7a [837838,97136] 0
2026-03-10T10:19:37.519 INFO:tasks.workunit.client.0.vm02.stdout:1/471: chown d4/d1b/f4c 0 1
2026-03-10T10:19:37.519 INFO:tasks.workunit.client.0.vm02.stdout:1/472: chown d4/d2c/d53/f97 193533853 1
2026-03-10T10:19:37.521 INFO:tasks.workunit.client.1.vm05.stdout:5/463: sync
2026-03-10T10:19:37.524 INFO:tasks.workunit.client.0.vm02.stdout:1/473: getdents d4/d4a 0
2026-03-10T10:19:37.527 INFO:tasks.workunit.client.1.vm05.stdout:5/464: getdents da/db/d26/d5c/d4b 0
2026-03-10T10:19:37.528 INFO:tasks.workunit.client.1.vm05.stdout:5/465: chown da/db 3443448 1
2026-03-10T10:19:37.529 INFO:tasks.workunit.client.0.vm02.stdout:1/474: rename d4/da/d1a/d22/f49 to d4/d2c/d53/f99 0
2026-03-10T10:19:37.534 INFO:tasks.workunit.client.0.vm02.stdout:1/475: dwrite d4/da/d27/f35 [0,4194304] 0
2026-03-10T10:19:37.535 INFO:tasks.workunit.client.1.vm05.stdout:5/466: creat da/d96/f9d x:0 0 0
2026-03-10T10:19:37.542 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:37 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:19:37.542 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:37 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:19:37.557 INFO:tasks.workunit.client.1.vm05.stdout:5/467: dwrite da/db/d26/d35/d38/f51 [0,4194304] 0
2026-03-10T10:19:37.561 INFO:tasks.workunit.client.1.vm05.stdout:5/468: chown da/d9a 0 1
2026-03-10T10:19:37.563 INFO:tasks.workunit.client.1.vm05.stdout:5/469: fdatasync da/db/d26/f7e 0
2026-03-10T10:19:37.564 INFO:tasks.workunit.client.1.vm05.stdout:5/470: chown da/db/d26/d35/f74 21 1
2026-03-10T10:19:37.566 INFO:tasks.workunit.client.1.vm05.stdout:5/471: fdatasync da/db/f3b 0
2026-03-10T10:19:37.572 INFO:tasks.workunit.client.0.vm02.stdout:6/434: dwrite d0/d8/d29/d2f/f38 [0,4194304] 0
2026-03-10T10:19:37.581 INFO:tasks.workunit.client.1.vm05.stdout:0/469: dwrite d1/d2/d9/d31/d13/d17/f5a [0,4194304] 0
2026-03-10T10:19:37.582 INFO:tasks.workunit.client.0.vm02.stdout:9/419: dwrite da/f25 [0,4194304] 0
2026-03-10T10:19:37.583 INFO:tasks.workunit.client.0.vm02.stdout:2/462: dwrite d0/d1a/d49/f54 [0,4194304] 0
2026-03-10T10:19:37.583 INFO:tasks.workunit.client.0.vm02.stdout:0/472: dwrite d9/d34/d3d/d65/f84 [0,4194304] 0
2026-03-10T10:19:37.597 INFO:tasks.workunit.client.1.vm05.stdout:0/470: dwrite d1/d2/d9/d31/d12/d20/f71 [0,4194304] 0
2026-03-10T10:19:37.604 INFO:tasks.workunit.client.1.vm05.stdout:5/472: mknod da/db/d28/d32/c9e 0
2026-03-10T10:19:37.609 INFO:tasks.workunit.client.1.vm05.stdout:7/486: write d5/d1d/d20/d2d/f55 [276161,617] 0
2026-03-10T10:19:37.611 INFO:tasks.workunit.client.0.vm02.stdout:7/431: dwrite d1/dc/d16/d28/f4e [0,4194304] 0
2026-03-10T10:19:37.612 INFO:tasks.workunit.client.0.vm02.stdout:7/432: chown d1/dc/d16/d28/d2c 7950528 1
2026-03-10T10:19:37.613 INFO:tasks.workunit.client.0.vm02.stdout:7/433: write d1/dc/d16/d28/d2d/d36/d67/f70 [565749,25288] 0
2026-03-10T10:19:37.616 INFO:tasks.workunit.client.1.vm05.stdout:2/423: truncate db/d1c/d40/f73 402488 0
2026-03-10T10:19:37.620 INFO:tasks.workunit.client.0.vm02.stdout:0/473: mknod d9/d34/d3d/d67/c93 0
2026-03-10T10:19:37.620 INFO:tasks.workunit.client.0.vm02.stdout:0/474: chown d9/d18/d1a/d3c 2557 1
2026-03-10T10:19:37.621 INFO:tasks.workunit.client.1.vm05.stdout:5/473: creat da/db/f9f x:0 0 0
2026-03-10T10:19:37.622 INFO:tasks.workunit.client.0.vm02.stdout:2/463: symlink d0/d1a/d49/l96 0
2026-03-10T10:19:37.623 INFO:tasks.workunit.client.1.vm05.stdout:7/487: creat d5/d1d/d20/d2d/f95 x:0 0 0
2026-03-10T10:19:37.623 INFO:tasks.workunit.client.0.vm02.stdout:2/464: fsync d0/d10/f93 0
2026-03-10T10:19:37.625 INFO:tasks.workunit.client.0.vm02.stdout:7/434: truncate d1/dc/f26 1534792 0
2026-03-10T10:19:37.625 INFO:tasks.workunit.client.1.vm05.stdout:2/424: unlink db/d28/f60 0
2026-03-10T10:19:37.626 INFO:tasks.workunit.client.1.vm05.stdout:2/425: chown db/d1c/f3d 3119571 1
2026-03-10T10:19:37.628 INFO:tasks.workunit.client.0.vm02.stdout:0/475: fsync d9/d34/d3d/f4e 0
2026-03-10T10:19:37.636 INFO:tasks.workunit.client.0.vm02.stdout:7/435: symlink d1/dc/d16/d28/d2c/l7f 0
2026-03-10T10:19:37.640 INFO:tasks.workunit.client.0.vm02.stdout:0/476: fsync d9/d34/d3d/f41 0
2026-03-10T10:19:37.641 INFO:tasks.workunit.client.1.vm05.stdout:7/488: dread d5/d26/f33 [0,4194304] 0
2026-03-10T10:19:37.643 INFO:tasks.workunit.client.0.vm02.stdout:2/465: creat d0/d10/d69/f97 x:0 0 0
2026-03-10T10:19:37.648 INFO:tasks.workunit.client.1.vm05.stdout:2/426: unlink db/d12/l1e 0
2026-03-10T10:19:37.653 INFO:tasks.workunit.client.1.vm05.stdout:2/427: mkdir db/d1c/d40/d62/d85 0
2026-03-10T10:19:37.654 INFO:tasks.workunit.client.1.vm05.stdout:2/428: readlink db/d2d/l51 0
2026-03-10T10:19:37.654 INFO:tasks.workunit.client.0.vm02.stdout:2/466: creat d0/d1a/d49/d5e/d8a/f98 x:0 0 0
2026-03-10T10:19:37.655 INFO:tasks.workunit.client.0.vm02.stdout:2/467: dread d0/d1a/d49/d5e/f68 [0,4194304] 0
2026-03-10T10:19:37.659 INFO:tasks.workunit.client.1.vm05.stdout:2/429: creat db/d2d/d5e/f86 x:0 0 0
2026-03-10T10:19:37.662 INFO:tasks.workunit.client.0.vm02.stdout:2/468: mknod d0/d1a/d24/c99 0
2026-03-10T10:19:37.673 INFO:tasks.workunit.client.1.vm05.stdout:7/489: sync
2026-03-10T10:19:37.678 INFO:tasks.workunit.client.1.vm05.stdout:0/471: read d1/d2/d9/d31/d13/d2f/f33 [29361,3108] 0
2026-03-10T10:19:37.682 INFO:tasks.workunit.client.1.vm05.stdout:0/472: symlink d1/d2/d39/d6e/d8e/l9d 0
2026-03-10T10:19:37.699 INFO:tasks.workunit.client.1.vm05.stdout:0/473: symlink d1/d2/d9/d31/d13/l9e 0
2026-03-10T10:19:37.699 INFO:tasks.workunit.client.1.vm05.stdout:0/474: chown d1/d2/d9/d31/d12/c25 231649 1
2026-03-10T10:19:37.708 INFO:tasks.workunit.client.0.vm02.stdout:4/605: dread d1/f6f [0,4194304] 0
2026-03-10T10:19:37.709 INFO:tasks.workunit.client.0.vm02.stdout:4/606: stat d1/f94 0
2026-03-10T10:19:37.714 INFO:tasks.workunit.client.1.vm05.stdout:0/475: dread d1/d2/d9/d31/d13/d17/f1b [0,4194304] 0
2026-03-10T10:19:37.716 INFO:tasks.workunit.client.1.vm05.stdout:0/476: mkdir d1/d2/d39/d3d/d9f 0
2026-03-10T10:19:37.716 INFO:tasks.workunit.client.1.vm05.stdout:6/406: dread dd/d36/d3f/d12/f35 [0,4194304] 0
2026-03-10T10:19:37.718 INFO:tasks.workunit.client.1.vm05.stdout:0/477: rename d1/d2/d9/d31/d54/d7c to d1/d2/d9/d50/d9a/da0 0
2026-03-10T10:19:37.720 INFO:tasks.workunit.client.1.vm05.stdout:0/478: rename d1/d2/d9/d31/d13/d55 to d1/d2/d9/d31/d13/d17/da1 0
2026-03-10T10:19:37.721 INFO:tasks.workunit.client.1.vm05.stdout:6/407: mknod dd/d36/d3f/d12/d44/d2a/d3d/c80 0
2026-03-10T10:19:37.722 INFO:tasks.workunit.client.1.vm05.stdout:6/408: fdatasync dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f6b 0
2026-03-10T10:19:37.722 INFO:tasks.workunit.client.1.vm05.stdout:0/479: mkdir d1/d2/d9/d31/d13/da2 0
2026-03-10T10:19:37.724 INFO:tasks.workunit.client.1.vm05.stdout:0/480: mknod d1/d2/d9/d50/d9a/da0/ca3 0
2026-03-10T10:19:37.725 INFO:tasks.workunit.client.1.vm05.stdout:6/409: rename dd/d36/d3f/d12/d44/d30/d4a/l72 to dd/d36/d7d/l81 0
2026-03-10T10:19:37.725 INFO:tasks.workunit.client.1.vm05.stdout:0/481: mknod d1/d2/d39/d6e/d8e/ca4 0
2026-03-10T10:19:37.727 INFO:tasks.workunit.client.1.vm05.stdout:0/482: unlink d1/d2/d39/d3d/f82 0
2026-03-10T10:19:37.758 INFO:tasks.workunit.client.1.vm05.stdout:4/330: dwrite d1/d31/dc/d40/d45/f52 [0,4194304] 0
2026-03-10T10:19:37.770 INFO:tasks.workunit.client.1.vm05.stdout:4/331: mkdir d1/d31/d4b/d6d 0
2026-03-10T10:19:37.774 INFO:tasks.workunit.client.0.vm02.stdout:8/449: dread d1/d1c/f66 [0,4194304] 0
2026-03-10T10:19:37.779 INFO:tasks.workunit.client.0.vm02.stdout:8/450: fsync d1/d1c/f2a 0
2026-03-10T10:19:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:37 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:19:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:37 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl'
2026-03-10T10:19:37.801 INFO:tasks.workunit.client.0.vm02.stdout:8/451: dread d1/f12 [0,4194304] 0
2026-03-10T10:19:37.804 INFO:tasks.workunit.client.1.vm05.stdout:3/459: dread f6 [0,4194304] 0
2026-03-10T10:19:37.806 INFO:tasks.workunit.client.1.vm05.stdout:3/460: creat dd/d39/d5f/fa2 x:0 0 0
2026-03-10T10:19:37.809 INFO:tasks.workunit.client.1.vm05.stdout:3/461: creat dd/d15/fa3 x:0 0 0
2026-03-10T10:19:37.810 INFO:tasks.workunit.client.1.vm05.stdout:3/462: write dd/d15/d24/d2c/d3b/f77 [742444,40890] 0
2026-03-10T10:19:37.813 INFO:tasks.workunit.client.1.vm05.stdout:3/463: rename dd/c14 to dd/d20/d9e/ca4 0
2026-03-10T10:19:37.815 INFO:tasks.workunit.client.1.vm05.stdout:3/464: fdatasync dd/d15/d24/d2c/f3e 0
2026-03-10T10:19:37.817 INFO:tasks.workunit.client.1.vm05.stdout:3/465: symlink dd/d15/d24/d2c/d6d/d89/la5 0
2026-03-10T10:19:37.821 INFO:tasks.workunit.client.1.vm05.stdout:1/509: write d4/d20/f2c [4944019,119695] 0
2026-03-10T10:19:37.823 INFO:tasks.workunit.client.0.vm02.stdout:5/623: dwrite d1/db/d11/d13/d28/d37/f76 [0,4194304] 0
2026-03-10T10:19:37.829 INFO:tasks.workunit.client.1.vm05.stdout:9/392: dwrite d0/d1/d13/d26/f58 [0,4194304] 0
2026-03-10T10:19:37.833 INFO:tasks.workunit.client.1.vm05.stdout:9/393: chown d0/d1/d16/f72 844346 1
2026-03-10T10:19:37.840 INFO:tasks.workunit.client.0.vm02.stdout:8/452: dread d1/d1c/f2a [0,4194304] 0
2026-03-10T10:19:37.848 INFO:tasks.workunit.client.1.vm05.stdout:1/510: unlink d4/df/d1c/l56 0
2026-03-10T10:19:37.849 INFO:tasks.workunit.client.1.vm05.stdout:1/511: read d4/df/d1c/f23 [2027609,126212] 0
2026-03-10T10:19:37.852 INFO:tasks.workunit.client.1.vm05.stdout:9/394: unlink d0/d1/d16/f72 0
2026-03-10T10:19:37.862 INFO:tasks.workunit.client.0.vm02.stdout:8/453: mkdir d1/d1c/d43/d5b/d88 0
2026-03-10T10:19:37.862 INFO:tasks.workunit.client.0.vm02.stdout:8/454: fdatasync d1/f12 0
2026-03-10T10:19:37.862 INFO:tasks.workunit.client.1.vm05.stdout:1/512: creat d4/df/d1c/d53/d66/f94 x:0 0 0
2026-03-10T10:19:37.862 INFO:tasks.workunit.client.1.vm05.stdout:1/513: unlink d4/df/d1c/f2a 0
2026-03-10T10:19:37.863 INFO:tasks.workunit.client.0.vm02.stdout:8/455: symlink d1/d1c/d24/d71/l89 0
2026-03-10T10:19:37.864 INFO:tasks.workunit.client.1.vm05.stdout:9/395: dread d0/d1/d13/f6b [0,4194304] 0
2026-03-10T10:19:37.865 INFO:tasks.workunit.client.1.vm05.stdout:1/514: dread d4/d20/f2c [4194304,4194304] 0
2026-03-10T10:19:37.865 INFO:tasks.workunit.client.0.vm02.stdout:8/456: readlink d1/d2/l47 0
2026-03-10T10:19:37.870 INFO:tasks.workunit.client.1.vm05.stdout:9/396: symlink d0/df/l80 0
2026-03-10T10:19:37.871 INFO:tasks.workunit.client.1.vm05.stdout:1/515: truncate d4/d39/d3e/f3f 3490954 0
2026-03-10T10:19:37.872 INFO:tasks.workunit.client.1.vm05.stdout:9/397: mknod d0/df/d11/c81 0
2026-03-10T10:19:37.874 INFO:tasks.workunit.client.1.vm05.stdout:9/398: chown d0/d1/c35 9480929 1
2026-03-10T10:19:37.874 INFO:tasks.workunit.client.1.vm05.stdout:8/336: dread d7/f1c [0,4194304] 0
2026-03-10T10:19:37.877 INFO:tasks.workunit.client.1.vm05.stdout:9/399: unlink d0/d1/c35 0
2026-03-10T10:19:37.877 INFO:tasks.workunit.client.0.vm02.stdout:3/451: dread d1/f90 [0,4194304] 0
2026-03-10T10:19:37.878 INFO:tasks.workunit.client.0.vm02.stdout:8/457: read d1/d1c/d24/d35/f6e [3895998,116309] 0
2026-03-10T10:19:37.879 INFO:tasks.workunit.client.1.vm05.stdout:8/337: write d7/f21 [3726273,83463] 0
2026-03-10T10:19:37.884 INFO:tasks.workunit.client.0.vm02.stdout:1/476: write d4/f26 [1425066,54270] 0
2026-03-10T10:19:37.884 INFO:tasks.workunit.client.0.vm02.stdout:8/458: fdatasync d1/d1c/d24/d35/f6e 0
2026-03-10T10:19:37.888 INFO:tasks.workunit.client.1.vm05.stdout:9/400: mknod d0/d1/d16/c82 0
2026-03-10T10:19:37.892 INFO:tasks.workunit.client.1.vm05.stdout:9/401: unlink d0/df/d11/c1d 0
2026-03-10T10:19:37.894 INFO:tasks.workunit.client.1.vm05.stdout:8/338: write d7/d14/f4e [2573245,19150] 0
2026-03-10T10:19:37.894 INFO:tasks.workunit.client.1.vm05.stdout:9/402: fdatasync d0/f7 0
2026-03-10T10:19:37.898 INFO:tasks.workunit.client.1.vm05.stdout:9/403: readlink d0/df/d11/l5f 0
2026-03-10T10:19:37.898 INFO:tasks.workunit.client.0.vm02.stdout:8/459: creat d1/d1c/d24/f8a x:0 0 0
2026-03-10T10:19:37.936 INFO:tasks.workunit.client.0.vm02.stdout:8/460: readlink d1/d2/l70 0
2026-03-10T10:19:37.936 INFO:tasks.workunit.client.0.vm02.stdout:8/461: truncate d1/d1c/f1e 4097635 0
2026-03-10T10:19:37.936 INFO:tasks.workunit.client.1.vm05.stdout:8/339: dread d7/f37 [0,4194304] 0
2026-03-10T10:19:37.936 INFO:tasks.workunit.client.1.vm05.stdout:8/340: readlink d7/d14/d3a/d49/l56 0
2026-03-10T10:19:37.936 INFO:tasks.workunit.client.1.vm05.stdout:8/341: getdents d7 0
2026-03-10T10:19:37.936 INFO:tasks.workunit.client.1.vm05.stdout:8/342: readlink d7/d14/d15/l30 0
2026-03-10T10:19:38.052 INFO:tasks.workunit.client.0.vm02.stdout:8/462: sync
2026-03-10T10:19:38.088 INFO:tasks.workunit.client.1.vm05.stdout:8/343: dread d7/d14/d15/f1f [0,4194304] 0
2026-03-10T10:19:38.088 INFO:tasks.workunit.client.1.vm05.stdout:8/344: chown d7 7501559 1
2026-03-10T10:19:38.089 INFO:tasks.workunit.client.1.vm05.stdout:8/345: symlink d7/d14/d3a/l63 0
2026-03-10T10:19:38.092 INFO:tasks.workunit.client.1.vm05.stdout:8/346: dwrite d7/d14/d15/f1f [8388608,4194304] 0
2026-03-10T10:19:38.094 INFO:tasks.workunit.client.1.vm05.stdout:8/347: write f2 [1207629,59520] 0
2026-03-10T10:19:38.110 INFO:tasks.workunit.client.1.vm05.stdout:6/410: dread dd/d36/d3f/d12/f56 [0,4194304] 0
2026-03-10T10:19:38.147 INFO:tasks.workunit.client.1.vm05.stdout:6/411: dread dd/d36/d3f/f1e [0,4194304] 0
2026-03-10T10:19:38.148 INFO:tasks.workunit.client.1.vm05.stdout:6/412: write dd/d36/d3f/d12/d58/f7b [4474876,123693] 0
2026-03-10T10:19:38.148 INFO:tasks.workunit.client.0.vm02.stdout:6/435: dwrite d0/d8/d9/d7a/f40 [0,4194304] 0
2026-03-10T10:19:38.158 INFO:tasks.workunit.client.1.vm05.stdout:6/413: creat dd/d36/d3f/d12/d44/d2a/d3d/d48/f82 x:0 0 0
2026-03-10T10:19:38.183 INFO:tasks.workunit.client.1.vm05.stdout:6/414: sync
2026-03-10T10:19:38.186 INFO:tasks.workunit.client.0.vm02.stdout:9/420: write da/f25 [4470180,106724] 0
2026-03-10T10:19:38.187 INFO:tasks.workunit.client.0.vm02.stdout:9/421: chown da/f65 387 1
2026-03-10T10:19:38.188 INFO:tasks.workunit.client.0.vm02.stdout:9/422: write da/d3c/d4c/d2c/d34/f83 [98945,79969] 0
2026-03-10T10:19:38.189 INFO:tasks.workunit.client.0.vm02.stdout:9/423: chown da/d3c/d4c/d38/l5b 15 1
2026-03-10T10:19:38.192 INFO:tasks.workunit.client.1.vm05.stdout:6/415: symlink dd/d1b/l83 0
2026-03-10T10:19:38.193 INFO:tasks.workunit.client.0.vm02.stdout:9/424: fdatasync da/d3c/d4c/d38/d4a/d70/f74 0
2026-03-10T10:19:38.197 INFO:tasks.workunit.client.0.vm02.stdout:9/425: dwrite da/d3c/d4c/d38/d4a/f59 [0,4194304] 0
2026-03-10T10:19:38.202 INFO:tasks.workunit.client.0.vm02.stdout:9/426: mknod da/d3c/d4c/d38/d82/c87 0
2026-03-10T10:19:38.203 INFO:tasks.workunit.client.0.vm02.stdout:9/427: read da/d3c/d4c/f26 [795288,56415] 0
2026-03-10T10:19:38.205 INFO:tasks.workunit.client.1.vm05.stdout:5/474: fdatasync da/db/d26/d35/f31 0
2026-03-10T10:19:38.207 INFO:tasks.workunit.client.1.vm05.stdout:5/475: chown da/db/d26/d35/d38/f48 484 1
2026-03-10T10:19:38.211 INFO:tasks.workunit.client.1.vm05.stdout:6/416: creat dd/d36/d3f/d12/d44/d2a/f84 x:0 0 0
2026-03-10T10:19:38.214 INFO:tasks.workunit.client.1.vm05.stdout:2/430: write db/d1c/f1f [307781,33166] 0
2026-03-10T10:19:38.216 INFO:tasks.workunit.client.0.vm02.stdout:7/436: write d1/dc/d16/f1e [1222688,93523] 0
2026-03-10T10:19:38.219 INFO:tasks.workunit.client.1.vm05.stdout:7/490: dwrite d5/d1d/d29/f5c [0,4194304] 0
2026-03-10T10:19:38.220 INFO:tasks.workunit.client.0.vm02.stdout:0/477: dwrite d9/d34/d3d/f58 [0,4194304] 0
2026-03-10T10:19:38.220 INFO:tasks.workunit.client.0.vm02.stdout:2/469: dwrite d0/d1a/f66 [0,4194304] 0
2026-03-10T10:19:38.235 INFO:tasks.workunit.client.0.vm02.stdout:4/607: dwrite d1/f94 [0,4194304] 0
2026-03-10T10:19:38.237 INFO:tasks.workunit.client.0.vm02.stdout:7/437: fsync d1/dc/d16/d28/d2d/f3d 0
2026-03-10T10:19:38.241 INFO:tasks.workunit.client.0.vm02.stdout:0/478: unlink d9/d34/d3d/l6e 0
2026-03-10T10:19:38.244 INFO:tasks.workunit.client.0.vm02.stdout:2/470: symlink d0/d1a/d49/l9a 0
2026-03-10T10:19:38.245 INFO:tasks.workunit.client.1.vm05.stdout:5/476: creat da/db/d28/d8a/fa0 x:0 0 0
2026-03-10T10:19:38.246 INFO:tasks.workunit.client.1.vm05.stdout:5/477: write da/d96/f9d [885349,23381] 0
2026-03-10T10:19:38.256 INFO:tasks.workunit.client.1.vm05.stdout:7/491: mkdir d5/d1d/d29/d3e/d8c/d96 0
2026-03-10T10:19:38.256 INFO:tasks.workunit.client.1.vm05.stdout:0/483: write d1/d2/d9/d31/d54/f24 [3571335,62953] 0
2026-03-10T10:19:38.256 INFO:tasks.workunit.client.0.vm02.stdout:7/438: chown d1/c8 6664 1
2026-03-10T10:19:38.257 INFO:tasks.workunit.client.1.vm05.stdout:7/492: read - d5/d26/f92 zero size
2026-03-10T10:19:38.257 INFO:tasks.workunit.client.1.vm05.stdout:0/484: dread - d1/d2/d9/d31/d13/d15/d4e/f89 zero size
2026-03-10T10:19:38.258 INFO:tasks.workunit.client.1.vm05.stdout:6/417: rename dd/d36/c55 to dd/d36/d3f/d12/d44/d2a/d3d/d3e/c85 0
2026-03-10T10:19:38.258 INFO:tasks.workunit.client.1.vm05.stdout:5/478: dwrite da/db/d26/d35/f1c [0,4194304] 0
2026-03-10T10:19:38.258 INFO:tasks.workunit.client.0.vm02.stdout:0/479: creat d9/d34/d3d/f94 x:0 0 0
2026-03-10T10:19:38.260 INFO:tasks.workunit.client.1.vm05.stdout:6/418: dread dd/d36/d3f/f1e [0,4194304] 0
2026-03-10T10:19:38.261 INFO:tasks.workunit.client.1.vm05.stdout:7/493: readlink d5/d17/d66/l83 0
2026-03-10T10:19:38.265 INFO:tasks.workunit.client.1.vm05.stdout:4/332: getdents d1/d31/d4b 0
2026-03-10T10:19:38.266 INFO:tasks.workunit.client.1.vm05.stdout:2/431: link db/d28/c78 db/d4e/d6c/c87 0
2026-03-10T10:19:38.266 INFO:tasks.workunit.client.1.vm05.stdout:5/479: write da/db/d26/d35/d38/f94 [397240,47583] 0
2026-03-10T10:19:38.267 INFO:tasks.workunit.client.1.vm05.stdout:0/485: symlink d1/d2/d9/d50/d9a/la5 0
2026-03-10T10:19:38.270 INFO:tasks.workunit.client.0.vm02.stdout:5/624: write d1/db/d11/d13/d28/d37/f76 [4894042,25199] 0
2026-03-10T10:19:38.271 INFO:tasks.workunit.client.0.vm02.stdout:5/625: chown d1/db/d11/d1a/f27 159 1
2026-03-10T10:19:38.272 INFO:tasks.workunit.client.0.vm02.stdout:0/480: creat d9/d34/d3d/d8d/f95 x:0 0 0
2026-03-10T10:19:38.273 INFO:tasks.workunit.client.1.vm05.stdout:6/419: sync
2026-03-10T10:19:38.274 INFO:tasks.workunit.client.1.vm05.stdout:0/486: write d1/d2/d9/d31/d13/d2f/f8f [526421,2449] 0
2026-03-10T10:19:38.278 INFO:tasks.workunit.client.1.vm05.stdout:4/333: read d1/d31/dc/f1f [3909336,107654] 0
2026-03-10T10:19:38.282 INFO:tasks.workunit.client.1.vm05.stdout:3/466: dwrite dd/d15/d4c/f73 [0,4194304] 0
2026-03-10T10:19:38.282 INFO:tasks.workunit.client.1.vm05.stdout:0/487: write d1/d2/d9/f6c [184005,28718] 0
2026-03-10T10:19:38.286 INFO:tasks.workunit.client.0.vm02.stdout:7/439: getdents d1/dc/d44/d5f 0
2026-03-10T10:19:38.287 INFO:tasks.workunit.client.1.vm05.stdout:7/494: mknod d5/d1d/d20/d2d/d68/c97 0
2026-03-10T10:19:38.290 INFO:tasks.workunit.client.1.vm05.stdout:0/488: write d1/d2/d9/d31/d13/f7a [3208942,59621] 0
2026-03-10T10:19:38.290 INFO:tasks.workunit.client.1.vm05.stdout:4/334: truncate d1/d3/d65/f6a 719622 0
2026-03-10T10:19:38.292 INFO:tasks.workunit.client.0.vm02.stdout:0/481: dread d9/d18/d1a/d22/d24/f40 [0,4194304] 0
2026-03-10T10:19:38.306 INFO:tasks.workunit.client.1.vm05.stdout:5/480: dread da/db/d28/d97/f87 [0,4194304] 0
2026-03-10T10:19:38.306 INFO:tasks.workunit.client.0.vm02.stdout:0/482: unlink d9/d18/f2a 0
2026-03-10T10:19:38.307 INFO:tasks.workunit.client.1.vm05.stdout:1/516: truncate d4/d3d/f4c 1324548 0
2026-03-10T10:19:38.309 INFO:tasks.workunit.client.1.vm05.stdout:3/467: symlink dd/d20/d56/la6 0
2026-03-10T10:19:38.312 INFO:tasks.workunit.client.1.vm05.stdout:7/495: rmdir d5/d1d 39
2026-03-10T10:19:38.313 INFO:tasks.workunit.client.0.vm02.stdout:3/452: dwrite d1/f54 [0,4194304] 0
2026-03-10T10:19:38.316 INFO:tasks.workunit.client.0.vm02.stdout:1/477: write d4/d2c/d53/f99 [4400827,79470] 0
2026-03-10T10:19:38.316 INFO:tasks.workunit.client.0.vm02.stdout:0/483: creat d9/d18/d1a/d22/d24/d80/d74/f96 x:0 0 0
2026-03-10T10:19:38.324 INFO:tasks.workunit.client.1.vm05.stdout:9/404: write d0/df/d11/f64 [440555,69974] 0
2026-03-10T10:19:38.328 INFO:tasks.workunit.client.1.vm05.stdout:6/420: unlink dd/d36/d3f/l19 0
2026-03-10T10:19:38.331 INFO:tasks.workunit.client.1.vm05.stdout:6/421: rename dd to dd/d36/d3f/d12/d44/d30/d4a/d86 22
2026-03-10T10:19:38.333 INFO:tasks.workunit.client.1.vm05.stdout:6/422: readlink dd/d1b/l83 0
2026-03-10T10:19:38.334 INFO:tasks.workunit.client.0.vm02.stdout:7/440: link d1/dc/d16/d28/d2d/f3d d1/f80 0
2026-03-10T10:19:38.337 INFO:tasks.workunit.client.1.vm05.stdout:3/468: dread dd/d15/d1f/f2b [0,4194304] 0
2026-03-10T10:19:38.340 INFO:tasks.workunit.client.0.vm02.stdout:7/441: mknod d1/dc/d55/c81 0
2026-03-10T10:19:38.342 INFO:tasks.workunit.client.1.vm05.stdout:9/405: unlink d0/d1/d13/d26/f66 0
2026-03-10T10:19:38.343 INFO:tasks.workunit.client.0.vm02.stdout:1/478: unlink d4/d2c/c60 0
2026-03-10T10:19:38.346 INFO:tasks.workunit.client.0.vm02.stdout:7/442: symlink d1/dc/d16/d28/l82 0
2026-03-10T10:19:38.346 INFO:tasks.workunit.client.1.vm05.stdout:1/517: mknod d4/c95 0
2026-03-10T10:19:38.348 INFO:tasks.workunit.client.1.vm05.stdout:6/423: dread - dd/d36/d3f/f41 zero size
2026-03-10T10:19:38.348 INFO:tasks.workunit.client.0.vm02.stdout:1/479: creat d4/da/d1a/d47/d65/f9a x:0 0 0
2026-03-10T10:19:38.350 INFO:tasks.workunit.client.0.vm02.stdout:8/463: write d1/d1c/d24/f6b [383546,44607] 0
2026-03-10T10:19:38.351 INFO:tasks.workunit.client.1.vm05.stdout:6/424: write dd/d36/d3f/d12/d44/d2a/d3d/d48/f82 [1024389,78523] 0
2026-03-10T10:19:38.353 INFO:tasks.workunit.client.0.vm02.stdout:1/480: dwrite d4/da/d27/d38/d3c/f8f [0,4194304] 0
2026-03-10T10:19:38.355 INFO:tasks.workunit.client.0.vm02.stdout:1/481: chown d4/da/d1a/d5b/c8c 0 1
2026-03-10T10:19:38.363 INFO:tasks.workunit.client.1.vm05.stdout:6/425: dwrite dd/d36/d3f/d12/d44/d2a/d3d/d48/f82 [0,4194304] 0
2026-03-10T10:19:38.364 INFO:tasks.workunit.client.0.vm02.stdout:6/436: dwrite d0/d8/d29/d2f/f67 [0,4194304] 0
2026-03-10T10:19:38.364 INFO:tasks.workunit.client.0.vm02.stdout:6/437: write d0/f21 [2381631,33344] 0
2026-03-10T10:19:38.371 INFO:tasks.workunit.client.0.vm02.stdout:1/482: fsync d4/d1b/f4c 0
2026-03-10T10:19:38.371 INFO:tasks.workunit.client.0.vm02.stdout:1/483: read d4/f5 [1628830,114599] 0
2026-03-10T10:19:38.373 INFO:tasks.workunit.client.0.vm02.stdout:0/484: getdents d9/d18/d1a/d22 0
2026-03-10T10:19:38.379 INFO:tasks.workunit.client.0.vm02.stdout:6/438: dwrite
d0/d8/d29/d6d/f3d [0,4194304] 0 2026-03-10T10:19:38.384 INFO:tasks.workunit.client.1.vm05.stdout:3/469: mkdir dd/d15/d24/d2c/d6d/da7 0 2026-03-10T10:19:38.385 INFO:tasks.workunit.client.1.vm05.stdout:8/348: truncate f2 2738437 0 2026-03-10T10:19:38.387 INFO:tasks.workunit.client.1.vm05.stdout:4/335: chown d1/f5d 1383 1 2026-03-10T10:19:38.387 INFO:tasks.workunit.client.1.vm05.stdout:5/481: creat da/db/fa1 x:0 0 0 2026-03-10T10:19:38.388 INFO:tasks.workunit.client.0.vm02.stdout:1/484: dread - d4/d2c/d53/f6c zero size 2026-03-10T10:19:38.395 INFO:tasks.workunit.client.0.vm02.stdout:0/485: creat d9/d34/f97 x:0 0 0 2026-03-10T10:19:38.399 INFO:tasks.workunit.client.0.vm02.stdout:0/486: dwrite d9/d18/d1a/d3c/f92 [0,4194304] 0 2026-03-10T10:19:38.405 INFO:tasks.workunit.client.1.vm05.stdout:6/426: symlink dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/l87 0 2026-03-10T10:19:38.407 INFO:tasks.workunit.client.1.vm05.stdout:3/470: symlink dd/d20/d56/d5e/la8 0 2026-03-10T10:19:38.414 INFO:tasks.workunit.client.0.vm02.stdout:1/485: write d4/d2c/d53/f74 [765570,60899] 0 2026-03-10T10:19:38.420 INFO:tasks.workunit.client.1.vm05.stdout:9/406: mknod d0/df/c83 0 2026-03-10T10:19:38.423 INFO:tasks.workunit.client.0.vm02.stdout:0/487: symlink d9/d18/d1a/d22/d24/d8e/l98 0 2026-03-10T10:19:38.423 INFO:tasks.workunit.client.1.vm05.stdout:4/336: rmdir d1/d3 39 2026-03-10T10:19:38.423 INFO:tasks.workunit.client.1.vm05.stdout:4/337: fsync d1/d31/d4b/f59 0 2026-03-10T10:19:38.428 INFO:tasks.workunit.client.0.vm02.stdout:6/439: creat d0/d87/f90 x:0 0 0 2026-03-10T10:19:38.431 INFO:tasks.workunit.client.0.vm02.stdout:2/471: dread d0/f9 [0,4194304] 0 2026-03-10T10:19:38.434 INFO:tasks.workunit.client.1.vm05.stdout:1/518: link d4/d20/f2d d4/d39/d3e/f96 0 2026-03-10T10:19:38.435 INFO:tasks.workunit.client.1.vm05.stdout:6/427: readlink dd/d36/d3f/d12/d44/d2a/d3d/d3e/l52 0 2026-03-10T10:19:38.436 INFO:tasks.workunit.client.0.vm02.stdout:1/486: chown d4/d2c/d53/f75 1182 1 2026-03-10T10:19:38.436 
INFO:tasks.workunit.client.1.vm05.stdout:6/428: write dd/d36/d3f/d12/d44/d2a/d3d/d48/f75 [782115,41385] 0 2026-03-10T10:19:38.438 INFO:tasks.workunit.client.1.vm05.stdout:3/471: stat dd/d15/d24/d2c/c4b 0 2026-03-10T10:19:38.443 INFO:tasks.workunit.client.0.vm02.stdout:0/488: symlink d9/d18/d1a/d22/d24/d80/d74/l99 0 2026-03-10T10:19:38.450 INFO:tasks.workunit.client.0.vm02.stdout:2/472: unlink d0/d1a/d24/d80/f89 0 2026-03-10T10:19:38.463 INFO:tasks.workunit.client.1.vm05.stdout:4/338: rmdir d1/d31/dc/d40 39 2026-03-10T10:19:38.470 INFO:tasks.workunit.client.0.vm02.stdout:0/489: creat d9/d18/d1a/d46/d5d/f9a x:0 0 0 2026-03-10T10:19:38.475 INFO:tasks.workunit.client.1.vm05.stdout:5/482: rmdir da/db/d26 39 2026-03-10T10:19:38.495 INFO:tasks.workunit.client.0.vm02.stdout:0/490: mkdir d9/d18/d1a/d22/d24/d8e/d9b 0 2026-03-10T10:19:38.495 INFO:tasks.workunit.client.0.vm02.stdout:0/491: chown d9/d18/d1a/d22/d24/d80/f72 0 1 2026-03-10T10:19:38.496 INFO:tasks.workunit.client.0.vm02.stdout:0/492: write d9/d18/d1a/d22/d24/d80/d74/f96 [914256,45997] 0 2026-03-10T10:19:38.501 INFO:tasks.workunit.client.0.vm02.stdout:0/493: dwrite d9/d34/d3d/d65/f7a [0,4194304] 0 2026-03-10T10:19:38.502 INFO:tasks.workunit.client.0.vm02.stdout:0/494: stat d9/d18/l44 0 2026-03-10T10:19:38.508 INFO:tasks.workunit.client.1.vm05.stdout:5/483: write da/db/d28/d32/f69 [698469,115742] 0 2026-03-10T10:19:38.514 INFO:tasks.workunit.client.1.vm05.stdout:1/519: creat d4/df/d1c/d92/f97 x:0 0 0 2026-03-10T10:19:38.515 INFO:tasks.workunit.client.1.vm05.stdout:1/520: write d4/f36 [5268371,98764] 0 2026-03-10T10:19:38.516 INFO:tasks.workunit.client.0.vm02.stdout:0/495: mkdir d9/d18/d1a/d46/d5d/d9c 0 2026-03-10T10:19:38.518 INFO:tasks.workunit.client.0.vm02.stdout:6/440: link d0/d8/d29/d2f/d4b/l66 d0/d8/d29/d2f/d4b/l91 0 2026-03-10T10:19:38.522 INFO:tasks.workunit.client.1.vm05.stdout:3/472: fsync dd/d15/d69/f99 0 2026-03-10T10:19:38.527 INFO:tasks.workunit.client.1.vm05.stdout:7/496: dread d5/d17/d66/f8b 
[0,4194304] 0 2026-03-10T10:19:38.528 INFO:tasks.workunit.client.1.vm05.stdout:7/497: chown d5/d17/d66/l88 25585339 1 2026-03-10T10:19:38.531 INFO:tasks.workunit.client.0.vm02.stdout:0/496: symlink d9/d18/d1a/d22/d24/d51/l9d 0 2026-03-10T10:19:38.532 INFO:tasks.workunit.client.0.vm02.stdout:0/497: dread d9/d18/d1a/d22/d24/f2f [0,4194304] 0 2026-03-10T10:19:38.541 INFO:tasks.workunit.client.1.vm05.stdout:4/339: read d1/d3/f26 [3212545,84516] 0 2026-03-10T10:19:38.541 INFO:tasks.workunit.client.1.vm05.stdout:4/340: chown d1/d31/l4c 4960 1 2026-03-10T10:19:38.549 INFO:tasks.workunit.client.1.vm05.stdout:6/429: link dd/d36/d3f/d12/d44/d30/d4a/c60 dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/c88 0 2026-03-10T10:19:38.551 INFO:tasks.workunit.client.1.vm05.stdout:9/407: link d0/d1/f75 d0/df/d11/f84 0 2026-03-10T10:19:38.557 INFO:tasks.workunit.client.1.vm05.stdout:3/473: creat dd/d20/d94/fa9 x:0 0 0 2026-03-10T10:19:38.558 INFO:tasks.workunit.client.0.vm02.stdout:6/441: dread d0/d8/d9/f82 [0,4194304] 0 2026-03-10T10:19:38.559 INFO:tasks.workunit.client.0.vm02.stdout:6/442: stat d0/d8/d9/f6a 0 2026-03-10T10:19:38.565 INFO:tasks.workunit.client.1.vm05.stdout:6/430: chown dd/d36/d3f/d12/d44/d30/d4a/c7a 1852234 1 2026-03-10T10:19:38.565 INFO:tasks.workunit.client.1.vm05.stdout:7/498: creat d5/d1d/d20/d2d/d68/f98 x:0 0 0 2026-03-10T10:19:38.566 INFO:tasks.workunit.client.1.vm05.stdout:6/431: readlink dd/d36/d3f/d12/d44/d30/l57 0 2026-03-10T10:19:38.566 INFO:tasks.workunit.client.1.vm05.stdout:9/408: dread d0/d1/d13/de/f5b [0,4194304] 0 2026-03-10T10:19:38.569 INFO:tasks.workunit.client.1.vm05.stdout:5/484: rename da/f5e to da/db/d26/d35/d38/fa2 0 2026-03-10T10:19:38.570 INFO:tasks.workunit.client.1.vm05.stdout:1/521: creat d4/df/d1c/d53/f98 x:0 0 0 2026-03-10T10:19:38.579 INFO:tasks.workunit.client.1.vm05.stdout:6/432: dwrite dd/d36/d3f/f61 [0,4194304] 0 2026-03-10T10:19:38.583 INFO:tasks.workunit.client.1.vm05.stdout:5/485: truncate da/db/d26/f4c 52070 0 2026-03-10T10:19:38.583 
INFO:tasks.workunit.client.1.vm05.stdout:1/522: read d4/df/d1c/f38 [3924815,73579] 0 2026-03-10T10:19:38.585 INFO:tasks.workunit.client.1.vm05.stdout:7/499: getdents d5/d1d/d20/d91 0 2026-03-10T10:19:38.588 INFO:tasks.workunit.client.1.vm05.stdout:9/409: symlink d0/l85 0 2026-03-10T10:19:38.589 INFO:tasks.workunit.client.1.vm05.stdout:7/500: read d5/d17/d66/f8b [137554,10165] 0 2026-03-10T10:19:38.590 INFO:tasks.workunit.client.1.vm05.stdout:1/523: unlink d4/d39/d3e/f4d 0 2026-03-10T10:19:38.593 INFO:tasks.workunit.client.1.vm05.stdout:6/433: mknod dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/c89 0 2026-03-10T10:19:38.593 INFO:tasks.workunit.client.1.vm05.stdout:1/524: chown d4/ca 438 1 2026-03-10T10:19:38.596 INFO:tasks.workunit.client.1.vm05.stdout:3/474: creat dd/d20/faa x:0 0 0 2026-03-10T10:19:38.598 INFO:tasks.workunit.client.1.vm05.stdout:5/486: mknod da/db/ca3 0 2026-03-10T10:19:38.599 INFO:tasks.workunit.client.1.vm05.stdout:7/501: creat d5/d1d/d29/d3e/f99 x:0 0 0 2026-03-10T10:19:38.599 INFO:tasks.workunit.client.1.vm05.stdout:5/487: dread - da/db/d26/d5c/d4b/f6a zero size 2026-03-10T10:19:38.601 INFO:tasks.workunit.client.1.vm05.stdout:9/410: mknod d0/d1/d16/d6e/c86 0 2026-03-10T10:19:38.604 INFO:tasks.workunit.client.1.vm05.stdout:7/502: unlink d5/d1d/d20/d2d/d68/c97 0 2026-03-10T10:19:38.605 INFO:tasks.workunit.client.1.vm05.stdout:6/434: truncate dd/f14 4324069 0 2026-03-10T10:19:38.608 INFO:tasks.workunit.client.1.vm05.stdout:9/411: mknod d0/d1/d16/d6e/c87 0 2026-03-10T10:19:38.610 INFO:tasks.workunit.client.1.vm05.stdout:5/488: dread da/db/d28/d32/f69 [0,4194304] 0 2026-03-10T10:19:38.610 INFO:tasks.workunit.client.1.vm05.stdout:6/435: fsync dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f6b 0 2026-03-10T10:19:38.614 INFO:tasks.workunit.client.1.vm05.stdout:9/412: fdatasync d0/d1/d13/d26/f43 0 2026-03-10T10:19:38.622 INFO:tasks.workunit.client.1.vm05.stdout:9/413: read d0/d1/d16/f40 [1837794,63394] 0 2026-03-10T10:19:38.622 
INFO:tasks.workunit.client.1.vm05.stdout:5/489: mknod da/d63/ca4 0 2026-03-10T10:19:38.623 INFO:tasks.workunit.client.1.vm05.stdout:5/490: chown da 214 1 2026-03-10T10:19:38.624 INFO:tasks.workunit.client.0.vm02.stdout:7/443: dread d1/dc/d16/f1e [0,4194304] 0 2026-03-10T10:19:38.625 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:38 vm05.local ceph-mon[59051]: pgmap v158: 65 pgs: 65 active+clean; 1.9 GiB data, 6.6 GiB used, 113 GiB / 120 GiB avail; 39 MiB/s rd, 141 MiB/s wr, 296 op/s 2026-03-10T10:19:38.625 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:38 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:38.625 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:38 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:38.625 INFO:tasks.workunit.client.0.vm02.stdout:7/444: creat d1/dc/d10/d38/f83 x:0 0 0 2026-03-10T10:19:38.628 INFO:tasks.workunit.client.0.vm02.stdout:7/445: mknod d1/dc/c84 0 2026-03-10T10:19:38.629 INFO:tasks.workunit.client.0.vm02.stdout:7/446: write d1/dc/d16/f1e [1870566,103755] 0 2026-03-10T10:19:38.631 INFO:tasks.workunit.client.0.vm02.stdout:7/447: creat d1/dc/d55/f85 x:0 0 0 2026-03-10T10:19:38.631 INFO:tasks.workunit.client.0.vm02.stdout:7/448: stat d1/dc/d10/d38/l50 0 2026-03-10T10:19:38.634 INFO:tasks.workunit.client.0.vm02.stdout:7/449: dread d1/dc/d16/f48 [0,4194304] 0 2026-03-10T10:19:38.635 INFO:tasks.workunit.client.0.vm02.stdout:7/450: creat d1/d1b/f86 x:0 0 0 2026-03-10T10:19:38.636 INFO:tasks.workunit.client.0.vm02.stdout:7/451: rename d1 to d1/dc/d44/d7e/d87 22 2026-03-10T10:19:38.637 INFO:tasks.workunit.client.0.vm02.stdout:7/452: creat d1/dc/d60/f88 x:0 0 0 2026-03-10T10:19:38.637 INFO:tasks.workunit.client.0.vm02.stdout:7/453: dread - d1/dc/d10/f7d zero size 2026-03-10T10:19:38.655 INFO:tasks.workunit.client.1.vm05.stdout:5/491: dwrite da/db/d28/d32/f79 [4194304,4194304] 0 2026-03-10T10:19:38.663 
INFO:tasks.workunit.client.1.vm05.stdout:6/436: dread f2 [0,4194304] 0 2026-03-10T10:19:38.664 INFO:tasks.workunit.client.1.vm05.stdout:5/492: symlink da/db/d26/d70/d72/la5 0 2026-03-10T10:19:38.668 INFO:tasks.workunit.client.1.vm05.stdout:5/493: creat da/db/d26/d35/d38/fa6 x:0 0 0 2026-03-10T10:19:38.671 INFO:tasks.workunit.client.1.vm05.stdout:5/494: rename da/db/d26/d5c/f50 to da/d9a/fa7 0 2026-03-10T10:19:38.672 INFO:tasks.workunit.client.1.vm05.stdout:5/495: write da/db/d26/d35/d38/f94 [1041308,130216] 0 2026-03-10T10:19:38.683 INFO:tasks.workunit.client.1.vm05.stdout:5/496: read da/db/d26/d5c/f2c [13875,15853] 0 2026-03-10T10:19:38.685 INFO:tasks.workunit.client.1.vm05.stdout:5/497: symlink da/d96/la8 0 2026-03-10T10:19:38.692 INFO:tasks.workunit.client.1.vm05.stdout:5/498: write da/f41 [1044161,75140] 0 2026-03-10T10:19:38.732 INFO:tasks.workunit.client.1.vm05.stdout:5/499: read da/db/d26/d35/d38/f94 [1033952,3564] 0 2026-03-10T10:19:38.733 INFO:tasks.workunit.client.1.vm05.stdout:5/500: readlink da/d96/l99 0 2026-03-10T10:19:38.734 INFO:tasks.workunit.client.1.vm05.stdout:5/501: creat da/db/d28/d8a/fa9 x:0 0 0 2026-03-10T10:19:38.735 INFO:tasks.workunit.client.1.vm05.stdout:5/502: write da/db/d28/d32/f79 [8343798,83443] 0 2026-03-10T10:19:38.738 INFO:tasks.workunit.client.1.vm05.stdout:5/503: creat da/d9a/faa x:0 0 0 2026-03-10T10:19:38.739 INFO:tasks.workunit.client.1.vm05.stdout:5/504: readlink da/l61 0 2026-03-10T10:19:38.742 INFO:tasks.workunit.client.1.vm05.stdout:5/505: rename da/db/d26/d5c/f6b to da/db/d26/d35/d38/fab 0 2026-03-10T10:19:38.746 INFO:tasks.workunit.client.1.vm05.stdout:5/506: truncate da/db/f3b 1474398 0 2026-03-10T10:19:38.751 INFO:tasks.workunit.client.1.vm05.stdout:5/507: truncate da/db/d26/d5c/f46 456487 0 2026-03-10T10:19:38.760 INFO:tasks.workunit.client.1.vm05.stdout:5/508: dwrite da/db/d28/d8a/fa9 [0,4194304] 0 2026-03-10T10:19:38.766 INFO:tasks.workunit.client.1.vm05.stdout:5/509: write da/db/d26/d35/d38/f94 [50962,122816] 0 
2026-03-10T10:19:38.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:38 vm02.local ceph-mon[50200]: pgmap v158: 65 pgs: 65 active+clean; 1.9 GiB data, 6.6 GiB used, 113 GiB / 120 GiB avail; 39 MiB/s rd, 141 MiB/s wr, 296 op/s 2026-03-10T10:19:38.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:38 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:38.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:38 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:38.798 INFO:tasks.workunit.client.0.vm02.stdout:9/428: dwrite da/d3c/d4c/d38/f47 [0,4194304] 0 2026-03-10T10:19:38.800 INFO:tasks.workunit.client.0.vm02.stdout:9/429: chown da/d3c/d4c/d38/d7c 1068333 1 2026-03-10T10:19:38.806 INFO:tasks.workunit.client.0.vm02.stdout:4/608: write d1/d41/d5e/d78/d1a/d49/f5c [1878558,62840] 0 2026-03-10T10:19:38.812 INFO:tasks.workunit.client.0.vm02.stdout:9/430: rename da/d3c/d4c/d38/d4a/f59 to da/d3c/d4c/d38/f88 0 2026-03-10T10:19:38.820 INFO:tasks.workunit.client.0.vm02.stdout:9/431: mkdir da/d3c/d4c/d38/d82/d89 0 2026-03-10T10:19:38.823 INFO:tasks.workunit.client.0.vm02.stdout:5/626: dwrite d1/f7f [0,4194304] 0 2026-03-10T10:19:38.824 INFO:tasks.workunit.client.1.vm05.stdout:2/432: symlink db/d2d/l88 0 2026-03-10T10:19:38.834 INFO:tasks.workunit.client.0.vm02.stdout:5/627: truncate d1/f32 2482365 0 2026-03-10T10:19:38.840 INFO:tasks.workunit.client.0.vm02.stdout:5/628: rename d1/d6a/fca to d1/db/d11/d84/d95/fd6 0 2026-03-10T10:19:38.842 INFO:tasks.workunit.client.0.vm02.stdout:9/432: dread da/f14 [0,4194304] 0 2026-03-10T10:19:38.846 INFO:tasks.workunit.client.0.vm02.stdout:9/433: dread da/d3c/d4c/f33 [0,4194304] 0 2026-03-10T10:19:38.853 INFO:tasks.workunit.client.0.vm02.stdout:5/629: dread d1/db/d11/d84/d40/d4f/f97 [0,4194304] 0 2026-03-10T10:19:38.854 INFO:tasks.workunit.client.0.vm02.stdout:9/434: creat da/d3c/d4c/d38/d82/d89/f8a x:0 0 0 
2026-03-10T10:19:38.860 INFO:tasks.workunit.client.1.vm05.stdout:0/489: truncate d1/d2/d9/d31/d13/d2f/f88 166404 0 2026-03-10T10:19:38.861 INFO:tasks.workunit.client.0.vm02.stdout:3/453: dwrite d1/d20/f38 [0,4194304] 0 2026-03-10T10:19:38.861 INFO:tasks.workunit.client.1.vm05.stdout:0/490: write d1/d2/d9/d31/d12/f1e [1203095,109334] 0 2026-03-10T10:19:38.863 INFO:tasks.workunit.client.0.vm02.stdout:5/630: rename d1/db/d11/d13/d28/d37/d3d/f75 to d1/d6a/fd7 0 2026-03-10T10:19:38.866 INFO:tasks.workunit.client.0.vm02.stdout:8/464: write d1/f12 [4332500,29549] 0 2026-03-10T10:19:38.869 INFO:tasks.workunit.client.0.vm02.stdout:3/454: rename d1/d20/d52/f6b to d1/d20/d52/f92 0 2026-03-10T10:19:38.877 INFO:tasks.workunit.client.1.vm05.stdout:0/491: rmdir d1/d2/d39/d6e 39 2026-03-10T10:19:38.878 INFO:tasks.workunit.client.0.vm02.stdout:5/631: creat d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fd8 x:0 0 0 2026-03-10T10:19:38.881 INFO:tasks.workunit.client.0.vm02.stdout:8/465: unlink d1/c49 0 2026-03-10T10:19:38.882 INFO:tasks.workunit.client.1.vm05.stdout:0/492: truncate d1/d2/d39/f69 678935 0 2026-03-10T10:19:38.883 INFO:tasks.workunit.client.1.vm05.stdout:0/493: chown d1/d2/d9/d31/d12/d20/f2e 24597 1 2026-03-10T10:19:38.885 INFO:tasks.workunit.client.0.vm02.stdout:3/455: dread - d1/d8/d21/f5e zero size 2026-03-10T10:19:38.885 INFO:tasks.workunit.client.0.vm02.stdout:3/456: readlink d1/l65 0 2026-03-10T10:19:38.889 INFO:tasks.workunit.client.0.vm02.stdout:3/457: dwrite d1/f50 [0,4194304] 0 2026-03-10T10:19:38.907 INFO:tasks.workunit.client.0.vm02.stdout:5/632: rmdir d1/db/d11/d84/d40/d4f/d5f 39 2026-03-10T10:19:38.908 INFO:tasks.workunit.client.1.vm05.stdout:8/349: dwrite d7/d14/d24/f42 [0,4194304] 0 2026-03-10T10:19:38.925 INFO:tasks.workunit.client.0.vm02.stdout:1/487: dwrite d4/da/f13 [4194304,4194304] 0 2026-03-10T10:19:38.926 INFO:tasks.workunit.client.0.vm02.stdout:2/473: dwrite d0/d10/f6b [0,4194304] 0 2026-03-10T10:19:38.928 INFO:tasks.workunit.client.0.vm02.stdout:8/466: 
rename d1/d1c/d24/d35/d56/c86 to d1/d1c/d24/d35/d56/c8b 0 2026-03-10T10:19:38.929 INFO:tasks.workunit.client.1.vm05.stdout:8/350: symlink d7/d14/d3a/d49/l64 0 2026-03-10T10:19:38.929 INFO:tasks.workunit.client.0.vm02.stdout:3/458: fsync d1/d8/d44/f56 0 2026-03-10T10:19:38.929 INFO:tasks.workunit.client.1.vm05.stdout:8/351: truncate d7/d14/d24/f61 1047177 0 2026-03-10T10:19:38.933 INFO:tasks.workunit.client.0.vm02.stdout:0/498: write d9/d18/d1a/f7e [434418,124086] 0 2026-03-10T10:19:38.939 INFO:tasks.workunit.client.0.vm02.stdout:6/443: write d0/d8/f5a [1067123,68905] 0 2026-03-10T10:19:38.942 INFO:tasks.workunit.client.1.vm05.stdout:4/341: dwrite d1/d31/dc/d40/d45/f48 [0,4194304] 0 2026-03-10T10:19:38.943 INFO:tasks.workunit.client.1.vm05.stdout:8/352: mkdir d7/d14/d3a/d49/d65 0 2026-03-10T10:19:38.944 INFO:tasks.workunit.client.1.vm05.stdout:4/342: read d1/d31/dc/f2a [2147086,64191] 0 2026-03-10T10:19:38.945 INFO:tasks.workunit.client.1.vm05.stdout:0/494: link d1/d2/d9/d31/d13/d17/l23 d1/d2/d39/d6e/d95/la6 0 2026-03-10T10:19:38.947 INFO:tasks.workunit.client.1.vm05.stdout:3/475: write dd/d15/d24/d2c/f32 [417521,92547] 0 2026-03-10T10:19:38.947 INFO:tasks.workunit.client.1.vm05.stdout:0/495: fdatasync d1/d2/d9/d31/d54/f16 0 2026-03-10T10:19:38.948 INFO:tasks.workunit.client.1.vm05.stdout:1/525: dwrite d4/d20/f31 [0,4194304] 0 2026-03-10T10:19:38.950 INFO:tasks.workunit.client.0.vm02.stdout:1/488: stat d4/d2c/d53/f58 0 2026-03-10T10:19:38.950 INFO:tasks.workunit.client.0.vm02.stdout:1/489: chown d4/c9 59 1 2026-03-10T10:19:38.952 INFO:tasks.workunit.client.1.vm05.stdout:8/353: creat d7/d2f/d57/f66 x:0 0 0 2026-03-10T10:19:38.953 INFO:tasks.workunit.client.1.vm05.stdout:7/503: write d5/d26/f4d [1873526,95562] 0 2026-03-10T10:19:38.954 INFO:tasks.workunit.client.1.vm05.stdout:7/504: truncate d5/f76 770843 0 2026-03-10T10:19:38.955 INFO:tasks.workunit.client.1.vm05.stdout:9/414: truncate d0/d1/d16/f5c 5089814 0 2026-03-10T10:19:38.957 
INFO:tasks.workunit.client.0.vm02.stdout:3/459: symlink d1/d6/d8e/l93 0 2026-03-10T10:19:38.957 INFO:tasks.workunit.client.0.vm02.stdout:7/454: write d1/f15 [140357,18853] 0 2026-03-10T10:19:38.957 INFO:tasks.workunit.client.0.vm02.stdout:3/460: read - d1/d20/f51 zero size 2026-03-10T10:19:38.958 INFO:tasks.workunit.client.0.vm02.stdout:7/455: chown d1/dc/d10/l11 143 1 2026-03-10T10:19:38.959 INFO:tasks.workunit.client.0.vm02.stdout:0/499: mknod d9/d18/d1a/d22/d24/d8e/c9e 0 2026-03-10T10:19:38.963 INFO:tasks.workunit.client.1.vm05.stdout:4/343: read d1/d31/f13 [2529069,129614] 0 2026-03-10T10:19:38.963 INFO:tasks.workunit.client.1.vm05.stdout:0/496: symlink d1/d2/d5d/la7 0 2026-03-10T10:19:38.964 INFO:tasks.workunit.client.1.vm05.stdout:6/437: write dd/d36/d3f/d12/d58/f5a [4022916,104124] 0 2026-03-10T10:19:38.964 INFO:tasks.workunit.client.0.vm02.stdout:0/500: chown d9/f28 119 1 2026-03-10T10:19:38.964 INFO:tasks.workunit.client.0.vm02.stdout:0/501: readlink d9/l87 0 2026-03-10T10:19:38.964 INFO:tasks.workunit.client.0.vm02.stdout:0/502: dwrite d9/d18/d1a/d22/d24/d80/f90 [0,4194304] 0 2026-03-10T10:19:38.967 INFO:tasks.workunit.client.0.vm02.stdout:5/633: rmdir d1/db/d11/d84/d40/d4f/d5f/d6d 39 2026-03-10T10:19:38.967 INFO:tasks.workunit.client.0.vm02.stdout:2/474: fsync d0/f30 0 2026-03-10T10:19:38.969 INFO:tasks.workunit.client.0.vm02.stdout:3/461: mknod d1/d8/d21/d7d/c94 0 2026-03-10T10:19:38.970 INFO:tasks.workunit.client.0.vm02.stdout:3/462: chown d1/d6/f63 4 1 2026-03-10T10:19:38.970 INFO:tasks.workunit.client.0.vm02.stdout:7/456: unlink d1/dc/d16/d28/l2a 0 2026-03-10T10:19:38.980 INFO:tasks.workunit.client.1.vm05.stdout:4/344: rmdir d1/d31/dc 39 2026-03-10T10:19:38.980 INFO:tasks.workunit.client.0.vm02.stdout:0/503: fsync d9/d18/d1a/d22/d24/f4f 0 2026-03-10T10:19:38.982 INFO:tasks.workunit.client.1.vm05.stdout:1/526: symlink d4/df/d1c/d53/l99 0 2026-03-10T10:19:38.983 INFO:tasks.workunit.client.1.vm05.stdout:5/510: getdents da/db/d28/d8a 0 
2026-03-10T10:19:38.988 INFO:tasks.workunit.client.0.vm02.stdout:5/634: read d1/db/d11/d62/f65 [99861,8591] 0 2026-03-10T10:19:38.988 INFO:tasks.workunit.client.1.vm05.stdout:6/438: stat dd/d36/d3f/d12/d44/f2f 0 2026-03-10T10:19:38.993 INFO:tasks.workunit.client.1.vm05.stdout:6/439: dwrite dd/d36/d3f/d12/d44/d2a/d3d/d48/f4b [0,4194304] 0 2026-03-10T10:19:38.993 INFO:tasks.workunit.client.0.vm02.stdout:6/444: truncate d0/d8/d29/f59 1010369 0 2026-03-10T10:19:38.994 INFO:tasks.workunit.client.0.vm02.stdout:6/445: chown d0/d8/d29/d2f/f8e 338 1 2026-03-10T10:19:38.994 INFO:tasks.workunit.client.1.vm05.stdout:7/505: getdents d5/d1d/d29/d3e/d8c/d96 0 2026-03-10T10:19:39.005 INFO:tasks.workunit.client.0.vm02.stdout:1/490: symlink d4/da/d27/d38/d80/d95/l9b 0 2026-03-10T10:19:39.012 INFO:tasks.workunit.client.0.vm02.stdout:4/609: dwrite d1/d41/d5e/d78/d37/f14 [4194304,4194304] 0 2026-03-10T10:19:39.012 INFO:tasks.workunit.client.1.vm05.stdout:1/527: dread d4/dd/f60 [0,4194304] 0 2026-03-10T10:19:39.012 INFO:tasks.workunit.client.1.vm05.stdout:9/415: truncate d0/d1/d13/de/f46 740529 0 2026-03-10T10:19:39.015 INFO:tasks.workunit.client.1.vm05.stdout:3/476: sync 2026-03-10T10:19:39.016 INFO:tasks.workunit.client.1.vm05.stdout:3/477: chown dd/d39/d66/f6e 7 1 2026-03-10T10:19:39.020 INFO:tasks.workunit.client.0.vm02.stdout:3/463: dread d1/f5 [0,4194304] 0 2026-03-10T10:19:39.024 INFO:tasks.workunit.client.0.vm02.stdout:0/504: unlink d9/d18/l5b 0 2026-03-10T10:19:39.034 INFO:tasks.workunit.client.1.vm05.stdout:2/433: write db/d2d/f48 [1160880,55589] 0 2026-03-10T10:19:39.035 INFO:tasks.workunit.client.1.vm05.stdout:5/511: stat da/db/d26/d5c/d4b/c9c 0 2026-03-10T10:19:39.036 INFO:tasks.workunit.client.0.vm02.stdout:4/610: read d1/d10/db/f24 [3412202,78267] 0 2026-03-10T10:19:39.038 INFO:tasks.workunit.client.0.vm02.stdout:0/505: creat d9/d34/d3d/d67/f9f x:0 0 0 2026-03-10T10:19:39.040 INFO:tasks.workunit.client.0.vm02.stdout:6/446: getdents d0/d8/d29/d2f/d50/d7e 0 
2026-03-10T10:19:39.043 INFO:tasks.workunit.client.0.vm02.stdout:6/447: dwrite d0/d8/d9/d7a/f40 [0,4194304] 0 2026-03-10T10:19:39.056 INFO:tasks.workunit.client.1.vm05.stdout:0/497: truncate d1/d2/d9/d31/d12/d20/f2e 860137 0 2026-03-10T10:19:39.056 INFO:tasks.workunit.client.1.vm05.stdout:9/416: symlink d0/df/d11/l88 0 2026-03-10T10:19:39.057 INFO:tasks.workunit.client.0.vm02.stdout:3/464: rename d1/d6/f48 to d1/d6/d8b/f95 0 2026-03-10T10:19:39.057 INFO:tasks.workunit.client.0.vm02.stdout:1/491: dread d4/da/d1a/d22/f32 [0,4194304] 0 2026-03-10T10:19:39.057 INFO:tasks.workunit.client.0.vm02.stdout:3/465: creat d1/d6/d8e/f96 x:0 0 0 2026-03-10T10:19:39.057 INFO:tasks.workunit.client.0.vm02.stdout:1/492: symlink d4/da/d27/l9c 0 2026-03-10T10:19:39.057 INFO:tasks.workunit.client.0.vm02.stdout:1/493: stat d4/da/d27/f66 0 2026-03-10T10:19:39.059 INFO:tasks.workunit.client.0.vm02.stdout:0/506: read d9/d18/d1a/d22/d24/f26 [1602203,73461] 0 2026-03-10T10:19:39.062 INFO:tasks.workunit.client.1.vm05.stdout:3/478: dread dd/d15/d24/d2c/f32 [4194304,4194304] 0 2026-03-10T10:19:39.063 INFO:tasks.workunit.client.0.vm02.stdout:0/507: dwrite d9/d18/d1a/f7e [0,4194304] 0 2026-03-10T10:19:39.070 INFO:tasks.workunit.client.0.vm02.stdout:3/466: truncate d1/d8/d21/f3c 1904053 0 2026-03-10T10:19:39.074 INFO:tasks.workunit.client.0.vm02.stdout:3/467: dwrite d1/d6/f1b [0,4194304] 0 2026-03-10T10:19:39.083 INFO:tasks.workunit.client.0.vm02.stdout:3/468: read d1/fe [3349376,49377] 0 2026-03-10T10:19:39.086 INFO:tasks.workunit.client.0.vm02.stdout:1/494: chown d4/d2c/d53/l89 12 1 2026-03-10T10:19:39.093 INFO:tasks.workunit.client.0.vm02.stdout:3/469: mknod d1/d8/d21/d73/d78/d84/c97 0 2026-03-10T10:19:39.094 INFO:tasks.workunit.client.0.vm02.stdout:0/508: rename d9/d34/d3d/c47 to d9/d18/d1a/d22/d24/d80/ca0 0 2026-03-10T10:19:39.097 INFO:tasks.workunit.client.0.vm02.stdout:3/470: read - d1/d20/f64 zero size 2026-03-10T10:19:39.100 INFO:tasks.workunit.client.0.vm02.stdout:0/509: mkdir 
d9/d18/d1a/d46/d5d/da1 0 2026-03-10T10:19:39.103 INFO:tasks.workunit.client.0.vm02.stdout:9/435: dwrite da/d3c/d4c/f49 [0,4194304] 0 2026-03-10T10:19:39.113 INFO:tasks.workunit.client.1.vm05.stdout:8/354: link d7/l27 d7/d14/d15/d3b/l67 0 2026-03-10T10:19:39.114 INFO:tasks.workunit.client.0.vm02.stdout:0/510: mkdir d9/d34/d3d/d65/da2 0 2026-03-10T10:19:39.116 INFO:tasks.workunit.client.0.vm02.stdout:0/511: readlink d9/d18/d1a/d22/d24/d80/d74/l99 0 2026-03-10T10:19:39.119 INFO:tasks.workunit.client.1.vm05.stdout:5/512: rename da/l61 to da/db/d26/d5c/d4b/lac 0 2026-03-10T10:19:39.132 INFO:tasks.workunit.client.0.vm02.stdout:9/436: fsync da/f1b 0 2026-03-10T10:19:39.132 INFO:tasks.workunit.client.0.vm02.stdout:9/437: fsync da/f28 0 2026-03-10T10:19:39.132 INFO:tasks.workunit.client.0.vm02.stdout:9/438: write da/d3c/d4c/f41 [4239162,98208] 0 2026-03-10T10:19:39.132 INFO:tasks.workunit.client.1.vm05.stdout:7/506: mkdir d5/d1d/d29/d3e/d8c/d82/d90/d9a 0 2026-03-10T10:19:39.132 INFO:tasks.workunit.client.1.vm05.stdout:5/513: chown da/db/d26/d70/d72 69998 1 2026-03-10T10:19:39.132 INFO:tasks.workunit.client.1.vm05.stdout:2/434: dwrite db/d28/f35 [0,4194304] 0 2026-03-10T10:19:39.132 INFO:tasks.workunit.client.1.vm05.stdout:1/528: symlink d4/d37/l9a 0 2026-03-10T10:19:39.132 INFO:tasks.workunit.client.1.vm05.stdout:3/479: rename dd/d15/d24/d2c/d6d/d89 to dd/d20/d56/d5e/dab 0 2026-03-10T10:19:39.132 INFO:tasks.workunit.client.1.vm05.stdout:1/529: dwrite d4/d37/f89 [0,4194304] 0 2026-03-10T10:19:39.133 INFO:tasks.workunit.client.1.vm05.stdout:8/355: sync 2026-03-10T10:19:39.134 INFO:tasks.workunit.client.1.vm05.stdout:8/356: chown d7/d14/d24/c3e 28810 1 2026-03-10T10:19:39.135 INFO:tasks.workunit.client.1.vm05.stdout:8/357: stat d7/d14/f40 0 2026-03-10T10:19:39.135 INFO:tasks.workunit.client.1.vm05.stdout:8/358: chown d7/d14/d62 94212 1 2026-03-10T10:19:39.140 INFO:tasks.workunit.client.1.vm05.stdout:9/417: symlink d0/d1/d13/d55/d7d/l89 0 2026-03-10T10:19:39.159 
INFO:tasks.workunit.client.1.vm05.stdout:3/480: mkdir dd/d15/d24/d8e/dac 0 2026-03-10T10:19:39.159 INFO:tasks.workunit.client.1.vm05.stdout:0/498: creat d1/d2/d9/d31/fa8 x:0 0 0 2026-03-10T10:19:39.159 INFO:tasks.workunit.client.1.vm05.stdout:9/418: stat d0/d1/d16/l65 0 2026-03-10T10:19:39.159 INFO:tasks.workunit.client.1.vm05.stdout:5/514: rmdir da/db/d26/d35/d73 0 2026-03-10T10:19:39.159 INFO:tasks.workunit.client.0.vm02.stdout:0/512: dread d9/d18/d1a/d22/d24/d80/d74/f62 [0,4194304] 0 2026-03-10T10:19:39.159 INFO:tasks.workunit.client.0.vm02.stdout:0/513: stat d9/d18/d1a/d22/d24/d80/d74/c7c 0 2026-03-10T10:19:39.159 INFO:tasks.workunit.client.0.vm02.stdout:0/514: chown d9/d34/d3d/d7b 56004550 1 2026-03-10T10:19:39.159 INFO:tasks.workunit.client.0.vm02.stdout:0/515: rmdir d9 39 2026-03-10T10:19:39.159 INFO:tasks.workunit.client.0.vm02.stdout:0/516: mkdir d9/d34/d3d/d65/da3 0 2026-03-10T10:19:39.159 INFO:tasks.workunit.client.1.vm05.stdout:5/515: dread - da/db/d26/d5c/d4b/f95 zero size 2026-03-10T10:19:39.160 INFO:tasks.workunit.client.1.vm05.stdout:5/516: fdatasync da/db/d26/d35/d38/f51 0 2026-03-10T10:19:39.163 INFO:tasks.workunit.client.0.vm02.stdout:0/517: dwrite d9/d18/f1e [0,4194304] 0 2026-03-10T10:19:39.164 INFO:tasks.workunit.client.0.vm02.stdout:0/518: truncate d9/d34/f97 142362 0 2026-03-10T10:19:39.165 INFO:tasks.workunit.client.0.vm02.stdout:0/519: write d9/d18/d1a/f88 [1107562,105587] 0 2026-03-10T10:19:39.170 INFO:tasks.workunit.client.1.vm05.stdout:1/530: rename d4/d37/d4e/l7a to d4/l9b 0 2026-03-10T10:19:39.181 INFO:tasks.workunit.client.1.vm05.stdout:1/531: dwrite d4/d3d/d6e/f7c [0,4194304] 0 2026-03-10T10:19:39.181 INFO:tasks.workunit.client.1.vm05.stdout:8/359: rename d7/d14/f4e to d7/d14/d3a/f68 0 2026-03-10T10:19:39.181 INFO:tasks.workunit.client.0.vm02.stdout:0/520: link d9/d18/d1a/d22/d24/d8e/c9e d9/d18/d1a/d22/d24/d8e/ca4 0 2026-03-10T10:19:39.184 INFO:tasks.workunit.client.0.vm02.stdout:0/521: creat d9/d18/d1a/d22/d24/d79/d7d/fa5 x:0 0 0 
2026-03-10T10:19:39.185 INFO:tasks.workunit.client.0.vm02.stdout:0/522: write d9/d18/d1a/d22/d24/f26 [3188503,1194] 0 2026-03-10T10:19:39.190 INFO:tasks.workunit.client.1.vm05.stdout:9/419: sync 2026-03-10T10:19:39.190 INFO:tasks.workunit.client.1.vm05.stdout:8/360: rename d7/d14/d24/f35 to d7/d14/d62/f69 0 2026-03-10T10:19:39.192 INFO:tasks.workunit.client.1.vm05.stdout:1/532: link d4/df/d1c/f63 d4/df/d1c/f9c 0 2026-03-10T10:19:39.209 INFO:tasks.workunit.client.1.vm05.stdout:1/533: sync 2026-03-10T10:19:39.213 INFO:tasks.workunit.client.1.vm05.stdout:1/534: dwrite d4/df/d1c/f9c [0,4194304] 0 2026-03-10T10:19:39.280 INFO:tasks.workunit.client.1.vm05.stdout:8/361: dread d7/d14/f5b [0,4194304] 0 2026-03-10T10:19:39.282 INFO:tasks.workunit.client.0.vm02.stdout:5/635: read d1/db/d11/d84/d95/fd6 [2118800,124937] 0 2026-03-10T10:19:39.284 INFO:tasks.workunit.client.1.vm05.stdout:8/362: mkdir d7/d14/d24/d3f/d6a 0 2026-03-10T10:19:39.285 INFO:tasks.workunit.client.0.vm02.stdout:5/636: mkdir d1/db/d11/d13/d28/da7/dd9 0 2026-03-10T10:19:39.286 INFO:tasks.workunit.client.0.vm02.stdout:5/637: chown d1/db/d11/d84/d40/d4f/f6e 76148159 1 2026-03-10T10:19:39.288 INFO:tasks.workunit.client.1.vm05.stdout:8/363: unlink d7/d14/f22 0 2026-03-10T10:19:39.292 INFO:tasks.workunit.client.0.vm02.stdout:5/638: rename d1/db/d11/d16/d79/ccd to d1/db/d11/d84/d40/d4f/cda 0 2026-03-10T10:19:39.349 INFO:tasks.workunit.client.0.vm02.stdout:8/467: write d1/d1c/d24/d35/f6e [306856,107373] 0 2026-03-10T10:19:39.349 INFO:tasks.workunit.client.0.vm02.stdout:8/468: chown d1/d1c/d24/f8a 10821937 1 2026-03-10T10:19:39.365 INFO:tasks.workunit.client.0.vm02.stdout:2/475: dwrite d0/f9 [0,4194304] 0 2026-03-10T10:19:39.370 INFO:tasks.workunit.client.0.vm02.stdout:2/476: dwrite d0/d10/f6b [0,4194304] 0 2026-03-10T10:19:39.372 INFO:tasks.workunit.client.0.vm02.stdout:2/477: dread - d0/f8f zero size 2026-03-10T10:19:39.384 INFO:tasks.workunit.client.0.vm02.stdout:2/478: dread d0/d1a/f52 [0,4194304] 0 
2026-03-10T10:19:39.389 INFO:tasks.workunit.client.0.vm02.stdout:2/479: dwrite d0/f9 [0,4194304] 0 2026-03-10T10:19:39.394 INFO:tasks.workunit.client.0.vm02.stdout:4/611: dwrite d1/d41/d5e/d78/d7f/f8e [0,4194304] 0 2026-03-10T10:19:39.395 INFO:tasks.workunit.client.0.vm02.stdout:2/480: creat d0/d10/d81/f9b x:0 0 0 2026-03-10T10:19:39.400 INFO:tasks.workunit.client.0.vm02.stdout:6/448: dwrite d0/d8/d29/d2f/d4b/f26 [0,4194304] 0 2026-03-10T10:19:39.409 INFO:tasks.workunit.client.0.vm02.stdout:4/612: dwrite d1/d52/d53/fbb [0,4194304] 0 2026-03-10T10:19:39.415 INFO:tasks.workunit.client.0.vm02.stdout:2/481: symlink d0/d10/d69/l9c 0 2026-03-10T10:19:39.417 INFO:tasks.workunit.client.1.vm05.stdout:4/345: dwrite d1/d31/f36 [0,4194304] 0 2026-03-10T10:19:39.425 INFO:tasks.workunit.client.0.vm02.stdout:2/482: creat d0/d1a/d49/d5e/d65/f9d x:0 0 0 2026-03-10T10:19:39.427 INFO:tasks.workunit.client.0.vm02.stdout:4/613: symlink d1/d10/d88/db2/lc7 0 2026-03-10T10:19:39.428 INFO:tasks.workunit.client.0.vm02.stdout:2/483: creat d0/d1a/d49/d5e/d65/f9e x:0 0 0 2026-03-10T10:19:39.431 INFO:tasks.workunit.client.0.vm02.stdout:4/614: dread - d1/d32/f95 zero size 2026-03-10T10:19:39.432 INFO:tasks.workunit.client.0.vm02.stdout:2/484: mkdir d0/d10/d69/d9f 0 2026-03-10T10:19:39.434 INFO:tasks.workunit.client.0.vm02.stdout:4/615: symlink d1/d41/d5e/d78/d37/lc8 0 2026-03-10T10:19:39.435 INFO:tasks.workunit.client.0.vm02.stdout:2/485: creat d0/d1a/d49/d5e/fa0 x:0 0 0 2026-03-10T10:19:39.442 INFO:tasks.workunit.client.0.vm02.stdout:4/616: fdatasync d1/d32/f95 0 2026-03-10T10:19:39.443 INFO:tasks.workunit.client.0.vm02.stdout:1/495: write d4/d2c/f77 [363817,11859] 0 2026-03-10T10:19:39.446 INFO:tasks.workunit.client.0.vm02.stdout:1/496: dwrite d4/d2c/d53/f99 [0,4194304] 0 2026-03-10T10:19:39.447 INFO:tasks.workunit.client.0.vm02.stdout:1/497: write d4/da/f73 [7728,69316] 0 2026-03-10T10:19:39.453 INFO:tasks.workunit.client.0.vm02.stdout:7/457: dread d1/f17 [0,4194304] 0 2026-03-10T10:19:39.460 
INFO:tasks.workunit.client.0.vm02.stdout:7/458: creat d1/dc/d60/f89 x:0 0 0 2026-03-10T10:19:39.461 INFO:tasks.workunit.client.0.vm02.stdout:2/486: creat d0/d10/fa1 x:0 0 0 2026-03-10T10:19:39.461 INFO:tasks.workunit.client.0.vm02.stdout:1/498: creat d4/da/f9d x:0 0 0 2026-03-10T10:19:39.461 INFO:tasks.workunit.client.0.vm02.stdout:1/499: write d4/f81 [1088642,45651] 0 2026-03-10T10:19:39.461 INFO:tasks.workunit.client.0.vm02.stdout:4/617: dread d1/d41/d5e/d78/d1a/f8c [0,4194304] 0 2026-03-10T10:19:39.466 INFO:tasks.workunit.client.0.vm02.stdout:4/618: dread - d1/d10/faf zero size 2026-03-10T10:19:39.466 INFO:tasks.workunit.client.0.vm02.stdout:1/500: mknod d4/c9e 0 2026-03-10T10:19:39.468 INFO:tasks.workunit.client.0.vm02.stdout:4/619: unlink d1/fc2 0 2026-03-10T10:19:39.468 INFO:tasks.workunit.client.0.vm02.stdout:4/620: chown d1/d10/c51 50255 1 2026-03-10T10:19:39.469 INFO:tasks.workunit.client.0.vm02.stdout:7/459: getdents d1/dc/d55 0 2026-03-10T10:19:39.470 INFO:tasks.workunit.client.0.vm02.stdout:1/501: read d4/f5 [1285977,97860] 0 2026-03-10T10:19:39.480 INFO:tasks.workunit.client.0.vm02.stdout:1/502: readlink d4/d2c/l72 0 2026-03-10T10:19:39.480 INFO:tasks.workunit.client.0.vm02.stdout:1/503: read d4/fe [3640241,50328] 0 2026-03-10T10:19:39.480 INFO:tasks.workunit.client.0.vm02.stdout:4/621: truncate d1/d52/d53/f79 385749 0 2026-03-10T10:19:39.480 INFO:tasks.workunit.client.0.vm02.stdout:7/460: creat d1/dc/d16/d28/d2c/f8a x:0 0 0 2026-03-10T10:19:39.480 INFO:tasks.workunit.client.0.vm02.stdout:7/461: creat d1/dc/d55/f8b x:0 0 0 2026-03-10T10:19:39.480 INFO:tasks.workunit.client.0.vm02.stdout:7/462: creat d1/dc/d16/d28/d2d/d36/f8c x:0 0 0 2026-03-10T10:19:39.480 INFO:tasks.workunit.client.0.vm02.stdout:7/463: truncate d1/dc/d60/f53 454198 0 2026-03-10T10:19:39.480 INFO:tasks.workunit.client.0.vm02.stdout:7/464: chown d1/dc/c29 803886 1 2026-03-10T10:19:39.486 INFO:tasks.workunit.client.0.vm02.stdout:3/471: dwrite d1/d20/d52/f6f [0,4194304] 0 
2026-03-10T10:19:39.488 INFO:tasks.workunit.client.0.vm02.stdout:3/472: write d1/d8/fb [404775,72545] 0 2026-03-10T10:19:39.492 INFO:tasks.workunit.client.0.vm02.stdout:7/465: rmdir d1/dc/d16/d28/d2d/d7c 0 2026-03-10T10:19:39.494 INFO:tasks.workunit.client.0.vm02.stdout:3/473: link d1/d20/c2b d1/d6/d8e/c98 0 2026-03-10T10:19:39.495 INFO:tasks.workunit.client.0.vm02.stdout:3/474: chown d1/d20/f38 54 1 2026-03-10T10:19:39.575 INFO:tasks.workunit.client.1.vm05.stdout:6/440: write dd/d36/d3f/d12/f20 [2721294,46816] 0 2026-03-10T10:19:39.577 INFO:tasks.workunit.client.0.vm02.stdout:9/439: truncate da/d3c/d4c/f41 2386609 0 2026-03-10T10:19:39.580 INFO:tasks.workunit.client.1.vm05.stdout:7/507: dwrite d5/d26/f2c [0,4194304] 0 2026-03-10T10:19:39.584 INFO:tasks.workunit.client.1.vm05.stdout:3/481: dwrite dd/d15/d24/d2c/f38 [0,4194304] 0 2026-03-10T10:19:39.584 INFO:tasks.workunit.client.0.vm02.stdout:9/440: chown da/d3c/d4c/d2c/d34/d35/l46 29014 1 2026-03-10T10:19:39.584 INFO:tasks.workunit.client.0.vm02.stdout:9/441: stat da/f1b 0 2026-03-10T10:19:39.590 INFO:tasks.workunit.client.0.vm02.stdout:9/442: dread da/d3c/d53/f73 [0,4194304] 0 2026-03-10T10:19:39.591 INFO:tasks.workunit.client.1.vm05.stdout:3/482: dwrite dd/fe [0,4194304] 0 2026-03-10T10:19:39.591 INFO:tasks.workunit.client.0.vm02.stdout:9/443: fsync da/d3c/d4c/d2c/d34/f81 0 2026-03-10T10:19:39.591 INFO:tasks.workunit.client.0.vm02.stdout:9/444: chown da/f25 8044221 1 2026-03-10T10:19:39.595 INFO:tasks.workunit.client.1.vm05.stdout:3/483: stat dd/d15/d1f/l30 0 2026-03-10T10:19:39.595 INFO:tasks.workunit.client.1.vm05.stdout:3/484: dread - dd/d39/d5f/fa2 zero size 2026-03-10T10:19:39.599 INFO:tasks.workunit.client.1.vm05.stdout:3/485: chown dd/d39/d5c/l62 329 1 2026-03-10T10:19:39.599 INFO:tasks.workunit.client.1.vm05.stdout:3/486: dread - dd/d15/d69/f86 zero size 2026-03-10T10:19:39.600 INFO:tasks.workunit.client.1.vm05.stdout:2/435: truncate db/d1c/f56 716648 0 2026-03-10T10:19:39.602 
INFO:tasks.workunit.client.0.vm02.stdout:9/445: link da/d3c/d4c/d2c/f32 da/d3c/f8b 0 2026-03-10T10:19:39.606 INFO:tasks.workunit.client.1.vm05.stdout:0/499: dwrite d1/d2/d5d/f5f [0,4194304] 0 2026-03-10T10:19:39.607 INFO:tasks.workunit.client.0.vm02.stdout:0/523: write d9/d18/d1a/d22/f3f [3746196,128513] 0 2026-03-10T10:19:39.607 INFO:tasks.workunit.client.1.vm05.stdout:8/364: getdents d7/d14/d62 0 2026-03-10T10:19:39.612 INFO:tasks.workunit.client.1.vm05.stdout:5/517: dwrite da/db/d26/d35/d38/f65 [0,4194304] 0 2026-03-10T10:19:39.616 INFO:tasks.workunit.client.1.vm05.stdout:5/518: chown da/db/d26/d5c/d4b/l5d 3668 1 2026-03-10T10:19:39.623 INFO:tasks.workunit.client.1.vm05.stdout:9/420: dwrite d0/f1e [4194304,4194304] 0 2026-03-10T10:19:39.623 INFO:tasks.workunit.client.1.vm05.stdout:9/421: read - d0/d1/d13/de/d21/f71 zero size 2026-03-10T10:19:39.625 INFO:tasks.workunit.client.1.vm05.stdout:8/365: creat d7/d14/d3a/d49/f6b x:0 0 0 2026-03-10T10:19:39.627 INFO:tasks.workunit.client.1.vm05.stdout:8/366: truncate d7/d14/d3a/f50 4911684 0 2026-03-10T10:19:39.631 INFO:tasks.workunit.client.1.vm05.stdout:1/535: write d4/d3d/f4c [536030,22222] 0 2026-03-10T10:19:39.638 INFO:tasks.workunit.client.1.vm05.stdout:9/422: creat d0/df/d74/f8a x:0 0 0 2026-03-10T10:19:39.639 INFO:tasks.workunit.client.1.vm05.stdout:8/367: fdatasync d7/d14/d3a/f68 0 2026-03-10T10:19:39.640 INFO:tasks.workunit.client.1.vm05.stdout:9/423: write d0/d1/d16/f36 [661599,125117] 0 2026-03-10T10:19:39.640 INFO:tasks.workunit.client.1.vm05.stdout:0/500: getdents d1/d2/d39/d6e/d95 0 2026-03-10T10:19:39.641 INFO:tasks.workunit.client.1.vm05.stdout:9/424: fdatasync d0/f73 0 2026-03-10T10:19:39.641 INFO:tasks.workunit.client.1.vm05.stdout:1/536: dread d4/d20/f49 [4194304,4194304] 0 2026-03-10T10:19:39.648 INFO:tasks.workunit.client.1.vm05.stdout:1/537: dwrite d4/df/f73 [0,4194304] 0 2026-03-10T10:19:39.652 INFO:tasks.workunit.client.1.vm05.stdout:9/425: symlink d0/df/d74/l8b 0 2026-03-10T10:19:39.658 
INFO:tasks.workunit.client.1.vm05.stdout:0/501: creat d1/d2/d9/d31/d12/d41/fa9 x:0 0 0 2026-03-10T10:19:39.658 INFO:tasks.workunit.client.1.vm05.stdout:1/538: creat d4/d20/f9d x:0 0 0 2026-03-10T10:19:39.658 INFO:tasks.workunit.client.1.vm05.stdout:8/368: creat d7/d14/d24/d3f/d6a/f6c x:0 0 0 2026-03-10T10:19:39.658 INFO:tasks.workunit.client.1.vm05.stdout:9/426: mkdir d0/df/d74/d8c 0 2026-03-10T10:19:39.658 INFO:tasks.workunit.client.1.vm05.stdout:0/502: chown d1/d2/d9/d31/d13/d15/d4e/f89 233151806 1 2026-03-10T10:19:39.658 INFO:tasks.workunit.client.1.vm05.stdout:1/539: truncate d4/dd/f64 1000716 0 2026-03-10T10:19:39.659 INFO:tasks.workunit.client.1.vm05.stdout:9/427: creat d0/df/d11/f8d x:0 0 0 2026-03-10T10:19:39.667 INFO:tasks.workunit.client.1.vm05.stdout:1/540: link d4/df/d1c/d53/f6b d4/df/d1c/d92/f9e 0 2026-03-10T10:19:39.671 INFO:tasks.workunit.client.1.vm05.stdout:1/541: dread d4/f46 [0,4194304] 0 2026-03-10T10:19:39.674 INFO:tasks.workunit.client.0.vm02.stdout:9/446: read da/f5c [3461996,80270] 0 2026-03-10T10:19:39.675 INFO:tasks.workunit.client.0.vm02.stdout:9/447: write da/d3c/d4c/f27 [4752832,33218] 0 2026-03-10T10:19:39.683 INFO:tasks.workunit.client.0.vm02.stdout:9/448: mkdir da/d3c/d4c/d38/d82/d8c 0 2026-03-10T10:19:39.691 INFO:tasks.workunit.client.0.vm02.stdout:9/449: chown da/d3c/d4c/d38/d4a/c51 51 1 2026-03-10T10:19:39.692 INFO:tasks.workunit.client.0.vm02.stdout:9/450: fsync da/d3c/d4c/d2c/d34/f4d 0 2026-03-10T10:19:39.696 INFO:tasks.workunit.client.1.vm05.stdout:9/428: dread d0/d1/d16/f40 [0,4194304] 0 2026-03-10T10:19:39.696 INFO:tasks.workunit.client.1.vm05.stdout:9/429: fsync d0/df/d11/f50 0 2026-03-10T10:19:39.700 INFO:tasks.workunit.client.1.vm05.stdout:9/430: symlink d0/d1/d13/d26/l8e 0 2026-03-10T10:19:39.701 INFO:tasks.workunit.client.1.vm05.stdout:9/431: mkdir d0/df/d74/d8c/d8f 0 2026-03-10T10:19:39.702 INFO:tasks.workunit.client.1.vm05.stdout:1/542: sync 2026-03-10T10:19:39.702 INFO:tasks.workunit.client.1.vm05.stdout:9/432: dread 
d0/d1/d13/de/f5b [0,4194304] 0 2026-03-10T10:19:39.703 INFO:tasks.workunit.client.1.vm05.stdout:1/543: write d4/df/d1c/d53/f98 [344369,98146] 0 2026-03-10T10:19:39.703 INFO:tasks.workunit.client.1.vm05.stdout:1/544: chown d4/d39 32665926 1 2026-03-10T10:19:39.703 INFO:tasks.workunit.client.1.vm05.stdout:9/433: dread - d0/d1/d13/de/d21/f71 zero size 2026-03-10T10:19:39.821 INFO:tasks.workunit.client.0.vm02.stdout:8/469: dwrite d1/f68 [0,4194304] 0 2026-03-10T10:19:39.823 INFO:tasks.workunit.client.0.vm02.stdout:5/639: dwrite d1/db/d11/d84/d40/d4f/d5f/f73 [0,4194304] 0 2026-03-10T10:19:39.842 INFO:tasks.workunit.client.0.vm02.stdout:8/470: dread d1/f73 [0,4194304] 0 2026-03-10T10:19:39.842 INFO:tasks.workunit.client.0.vm02.stdout:6/449: dwrite d0/f6b [0,4194304] 0 2026-03-10T10:19:39.853 INFO:tasks.workunit.client.1.vm05.stdout:4/346: truncate d1/d31/dc/f1f 4321429 0 2026-03-10T10:19:39.857 INFO:tasks.workunit.client.1.vm05.stdout:4/347: dread d1/f5d [0,4194304] 0 2026-03-10T10:19:39.862 INFO:tasks.workunit.client.1.vm05.stdout:4/348: mknod d1/d3/c6e 0 2026-03-10T10:19:39.864 INFO:tasks.workunit.client.1.vm05.stdout:4/349: truncate d1/d3/f26 602705 0 2026-03-10T10:19:39.867 INFO:tasks.workunit.client.1.vm05.stdout:4/350: dwrite d1/d3/f62 [0,4194304] 0 2026-03-10T10:19:39.888 INFO:tasks.workunit.client.1.vm05.stdout:4/351: creat d1/d31/f6f x:0 0 0 2026-03-10T10:19:39.888 INFO:tasks.workunit.client.0.vm02.stdout:2/487: dwrite d0/f72 [0,4194304] 0 2026-03-10T10:19:39.888 INFO:tasks.workunit.client.0.vm02.stdout:7/466: rmdir d1/dc/d16/d28/d2c 39 2026-03-10T10:19:39.892 INFO:tasks.workunit.client.0.vm02.stdout:1/504: write d4/da/f28 [105894,90923] 0 2026-03-10T10:19:39.892 INFO:tasks.workunit.client.0.vm02.stdout:7/467: creat d1/dc/d55/f8d x:0 0 0 2026-03-10T10:19:39.892 INFO:tasks.workunit.client.0.vm02.stdout:6/450: dread d0/d8/d8c/f36 [0,4194304] 0 2026-03-10T10:19:39.893 INFO:tasks.workunit.client.0.vm02.stdout:4/622: truncate d1/d41/d5e/d78/d7f/f74 283741 0 
2026-03-10T10:19:39.896 INFO:tasks.workunit.client.0.vm02.stdout:7/468: mkdir d1/d1b/d8e 0 2026-03-10T10:19:39.900 INFO:tasks.workunit.client.0.vm02.stdout:1/505: creat d4/da/d1a/d5b/f9f x:0 0 0 2026-03-10T10:19:39.908 INFO:tasks.workunit.client.0.vm02.stdout:4/623: creat d1/d41/d5e/d78/d7f/d82/fc9 x:0 0 0 2026-03-10T10:19:39.909 INFO:tasks.workunit.client.0.vm02.stdout:7/469: dwrite d1/dc/d16/d28/d2c/f8a [0,4194304] 0 2026-03-10T10:19:39.910 INFO:tasks.workunit.client.0.vm02.stdout:1/506: fsync d4/ff 0 2026-03-10T10:19:39.911 INFO:tasks.workunit.client.0.vm02.stdout:7/470: stat d1/dc/d44 0 2026-03-10T10:19:39.912 INFO:tasks.workunit.client.0.vm02.stdout:3/475: write d1/d6/f3a [271395,103751] 0 2026-03-10T10:19:39.913 INFO:tasks.workunit.client.0.vm02.stdout:4/624: rename d1/f6f to d1/d10/d88/db2/fca 0 2026-03-10T10:19:39.919 INFO:tasks.workunit.client.0.vm02.stdout:1/507: rmdir d4/d4a 39 2026-03-10T10:19:39.920 INFO:tasks.workunit.client.0.vm02.stdout:1/508: write d4/da/d1a/d22/f32 [4248763,110012] 0 2026-03-10T10:19:39.928 INFO:tasks.workunit.client.0.vm02.stdout:4/625: fsync d1/d10/f71 0 2026-03-10T10:19:39.929 INFO:tasks.workunit.client.0.vm02.stdout:4/626: readlink d1/d41/d5e/d78/l61 0 2026-03-10T10:19:39.933 INFO:tasks.workunit.client.0.vm02.stdout:6/451: rename d0/c47 to d0/c92 0 2026-03-10T10:19:39.936 INFO:tasks.workunit.client.0.vm02.stdout:4/627: symlink d1/d41/d5e/d78/d7f/lcb 0 2026-03-10T10:19:39.937 INFO:tasks.workunit.client.0.vm02.stdout:4/628: write d1/f94 [3817097,371] 0 2026-03-10T10:19:39.938 INFO:tasks.workunit.client.0.vm02.stdout:4/629: fdatasync d1/d41/d5e/d78/d7f/fb9 0 2026-03-10T10:19:39.948 INFO:tasks.workunit.client.0.vm02.stdout:6/452: dread d0/d8/d29/d2f/f38 [0,4194304] 0 2026-03-10T10:19:39.948 INFO:tasks.workunit.client.0.vm02.stdout:6/453: readlink d0/d8/l74 0 2026-03-10T10:19:39.958 INFO:tasks.workunit.client.0.vm02.stdout:6/454: truncate d0/d8/d9/f14 3383260 0 2026-03-10T10:19:39.959 INFO:tasks.workunit.client.0.vm02.stdout:3/476: 
getdents d1/d6/d8e 0 2026-03-10T10:19:39.960 INFO:tasks.workunit.client.0.vm02.stdout:3/477: write d1/d8/d21/f4a [2066509,82223] 0 2026-03-10T10:19:39.962 INFO:tasks.workunit.client.0.vm02.stdout:4/630: symlink d1/d41/d5e/d78/d1a/d49/d81/dc6/lcc 0 2026-03-10T10:19:39.963 INFO:tasks.workunit.client.0.vm02.stdout:6/455: mknod d0/d8/d9/d7a/c93 0 2026-03-10T10:19:39.965 INFO:tasks.workunit.client.0.vm02.stdout:6/456: dread d0/f6b [0,4194304] 0 2026-03-10T10:19:39.971 INFO:tasks.workunit.client.0.vm02.stdout:6/457: mkdir d0/d8/d29/d94 0 2026-03-10T10:19:39.972 INFO:tasks.workunit.client.0.vm02.stdout:6/458: write d0/d8/d29/d6d/f3d [3657281,16390] 0 2026-03-10T10:19:39.975 INFO:tasks.workunit.client.0.vm02.stdout:6/459: truncate d0/d8/f64 3170927 0 2026-03-10T10:19:39.989 INFO:tasks.workunit.client.0.vm02.stdout:6/460: read d0/f21 [5820967,105403] 0 2026-03-10T10:19:39.991 INFO:tasks.workunit.client.0.vm02.stdout:6/461: mknod d0/c95 0 2026-03-10T10:19:39.994 INFO:tasks.workunit.client.0.vm02.stdout:6/462: dread d0/f6b [0,4194304] 0 2026-03-10T10:19:40.029 INFO:tasks.workunit.client.1.vm05.stdout:3/487: dwrite dd/d15/f84 [0,4194304] 0 2026-03-10T10:19:40.039 INFO:tasks.workunit.client.1.vm05.stdout:2/436: dwrite db/f4a [0,4194304] 0 2026-03-10T10:19:40.049 INFO:tasks.workunit.client.1.vm05.stdout:8/369: dwrite d7/f1c [0,4194304] 0 2026-03-10T10:19:40.066 INFO:tasks.workunit.client.1.vm05.stdout:8/370: symlink d7/d14/d62/l6d 0 2026-03-10T10:19:40.074 INFO:tasks.workunit.client.1.vm05.stdout:3/488: truncate dd/d15/d24/f63 392910 0 2026-03-10T10:19:40.075 INFO:tasks.workunit.client.1.vm05.stdout:3/489: chown dd/d39/f51 11246401 1 2026-03-10T10:19:40.087 INFO:tasks.workunit.client.1.vm05.stdout:2/437: dread db/d12/f31 [0,4194304] 0 2026-03-10T10:19:40.087 INFO:tasks.workunit.client.1.vm05.stdout:3/490: fsync dd/d15/d24/f8a 0 2026-03-10T10:19:40.095 INFO:tasks.workunit.client.1.vm05.stdout:3/491: fsync dd/d15/f1b 0 2026-03-10T10:19:40.098 
INFO:tasks.workunit.client.1.vm05.stdout:3/492: dwrite dd/d15/fa3 [0,4194304] 0 2026-03-10T10:19:40.099 INFO:tasks.workunit.client.1.vm05.stdout:1/545: truncate d4/df/f73 1396537 0 2026-03-10T10:19:40.100 INFO:tasks.workunit.client.1.vm05.stdout:9/434: truncate d0/f1e 4431865 0 2026-03-10T10:19:40.106 INFO:tasks.workunit.client.1.vm05.stdout:1/546: symlink d4/d39/d88/l9f 0 2026-03-10T10:19:40.108 INFO:tasks.workunit.client.1.vm05.stdout:1/547: mkdir d4/d39/d3e/da0 0 2026-03-10T10:19:40.114 INFO:tasks.workunit.client.0.vm02.stdout:8/471: dwrite d1/d1c/f1e [4194304,4194304] 0 2026-03-10T10:19:40.121 INFO:tasks.workunit.client.1.vm05.stdout:1/548: creat d4/d39/d3e/da0/fa1 x:0 0 0 2026-03-10T10:19:40.122 INFO:tasks.workunit.client.1.vm05.stdout:1/549: write d4/d3d/d6e/f7c [892651,13811] 0 2026-03-10T10:19:40.125 INFO:tasks.workunit.client.1.vm05.stdout:1/550: read d4/d39/f67 [4071874,113766] 0 2026-03-10T10:19:40.130 INFO:tasks.workunit.client.1.vm05.stdout:1/551: dwrite d4/df/d1c/f9c [0,4194304] 0 2026-03-10T10:19:40.132 INFO:tasks.workunit.client.0.vm02.stdout:8/472: link d1/d1c/d43/f7a d1/d1c/d23/d25/f8c 0 2026-03-10T10:19:40.143 INFO:tasks.workunit.client.0.vm02.stdout:8/473: rename d1/d1c/d24 to d1/d1c/d24/d35/d8d 22 2026-03-10T10:19:40.164 INFO:tasks.workunit.client.0.vm02.stdout:8/474: mkdir d1/d1c/d24/d35/d8e 0 2026-03-10T10:19:40.164 INFO:tasks.workunit.client.0.vm02.stdout:8/475: fdatasync d1/f12 0 2026-03-10T10:19:40.165 INFO:tasks.workunit.client.0.vm02.stdout:4/631: getdents d1/d41/d5e/d78/d7f/d82 0 2026-03-10T10:19:40.166 INFO:tasks.workunit.client.0.vm02.stdout:4/632: chown d1/d41/d5e/d78/d7f/l9a 1964615 1 2026-03-10T10:19:40.167 INFO:tasks.workunit.client.0.vm02.stdout:4/633: write d1/d52/fbd [804949,100964] 0 2026-03-10T10:19:40.177 INFO:tasks.workunit.client.0.vm02.stdout:8/476: dread d1/d2/f36 [0,4194304] 0 2026-03-10T10:19:40.181 INFO:tasks.workunit.client.0.vm02.stdout:4/634: creat d1/d52/fcd x:0 0 0 2026-03-10T10:19:40.187 
INFO:tasks.workunit.client.0.vm02.stdout:8/477: dwrite d1/d1c/d43/d5b/f79 [0,4194304] 0 2026-03-10T10:19:40.188 INFO:tasks.workunit.client.0.vm02.stdout:8/478: chown d1/d1c/d23/d25/c6f 3 1 2026-03-10T10:19:40.196 INFO:tasks.workunit.client.0.vm02.stdout:7/471: dwrite d1/f5 [4194304,4194304] 0 2026-03-10T10:19:40.204 INFO:tasks.workunit.client.0.vm02.stdout:8/479: rename d1/d1c/d23/d3e/l5e to d1/d1c/d23/d3e/d83/l8f 0 2026-03-10T10:19:40.212 INFO:tasks.workunit.client.0.vm02.stdout:0/524: unlink d9/l87 0 2026-03-10T10:19:40.213 INFO:tasks.workunit.client.0.vm02.stdout:7/472: rmdir d1/dc/d16/d28/d2d/d36/d67 39 2026-03-10T10:19:40.214 INFO:tasks.workunit.client.0.vm02.stdout:8/480: fdatasync d1/d1c/f20 0 2026-03-10T10:19:40.217 INFO:tasks.workunit.client.0.vm02.stdout:1/509: write d4/da/d1a/f40 [3219544,114843] 0 2026-03-10T10:19:40.217 INFO:tasks.workunit.client.0.vm02.stdout:1/510: stat d4/da/d27/d38/d3c/l55 0 2026-03-10T10:19:40.222 INFO:tasks.workunit.client.0.vm02.stdout:1/511: dread d4/da/f73 [0,4194304] 0 2026-03-10T10:19:40.222 INFO:tasks.workunit.client.0.vm02.stdout:1/512: chown d4/da/d1a/d22/f62 6692 1 2026-03-10T10:19:40.223 INFO:tasks.workunit.client.0.vm02.stdout:7/473: chown d1/dc/f3 2 1 2026-03-10T10:19:40.223 INFO:tasks.workunit.client.0.vm02.stdout:7/474: chown d1/d1b/f72 6 1 2026-03-10T10:19:40.236 INFO:tasks.workunit.client.0.vm02.stdout:8/481: symlink d1/d1c/d24/d35/d56/l90 0 2026-03-10T10:19:40.239 INFO:tasks.workunit.client.0.vm02.stdout:4/635: getdents d1 0 2026-03-10T10:19:40.244 INFO:tasks.workunit.client.0.vm02.stdout:4/636: dread d1/d52/fbd [0,4194304] 0 2026-03-10T10:19:40.244 INFO:tasks.workunit.client.0.vm02.stdout:4/637: stat d1/d32/l40 0 2026-03-10T10:19:40.263 INFO:tasks.workunit.client.0.vm02.stdout:7/475: rename d1/dc/d16/d28/d2d/d36 to d1/d1b/d8f 0 2026-03-10T10:19:40.265 INFO:tasks.workunit.client.0.vm02.stdout:8/482: unlink d1/d2/f67 0 2026-03-10T10:19:40.266 INFO:tasks.workunit.client.0.vm02.stdout:7/476: dwrite d1/d1b/f43 
[0,4194304] 0 2026-03-10T10:19:40.281 INFO:tasks.workunit.client.0.vm02.stdout:5/640: creat d1/db/d11/d13/fdb x:0 0 0 2026-03-10T10:19:40.294 INFO:tasks.workunit.client.0.vm02.stdout:3/478: dwrite d1/f25 [0,4194304] 0 2026-03-10T10:19:40.298 INFO:tasks.workunit.client.0.vm02.stdout:1/513: link d4/d2c/d53/f6c d4/da/d1a/d47/fa0 0 2026-03-10T10:19:40.305 INFO:tasks.workunit.client.0.vm02.stdout:8/483: write d1/d1c/d23/d25/f76 [179369,20099] 0 2026-03-10T10:19:40.312 INFO:tasks.workunit.client.0.vm02.stdout:3/479: stat d1/d58/c8a 0 2026-03-10T10:19:40.318 INFO:tasks.workunit.client.0.vm02.stdout:8/484: creat d1/f91 x:0 0 0 2026-03-10T10:19:40.321 INFO:tasks.workunit.client.0.vm02.stdout:6/463: dwrite d0/f20 [0,4194304] 0 2026-03-10T10:19:40.327 INFO:tasks.workunit.client.1.vm05.stdout:4/352: mkdir d1/d70 0 2026-03-10T10:19:40.328 INFO:tasks.workunit.client.0.vm02.stdout:5/641: link d1/db/d11/d84/d95/fd6 d1/db/d11/d1a/fdc 0 2026-03-10T10:19:40.331 INFO:tasks.workunit.client.0.vm02.stdout:3/480: truncate d1/d20/f64 21604 0 2026-03-10T10:19:40.331 INFO:tasks.workunit.client.0.vm02.stdout:9/451: symlink da/d3c/d4c/d38/d4a/l8d 0 2026-03-10T10:19:40.336 INFO:tasks.workunit.client.1.vm05.stdout:7/508: symlink d5/l9b 0 2026-03-10T10:19:40.339 INFO:tasks.workunit.client.1.vm05.stdout:5/519: creat da/db/fad x:0 0 0 2026-03-10T10:19:40.342 INFO:tasks.workunit.client.0.vm02.stdout:1/514: creat d4/da/d1a/fa1 x:0 0 0 2026-03-10T10:19:40.357 INFO:tasks.workunit.client.0.vm02.stdout:0/525: unlink d9/d18/d1a/l21 0 2026-03-10T10:19:40.357 INFO:tasks.workunit.client.0.vm02.stdout:6/464: mkdir d0/d8/d29/d6d/d96 0 2026-03-10T10:19:40.357 INFO:tasks.workunit.client.0.vm02.stdout:3/481: dread d1/d8/f7c [4194304,4194304] 0 2026-03-10T10:19:40.361 INFO:tasks.workunit.client.0.vm02.stdout:0/526: rename d9/d18/d1a/d22/d24/d80/d49/l70 to d9/d18/d1a/d3c/la6 0 2026-03-10T10:19:40.365 INFO:tasks.workunit.client.1.vm05.stdout:4/353: sync 2026-03-10T10:19:40.401 
INFO:tasks.workunit.client.0.vm02.stdout:9/452: link da/d3c/d4c/f41 da/d3c/d4c/f8e 0 2026-03-10T10:19:40.408 INFO:tasks.workunit.client.1.vm05.stdout:2/438: write db/d28/d4f/d59/f6f [772783,79987] 0 2026-03-10T10:19:40.412 INFO:tasks.workunit.client.1.vm05.stdout:8/371: dwrite d7/d14/f4c [0,4194304] 0 2026-03-10T10:19:40.412 INFO:tasks.workunit.client.0.vm02.stdout:0/527: mkdir d9/d18/d1a/d46/d5d/da7 0 2026-03-10T10:19:40.414 INFO:tasks.workunit.client.0.vm02.stdout:0/528: read - d9/d34/d3d/d8d/f95 zero size 2026-03-10T10:19:40.414 INFO:tasks.workunit.client.1.vm05.stdout:8/372: write d7/d14/d24/f34 [2481374,79291] 0 2026-03-10T10:19:40.415 INFO:tasks.workunit.client.1.vm05.stdout:8/373: chown d7/l28 90355 1 2026-03-10T10:19:40.416 INFO:tasks.workunit.client.1.vm05.stdout:8/374: write d7/d14/d3a/f68 [11102379,41877] 0 2026-03-10T10:19:40.422 INFO:tasks.workunit.client.0.vm02.stdout:9/453: dread da/d3c/d4c/f23 [0,4194304] 0 2026-03-10T10:19:40.425 INFO:tasks.workunit.client.1.vm05.stdout:6/441: rename fb to dd/d36/d7d/f8a 0 2026-03-10T10:19:40.426 INFO:tasks.workunit.client.1.vm05.stdout:0/503: write d1/d2/d9/d31/d13/d2f/f88 [1036310,56985] 0 2026-03-10T10:19:40.426 INFO:tasks.workunit.client.0.vm02.stdout:2/488: unlink d0/d1a/f53 0 2026-03-10T10:19:40.428 INFO:tasks.workunit.client.0.vm02.stdout:4/638: creat d1/d10/fce x:0 0 0 2026-03-10T10:19:40.429 INFO:tasks.workunit.client.1.vm05.stdout:9/435: dwrite d0/f28 [0,4194304] 0 2026-03-10T10:19:40.451 INFO:tasks.workunit.client.0.vm02.stdout:2/489: dread d0/d1a/d24/f6e [0,4194304] 0 2026-03-10T10:19:40.451 INFO:tasks.workunit.client.0.vm02.stdout:2/490: stat d0/d1a/d24/c83 0 2026-03-10T10:19:40.459 INFO:tasks.workunit.client.0.vm02.stdout:3/482: truncate d1/d8/d21/f35 2281858 0 2026-03-10T10:19:40.466 INFO:tasks.workunit.client.0.vm02.stdout:0/529: mkdir d9/d18/d1a/d46/d5d/da8 0 2026-03-10T10:19:40.468 INFO:tasks.workunit.client.1.vm05.stdout:4/354: symlink d1/d3/l71 0 2026-03-10T10:19:40.477 
INFO:tasks.workunit.client.0.vm02.stdout:9/454: dread da/d3c/d4c/f3b [0,4194304] 0 2026-03-10T10:19:40.478 INFO:tasks.workunit.client.0.vm02.stdout:9/455: read da/d3c/d4c/f8e [903783,50474] 0 2026-03-10T10:19:40.479 INFO:tasks.workunit.client.0.vm02.stdout:0/530: dread d9/d34/d3d/f58 [0,4194304] 0 2026-03-10T10:19:40.489 INFO:tasks.workunit.client.1.vm05.stdout:1/552: truncate d4/d79/f8b 316832 0 2026-03-10T10:19:40.490 INFO:tasks.workunit.client.1.vm05.stdout:1/553: chown d4/d3d/d6e/f7c 1638 1 2026-03-10T10:19:40.506 INFO:tasks.workunit.client.0.vm02.stdout:1/515: link d4/da/d1a/d5b/f79 d4/d2c/fa2 0 2026-03-10T10:19:40.507 INFO:tasks.workunit.client.0.vm02.stdout:1/516: write d4/da/d27/d38/d3c/f8f [1818867,89081] 0 2026-03-10T10:19:40.507 INFO:tasks.workunit.client.0.vm02.stdout:1/517: readlink d4/ld 0 2026-03-10T10:19:40.514 INFO:tasks.workunit.client.0.vm02.stdout:3/483: symlink d1/d6/d8b/l99 0 2026-03-10T10:19:40.529 INFO:tasks.workunit.client.0.vm02.stdout:9/456: fsync da/d3c/d4c/f2b 0 2026-03-10T10:19:40.529 INFO:tasks.workunit.client.0.vm02.stdout:7/477: write d1/dc/d55/f64 [670891,73218] 0 2026-03-10T10:19:40.535 INFO:tasks.workunit.client.0.vm02.stdout:0/531: rename d9/d34/f97 to d9/d18/d1a/d22/d24/d80/d49/fa9 0 2026-03-10T10:19:40.535 INFO:tasks.workunit.client.0.vm02.stdout:3/484: write d1/f25 [4809734,108595] 0 2026-03-10T10:19:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:40 vm05.local ceph-mon[59051]: pgmap v159: 65 pgs: 65 active+clean; 2.0 GiB data, 7.1 GiB used, 113 GiB / 120 GiB avail; 43 MiB/s rd, 144 MiB/s wr, 306 op/s 2026-03-10T10:19:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:40 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:40 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:19:40 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:40 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:19:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:40 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:40 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:40 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr fail", "who": "vm02.zmavgl"}]: dispatch 2026-03-10T10:19:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:40 vm05.local ceph-mon[59051]: osdmap e40: 6 total, 6 up, 6 in 2026-03-10T10:19:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:40 vm05.local ceph-mon[59051]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "mgr fail", "who": "vm02.zmavgl"}]': finished 2026-03-10T10:19:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:40 vm05.local ceph-mon[59051]: mgrmap e20: vm05.coparq(active, starting, since 0.0245243s) 2026-03-10T10:19:40.542 INFO:tasks.workunit.client.1.vm05.stdout:3/493: rename dd/d15/d24/d2c/d6d/fa1 to dd/d39/d66/fad 0 2026-03-10T10:19:40.543 INFO:tasks.workunit.client.1.vm05.stdout:1/554: symlink d4/df/d1c/d92/la2 0 2026-03-10T10:19:40.544 INFO:tasks.workunit.client.1.vm05.stdout:7/509: write d5/f4e [427528,48054] 0 2026-03-10T10:19:40.544 INFO:tasks.workunit.client.0.vm02.stdout:5/642: write 
d1/db/d11/d16/d79/d85/fb0 [1246518,128750] 0 2026-03-10T10:19:40.545 INFO:tasks.workunit.client.1.vm05.stdout:7/510: fsync d5/d1d/d29/d3e/d8c/d7f/f93 0 2026-03-10T10:19:40.545 INFO:tasks.workunit.client.0.vm02.stdout:5/643: chown d1/db/d11/d16/d79/c8d 0 1 2026-03-10T10:19:40.548 INFO:tasks.workunit.client.1.vm05.stdout:1/555: read d4/df/d1c/f63 [6948443,75913] 0 2026-03-10T10:19:40.548 INFO:tasks.workunit.client.1.vm05.stdout:1/556: rename d4 to d4/d39/d3e/da3 22 2026-03-10T10:19:40.553 INFO:tasks.workunit.client.0.vm02.stdout:6/465: getdents d0/d8/d9/d7a 0 2026-03-10T10:19:40.559 INFO:tasks.workunit.client.1.vm05.stdout:0/504: mkdir d1/d2/d9/d31/daa 0 2026-03-10T10:19:40.562 INFO:tasks.workunit.client.1.vm05.stdout:5/520: truncate da/db/f1e 3063993 0 2026-03-10T10:19:40.563 INFO:tasks.workunit.client.0.vm02.stdout:8/485: dwrite d1/d1c/d43/f46 [0,4194304] 0 2026-03-10T10:19:40.570 INFO:tasks.workunit.client.1.vm05.stdout:6/442: mkdir dd/d36/d3f/d12/d44/d2a/d77/d8b 0 2026-03-10T10:19:40.570 INFO:tasks.workunit.client.1.vm05.stdout:6/443: readlink dd/d36/d3f/d12/l13 0 2026-03-10T10:19:40.595 INFO:tasks.workunit.client.1.vm05.stdout:8/375: write d7/d14/d62/f69 [663431,48585] 0 2026-03-10T10:19:40.598 INFO:tasks.workunit.client.1.vm05.stdout:8/376: dread d7/d14/f5b [0,4194304] 0 2026-03-10T10:19:40.603 INFO:tasks.workunit.client.0.vm02.stdout:1/518: dread d4/da/d27/d38/d80/f94 [0,4194304] 0 2026-03-10T10:19:40.635 INFO:tasks.workunit.client.0.vm02.stdout:6/466: rename d0/f21 to d0/d8/d29/d6d/d96/f97 0 2026-03-10T10:19:40.635 INFO:tasks.workunit.client.1.vm05.stdout:4/355: mkdir d1/d31/d72 0 2026-03-10T10:19:40.635 INFO:tasks.workunit.client.1.vm05.stdout:3/494: rmdir dd/d39 39 2026-03-10T10:19:40.635 INFO:tasks.workunit.client.1.vm05.stdout:1/557: stat d4/df/f73 0 2026-03-10T10:19:40.635 INFO:tasks.workunit.client.1.vm05.stdout:0/505: unlink d1/d2/d9/d31/d13/d15/c35 0 2026-03-10T10:19:40.635 INFO:tasks.workunit.client.1.vm05.stdout:5/521: creat da/d9a/fae x:0 0 0 
2026-03-10T10:19:40.635 INFO:tasks.workunit.client.1.vm05.stdout:5/522: chown da/db/d26/d35/d38/f48 76471280 1 2026-03-10T10:19:40.635 INFO:tasks.workunit.client.1.vm05.stdout:6/444: mkdir dd/d36/d3f/d12/d44/d2a/d3d/d48/d8c 0 2026-03-10T10:19:40.642 INFO:tasks.workunit.client.0.vm02.stdout:2/491: link d0/d1a/d24/f6e d0/d8c/fa2 0 2026-03-10T10:19:40.665 INFO:tasks.workunit.client.0.vm02.stdout:2/492: chown d0/d1a/d49/d5e/d8a/f98 65 1 2026-03-10T10:19:40.665 INFO:tasks.workunit.client.0.vm02.stdout:9/457: mknod da/d3c/d4c/c8f 0 2026-03-10T10:19:40.665 INFO:tasks.workunit.client.1.vm05.stdout:0/506: dread d1/d2/d9/d31/d13/d17/f5a [0,4194304] 0 2026-03-10T10:19:40.665 INFO:tasks.workunit.client.1.vm05.stdout:0/507: chown d1/d2/d9/d31/d13/c76 3591 1 2026-03-10T10:19:40.665 INFO:tasks.workunit.client.1.vm05.stdout:1/558: rename d4/df/d76/f7e to d4/d39/d88/fa4 0 2026-03-10T10:19:40.666 INFO:tasks.workunit.client.1.vm05.stdout:1/559: chown d4/d37/d4e/c72 5026496 1 2026-03-10T10:19:40.666 INFO:tasks.workunit.client.1.vm05.stdout:1/560: dwrite d4/d37/d4e/f62 [8388608,4194304] 0 2026-03-10T10:19:40.666 INFO:tasks.workunit.client.1.vm05.stdout:5/523: mkdir da/d9a/daf 0 2026-03-10T10:19:40.666 INFO:tasks.workunit.client.1.vm05.stdout:5/524: readlink da/d96/la8 0 2026-03-10T10:19:40.666 INFO:tasks.workunit.client.1.vm05.stdout:6/445: creat dd/d36/d3f/d12/d44/d30/f8d x:0 0 0 2026-03-10T10:19:40.666 INFO:tasks.workunit.client.1.vm05.stdout:2/439: getdents db/d61 0 2026-03-10T10:19:40.666 INFO:tasks.workunit.client.1.vm05.stdout:4/356: mknod d1/d31/c73 0 2026-03-10T10:19:40.666 INFO:tasks.workunit.client.1.vm05.stdout:4/357: write d1/d31/d4b/f59 [5009888,51782] 0 2026-03-10T10:19:40.667 INFO:tasks.workunit.client.1.vm05.stdout:3/495: dread dd/d15/d1f/f53 [0,4194304] 0 2026-03-10T10:19:40.668 INFO:tasks.workunit.client.0.vm02.stdout:4/639: getdents d1/d10/db 0 2026-03-10T10:19:40.671 INFO:tasks.workunit.client.1.vm05.stdout:0/508: unlink d1/d2/f21 0 2026-03-10T10:19:40.672 
INFO:tasks.workunit.client.0.vm02.stdout:6/467: mkdir d0/d8/d29/d2f/d50/d98 0 2026-03-10T10:19:40.675 INFO:tasks.workunit.client.0.vm02.stdout:6/468: dread d0/d8/d29/d2f/d4b/f26 [0,4194304] 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.0.vm02.stdout:6/469: chown d0/f4c 106818 1 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.0.vm02.stdout:8/486: mknod d1/d1c/d23/d25/c92 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.0.vm02.stdout:9/458: creat da/d3c/d4c/d38/d82/f90 x:0 0 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.0.vm02.stdout:5/644: creat d1/fdd x:0 0 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.0.vm02.stdout:9/459: symlink da/d3c/d53/l91 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.0.vm02.stdout:9/460: write da/d3c/d53/f6a [545349,40039] 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.0.vm02.stdout:9/461: rename da/d3c/d4c/d38/d4a/l63 to da/d3c/d4c/d38/d82/d89/l92 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.0.vm02.stdout:9/462: dwrite da/d3c/d53/f6a [0,4194304] 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.0.vm02.stdout:5/645: symlink d1/db/d11/d84/d40/d4f/lde 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.0.vm02.stdout:8/487: getdents d1/d1c/d24/d35 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.0.vm02.stdout:9/463: rename da/d3c/d4c/f26 to da/d3c/d4c/d2c/f93 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:6/446: symlink dd/d36/d3f/d12/d44/d30/d4a/l8e 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:5/525: truncate da/db/d26/d5c/d4b/f6a 962994 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:5/526: write da/db/d26/d35/d38/fa6 [727880,88305] 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:3/496: fsync dd/d20/d56/f68 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:6/447: rename dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f70 to dd/d36/d3f/d12/f8f 0 2026-03-10T10:19:40.723 
INFO:tasks.workunit.client.1.vm05.stdout:6/448: write dd/d36/d3f/d12/d44/d2a/d3d/d3e/f73 [769963,100566] 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:4/358: fsync d1/d31/dc/f1f 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:3/497: mkdir dd/d15/d1f/dae 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:0/509: mkdir d1/d2/d9/d31/d13/da2/dab 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:0/510: creat d1/d2/d39/d6e/fac x:0 0 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:6/449: mknod dd/d36/d3f/d12/d44/d30/d4a/d6e/c90 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:0/511: fsync d1/d2/d9/d31/d13/d2f/f8f 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:3/498: creat dd/d15/d1f/d95/faf x:0 0 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:5/527: getdents da/db/d28/d32 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:6/450: symlink dd/d36/d3f/d12/d44/d2a/d7f/l91 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:3/499: fsync dd/d39/f96 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:3/500: chown dd/d20/d56/d5e/dab/la5 1685 1 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:5/528: rename da/db/c21 to da/d63/cb0 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:6/451: mknod dd/d36/d3f/c92 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:3/501: rename dd/d15/d24/d2c/d3b/f77 to dd/d15/d24/d74/fb0 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:6/452: rename dd/d36/d3f/d12/d44/d2a/d7f/l91 to dd/d36/d3f/d12/d44/d2a/d77/d8b/l93 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:6/453: readlink dd/d36/d3f/d12/d44/l45 0 2026-03-10T10:19:40.723 INFO:tasks.workunit.client.1.vm05.stdout:6/454: chown dd/d36/d3f/d12/d44/l45 111571488 1 2026-03-10T10:19:40.724 
INFO:tasks.workunit.client.1.vm05.stdout:4/359: read d1/d3/f12 [2057901,66032] 0 2026-03-10T10:19:40.725 INFO:tasks.workunit.client.0.vm02.stdout:8/488: rename d1/d2/c48 to d1/d1c/d43/d5b/c93 0 2026-03-10T10:19:40.728 INFO:tasks.workunit.client.1.vm05.stdout:6/455: dread dd/d36/d3f/d12/d44/d2a/d3d/d48/f4b [0,4194304] 0 2026-03-10T10:19:40.728 INFO:tasks.workunit.client.0.vm02.stdout:8/489: dwrite d1/d1c/d43/d6a/f87 [0,4194304] 0 2026-03-10T10:19:40.730 INFO:tasks.workunit.client.0.vm02.stdout:8/490: fsync d1/d1c/d43/f7e 0 2026-03-10T10:19:40.736 INFO:tasks.workunit.client.0.vm02.stdout:9/464: unlink da/f4b 0 2026-03-10T10:19:40.737 INFO:tasks.workunit.client.0.vm02.stdout:8/491: symlink d1/d1c/d24/d35/l94 0 2026-03-10T10:19:40.738 INFO:tasks.workunit.client.1.vm05.stdout:1/561: dread d4/d3d/f57 [0,4194304] 0 2026-03-10T10:19:40.740 INFO:tasks.workunit.client.1.vm05.stdout:3/502: dread dd/d20/d56/f7d [0,4194304] 0 2026-03-10T10:19:40.751 INFO:tasks.workunit.client.1.vm05.stdout:0/512: dread d1/d2/d9/d31/d54/f27 [0,4194304] 0 2026-03-10T10:19:40.751 INFO:tasks.workunit.client.1.vm05.stdout:6/456: rmdir dd/d36/d3f/d12/d44/d2a/d77 39 2026-03-10T10:19:40.751 INFO:tasks.workunit.client.1.vm05.stdout:3/503: symlink dd/d20/d56/d5e/dab/lb1 0 2026-03-10T10:19:40.751 INFO:tasks.workunit.client.1.vm05.stdout:4/360: creat d1/d31/dc/d40/d63/f74 x:0 0 0 2026-03-10T10:19:40.751 INFO:tasks.workunit.client.1.vm05.stdout:6/457: symlink dd/d36/d3f/d12/d58/l94 0 2026-03-10T10:19:40.751 INFO:tasks.workunit.client.1.vm05.stdout:3/504: creat dd/d15/d24/d74/fb2 x:0 0 0 2026-03-10T10:19:40.751 INFO:tasks.workunit.client.1.vm05.stdout:4/361: mknod d1/d31/dc/d40/d45/c75 0 2026-03-10T10:19:40.752 INFO:tasks.workunit.client.1.vm05.stdout:4/362: fsync d1/d3/f5f 0 2026-03-10T10:19:40.752 INFO:tasks.workunit.client.1.vm05.stdout:1/562: rename d4/df/c35 to d4/df/ca5 0 2026-03-10T10:19:40.752 INFO:tasks.workunit.client.1.vm05.stdout:4/363: read - d1/d31/d4b/f51 zero size 2026-03-10T10:19:40.752 
INFO:tasks.workunit.client.0.vm02.stdout:9/465: dread da/d3c/d4c/d2c/d34/f57 [0,4194304] 0 2026-03-10T10:19:40.752 INFO:tasks.workunit.client.1.vm05.stdout:6/458: rmdir dd/d36/d3f/d12/d59 39 2026-03-10T10:19:40.752 INFO:tasks.workunit.client.1.vm05.stdout:6/459: dread - dd/d36/d3f/d12/f8f zero size 2026-03-10T10:19:40.753 INFO:tasks.workunit.client.0.vm02.stdout:9/466: truncate da/d3c/d4c/d2c/d34/f3a 72953 0 2026-03-10T10:19:40.754 INFO:tasks.workunit.client.1.vm05.stdout:1/563: symlink d4/df/la6 0 2026-03-10T10:19:40.755 INFO:tasks.workunit.client.1.vm05.stdout:1/564: symlink d4/df/d1c/d92/la7 0 2026-03-10T10:19:40.756 INFO:tasks.workunit.client.1.vm05.stdout:6/460: getdents dd 0 2026-03-10T10:19:40.757 INFO:tasks.workunit.client.1.vm05.stdout:6/461: mknod dd/d1b/c95 0 2026-03-10T10:19:40.758 INFO:tasks.workunit.client.1.vm05.stdout:6/462: stat dd/d36/d3f/d12/d44/d30/c39 0 2026-03-10T10:19:40.759 INFO:tasks.workunit.client.1.vm05.stdout:6/463: mkdir dd/d36/d3f/d12/d96 0 2026-03-10T10:19:40.760 INFO:tasks.workunit.client.1.vm05.stdout:6/464: creat dd/d36/d7d/f97 x:0 0 0 2026-03-10T10:19:40.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:40 vm02.local ceph-mon[50200]: pgmap v159: 65 pgs: 65 active+clean; 2.0 GiB data, 7.1 GiB used, 113 GiB / 120 GiB avail; 43 MiB/s rd, 144 MiB/s wr, 306 op/s 2026-03-10T10:19:40.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:40 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:40.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:40 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:40.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:40 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:40.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:40 vm02.local 
ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:19:40.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:40 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' 2026-03-10T10:19:40.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:40 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:40.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:40 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr fail", "who": "vm02.zmavgl"}]: dispatch 2026-03-10T10:19:40.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:40 vm02.local ceph-mon[50200]: osdmap e40: 6 total, 6 up, 6 in 2026-03-10T10:19:40.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:40 vm02.local ceph-mon[50200]: from='mgr.14225 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "mgr fail", "who": "vm02.zmavgl"}]': finished 2026-03-10T10:19:40.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:40 vm02.local ceph-mon[50200]: mgrmap e20: vm05.coparq(active, starting, since 0.0245243s) 2026-03-10T10:19:40.788 INFO:tasks.workunit.client.1.vm05.stdout:5/529: sync 2026-03-10T10:19:40.791 INFO:tasks.workunit.client.1.vm05.stdout:5/530: truncate da/db/d26/d5c/f46 881099 0 2026-03-10T10:19:40.820 INFO:tasks.workunit.client.1.vm05.stdout:9/436: dwrite d0/d1/d16/f5c [0,4194304] 0 2026-03-10T10:19:40.821 INFO:tasks.workunit.client.1.vm05.stdout:9/437: write d0/f28 [2685509,33506] 0 2026-03-10T10:19:40.824 INFO:tasks.workunit.client.1.vm05.stdout:9/438: mkdir d0/df/d74/d90 0 2026-03-10T10:19:40.826 INFO:tasks.workunit.client.1.vm05.stdout:9/439: creat d0/d1/d57/f91 x:0 0 0 2026-03-10T10:19:40.826 INFO:tasks.workunit.client.1.vm05.stdout:9/440: stat 
d0/d1/c67 0 2026-03-10T10:19:40.827 INFO:tasks.workunit.client.0.vm02.stdout:3/485: sync 2026-03-10T10:19:40.827 INFO:tasks.workunit.client.0.vm02.stdout:2/493: sync 2026-03-10T10:19:40.827 INFO:tasks.workunit.client.0.vm02.stdout:5/646: sync 2026-03-10T10:19:40.828 INFO:tasks.workunit.client.0.vm02.stdout:9/467: sync 2026-03-10T10:19:40.834 INFO:tasks.workunit.client.1.vm05.stdout:9/441: creat d0/d1/d16/f92 x:0 0 0 2026-03-10T10:19:40.835 INFO:tasks.workunit.client.0.vm02.stdout:5/647: fdatasync d1/db/d11/d1a/fc6 0 2026-03-10T10:19:40.838 INFO:tasks.workunit.client.0.vm02.stdout:2/494: symlink d0/d10/la3 0 2026-03-10T10:19:40.839 INFO:tasks.workunit.client.0.vm02.stdout:2/495: write d0/f72 [4458420,51203] 0 2026-03-10T10:19:40.840 INFO:tasks.workunit.client.0.vm02.stdout:3/486: read d1/d8/d21/f3c [267226,38070] 0 2026-03-10T10:19:40.844 INFO:tasks.workunit.client.0.vm02.stdout:3/487: dwrite d1/d20/d52/f6f [0,4194304] 0 2026-03-10T10:19:40.860 INFO:tasks.workunit.client.1.vm05.stdout:9/442: dread d0/df/d11/f24 [0,4194304] 0 2026-03-10T10:19:40.862 INFO:tasks.workunit.client.0.vm02.stdout:5/648: unlink d1/db/d11/d84/d40/d4f/f97 0 2026-03-10T10:19:40.863 INFO:tasks.workunit.client.0.vm02.stdout:5/649: write d1/db/d11/d1a/f27 [732934,20297] 0 2026-03-10T10:19:40.865 INFO:tasks.workunit.client.1.vm05.stdout:7/511: write d5/d1d/d20/d35/f36 [510280,111982] 0 2026-03-10T10:19:40.868 INFO:tasks.workunit.client.0.vm02.stdout:7/478: dwrite d1/dc/d16/f6d [0,4194304] 0 2026-03-10T10:19:40.874 INFO:tasks.workunit.client.1.vm05.stdout:1/565: getdents d4/df/d76 0 2026-03-10T10:19:40.875 INFO:tasks.workunit.client.1.vm05.stdout:8/377: dwrite f6 [0,4194304] 0 2026-03-10T10:19:40.875 INFO:tasks.workunit.client.0.vm02.stdout:0/532: dwrite d9/d18/d1a/d22/d24/d80/d49/f53 [0,4194304] 0 2026-03-10T10:19:40.887 INFO:tasks.workunit.client.0.vm02.stdout:1/519: write d4/f5 [2039236,36569] 0 2026-03-10T10:19:40.897 INFO:tasks.workunit.client.0.vm02.stdout:4/640: dwrite d1/d52/d53/f5b 
[0,4194304] 0 2026-03-10T10:19:40.899 INFO:tasks.workunit.client.0.vm02.stdout:9/468: rename da/d3c/d4c/l64 to da/d3c/d4c/d38/d82/d8c/l94 0 2026-03-10T10:19:40.905 INFO:tasks.workunit.client.1.vm05.stdout:7/512: dread - d5/d1d/f53 zero size 2026-03-10T10:19:40.906 INFO:tasks.workunit.client.1.vm05.stdout:7/513: write d5/d1d/d20/d2d/f95 [887063,63071] 0 2026-03-10T10:19:40.907 INFO:tasks.workunit.client.1.vm05.stdout:7/514: write d5/d1d/d20/d2d/f95 [1651148,47210] 0 2026-03-10T10:19:40.918 INFO:tasks.workunit.client.0.vm02.stdout:7/479: chown d1/c23 1 1 2026-03-10T10:19:40.918 INFO:tasks.workunit.client.0.vm02.stdout:1/520: mknod d4/da/d1a/d47/ca3 0 2026-03-10T10:19:40.918 INFO:tasks.workunit.client.0.vm02.stdout:1/521: chown d4/da/d1a/d47/d65/l8b 53 1 2026-03-10T10:19:40.918 INFO:tasks.workunit.client.1.vm05.stdout:2/440: write db/d28/d4f/d59/f7e [1645519,26974] 0 2026-03-10T10:19:40.918 INFO:tasks.workunit.client.1.vm05.stdout:2/441: write db/d1c/f1f [131840,82757] 0 2026-03-10T10:19:40.918 INFO:tasks.workunit.client.1.vm05.stdout:8/378: unlink d7/d14/d15/d3b/c47 0 2026-03-10T10:19:40.918 INFO:tasks.workunit.client.1.vm05.stdout:8/379: dread - d7/d2f/d57/f66 zero size 2026-03-10T10:19:40.918 INFO:tasks.workunit.client.1.vm05.stdout:8/380: write d7/f1c [1621565,106161] 0 2026-03-10T10:19:40.919 INFO:tasks.workunit.client.1.vm05.stdout:9/443: mkdir d0/d1/d13/de/d93 0 2026-03-10T10:19:40.919 INFO:tasks.workunit.client.1.vm05.stdout:9/444: write d0/d1/d13/d26/f4f [2991938,12956] 0 2026-03-10T10:19:40.919 INFO:tasks.workunit.client.1.vm05.stdout:8/381: dwrite d7/f11 [4194304,4194304] 0 2026-03-10T10:19:40.921 INFO:tasks.workunit.client.0.vm02.stdout:9/469: sync 2026-03-10T10:19:40.922 INFO:tasks.workunit.client.0.vm02.stdout:9/470: write da/d3c/d4c/d38/d4a/d70/f74 [602865,15762] 0 2026-03-10T10:19:40.928 INFO:tasks.workunit.client.0.vm02.stdout:4/641: mknod d1/d10/d88/ccf 0 2026-03-10T10:19:40.928 INFO:tasks.workunit.client.1.vm05.stdout:2/442: fsync db/f24 0 
2026-03-10T10:19:40.929 INFO:tasks.workunit.client.0.vm02.stdout:6/470: truncate d0/d8/d29/d2f/d4b/f26 1626399 0 2026-03-10T10:19:40.934 INFO:tasks.workunit.client.0.vm02.stdout:3/488: truncate d1/d8/d21/f47 1803382 0 2026-03-10T10:19:40.935 INFO:tasks.workunit.client.0.vm02.stdout:5/650: read - d1/d6a/fd7 zero size 2026-03-10T10:19:40.935 INFO:tasks.workunit.client.0.vm02.stdout:3/489: write d1/d6/f53 [2792193,59215] 0 2026-03-10T10:19:40.941 INFO:tasks.workunit.client.0.vm02.stdout:4/642: sync 2026-03-10T10:19:40.943 INFO:tasks.workunit.client.1.vm05.stdout:7/515: getdents d5/d1d/d20/d77 0 2026-03-10T10:19:40.950 INFO:tasks.workunit.client.0.vm02.stdout:0/533: mkdir d9/d18/d1a/d22/d24/d8e/d9b/daa 0 2026-03-10T10:19:40.950 INFO:tasks.workunit.client.0.vm02.stdout:0/534: read d9/f6c [2352067,10431] 0 2026-03-10T10:19:40.950 INFO:tasks.workunit.client.0.vm02.stdout:0/535: readlink d9/d18/l44 0 2026-03-10T10:19:40.950 INFO:tasks.workunit.client.1.vm05.stdout:7/516: fsync d5/d1d/f56 0 2026-03-10T10:19:40.950 INFO:tasks.workunit.client.1.vm05.stdout:9/445: fdatasync d0/d1/d16/f40 0 2026-03-10T10:19:40.950 INFO:tasks.workunit.client.1.vm05.stdout:8/382: rename d7/d2f/l46 to d7/d14/d15/d3b/l6e 0 2026-03-10T10:19:40.951 INFO:tasks.workunit.client.0.vm02.stdout:9/471: rename da/d3c/d4c/d2c/d34/d35/c76 to da/d3c/d4c/d38/d82/d89/c95 0 2026-03-10T10:19:40.954 INFO:tasks.workunit.client.1.vm05.stdout:2/443: unlink db/d2d/d5e/f86 0 2026-03-10T10:19:40.955 INFO:tasks.workunit.client.0.vm02.stdout:6/471: truncate d0/d8/d29/d2f/f61 769754 0 2026-03-10T10:19:40.955 INFO:tasks.workunit.client.0.vm02.stdout:6/472: chown d0/d8/l22 42 1 2026-03-10T10:19:40.961 INFO:tasks.workunit.client.1.vm05.stdout:0/513: dwrite d1/d2/d9/d50/f94 [0,4194304] 0 2026-03-10T10:19:40.962 INFO:tasks.workunit.client.0.vm02.stdout:3/490: write d1/d20/d52/f92 [961064,25130] 0 2026-03-10T10:19:40.964 INFO:tasks.workunit.client.1.vm05.stdout:3/505: dwrite dd/d15/f23 [0,4194304] 0 2026-03-10T10:19:40.969 
INFO:tasks.workunit.client.1.vm05.stdout:7/517: mkdir d5/d26/d9c 0 2026-03-10T10:19:40.970 INFO:tasks.workunit.client.0.vm02.stdout:4/643: mkdir d1/d41/d5e/d78/d44/dd0 0 2026-03-10T10:19:40.970 INFO:tasks.workunit.client.1.vm05.stdout:3/506: dwrite dd/d15/d69/f86 [0,4194304] 0 2026-03-10T10:19:40.971 INFO:tasks.workunit.client.1.vm05.stdout:4/364: write d1/d3/f4a [916603,125388] 0 2026-03-10T10:19:40.975 INFO:tasks.workunit.client.1.vm05.stdout:4/365: write d1/d3/f6c [548624,80295] 0 2026-03-10T10:19:40.982 INFO:tasks.workunit.client.1.vm05.stdout:8/383: readlink d7/l1a 0 2026-03-10T10:19:40.984 INFO:tasks.workunit.client.1.vm05.stdout:5/531: dwrite da/db/d28/f56 [0,4194304] 0 2026-03-10T10:19:40.984 INFO:tasks.workunit.client.0.vm02.stdout:3/491: rmdir d1/d8 39 2026-03-10T10:19:40.985 INFO:tasks.workunit.client.0.vm02.stdout:3/492: write d1/d20/d52/f6f [4186691,123699] 0 2026-03-10T10:19:40.989 INFO:tasks.workunit.client.1.vm05.stdout:5/532: chown da/db/d28/d8a/fa9 741 1 2026-03-10T10:19:40.991 INFO:tasks.workunit.client.1.vm05.stdout:6/465: dwrite dd/d36/d3f/f41 [0,4194304] 0 2026-03-10T10:19:41.006 INFO:tasks.workunit.client.1.vm05.stdout:2/444: dread db/d2d/f52 [0,4194304] 0 2026-03-10T10:19:41.010 INFO:tasks.workunit.client.1.vm05.stdout:7/518: creat d5/d17/d66/f9d x:0 0 0 2026-03-10T10:19:41.013 INFO:tasks.workunit.client.0.vm02.stdout:5/651: creat d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fdf x:0 0 0 2026-03-10T10:19:41.016 INFO:tasks.workunit.client.1.vm05.stdout:3/507: mkdir dd/d20/d56/db3 0 2026-03-10T10:19:41.020 INFO:tasks.workunit.client.0.vm02.stdout:0/536: mknod d9/d34/d3d/d65/d89/cab 0 2026-03-10T10:19:41.024 INFO:tasks.workunit.client.0.vm02.stdout:6/473: link d0/d8/d29/d6d/d96/f97 d0/d8/d9/d7a/f99 0 2026-03-10T10:19:41.024 INFO:tasks.workunit.client.0.vm02.stdout:6/474: readlink d0/d8/d29/d6d/l76 0 2026-03-10T10:19:41.025 INFO:tasks.workunit.client.0.vm02.stdout:8/492: rename d1/d1c/d24/d35/c6c to d1/d1c/d43/d5b/c95 0 2026-03-10T10:19:41.026 
INFO:tasks.workunit.client.0.vm02.stdout:8/493: chown d1/d1c/d23/c69 230941 1 2026-03-10T10:19:41.029 INFO:tasks.workunit.client.1.vm05.stdout:9/446: symlink d0/d1/d13/l94 0 2026-03-10T10:19:41.033 INFO:tasks.workunit.client.1.vm05.stdout:8/384: dread d7/f11 [0,4194304] 0 2026-03-10T10:19:41.040 INFO:tasks.workunit.client.1.vm05.stdout:8/385: read d7/d14/d24/f42 [3748515,91069] 0 2026-03-10T10:19:41.040 INFO:tasks.workunit.client.1.vm05.stdout:8/386: write d7/d14/f23 [3562757,103210] 0 2026-03-10T10:19:41.040 INFO:tasks.workunit.client.1.vm05.stdout:8/387: dwrite f6 [0,4194304] 0 2026-03-10T10:19:41.049 INFO:tasks.workunit.client.1.vm05.stdout:0/514: mknod d1/d2/d9/d31/d54/cad 0 2026-03-10T10:19:41.049 INFO:tasks.workunit.client.1.vm05.stdout:5/533: symlink da/db/d26/d5c/d4b/lb1 0 2026-03-10T10:19:41.049 INFO:tasks.workunit.client.0.vm02.stdout:0/537: symlink d9/d18/d1a/d22/d24/d80/d49/lac 0 2026-03-10T10:19:41.049 INFO:tasks.workunit.client.0.vm02.stdout:4/644: rename d1/d41/d5e/d78/d55/f7c to d1/d41/d5e/d78/d1a/d49/d81/fd1 0 2026-03-10T10:19:41.050 INFO:tasks.workunit.client.0.vm02.stdout:5/652: symlink d1/db/d11/d84/d40/d4f/le0 0 2026-03-10T10:19:41.053 INFO:tasks.workunit.client.1.vm05.stdout:4/366: mkdir d1/d31/d76 0 2026-03-10T10:19:41.053 INFO:tasks.workunit.client.0.vm02.stdout:8/494: dwrite d1/d1c/d23/d25/f64 [0,4194304] 0 2026-03-10T10:19:41.053 INFO:tasks.workunit.client.0.vm02.stdout:8/495: chown d1/f68 0 1 2026-03-10T10:19:41.054 INFO:tasks.workunit.client.1.vm05.stdout:3/508: stat dd/d20/d9e/ca4 0 2026-03-10T10:19:41.059 INFO:tasks.workunit.client.1.vm05.stdout:6/466: dwrite dd/d36/d7d/f8a [0,4194304] 0 2026-03-10T10:19:41.074 INFO:tasks.workunit.client.0.vm02.stdout:4/645: rmdir d1/d32/da3 39 2026-03-10T10:19:41.074 INFO:tasks.workunit.client.0.vm02.stdout:4/646: dread - d1/d10/fce zero size 2026-03-10T10:19:41.078 INFO:tasks.workunit.client.0.vm02.stdout:4/647: dwrite d1/d52/d53/fbb [0,4194304] 0 2026-03-10T10:19:41.078 
INFO:tasks.workunit.client.0.vm02.stdout:4/648: stat d1/d41/d5e/d78/d37/f14 0 2026-03-10T10:19:41.079 INFO:tasks.workunit.client.1.vm05.stdout:4/367: dread d1/d31/f13 [0,4194304] 0 2026-03-10T10:19:41.083 INFO:tasks.workunit.client.0.vm02.stdout:9/472: dread da/f5c [0,4194304] 0 2026-03-10T10:19:41.093 INFO:tasks.workunit.client.1.vm05.stdout:3/509: sync 2026-03-10T10:19:41.093 INFO:tasks.workunit.client.1.vm05.stdout:3/510: stat dd/d39/f96 0 2026-03-10T10:19:41.094 INFO:tasks.workunit.client.1.vm05.stdout:3/511: fdatasync dd/d39/f6f 0 2026-03-10T10:19:41.102 INFO:tasks.workunit.client.0.vm02.stdout:5/653: chown d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fc3 767402462 1 2026-03-10T10:19:41.112 INFO:tasks.workunit.client.0.vm02.stdout:9/473: dread - da/d3c/d4c/d38/d4a/f54 zero size 2026-03-10T10:19:41.116 INFO:tasks.workunit.client.0.vm02.stdout:0/538: creat d9/d34/d3d/d65/da3/fad x:0 0 0 2026-03-10T10:19:41.117 INFO:tasks.workunit.client.0.vm02.stdout:0/539: write d9/d18/d1a/f6f [3992834,85884] 0 2026-03-10T10:19:41.125 INFO:tasks.workunit.client.1.vm05.stdout:2/445: dread db/f26 [0,4194304] 0 2026-03-10T10:19:41.132 INFO:tasks.workunit.client.0.vm02.stdout:5/654: rmdir d1/db/d11/d16/d48 39 2026-03-10T10:19:41.140 INFO:tasks.workunit.client.0.vm02.stdout:3/493: getdents d1/d8/d21 0 2026-03-10T10:19:41.151 INFO:tasks.workunit.client.0.vm02.stdout:5/655: symlink d1/db/d11/d13/d28/d37/d3d/le1 0 2026-03-10T10:19:41.155 INFO:tasks.workunit.client.0.vm02.stdout:9/474: mkdir da/d3c/d4c/d2c/d96 0 2026-03-10T10:19:41.160 INFO:tasks.workunit.client.0.vm02.stdout:3/494: symlink d1/d8/d86/l9a 0 2026-03-10T10:19:41.164 INFO:tasks.workunit.client.0.vm02.stdout:9/475: fdatasync da/d3c/d4c/f31 0 2026-03-10T10:19:41.166 INFO:tasks.workunit.client.1.vm05.stdout:8/388: symlink d7/d14/d24/l6f 0 2026-03-10T10:19:41.167 INFO:tasks.workunit.client.1.vm05.stdout:8/389: chown d7/d14/d15/l30 30844 1 2026-03-10T10:19:41.167 INFO:tasks.workunit.client.0.vm02.stdout:3/495: creat d1/d8/d86/f9b x:0 0 0 
2026-03-10T10:19:41.171 INFO:tasks.workunit.client.0.vm02.stdout:5/656: creat d1/db/d11/d13/d28/da7/dd9/fe2 x:0 0 0 2026-03-10T10:19:41.173 INFO:tasks.workunit.client.0.vm02.stdout:9/476: dread - da/d3c/d4c/d38/d4a/f7f zero size 2026-03-10T10:19:41.173 INFO:tasks.workunit.client.1.vm05.stdout:0/515: dwrite d1/d2/d9/d31/d13/f4c [0,4194304] 0 2026-03-10T10:19:41.175 INFO:tasks.workunit.client.1.vm05.stdout:7/519: creat d5/d1d/d29/d3e/d8c/d96/f9e x:0 0 0 2026-03-10T10:19:41.176 INFO:tasks.workunit.client.1.vm05.stdout:5/534: dwrite da/db/d26/d35/d38/fa2 [0,4194304] 0 2026-03-10T10:19:41.177 INFO:tasks.workunit.client.0.vm02.stdout:3/496: rename d1/d8/d21/d73/d78/d84/c97 to d1/d8/d21/d73/d78/d84/c9c 0 2026-03-10T10:19:41.180 INFO:tasks.workunit.client.1.vm05.stdout:8/390: dread d7/d14/d62/f69 [0,4194304] 0 2026-03-10T10:19:41.189 INFO:tasks.workunit.client.0.vm02.stdout:3/497: dread d1/f77 [0,4194304] 0 2026-03-10T10:19:41.200 INFO:tasks.workunit.client.1.vm05.stdout:4/368: truncate d1/d31/f2f 1071976 0 2026-03-10T10:19:41.214 INFO:tasks.workunit.client.0.vm02.stdout:9/477: fsync da/d3c/d4c/d38/f45 0 2026-03-10T10:19:41.219 INFO:tasks.workunit.client.1.vm05.stdout:3/512: mknod dd/d20/d94/cb4 0 2026-03-10T10:19:41.221 INFO:tasks.workunit.client.1.vm05.stdout:2/446: chown db/d12/f3b 815 1 2026-03-10T10:19:41.221 INFO:tasks.workunit.client.1.vm05.stdout:3/513: dread dd/d15/fa3 [0,4194304] 0 2026-03-10T10:19:41.227 INFO:tasks.workunit.client.1.vm05.stdout:0/516: truncate d1/d2/d9/d31/f84 128898 0 2026-03-10T10:19:41.228 INFO:tasks.workunit.client.1.vm05.stdout:0/517: truncate d1/d2/d9/d31/d12/d41/fa9 792408 0 2026-03-10T10:19:41.231 INFO:tasks.workunit.client.1.vm05.stdout:5/535: creat da/db/d28/d97/fb2 x:0 0 0 2026-03-10T10:19:41.233 INFO:tasks.workunit.client.1.vm05.stdout:8/391: rename d7/l3d to d7/d14/d3a/l70 0 2026-03-10T10:19:41.238 INFO:tasks.workunit.client.1.vm05.stdout:3/514: truncate dd/d15/f1b 400406 0 2026-03-10T10:19:41.243 
INFO:tasks.workunit.client.1.vm05.stdout:2/447: write db/d1c/d40/f50 [1508296,126905] 0 2026-03-10T10:19:41.245 INFO:tasks.workunit.client.0.vm02.stdout:3/498: sync 2026-03-10T10:19:41.250 INFO:tasks.workunit.client.1.vm05.stdout:6/467: creat dd/d36/d3f/d12/d44/d2a/f98 x:0 0 0 2026-03-10T10:19:41.256 INFO:tasks.workunit.client.1.vm05.stdout:8/392: mknod d7/d14/d24/c71 0 2026-03-10T10:19:41.264 INFO:tasks.workunit.client.0.vm02.stdout:2/496: write d0/d1a/d49/f50 [1986613,113214] 0 2026-03-10T10:19:41.277 INFO:tasks.workunit.client.1.vm05.stdout:1/566: dwrite d4/dd/f64 [0,4194304] 0 2026-03-10T10:19:41.278 INFO:tasks.workunit.client.0.vm02.stdout:7/480: dwrite d1/dc/d16/f1f [0,4194304] 0 2026-03-10T10:19:41.278 INFO:tasks.workunit.client.1.vm05.stdout:1/567: chown d4/df/la6 226951 1 2026-03-10T10:19:41.281 INFO:tasks.workunit.client.0.vm02.stdout:1/522: write d4/ff [8320691,19599] 0 2026-03-10T10:19:41.294 INFO:tasks.workunit.client.1.vm05.stdout:2/448: mknod db/d2d/c89 0 2026-03-10T10:19:41.294 INFO:tasks.workunit.client.1.vm05.stdout:3/515: mkdir dd/d15/d4c/db5 0 2026-03-10T10:19:41.294 INFO:tasks.workunit.client.0.vm02.stdout:3/499: creat d1/d20/f9d x:0 0 0 2026-03-10T10:19:41.294 INFO:tasks.workunit.client.1.vm05.stdout:0/518: link d1/d2/d9/d31/d13/d2f/f8f d1/d2/d9/d31/d13/d15/d4e/d8a/fae 0 2026-03-10T10:19:41.295 INFO:tasks.workunit.client.1.vm05.stdout:0/519: truncate d1/d2/d9/f6c 312678 0 2026-03-10T10:19:41.298 INFO:tasks.workunit.client.0.vm02.stdout:7/481: symlink d1/dc/d55/l90 0 2026-03-10T10:19:41.299 INFO:tasks.workunit.client.0.vm02.stdout:7/482: chown d1/dc/d55/l90 49659292 1 2026-03-10T10:19:41.305 INFO:tasks.workunit.client.1.vm05.stdout:8/393: unlink d7/d14/f55 0 2026-03-10T10:19:41.309 INFO:tasks.workunit.client.0.vm02.stdout:3/500: truncate d1/d8/d21/f2a 58417 0 2026-03-10T10:19:41.315 INFO:tasks.workunit.client.1.vm05.stdout:1/568: mkdir d4/d39/d3e/da0/da8 0 2026-03-10T10:19:41.323 INFO:tasks.workunit.client.0.vm02.stdout:7/483: mknod 
d1/dc/d55/c91 0 2026-03-10T10:19:41.323 INFO:tasks.workunit.client.1.vm05.stdout:0/520: sync 2026-03-10T10:19:41.329 INFO:tasks.workunit.client.0.vm02.stdout:7/484: dwrite d1/dc/d44/f75 [0,4194304] 0 2026-03-10T10:19:41.334 INFO:tasks.workunit.client.1.vm05.stdout:2/449: chown db/f19 96078 1 2026-03-10T10:19:41.341 INFO:tasks.workunit.client.0.vm02.stdout:2/497: link d0/d10/l79 d0/d10/d69/la4 0 2026-03-10T10:19:41.341 INFO:tasks.workunit.client.1.vm05.stdout:2/450: dwrite db/d28/d4f/d59/f6f [0,4194304] 0 2026-03-10T10:19:41.348 INFO:tasks.workunit.client.1.vm05.stdout:8/394: dread d7/f21 [0,4194304] 0 2026-03-10T10:19:41.348 INFO:tasks.workunit.client.1.vm05.stdout:3/516: creat dd/d39/fb6 x:0 0 0 2026-03-10T10:19:41.350 INFO:tasks.workunit.client.0.vm02.stdout:3/501: symlink d1/d8/d21/d7d/l9e 0 2026-03-10T10:19:41.354 INFO:tasks.workunit.client.0.vm02.stdout:7/485: mknod d1/dc/d10/d38/c92 0 2026-03-10T10:19:41.358 INFO:tasks.workunit.client.0.vm02.stdout:7/486: dwrite d1/dc/d16/f6d [0,4194304] 0 2026-03-10T10:19:41.360 INFO:tasks.workunit.client.0.vm02.stdout:7/487: fdatasync d1/dc/d16/f1e 0 2026-03-10T10:19:41.361 INFO:tasks.workunit.client.0.vm02.stdout:7/488: dread - d1/dc/d44/f4a zero size 2026-03-10T10:19:41.361 INFO:tasks.workunit.client.0.vm02.stdout:7/489: write d1/dc/d55/f8d [764567,66378] 0 2026-03-10T10:19:41.372 INFO:tasks.workunit.client.1.vm05.stdout:0/521: symlink d1/d2/d9/d31/d13/d15/laf 0 2026-03-10T10:19:41.380 INFO:tasks.workunit.client.1.vm05.stdout:2/451: creat db/d28/d4f/f8a x:0 0 0 2026-03-10T10:19:41.385 INFO:tasks.workunit.client.1.vm05.stdout:8/395: creat d7/d14/d3a/d49/f72 x:0 0 0 2026-03-10T10:19:41.389 INFO:tasks.workunit.client.0.vm02.stdout:2/498: creat d0/d10/d69/d9f/fa5 x:0 0 0 2026-03-10T10:19:41.389 INFO:tasks.workunit.client.0.vm02.stdout:1/523: rename d4/da/d1a/l68 to d4/da/d1a/d47/d88/la4 0 2026-03-10T10:19:41.395 INFO:tasks.workunit.client.0.vm02.stdout:3/502: symlink d1/d8/d21/d73/d78/d79/l9f 0 2026-03-10T10:19:41.396 
INFO:tasks.workunit.client.1.vm05.stdout:2/452: dread db/d1c/d40/f73 [0,4194304] 0 2026-03-10T10:19:41.402 INFO:tasks.workunit.client.1.vm05.stdout:7/520: dread d5/d1d/d29/f5c [0,4194304] 0 2026-03-10T10:19:41.411 INFO:tasks.workunit.client.1.vm05.stdout:7/521: stat d5/l1b 0 2026-03-10T10:19:41.411 INFO:tasks.workunit.client.1.vm05.stdout:8/396: creat d7/d14/d15/d3b/f73 x:0 0 0 2026-03-10T10:19:41.411 INFO:tasks.workunit.client.1.vm05.stdout:8/397: truncate d7/d14/d3a/d49/f72 419298 0 2026-03-10T10:19:41.411 INFO:tasks.workunit.client.1.vm05.stdout:0/522: symlink d1/d2/d39/lb0 0 2026-03-10T10:19:41.411 INFO:tasks.workunit.client.1.vm05.stdout:2/453: mkdir db/d28/d4f/d8b 0 2026-03-10T10:19:41.411 INFO:tasks.workunit.client.1.vm05.stdout:2/454: fsync db/d28/d4f/f75 0 2026-03-10T10:19:41.411 INFO:tasks.workunit.client.1.vm05.stdout:7/522: dread - d5/d1d/d29/d3e/d8c/f81 zero size 2026-03-10T10:19:41.416 INFO:tasks.workunit.client.1.vm05.stdout:8/398: symlink d7/d14/l74 0 2026-03-10T10:19:41.417 INFO:tasks.workunit.client.1.vm05.stdout:8/399: chown d7/d14/d3a/d49/l56 67 1 2026-03-10T10:19:41.419 INFO:tasks.workunit.client.0.vm02.stdout:2/499: sync 2026-03-10T10:19:41.422 INFO:tasks.workunit.client.1.vm05.stdout:7/523: mknod d5/d1d/d20/d35/c9f 0 2026-03-10T10:19:41.426 INFO:tasks.workunit.client.1.vm05.stdout:8/400: fdatasync d7/d14/f38 0 2026-03-10T10:19:41.431 INFO:tasks.workunit.client.0.vm02.stdout:2/500: stat d0/c77 0 2026-03-10T10:19:41.434 INFO:tasks.workunit.client.1.vm05.stdout:0/523: dread d1/d2/d9/d31/d13/d17/f56 [0,4194304] 0 2026-03-10T10:19:41.447 INFO:tasks.workunit.client.0.vm02.stdout:6/475: getdents d0/d8/d9/d7a 0 2026-03-10T10:19:41.457 INFO:tasks.workunit.client.1.vm05.stdout:2/455: truncate db/d28/f7f 302369 0 2026-03-10T10:19:41.459 INFO:tasks.workunit.client.0.vm02.stdout:2/501: dread d0/d10/f4b [0,4194304] 0 2026-03-10T10:19:41.461 INFO:tasks.workunit.client.0.vm02.stdout:2/502: dread d0/d1a/f52 [0,4194304] 0 2026-03-10T10:19:41.461 
INFO:tasks.workunit.client.0.vm02.stdout:2/503: chown d0/d1a/f52 12048 1 2026-03-10T10:19:41.462 INFO:tasks.workunit.client.1.vm05.stdout:8/401: unlink d7/d14/f38 0 2026-03-10T10:19:41.466 INFO:tasks.workunit.client.1.vm05.stdout:2/456: rename db/d28/d4f/d59/c5c to db/d12/d74/c8c 0 2026-03-10T10:19:41.469 INFO:tasks.workunit.client.0.vm02.stdout:4/649: write d1/d41/d5e/d78/f3f [2277355,10847] 0 2026-03-10T10:19:41.470 INFO:tasks.workunit.client.0.vm02.stdout:2/504: dwrite d0/d1a/f52 [0,4194304] 0 2026-03-10T10:19:41.471 INFO:tasks.workunit.client.0.vm02.stdout:2/505: fdatasync d0/d1a/d49/d5e/d8a/f98 0 2026-03-10T10:19:41.490 INFO:tasks.workunit.client.0.vm02.stdout:6/476: mkdir d0/d8/d29/d94/d9a 0 2026-03-10T10:19:41.493 INFO:tasks.workunit.client.1.vm05.stdout:8/402: write d7/d14/f5b [5184355,54587] 0 2026-03-10T10:19:41.498 INFO:tasks.workunit.client.0.vm02.stdout:8/496: dwrite d1/d1c/d23/f3b [0,4194304] 0 2026-03-10T10:19:41.503 INFO:tasks.workunit.client.0.vm02.stdout:6/477: rmdir d0/d8/d8c 39 2026-03-10T10:19:41.505 INFO:tasks.workunit.client.0.vm02.stdout:0/540: write d9/d18/d1a/d22/d24/d80/d74/f62 [2421829,85079] 0 2026-03-10T10:19:41.509 INFO:tasks.workunit.client.1.vm05.stdout:9/447: write d0/d1/d13/d26/f4e [2980547,45223] 0 2026-03-10T10:19:41.512 INFO:tasks.workunit.client.1.vm05.stdout:8/403: rename d7/d14/d24/c71 to d7/d14/c75 0 2026-03-10T10:19:41.521 INFO:tasks.workunit.client.1.vm05.stdout:9/448: read d0/d1/d13/f27 [824961,72417] 0 2026-03-10T10:19:41.522 INFO:tasks.workunit.client.1.vm05.stdout:9/449: truncate d0/d1/f7b 209009 0 2026-03-10T10:19:41.523 INFO:tasks.workunit.client.1.vm05.stdout:8/404: rmdir d7/d14/d24/d3f 39 2026-03-10T10:19:41.535 INFO:tasks.workunit.client.1.vm05.stdout:3/517: getdents dd/d20/d94 0 2026-03-10T10:19:41.535 INFO:tasks.workunit.client.0.vm02.stdout:5/657: write d1/d9c/fa9 [612325,111384] 0 2026-03-10T10:19:41.535 INFO:tasks.workunit.client.0.vm02.stdout:9/478: write da/ff [550876,57059] 0 2026-03-10T10:19:41.535 
INFO:tasks.workunit.client.0.vm02.stdout:9/479: stat da/d3c/d53/l66 0 2026-03-10T10:19:41.535 INFO:tasks.workunit.client.0.vm02.stdout:9/480: stat da/d3c/d4c/d38/d82/d8c 0 2026-03-10T10:19:41.535 INFO:tasks.workunit.client.0.vm02.stdout:9/481: chown da/d3c/d4c/d38/d4a/d70 6939 1 2026-03-10T10:19:41.544 INFO:tasks.workunit.client.1.vm05.stdout:8/405: dread d7/f21 [0,4194304] 0 2026-03-10T10:19:41.556 INFO:tasks.workunit.client.1.vm05.stdout:4/369: dwrite d1/d31/f1b [0,4194304] 0 2026-03-10T10:19:41.558 INFO:tasks.workunit.client.1.vm05.stdout:5/536: dwrite da/db/d28/d6e/f89 [0,4194304] 0 2026-03-10T10:19:41.561 INFO:tasks.workunit.client.0.vm02.stdout:5/658: chown d1/c7 207143595 1 2026-03-10T10:19:41.565 INFO:tasks.workunit.client.1.vm05.stdout:9/450: link d0/df/f3b d0/d70/f95 0 2026-03-10T10:19:41.568 INFO:tasks.workunit.client.1.vm05.stdout:9/451: read d0/f1e [783495,44639] 0 2026-03-10T10:19:41.573 INFO:tasks.workunit.client.0.vm02.stdout:9/482: rename c4 to da/d3c/d4c/d75/c97 0 2026-03-10T10:19:41.579 INFO:tasks.workunit.client.0.vm02.stdout:6/478: link d0/f4c d0/d8/f9b 0 2026-03-10T10:19:41.586 INFO:tasks.workunit.client.0.vm02.stdout:8/497: dread d1/d1c/d23/d3e/f5a [0,4194304] 0 2026-03-10T10:19:41.594 INFO:tasks.workunit.client.0.vm02.stdout:2/506: dread d0/f4d [0,4194304] 0 2026-03-10T10:19:41.600 INFO:tasks.workunit.client.1.vm05.stdout:6/468: dwrite dd/d36/d3f/d12/d44/d2a/d3d/d48/f4b [0,4194304] 0 2026-03-10T10:19:41.600 INFO:tasks.workunit.client.0.vm02.stdout:2/507: dread - d0/d1a/d49/d5e/d65/f9e zero size 2026-03-10T10:19:41.600 INFO:tasks.workunit.client.0.vm02.stdout:2/508: dread - d0/d10/d81/f9b zero size 2026-03-10T10:19:41.611 INFO:tasks.workunit.client.0.vm02.stdout:9/483: read - da/d3c/d4c/d2c/d34/f68 zero size 2026-03-10T10:19:41.613 INFO:tasks.workunit.client.0.vm02.stdout:6/479: symlink d0/d8/d9/d7a/l9c 0 2026-03-10T10:19:41.622 INFO:tasks.workunit.client.1.vm05.stdout:1/569: dwrite d4/df/d1c/f9c [8388608,4194304] 0 2026-03-10T10:19:41.623 
INFO:tasks.workunit.client.1.vm05.stdout:8/406: mkdir d7/d14/d24/d76 0 2026-03-10T10:19:41.632 INFO:tasks.workunit.client.0.vm02.stdout:7/490: dwrite d1/dc/f26 [0,4194304] 0 2026-03-10T10:19:41.638 INFO:tasks.workunit.client.0.vm02.stdout:7/491: dread d1/dc/d16/d28/d2c/f8a [0,4194304] 0 2026-03-10T10:19:41.641 INFO:tasks.workunit.client.0.vm02.stdout:1/524: dwrite d4/da/d27/d38/d80/f94 [0,4194304] 0 2026-03-10T10:19:41.645 INFO:tasks.workunit.client.1.vm05.stdout:6/469: creat dd/d36/d3f/d12/d44/d2a/d3d/f99 x:0 0 0 2026-03-10T10:19:41.650 INFO:tasks.workunit.client.0.vm02.stdout:6/480: dread d0/d8/d29/d6d/d96/f97 [4194304,4194304] 0 2026-03-10T10:19:41.650 INFO:tasks.workunit.client.0.vm02.stdout:1/525: read d4/d2c/d53/f58 [319630,79771] 0 2026-03-10T10:19:41.650 INFO:tasks.workunit.client.0.vm02.stdout:3/503: write d1/d20/d52/f76 [598143,124086] 0 2026-03-10T10:19:41.658 INFO:tasks.workunit.client.1.vm05.stdout:4/370: unlink d1/d31/dc/l28 0 2026-03-10T10:19:41.659 INFO:tasks.workunit.client.1.vm05.stdout:7/524: write d5/d17/f3c [1404568,69728] 0 2026-03-10T10:19:41.659 INFO:tasks.workunit.client.1.vm05.stdout:4/371: read d1/d31/dc/f33 [523938,18273] 0 2026-03-10T10:19:41.664 INFO:tasks.workunit.client.1.vm05.stdout:0/524: dwrite d1/d2/d9/d31/d13/d15/f62 [0,4194304] 0 2026-03-10T10:19:41.666 INFO:tasks.workunit.client.1.vm05.stdout:4/372: dread d1/d31/f36 [0,4194304] 0 2026-03-10T10:19:41.666 INFO:tasks.workunit.client.1.vm05.stdout:5/537: mkdir da/db/d26/d35/db3 0 2026-03-10T10:19:41.669 INFO:tasks.workunit.client.0.vm02.stdout:4/650: dwrite d1/d32/f7b [0,4194304] 0 2026-03-10T10:19:41.669 INFO:tasks.workunit.client.0.vm02.stdout:4/651: dread - d1/d32/f95 zero size 2026-03-10T10:19:41.683 INFO:tasks.workunit.client.0.vm02.stdout:8/498: mknod d1/d1c/d43/c96 0 2026-03-10T10:19:41.687 INFO:tasks.workunit.client.1.vm05.stdout:2/457: write db/d1c/f3d [566097,73387] 0 2026-03-10T10:19:41.688 INFO:tasks.workunit.client.1.vm05.stdout:3/518: truncate dd/d39/f51 682812 0 
2026-03-10T10:19:41.689 INFO:tasks.workunit.client.1.vm05.stdout:3/519: write dd/d39/d5f/fa2 [204662,92228] 0 2026-03-10T10:19:41.691 INFO:tasks.workunit.client.0.vm02.stdout:0/541: dwrite d9/f28 [0,4194304] 0 2026-03-10T10:19:41.707 INFO:tasks.workunit.client.1.vm05.stdout:8/407: rename d7/d14/d15/d3b/l67 to d7/d14/d3a/l77 0 2026-03-10T10:19:41.724 INFO:tasks.workunit.client.0.vm02.stdout:7/492: unlink d1/dc/d16/d28/d2d/f3d 0 2026-03-10T10:19:41.724 INFO:tasks.workunit.client.0.vm02.stdout:5/659: dwrite d1/db/d11/d16/d79/d85/f9f [4194304,4194304] 0 2026-03-10T10:19:41.724 INFO:tasks.workunit.client.0.vm02.stdout:6/481: mknod d0/d8/d29/d2f/d4b/c9d 0 2026-03-10T10:19:41.724 INFO:tasks.workunit.client.1.vm05.stdout:6/470: fsync dd/d36/d3f/d12/f4f 0 2026-03-10T10:19:41.724 INFO:tasks.workunit.client.1.vm05.stdout:6/471: chown dd/d36/d3f/c92 5663238 1 2026-03-10T10:19:41.724 INFO:tasks.workunit.client.1.vm05.stdout:6/472: fdatasync dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f6b 0 2026-03-10T10:19:41.724 INFO:tasks.workunit.client.1.vm05.stdout:6/473: dread - dd/d36/d3f/d12/d44/d2a/f98 zero size 2026-03-10T10:19:41.724 INFO:tasks.workunit.client.0.vm02.stdout:5/660: read d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fb6 [943568,71259] 0 2026-03-10T10:19:41.725 INFO:tasks.workunit.client.0.vm02.stdout:5/661: dread - d1/fdd zero size 2026-03-10T10:19:41.728 INFO:tasks.workunit.client.1.vm05.stdout:6/474: dread dd/d36/d3f/d12/f56 [0,4194304] 0 2026-03-10T10:19:41.752 INFO:tasks.workunit.client.1.vm05.stdout:4/373: truncate d1/d31/dc/d40/d45/f66 513095 0 2026-03-10T10:19:41.754 INFO:tasks.workunit.client.1.vm05.stdout:3/520: read f2 [4917028,6862] 0 2026-03-10T10:19:41.754 INFO:tasks.workunit.client.1.vm05.stdout:3/521: stat dd/d15/d24/d8e/c97 0 2026-03-10T10:19:41.755 INFO:tasks.workunit.client.1.vm05.stdout:9/452: creat d0/df/f96 x:0 0 0 2026-03-10T10:19:41.759 INFO:tasks.workunit.client.1.vm05.stdout:1/570: creat d4/d39/d3e/da0/da8/fa9 x:0 0 0 2026-03-10T10:19:41.764 
INFO:tasks.workunit.client.1.vm05.stdout:8/408: fdatasync d7/d14/d24/f42 0 2026-03-10T10:19:41.764 INFO:tasks.workunit.client.1.vm05.stdout:8/409: read d7/d14/d3a/f50 [1165077,68483] 0 2026-03-10T10:19:41.778 INFO:tasks.workunit.client.1.vm05.stdout:4/374: fsync d1/d31/f13 0 2026-03-10T10:19:41.784 INFO:tasks.workunit.client.1.vm05.stdout:3/522: creat dd/d20/d56/fb7 x:0 0 0 2026-03-10T10:19:41.785 INFO:tasks.workunit.client.1.vm05.stdout:3/523: read dd/d15/d24/f42 [2200075,115705] 0 2026-03-10T10:19:41.791 INFO:tasks.workunit.client.1.vm05.stdout:8/410: creat d7/f78 x:0 0 0 2026-03-10T10:19:41.795 INFO:tasks.workunit.client.1.vm05.stdout:6/475: creat dd/d36/d3f/d12/d96/f9a x:0 0 0 2026-03-10T10:19:41.795 INFO:tasks.workunit.client.1.vm05.stdout:6/476: write dd/d36/d3f/d12/d58/f5a [3264969,10969] 0 2026-03-10T10:19:41.797 INFO:tasks.workunit.client.1.vm05.stdout:6/477: read dd/d36/d3f/d12/d44/d2a/d3d/d48/f4b [1819351,73788] 0 2026-03-10T10:19:41.799 INFO:tasks.workunit.client.1.vm05.stdout:7/525: creat d5/d17/fa0 x:0 0 0 2026-03-10T10:19:41.801 INFO:tasks.workunit.client.1.vm05.stdout:0/525: link d1/d2/d9/d31/d13/d15/d4e/f60 d1/d2/d39/d3d/d9f/fb1 0 2026-03-10T10:19:41.801 INFO:tasks.workunit.client.0.vm02.stdout:2/509: write d0/d8c/fa2 [4190880,3444] 0 2026-03-10T10:19:41.811 INFO:tasks.workunit.client.1.vm05.stdout:2/458: link db/d2d/f47 db/d28/d4f/d59/f8d 0 2026-03-10T10:19:41.818 INFO:tasks.workunit.client.1.vm05.stdout:3/524: creat dd/d39/d5f/fb8 x:0 0 0 2026-03-10T10:19:41.825 INFO:tasks.workunit.client.0.vm02.stdout:3/504: dwrite d1/d58/f60 [8388608,4194304] 0 2026-03-10T10:19:41.831 INFO:tasks.workunit.client.1.vm05.stdout:5/538: dwrite da/db/d26/d5c/f33 [0,4194304] 0 2026-03-10T10:19:41.832 INFO:tasks.workunit.client.1.vm05.stdout:5/539: write da/d9a/faa [34430,105710] 0 2026-03-10T10:19:41.837 INFO:tasks.workunit.client.1.vm05.stdout:8/411: write d7/d14/d24/f42 [2497012,126016] 0 2026-03-10T10:19:41.839 INFO:tasks.workunit.client.1.vm05.stdout:8/412: write 
d7/d2f/d57/f66 [976461,84410] 0 2026-03-10T10:19:41.839 INFO:tasks.workunit.client.1.vm05.stdout:6/478: symlink dd/d36/d7d/l9b 0 2026-03-10T10:19:41.844 INFO:tasks.workunit.client.1.vm05.stdout:6/479: dwrite dd/d36/d3f/d12/d44/d2a/d3d/d48/f82 [0,4194304] 0 2026-03-10T10:19:41.851 INFO:tasks.workunit.client.1.vm05.stdout:6/480: dwrite dd/d36/d3f/f41 [0,4194304] 0 2026-03-10T10:19:41.851 INFO:tasks.workunit.client.1.vm05.stdout:6/481: chown dd 1 1 2026-03-10T10:19:41.853 INFO:tasks.workunit.client.0.vm02.stdout:6/482: rename d0/d8/d9/d7a/l4a to d0/d8/d29/d2f/d50/d98/l9e 0 2026-03-10T10:19:41.862 INFO:tasks.workunit.client.0.vm02.stdout:1/526: mkdir d4/d4a/da5 0 2026-03-10T10:19:41.874 INFO:tasks.workunit.client.0.vm02.stdout:1/527: dwrite d4/da/f13 [8388608,4194304] 0 2026-03-10T10:19:41.875 INFO:tasks.workunit.client.1.vm05.stdout:2/459: mknod db/d1c/d40/c8e 0 2026-03-10T10:19:41.879 INFO:tasks.workunit.client.0.vm02.stdout:4/652: symlink d1/d41/d7e/ld2 0 2026-03-10T10:19:41.885 INFO:tasks.workunit.client.1.vm05.stdout:9/453: creat d0/df/f97 x:0 0 0 2026-03-10T10:19:41.887 INFO:tasks.workunit.client.1.vm05.stdout:5/540: sync 2026-03-10T10:19:41.893 INFO:tasks.workunit.client.0.vm02.stdout:4/653: dread d1/d10/f8 [0,4194304] 0 2026-03-10T10:19:41.900 INFO:tasks.workunit.client.0.vm02.stdout:8/499: dread d1/d1c/d24/d35/f44 [0,4194304] 0 2026-03-10T10:19:41.905 INFO:tasks.workunit.client.1.vm05.stdout:1/571: rename d4/d39/d3e/da0/da8 to d4/df/d1c/d53/daa 0 2026-03-10T10:19:41.912 INFO:tasks.workunit.client.0.vm02.stdout:6/483: fsync d0/d8/d29/d2f/f4e 0 2026-03-10T10:19:41.930 INFO:tasks.workunit.client.1.vm05.stdout:6/482: truncate dd/d36/d3f/d12/d44/d2a/d3d/d3e/f64 301534 0 2026-03-10T10:19:41.930 INFO:tasks.workunit.client.0.vm02.stdout:6/484: dwrite d0/d8/f8f [0,4194304] 0 2026-03-10T10:19:41.930 INFO:tasks.workunit.client.0.vm02.stdout:6/485: stat d0/c95 0 2026-03-10T10:19:41.930 INFO:tasks.workunit.client.0.vm02.stdout:1/528: read - d4/da/f71 zero size 
2026-03-10T10:19:41.930 INFO:tasks.workunit.client.0.vm02.stdout:9/484: getdents da/d3c/d4c/d38/d4a 0 2026-03-10T10:19:41.931 INFO:tasks.workunit.client.0.vm02.stdout:2/510: truncate d0/d1a/f31 8595835 0 2026-03-10T10:19:41.935 INFO:tasks.workunit.client.1.vm05.stdout:9/454: mknod d0/d1/d13/d55/c98 0 2026-03-10T10:19:41.937 INFO:tasks.workunit.client.0.vm02.stdout:7/493: creat d1/d1b/d8f/f93 x:0 0 0 2026-03-10T10:19:41.938 INFO:tasks.workunit.client.1.vm05.stdout:5/541: mkdir da/db/d26/d5c/d4b/db4 0 2026-03-10T10:19:41.941 INFO:tasks.workunit.client.0.vm02.stdout:5/662: creat d1/db/d11/d84/d40/fe3 x:0 0 0 2026-03-10T10:19:41.943 INFO:tasks.workunit.client.1.vm05.stdout:1/572: truncate d4/df/d1c/d53/f65 570583 0 2026-03-10T10:19:41.945 INFO:tasks.workunit.client.0.vm02.stdout:6/486: stat d0/c4 0 2026-03-10T10:19:41.952 INFO:tasks.workunit.client.1.vm05.stdout:1/573: dread d4/d39/f7b [0,4194304] 0 2026-03-10T10:19:41.966 INFO:tasks.workunit.client.1.vm05.stdout:8/413: creat d7/d14/d24/f79 x:0 0 0 2026-03-10T10:19:41.967 INFO:tasks.workunit.client.0.vm02.stdout:7/494: symlink d1/dc/d10/l94 0 2026-03-10T10:19:41.971 INFO:tasks.workunit.client.1.vm05.stdout:6/483: symlink dd/d36/d3f/d12/d44/d2a/d77/l9c 0 2026-03-10T10:19:41.972 INFO:tasks.workunit.client.1.vm05.stdout:6/484: chown dd/d36/d3f/d12/d44 356 1 2026-03-10T10:19:41.975 INFO:tasks.workunit.client.0.vm02.stdout:5/663: rmdir d1/db/d11/d16/d79/d85 39 2026-03-10T10:19:41.983 INFO:tasks.workunit.client.1.vm05.stdout:6/485: dread dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f6b [0,4194304] 0 2026-03-10T10:19:41.983 INFO:tasks.workunit.client.0.vm02.stdout:0/542: write d9/d34/d3d/d7b/f8a [29858,107026] 0 2026-03-10T10:19:41.990 INFO:tasks.workunit.client.0.vm02.stdout:3/505: dread d1/d8/d21/f2a [0,4194304] 0 2026-03-10T10:19:41.990 INFO:tasks.workunit.client.0.vm02.stdout:3/506: write d1/d6/f53 [758237,54498] 0 2026-03-10T10:19:41.991 INFO:tasks.workunit.client.0.vm02.stdout:3/507: chown d1/d8/d21/d7d/c94 67 1 
2026-03-10T10:19:41.994 INFO:tasks.workunit.client.1.vm05.stdout:0/526: link d1/d2/d9/d31/f36 d1/d2/d9/d31/d12/fb2 0 2026-03-10T10:19:42.001 INFO:tasks.workunit.client.1.vm05.stdout:7/526: dwrite d5/ff [0,4194304] 0 2026-03-10T10:19:42.001 INFO:tasks.workunit.client.1.vm05.stdout:4/375: dwrite d1/d31/dc/f3a [0,4194304] 0 2026-03-10T10:19:42.002 INFO:tasks.workunit.client.0.vm02.stdout:9/485: truncate da/d3c/d4c/d56/f77 1454932 0 2026-03-10T10:19:42.003 INFO:tasks.workunit.client.1.vm05.stdout:4/376: read - d1/d31/d4b/f51 zero size 2026-03-10T10:19:42.004 INFO:tasks.workunit.client.0.vm02.stdout:4/654: creat d1/d32/fd3 x:0 0 0 2026-03-10T10:19:42.006 INFO:tasks.workunit.client.1.vm05.stdout:3/525: write dd/d15/d24/d2c/f3e [1124580,91258] 0 2026-03-10T10:19:42.007 INFO:tasks.workunit.client.0.vm02.stdout:2/511: mkdir d0/d10/da6 0 2026-03-10T10:19:42.013 INFO:tasks.workunit.client.1.vm05.stdout:3/526: dread dd/d15/d24/d2c/d3b/f48 [0,4194304] 0 2026-03-10T10:19:42.015 INFO:tasks.workunit.client.1.vm05.stdout:8/414: write d7/d14/d24/f2c [2561866,56274] 0 2026-03-10T10:19:42.020 INFO:tasks.workunit.client.1.vm05.stdout:6/486: creat dd/d36/d3f/d12/d58/f9d x:0 0 0 2026-03-10T10:19:42.031 INFO:tasks.workunit.client.0.vm02.stdout:7/495: creat d1/dc/d16/f95 x:0 0 0 2026-03-10T10:19:42.031 INFO:tasks.workunit.client.0.vm02.stdout:7/496: truncate d1/d1b/f86 961746 0 2026-03-10T10:19:42.031 INFO:tasks.workunit.client.0.vm02.stdout:8/500: dwrite d1/d1c/f14 [0,4194304] 0 2026-03-10T10:19:42.032 INFO:tasks.workunit.client.1.vm05.stdout:6/487: dread dd/d36/d7d/f8a [0,4194304] 0 2026-03-10T10:19:42.032 INFO:tasks.workunit.client.1.vm05.stdout:6/488: fdatasync dd/d36/d3f/d12/d44/d2a/d3d/f99 0 2026-03-10T10:19:42.032 INFO:tasks.workunit.client.1.vm05.stdout:6/489: dread dd/d36/d7d/f8a [0,4194304] 0 2026-03-10T10:19:42.032 INFO:tasks.workunit.client.1.vm05.stdout:7/527: creat d5/d1d/d29/fa1 x:0 0 0 2026-03-10T10:19:42.036 INFO:tasks.workunit.client.0.vm02.stdout:0/543: creat 
d9/d34/d3d/fae x:0 0 0 2026-03-10T10:19:42.037 INFO:tasks.workunit.client.0.vm02.stdout:0/544: truncate d9/d34/d3d/d8d/f95 967175 0 2026-03-10T10:19:42.044 INFO:tasks.workunit.client.1.vm05.stdout:2/460: link db/l1b db/d28/d4f/d59/l8f 0 2026-03-10T10:19:42.045 INFO:tasks.workunit.client.0.vm02.stdout:3/508: dread - d1/f6a zero size 2026-03-10T10:19:42.049 INFO:tasks.workunit.client.1.vm05.stdout:9/455: rename d0/d70/f95 to d0/df/f99 0 2026-03-10T10:19:42.049 INFO:tasks.workunit.client.1.vm05.stdout:9/456: readlink d0/df/d11/l7a 0 2026-03-10T10:19:42.051 INFO:tasks.workunit.client.0.vm02.stdout:6/487: write d0/d8/d8c/d60/f73 [812030,97159] 0 2026-03-10T10:19:42.053 INFO:tasks.workunit.client.1.vm05.stdout:9/457: dwrite d0/df/d11/f8d [0,4194304] 0 2026-03-10T10:19:42.054 INFO:tasks.workunit.client.1.vm05.stdout:3/527: creat dd/d39/d5c/fb9 x:0 0 0 2026-03-10T10:19:42.065 INFO:tasks.workunit.client.1.vm05.stdout:2/461: read db/f24 [1525348,1734] 0 2026-03-10T10:19:42.069 INFO:tasks.workunit.client.0.vm02.stdout:1/529: rename d4/da/d27/d38/d80/d95 to d4/d2c/d53/da6 0 2026-03-10T10:19:42.073 INFO:tasks.workunit.client.1.vm05.stdout:8/415: rmdir d7/d14/d15/d3b 39 2026-03-10T10:19:42.074 INFO:tasks.workunit.client.0.vm02.stdout:4/655: mknod d1/d41/d5e/d78/d55/cd4 0 2026-03-10T10:19:42.082 INFO:tasks.workunit.client.0.vm02.stdout:5/664: mkdir d1/db/d11/d7b/de4 0 2026-03-10T10:19:42.082 INFO:tasks.workunit.client.0.vm02.stdout:5/665: stat d1/db/d11/d7b 0 2026-03-10T10:19:42.083 INFO:tasks.workunit.client.0.vm02.stdout:5/666: readlink d1/db/d11/l50 0 2026-03-10T10:19:42.087 INFO:tasks.workunit.client.0.vm02.stdout:0/545: fsync d9/d18/d1a/d46/d5d/f66 0 2026-03-10T10:19:42.094 INFO:tasks.workunit.client.0.vm02.stdout:6/488: chown d0/d8/d9/f30 318607491 1 2026-03-10T10:19:42.098 INFO:tasks.workunit.client.0.vm02.stdout:9/486: rename da/d3c/d4c/d38/d4a/d70/f74 to da/d3c/d4c/d38/d82/d8c/f98 0 2026-03-10T10:19:42.100 INFO:tasks.workunit.client.0.vm02.stdout:1/530: fdatasync d4/fe 0 
2026-03-10T10:19:42.104 INFO:tasks.workunit.client.0.vm02.stdout:4/656: symlink d1/ld5 0 2026-03-10T10:19:42.109 INFO:tasks.workunit.client.0.vm02.stdout:3/509: sync 2026-03-10T10:19:42.112 INFO:tasks.workunit.client.0.vm02.stdout:8/501: getdents d1/d1c/d43/d5b/d88 0 2026-03-10T10:19:42.118 INFO:tasks.workunit.client.0.vm02.stdout:0/546: dread d9/d34/d3d/f58 [0,4194304] 0 2026-03-10T10:19:42.122 INFO:tasks.workunit.client.0.vm02.stdout:0/547: dwrite d9/d18/d1a/d22/d24/d80/d74/f96 [0,4194304] 0 2026-03-10T10:19:42.143 INFO:tasks.workunit.client.1.vm05.stdout:7/528: read d5/d1d/d29/d3e/f65 [45733,73329] 0 2026-03-10T10:19:42.143 INFO:tasks.workunit.client.0.vm02.stdout:6/489: chown d0/d8/d9/f14 918432 1 2026-03-10T10:19:42.143 INFO:tasks.workunit.client.0.vm02.stdout:9/487: unlink da/d3c/d4c/f31 0 2026-03-10T10:19:42.143 INFO:tasks.workunit.client.0.vm02.stdout:9/488: chown da/d3c/d4c/d2c/d34/f68 3104 1 2026-03-10T10:19:42.143 INFO:tasks.workunit.client.0.vm02.stdout:9/489: fdatasync da/d3c/d53/f6a 0 2026-03-10T10:19:42.150 INFO:tasks.workunit.client.0.vm02.stdout:5/667: fsync d1/db/d11/d16/d79/d85/fa0 0 2026-03-10T10:19:42.154 INFO:tasks.workunit.client.1.vm05.stdout:5/542: rename da/db/d26/d35/d38/f51 to da/db/d26/d5c/fb5 0 2026-03-10T10:19:42.156 INFO:tasks.workunit.client.0.vm02.stdout:3/510: unlink d1/d8/d21/f2f 0 2026-03-10T10:19:42.163 INFO:tasks.workunit.client.1.vm05.stdout:3/528: mkdir dd/d20/d94/dba 0 2026-03-10T10:19:42.170 INFO:tasks.workunit.client.0.vm02.stdout:0/548: mknod d9/d18/d1a/d22/d24/d79/d7d/caf 0 2026-03-10T10:19:42.170 INFO:tasks.workunit.client.0.vm02.stdout:0/549: write d9/d18/d1a/d22/d24/d80/d74/f96 [1872892,107739] 0 2026-03-10T10:19:42.176 INFO:tasks.workunit.client.1.vm05.stdout:4/377: write d1/d3/f10 [1620450,15830] 0 2026-03-10T10:19:42.179 INFO:tasks.workunit.client.1.vm05.stdout:4/378: dread d1/d31/f13 [0,4194304] 0 2026-03-10T10:19:42.179 INFO:tasks.workunit.client.0.vm02.stdout:2/512: write d0/d1a/f47 [1109171,55121] 0 
2026-03-10T10:19:42.187 INFO:tasks.workunit.client.0.vm02.stdout:6/490: creat d0/d8/d29/d2f/d50/d98/f9f x:0 0 0 2026-03-10T10:19:42.188 INFO:tasks.workunit.client.1.vm05.stdout:1/574: getdents d4/dd 0 2026-03-10T10:19:42.189 INFO:tasks.workunit.client.1.vm05.stdout:1/575: dread - d4/df/d1c/d53/f6b zero size 2026-03-10T10:19:42.189 INFO:tasks.workunit.client.0.vm02.stdout:9/490: mkdir da/d3c/d4c/d38/d4a/d99 0 2026-03-10T10:19:42.192 INFO:tasks.workunit.client.0.vm02.stdout:7/497: link d1/d1b/d8f/d67/f76 d1/dc/d10/d38/f96 0 2026-03-10T10:19:42.194 INFO:tasks.workunit.client.1.vm05.stdout:6/490: write dd/d36/d3f/f22 [193549,116563] 0 2026-03-10T10:19:42.194 INFO:tasks.workunit.client.0.vm02.stdout:1/531: dwrite d4/da/d1a/d47/fa0 [0,4194304] 0 2026-03-10T10:19:42.199 INFO:tasks.workunit.client.0.vm02.stdout:1/532: dwrite d4/da/d1a/d22/f32 [0,4194304] 0 2026-03-10T10:19:42.212 INFO:tasks.workunit.client.1.vm05.stdout:0/527: rename d1/c8 to d1/d2/d9/d31/d13/d17/da1/cb3 0 2026-03-10T10:19:42.213 INFO:tasks.workunit.client.1.vm05.stdout:5/543: chown da/db/d26/d35/d38/c86 5037096 1 2026-03-10T10:19:42.215 INFO:tasks.workunit.client.1.vm05.stdout:9/458: dread - d0/d1/d13/de/d21/f76 zero size 2026-03-10T10:19:42.215 INFO:tasks.workunit.client.0.vm02.stdout:3/511: fdatasync d1/d6/f49 0 2026-03-10T10:19:42.215 INFO:tasks.workunit.client.0.vm02.stdout:3/512: stat d1/d20/d52/l8c 0 2026-03-10T10:19:42.219 INFO:tasks.workunit.client.1.vm05.stdout:2/462: symlink db/d1c/d40/d62/d85/l90 0 2026-03-10T10:19:42.226 INFO:tasks.workunit.client.0.vm02.stdout:0/550: unlink d9/d34/d3d/d65/da3/fad 0 2026-03-10T10:19:42.226 INFO:tasks.workunit.client.1.vm05.stdout:4/379: mknod d1/d31/d4b/c77 0 2026-03-10T10:19:42.226 INFO:tasks.workunit.client.1.vm05.stdout:4/380: dread - d1/d31/f6f zero size 2026-03-10T10:19:42.229 INFO:tasks.workunit.client.1.vm05.stdout:5/544: dread da/db/d26/d35/d38/f65 [0,4194304] 0 2026-03-10T10:19:42.238 INFO:tasks.workunit.client.0.vm02.stdout:2/513: symlink 
d0/d1a/d24/d80/la7 0 2026-03-10T10:19:42.238 INFO:tasks.workunit.client.0.vm02.stdout:6/491: creat d0/d8/d9/fa0 x:0 0 0 2026-03-10T10:19:42.238 INFO:tasks.workunit.client.1.vm05.stdout:1/576: creat d4/df/d1c/d53/daa/fab x:0 0 0 2026-03-10T10:19:42.238 INFO:tasks.workunit.client.1.vm05.stdout:6/491: unlink dd/d36/d3f/d12/d44/d30/l57 0 2026-03-10T10:19:42.241 INFO:tasks.workunit.client.0.vm02.stdout:5/668: dread d1/db/f1e [0,4194304] 0 2026-03-10T10:19:42.246 INFO:tasks.workunit.client.0.vm02.stdout:4/657: dread d1/f1d [0,4194304] 0 2026-03-10T10:19:42.252 INFO:tasks.workunit.client.0.vm02.stdout:4/658: dread d1/f1d [0,4194304] 0 2026-03-10T10:19:42.259 INFO:tasks.workunit.client.1.vm05.stdout:0/528: mkdir d1/d2/d39/d6e/d95/db4 0 2026-03-10T10:19:42.269 INFO:tasks.workunit.client.0.vm02.stdout:7/498: dwrite d1/d1b/d8f/f5c [4194304,4194304] 0 2026-03-10T10:19:42.276 INFO:tasks.workunit.client.1.vm05.stdout:9/459: mknod d0/d1/d16/c9a 0 2026-03-10T10:19:42.279 INFO:tasks.workunit.client.1.vm05.stdout:3/529: mkdir dd/d15/d24/d2c/d6d/da7/dbb 0 2026-03-10T10:19:42.283 INFO:tasks.workunit.client.0.vm02.stdout:1/533: dread d4/da/d27/f66 [0,4194304] 0 2026-03-10T10:19:42.288 INFO:tasks.workunit.client.0.vm02.stdout:3/513: truncate d1/d20/f4b 3560106 0 2026-03-10T10:19:42.293 INFO:tasks.workunit.client.0.vm02.stdout:1/534: dread d4/f26 [0,4194304] 0 2026-03-10T10:19:42.293 INFO:tasks.workunit.client.0.vm02.stdout:1/535: chown d4/da/d1a/d22/f62 1 1 2026-03-10T10:19:42.294 INFO:tasks.workunit.client.0.vm02.stdout:1/536: read d4/ff [1184537,2676] 0 2026-03-10T10:19:42.295 INFO:tasks.workunit.client.0.vm02.stdout:8/502: creat d1/d1c/d24/d35/f97 x:0 0 0 2026-03-10T10:19:42.297 INFO:tasks.workunit.client.1.vm05.stdout:6/492: write dd/d36/d3f/d12/f56 [822769,130565] 0 2026-03-10T10:19:42.302 INFO:tasks.workunit.client.1.vm05.stdout:0/529: symlink d1/d2/d9/d31/d12/d20/lb5 0 2026-03-10T10:19:42.307 INFO:tasks.workunit.client.0.vm02.stdout:6/492: stat d0/d8/d29/d2f/d4b/f8d 0 
2026-03-10T10:19:42.310 INFO:tasks.workunit.client.0.vm02.stdout:2/514: write d0/d1a/d24/f62 [69983,82974] 0 2026-03-10T10:19:42.312 INFO:tasks.workunit.client.1.vm05.stdout:9/460: mknod d0/df/d74/d8c/c9b 0 2026-03-10T10:19:42.313 INFO:tasks.workunit.client.1.vm05.stdout:3/530: readlink dd/d15/l1d 0 2026-03-10T10:19:42.321 INFO:tasks.workunit.client.1.vm05.stdout:2/463: mknod db/d4e/c91 0 2026-03-10T10:19:42.322 INFO:tasks.workunit.client.0.vm02.stdout:5/669: dread d1/db/d11/d13/d28/d37/d3d/f49 [0,4194304] 0 2026-03-10T10:19:42.325 INFO:tasks.workunit.client.1.vm05.stdout:8/416: link d7/d14/d3a/f50 d7/d14/d24/f7a 0 2026-03-10T10:19:42.327 INFO:tasks.workunit.client.0.vm02.stdout:7/499: truncate d1/dc/d16/d28/f73 859281 0 2026-03-10T10:19:42.332 INFO:tasks.workunit.client.1.vm05.stdout:7/529: rename d5/d1d/d20/d2d/f4c to d5/d1d/d20/fa2 0 2026-03-10T10:19:42.336 INFO:tasks.workunit.client.1.vm05.stdout:1/577: write d4/df/d1c/d53/f65 [1117605,36669] 0 2026-03-10T10:19:42.337 INFO:tasks.workunit.client.1.vm05.stdout:6/493: write dd/d36/d7d/f8a [2414078,117490] 0 2026-03-10T10:19:42.337 INFO:tasks.workunit.client.1.vm05.stdout:6/494: stat dd/f29 0 2026-03-10T10:19:42.339 INFO:tasks.workunit.client.0.vm02.stdout:3/514: creat d1/d6/d8e/fa0 x:0 0 0 2026-03-10T10:19:42.346 INFO:tasks.workunit.client.1.vm05.stdout:8/417: unlink d7/d14/d15/f39 0 2026-03-10T10:19:42.347 INFO:tasks.workunit.client.1.vm05.stdout:0/530: dwrite d1/f38 [0,4194304] 0 2026-03-10T10:19:42.356 INFO:tasks.workunit.client.0.vm02.stdout:1/537: dread d4/da/f12 [0,4194304] 0 2026-03-10T10:19:42.361 INFO:tasks.workunit.client.1.vm05.stdout:4/381: rename d1/d31/f1a to d1/d70/f78 0 2026-03-10T10:19:42.373 INFO:tasks.workunit.client.0.vm02.stdout:6/493: mkdir d0/d8/d8c/d60/d6f/da1 0 2026-03-10T10:19:42.373 INFO:tasks.workunit.client.1.vm05.stdout:5/545: rename da to da/db/d26/d5c/d4b/db4/db6 22 2026-03-10T10:19:42.373 INFO:tasks.workunit.client.1.vm05.stdout:7/530: creat d5/d1d/d20/d3b/fa3 x:0 0 0 
2026-03-10T10:19:42.373 INFO:tasks.workunit.client.1.vm05.stdout:2/464: dread db/d28/f3f [0,4194304] 0 2026-03-10T10:19:42.373 INFO:tasks.workunit.client.1.vm05.stdout:2/465: chown db 533 1 2026-03-10T10:19:42.373 INFO:tasks.workunit.client.1.vm05.stdout:9/461: symlink d0/d1/d13/de/l9c 0 2026-03-10T10:19:42.373 INFO:tasks.workunit.client.1.vm05.stdout:3/531: symlink dd/lbc 0 2026-03-10T10:19:42.378 INFO:tasks.workunit.client.0.vm02.stdout:4/659: creat d1/d41/fd6 x:0 0 0 2026-03-10T10:19:42.381 INFO:tasks.workunit.client.0.vm02.stdout:1/538: dread d4/fe [0,4194304] 0 2026-03-10T10:19:42.383 INFO:tasks.workunit.client.1.vm05.stdout:7/531: sync 2026-03-10T10:19:42.385 INFO:tasks.workunit.client.1.vm05.stdout:7/532: sync 2026-03-10T10:19:42.390 INFO:tasks.workunit.client.0.vm02.stdout:0/551: truncate d9/f6c 110508 0 2026-03-10T10:19:42.390 INFO:tasks.workunit.client.0.vm02.stdout:2/515: write d0/d1a/f33 [5026853,58133] 0 2026-03-10T10:19:42.398 INFO:tasks.workunit.client.0.vm02.stdout:6/494: read d0/d8/d8c/d60/d6f/f7c [1506145,56287] 0 2026-03-10T10:19:42.404 INFO:tasks.workunit.client.1.vm05.stdout:9/462: write d0/df/d11/f52 [26245,14378] 0 2026-03-10T10:19:42.406 INFO:tasks.workunit.client.0.vm02.stdout:9/491: getdents da/d3c/d4c/d38/d82/d8c 0 2026-03-10T10:19:42.408 INFO:tasks.workunit.client.1.vm05.stdout:0/531: mknod d1/cb6 0 2026-03-10T10:19:42.414 INFO:tasks.workunit.client.1.vm05.stdout:6/495: truncate dd/d36/d3f/f22 433634 0 2026-03-10T10:19:42.421 INFO:tasks.workunit.client.1.vm05.stdout:8/418: truncate d7/d2f/f4b 4049099 0 2026-03-10T10:19:42.423 INFO:tasks.workunit.client.1.vm05.stdout:8/419: sync 2026-03-10T10:19:42.429 INFO:tasks.workunit.client.0.vm02.stdout:1/539: dwrite d4/da/d27/f6a [0,4194304] 0 2026-03-10T10:19:42.429 INFO:tasks.workunit.client.1.vm05.stdout:4/382: dwrite d1/d31/dc/d40/f67 [4194304,4194304] 0 2026-03-10T10:19:42.441 INFO:tasks.workunit.client.1.vm05.stdout:5/546: creat da/db/d26/d35/db3/fb7 x:0 0 0 2026-03-10T10:19:42.442 
INFO:tasks.workunit.client.0.vm02.stdout:8/503: creat d1/d1c/d23/d25/f98 x:0 0 0 2026-03-10T10:19:42.448 INFO:tasks.workunit.client.0.vm02.stdout:6/495: creat d0/d8/d8c/d60/d6f/fa2 x:0 0 0 2026-03-10T10:19:42.450 INFO:tasks.workunit.client.0.vm02.stdout:6/496: chown d0/d8/d29/d2f/l65 4 1 2026-03-10T10:19:42.450 INFO:tasks.workunit.client.0.vm02.stdout:6/497: write d0/f20 [937016,92563] 0 2026-03-10T10:19:42.452 INFO:tasks.workunit.client.0.vm02.stdout:5/670: creat d1/db/d11/d84/d40/d4f/d5f/fe5 x:0 0 0 2026-03-10T10:19:42.452 INFO:tasks.workunit.client.0.vm02.stdout:5/671: chown d1/db/d11/d16/d79 12872898 1 2026-03-10T10:19:42.460 INFO:tasks.workunit.client.1.vm05.stdout:9/463: rmdir d0/df/d74 39 2026-03-10T10:19:42.461 INFO:tasks.workunit.client.1.vm05.stdout:9/464: write d0/f28 [985553,50015] 0 2026-03-10T10:19:42.461 INFO:tasks.workunit.client.1.vm05.stdout:9/465: chown d0/d1/d13/c5d 6472314 1 2026-03-10T10:19:42.464 INFO:tasks.workunit.client.0.vm02.stdout:7/500: link d1/l1c d1/d1b/l97 0 2026-03-10T10:19:42.465 INFO:tasks.workunit.client.0.vm02.stdout:7/501: chown d1/dc/f69 27352 1 2026-03-10T10:19:42.465 INFO:tasks.workunit.client.1.vm05.stdout:3/532: mkdir dd/d15/d24/d2c/d6d/da7/dbb/dbd 0 2026-03-10T10:19:42.470 INFO:tasks.workunit.client.0.vm02.stdout:0/552: dwrite d9/d34/d3d/d65/f6d [0,4194304] 0 2026-03-10T10:19:42.470 INFO:tasks.workunit.client.0.vm02.stdout:0/553: read d9/f28 [3431440,70648] 0 2026-03-10T10:19:42.471 INFO:tasks.workunit.client.0.vm02.stdout:0/554: write d9/d18/d1a/d22/d24/f26 [1125317,105778] 0 2026-03-10T10:19:42.478 INFO:tasks.workunit.client.1.vm05.stdout:0/532: mkdir d1/d2/d9/d31/d12/d41/db7 0 2026-03-10T10:19:42.478 INFO:tasks.workunit.client.1.vm05.stdout:0/533: read d1/d2/d9/d31/d54/f27 [86504,93917] 0 2026-03-10T10:19:42.479 INFO:tasks.workunit.client.1.vm05.stdout:0/534: chown d1/d2/d39/d3d 1893703752 1 2026-03-10T10:19:42.479 INFO:tasks.workunit.client.0.vm02.stdout:3/515: creat d1/fa1 x:0 0 0 2026-03-10T10:19:42.481 
INFO:tasks.workunit.client.1.vm05.stdout:6/496: creat dd/d36/d3f/d12/d44/d30/f9e x:0 0 0 2026-03-10T10:19:42.483 INFO:tasks.workunit.client.1.vm05.stdout:7/533: getdents d5/d26/d9c 0 2026-03-10T10:19:42.486 INFO:tasks.workunit.client.0.vm02.stdout:2/516: rename d0/d10/c56 to d0/d10/d69/ca8 0 2026-03-10T10:19:42.498 INFO:tasks.workunit.client.1.vm05.stdout:7/534: sync 2026-03-10T10:19:42.501 INFO:tasks.workunit.client.1.vm05.stdout:2/466: truncate db/d1c/d40/d62/f83 945463 0 2026-03-10T10:19:42.501 INFO:tasks.workunit.client.1.vm05.stdout:2/467: fsync db/f24 0 2026-03-10T10:19:42.509 INFO:tasks.workunit.client.1.vm05.stdout:1/578: getdents d4/d79 0 2026-03-10T10:19:42.522 INFO:tasks.workunit.client.0.vm02.stdout:1/540: dwrite d4/d2c/f54 [0,4194304] 0 2026-03-10T10:19:42.522 INFO:tasks.workunit.client.1.vm05.stdout:8/420: dwrite d7/d14/d3a/f50 [4194304,4194304] 0 2026-03-10T10:19:42.532 INFO:tasks.workunit.client.1.vm05.stdout:4/383: write d1/d31/dc/f21 [217610,77806] 0 2026-03-10T10:19:42.537 INFO:tasks.workunit.client.0.vm02.stdout:5/672: creat d1/db/d11/d13/d28/da7/dd9/fe6 x:0 0 0 2026-03-10T10:19:42.538 INFO:tasks.workunit.client.0.vm02.stdout:5/673: write d1/db/d11/d84/d40/d4f/d5f/fe5 [779448,51513] 0 2026-03-10T10:19:42.549 INFO:tasks.workunit.client.0.vm02.stdout:4/660: creat d1/d32/da3/fd7 x:0 0 0 2026-03-10T10:19:42.553 INFO:tasks.workunit.client.0.vm02.stdout:7/502: mkdir d1/d1b/d49/d98 0 2026-03-10T10:19:42.562 INFO:tasks.workunit.client.0.vm02.stdout:0/555: fdatasync d9/d34/d3d/f58 0 2026-03-10T10:19:42.568 INFO:tasks.workunit.client.1.vm05.stdout:2/468: creat db/d61/f92 x:0 0 0 2026-03-10T10:19:42.569 INFO:tasks.workunit.client.1.vm05.stdout:2/469: dread - db/d61/f92 zero size 2026-03-10T10:19:42.571 INFO:tasks.workunit.client.0.vm02.stdout:3/516: mkdir d1/d8/d86/da2 0 2026-03-10T10:19:42.572 INFO:tasks.workunit.client.0.vm02.stdout:2/517: symlink d0/d1a/la9 0 2026-03-10T10:19:42.573 INFO:tasks.workunit.client.1.vm05.stdout:6/497: dwrite dd/d36/f69 
[0,4194304] 0 2026-03-10T10:19:42.575 INFO:tasks.workunit.client.1.vm05.stdout:6/498: dread - dd/d36/d3f/d12/d44/d2a/f84 zero size 2026-03-10T10:19:42.577 INFO:tasks.workunit.client.1.vm05.stdout:1/579: mkdir d4/d3d/d6e/dac 0 2026-03-10T10:19:42.589 INFO:tasks.workunit.client.0.vm02.stdout:8/504: link d1/d1c/d43/f46 d1/d1c/d23/d3e/d83/f99 0 2026-03-10T10:19:42.595 INFO:tasks.workunit.client.0.vm02.stdout:6/498: fdatasync d0/d8/d29/d2f/f61 0 2026-03-10T10:19:42.595 INFO:tasks.workunit.client.1.vm05.stdout:5/547: dwrite da/f10 [8388608,4194304] 0 2026-03-10T10:19:42.600 INFO:tasks.workunit.client.0.vm02.stdout:1/541: creat d4/da/d27/d38/d3c/fa7 x:0 0 0 2026-03-10T10:19:42.605 INFO:tasks.workunit.client.0.vm02.stdout:5/674: rename d1/db/d11/d84/d40/d4f/d5f/d6d/l43 to d1/db/d11/d13/le7 0 2026-03-10T10:19:42.607 INFO:tasks.workunit.client.1.vm05.stdout:8/421: dread d7/d14/d15/f2e [0,4194304] 0 2026-03-10T10:19:42.607 INFO:tasks.workunit.client.1.vm05.stdout:8/422: write d7/d14/d3a/f50 [283605,89105] 0 2026-03-10T10:19:42.614 INFO:tasks.workunit.client.0.vm02.stdout:4/661: truncate d1/d32/f69 1367158 0 2026-03-10T10:19:42.616 INFO:tasks.workunit.client.0.vm02.stdout:7/503: mkdir d1/dc/d99 0 2026-03-10T10:19:42.626 INFO:tasks.workunit.client.0.vm02.stdout:3/517: mknod d1/d6/d8b/ca3 0 2026-03-10T10:19:42.632 INFO:tasks.workunit.client.1.vm05.stdout:6/499: creat dd/d36/d3f/d12/d44/d30/f9f x:0 0 0 2026-03-10T10:19:42.636 INFO:tasks.workunit.client.1.vm05.stdout:1/580: symlink d4/d39/d3e/da0/lad 0 2026-03-10T10:19:42.639 INFO:tasks.workunit.client.0.vm02.stdout:0/556: rename d9/d18/d1a/d22/d24/d80/d49/fa9 to d9/d18/d1a/d22/d24/d8e/d91/fb0 0 2026-03-10T10:19:42.642 INFO:tasks.workunit.client.1.vm05.stdout:5/548: unlink da/db/d26/d35/f31 0 2026-03-10T10:19:42.643 INFO:tasks.workunit.client.0.vm02.stdout:5/675: creat d1/db/d11/d62/fe8 x:0 0 0 2026-03-10T10:19:42.645 INFO:tasks.workunit.client.1.vm05.stdout:4/384: mknod d1/d64/c79 0 2026-03-10T10:19:42.648 
INFO:tasks.workunit.client.1.vm05.stdout:4/385: dread d1/d31/dc/f53 [4194304,4194304] 0 2026-03-10T10:19:42.652 INFO:tasks.workunit.client.0.vm02.stdout:8/505: sync 2026-03-10T10:19:42.659 INFO:tasks.workunit.client.1.vm05.stdout:3/533: rename dd/d15/d1f/d95 to dd/dbe 0 2026-03-10T10:19:42.659 INFO:tasks.workunit.client.0.vm02.stdout:7/504: mkdir d1/dc/d55/d9a 0 2026-03-10T10:19:42.661 INFO:tasks.workunit.client.1.vm05.stdout:8/423: dread - d7/f59 zero size 2026-03-10T10:19:42.662 INFO:tasks.workunit.client.1.vm05.stdout:8/424: chown d7/d14/d24/d76 1 1 2026-03-10T10:19:42.665 INFO:tasks.workunit.client.0.vm02.stdout:7/505: sync 2026-03-10T10:19:42.676 INFO:tasks.workunit.client.1.vm05.stdout:9/466: truncate d0/df/d11/f52 78392 0 2026-03-10T10:19:42.682 INFO:tasks.workunit.client.0.vm02.stdout:1/542: rmdir d4/da/d27/d38/d3c 39 2026-03-10T10:19:42.696 INFO:tasks.workunit.client.0.vm02.stdout:3/518: fsync d1/d8/d21/f2a 0 2026-03-10T10:19:42.702 INFO:tasks.workunit.client.1.vm05.stdout:6/500: read dd/d36/d3f/d12/d44/d2a/d3d/d3e/f64 [147326,97091] 0 2026-03-10T10:19:42.703 INFO:tasks.workunit.client.1.vm05.stdout:2/470: dread db/d1c/d40/d62/f83 [0,4194304] 0 2026-03-10T10:19:42.714 INFO:tasks.workunit.client.0.vm02.stdout:5/676: dwrite d1/db/d11/d13/d28/f35 [0,4194304] 0 2026-03-10T10:19:42.727 INFO:tasks.workunit.client.0.vm02.stdout:4/662: symlink d1/d52/d53/ld8 0 2026-03-10T10:19:42.728 INFO:tasks.workunit.client.1.vm05.stdout:3/534: rmdir dd/d20/d94 39 2026-03-10T10:19:42.729 INFO:tasks.workunit.client.1.vm05.stdout:3/535: chown dd/d15/d1f/l30 1304885 1 2026-03-10T10:19:42.729 INFO:tasks.workunit.client.0.vm02.stdout:8/506: unlink d1/d1c/c11 0 2026-03-10T10:19:42.729 INFO:tasks.workunit.client.1.vm05.stdout:3/536: chown dd/d15/d24/d2c/d3b/f55 0 1 2026-03-10T10:19:42.732 INFO:tasks.workunit.client.1.vm05.stdout:8/425: mknod d7/d2f/d57/c7b 0 2026-03-10T10:19:42.734 INFO:tasks.workunit.client.1.vm05.stdout:7/535: getdents d5/d17 0 2026-03-10T10:19:42.737 
INFO:tasks.workunit.client.1.vm05.stdout:9/467: chown d0/df/c30 90918983 1 2026-03-10T10:19:42.745 INFO:tasks.workunit.client.1.vm05.stdout:1/581: symlink d4/d37/d4e/d82/lae 0 2026-03-10T10:19:42.745 INFO:tasks.workunit.client.1.vm05.stdout:4/386: dwrite d1/d31/fd [0,4194304] 0 2026-03-10T10:19:42.750 INFO:tasks.workunit.client.1.vm05.stdout:4/387: truncate d1/d31/dc/d40/d63/f74 706852 0 2026-03-10T10:19:42.755 INFO:tasks.workunit.client.1.vm05.stdout:1/582: dread d4/d39/d3e/f7d [0,4194304] 0 2026-03-10T10:19:42.765 INFO:tasks.workunit.client.1.vm05.stdout:3/537: mknod dd/d15/d24/d2c/d6d/cbf 0 2026-03-10T10:19:42.770 INFO:tasks.workunit.client.1.vm05.stdout:0/535: rename d1/d2/d9/d31/d13/d2f/d49/f4f to d1/fb8 0 2026-03-10T10:19:42.771 INFO:tasks.workunit.client.1.vm05.stdout:8/426: creat d7/d14/d15/d3b/f7c x:0 0 0 2026-03-10T10:19:42.772 INFO:tasks.workunit.client.1.vm05.stdout:1/583: sync 2026-03-10T10:19:42.772 INFO:tasks.workunit.client.1.vm05.stdout:3/538: mkdir dd/d20/d9e/dc0 0 2026-03-10T10:19:42.776 INFO:tasks.workunit.client.1.vm05.stdout:3/539: dread dd/d20/d56/f7d [0,4194304] 0 2026-03-10T10:19:42.778 INFO:tasks.workunit.client.1.vm05.stdout:9/468: mknod d0/d1/d13/c9d 0 2026-03-10T10:19:42.779 INFO:tasks.workunit.client.1.vm05.stdout:6/501: creat dd/d36/d3f/fa0 x:0 0 0 2026-03-10T10:19:42.782 INFO:tasks.workunit.client.1.vm05.stdout:2/471: dwrite db/d2d/d5e/f71 [0,4194304] 0 2026-03-10T10:19:42.783 INFO:tasks.workunit.client.0.vm02.stdout:6/499: creat d0/fa3 x:0 0 0 2026-03-10T10:19:42.784 INFO:tasks.workunit.client.1.vm05.stdout:1/584: creat d4/d3d/d6e/faf x:0 0 0 2026-03-10T10:19:42.784 INFO:tasks.workunit.client.1.vm05.stdout:2/472: write db/d1c/d40/f50 [253516,87222] 0 2026-03-10T10:19:42.791 INFO:tasks.workunit.client.1.vm05.stdout:7/536: dread d5/d1d/d20/d2d/f30 [0,4194304] 0 2026-03-10T10:19:42.792 INFO:tasks.workunit.client.1.vm05.stdout:4/388: creat d1/d31/f7a x:0 0 0 2026-03-10T10:19:42.792 INFO:tasks.workunit.client.1.vm05.stdout:4/389: 
fdatasync d1/d3/f6c 0 2026-03-10T10:19:42.793 INFO:tasks.workunit.client.0.vm02.stdout:9/492: link da/lc da/d3c/d4c/d38/d4a/l9a 0 2026-03-10T10:19:42.800 INFO:tasks.workunit.client.1.vm05.stdout:8/427: dread d7/f1c [0,4194304] 0 2026-03-10T10:19:42.800 INFO:tasks.workunit.client.0.vm02.stdout:4/663: unlink d1/d32/f7b 0 2026-03-10T10:19:42.800 INFO:tasks.workunit.client.1.vm05.stdout:8/428: chown d7/f59 307 1 2026-03-10T10:19:42.801 INFO:tasks.workunit.client.1.vm05.stdout:8/429: fsync d7/d14/d15/f51 0 2026-03-10T10:19:42.801 INFO:tasks.workunit.client.1.vm05.stdout:8/430: chown d7/d14/d24/d76 27384537 1 2026-03-10T10:19:42.801 INFO:tasks.workunit.client.1.vm05.stdout:8/431: chown d7/d14/d24/f79 4 1 2026-03-10T10:19:42.814 INFO:tasks.workunit.client.1.vm05.stdout:2/473: dread db/d12/f1d [0,4194304] 0 2026-03-10T10:19:42.818 INFO:tasks.workunit.client.1.vm05.stdout:3/540: write dd/d15/f1c [3516833,92982] 0 2026-03-10T10:19:42.819 INFO:tasks.workunit.client.0.vm02.stdout:7/506: write d1/dc/d10/d38/f96 [1052109,60413] 0 2026-03-10T10:19:42.820 INFO:tasks.workunit.client.1.vm05.stdout:1/585: write d4/df/d1c/d92/f9e [798479,130983] 0 2026-03-10T10:19:42.824 INFO:tasks.workunit.client.1.vm05.stdout:1/586: dread d4/f36 [4194304,4194304] 0 2026-03-10T10:19:42.825 INFO:tasks.workunit.client.0.vm02.stdout:3/519: unlink d1/d8/d21/d73/d78/d84/c9c 0 2026-03-10T10:19:42.826 INFO:tasks.workunit.client.0.vm02.stdout:6/500: truncate d0/d8/d8c/f36 4421673 0 2026-03-10T10:19:42.830 INFO:tasks.workunit.client.1.vm05.stdout:4/390: chown d1/d31/dc/l37 12500599 1 2026-03-10T10:19:42.837 INFO:tasks.workunit.client.0.vm02.stdout:1/543: mkdir d4/da/d1a/d47/d88/da8 0 2026-03-10T10:19:42.839 INFO:tasks.workunit.client.0.vm02.stdout:7/507: creat d1/dc/d10/d38/f9b x:0 0 0 2026-03-10T10:19:42.840 INFO:tasks.workunit.client.1.vm05.stdout:2/474: symlink db/d28/d4f/l93 0 2026-03-10T10:19:42.841 INFO:tasks.workunit.client.1.vm05.stdout:2/475: stat db/d28/d4f/d59/f6f 0 2026-03-10T10:19:42.845 
INFO:tasks.workunit.client.0.vm02.stdout:6/501: read d0/d8/d9/f84 [2792425,44066] 0 2026-03-10T10:19:42.849 INFO:tasks.workunit.client.0.vm02.stdout:2/518: rename d0/d1a/d49/d5e/d65/l90 to d0/laa 0 2026-03-10T10:19:42.849 INFO:tasks.workunit.client.0.vm02.stdout:2/519: write d0/d10/f6b [232461,44694] 0 2026-03-10T10:19:42.850 INFO:tasks.workunit.client.0.vm02.stdout:2/520: chown d0/d1a/d49/d5e/f63 78 1 2026-03-10T10:19:42.854 INFO:tasks.workunit.client.1.vm05.stdout:3/541: write dd/d15/d24/f2f [629114,81792] 0 2026-03-10T10:19:42.855 INFO:tasks.workunit.client.1.vm05.stdout:6/502: creat dd/d36/d3f/d12/d44/fa1 x:0 0 0 2026-03-10T10:19:42.856 INFO:tasks.workunit.client.1.vm05.stdout:6/503: write dd/d36/d3f/d12/d58/f5a [2497992,118431] 0 2026-03-10T10:19:42.860 INFO:tasks.workunit.client.0.vm02.stdout:9/493: rmdir da/d3c/d4c/d38/d82 39 2026-03-10T10:19:42.864 INFO:tasks.workunit.client.0.vm02.stdout:9/494: dwrite da/ff [0,4194304] 0 2026-03-10T10:19:42.868 INFO:tasks.workunit.client.1.vm05.stdout:7/537: symlink d5/d1d/d20/d2d/d80/la4 0 2026-03-10T10:19:42.875 INFO:tasks.workunit.client.0.vm02.stdout:8/507: creat d1/d1c/f9a x:0 0 0 2026-03-10T10:19:42.877 INFO:tasks.workunit.client.1.vm05.stdout:5/549: rename da/c6f to da/db/d28/cb8 0 2026-03-10T10:19:42.885 INFO:tasks.workunit.client.1.vm05.stdout:1/587: truncate d4/d3d/f77 717842 0 2026-03-10T10:19:42.890 INFO:tasks.workunit.client.0.vm02.stdout:6/502: unlink d0/d8/c46 0 2026-03-10T10:19:42.890 INFO:tasks.workunit.client.0.vm02.stdout:6/503: chown d0/d8/d29/d6d/c3b 287 1 2026-03-10T10:19:42.894 INFO:tasks.workunit.client.0.vm02.stdout:0/557: rename d9/d34/d3d/d7b/f3a to d9/d18/d1a/d22/d24/d80/d74/d7f/fb1 0 2026-03-10T10:19:42.894 INFO:tasks.workunit.client.0.vm02.stdout:0/558: stat d9/d18/f6a 0 2026-03-10T10:19:42.895 INFO:tasks.workunit.client.0.vm02.stdout:0/559: readlink d9/d18/d1a/d22/d24/l52 0 2026-03-10T10:19:42.897 INFO:tasks.workunit.client.1.vm05.stdout:6/504: creat dd/d36/d3f/d12/d44/d2a/d3d/fa2 x:0 0 0 
2026-03-10T10:19:42.899 INFO:tasks.workunit.client.1.vm05.stdout:7/538: creat d5/d1d/d20/d2d/d68/fa5 x:0 0 0 2026-03-10T10:19:42.900 INFO:tasks.workunit.client.0.vm02.stdout:2/521: rmdir d0/d1a/d49/d5e/d8a 39 2026-03-10T10:19:42.901 INFO:tasks.workunit.client.0.vm02.stdout:2/522: dread - d0/d10/d69/d9f/fa5 zero size 2026-03-10T10:19:42.903 INFO:tasks.workunit.client.1.vm05.stdout:4/391: symlink d1/d3/l7b 0 2026-03-10T10:19:42.906 INFO:tasks.workunit.client.1.vm05.stdout:0/536: rename d1/d2/d5d/la7 to d1/d2/d9/d50/lb9 0 2026-03-10T10:19:42.909 INFO:tasks.workunit.client.1.vm05.stdout:2/476: write db/d28/d4f/d59/f7c [811235,36384] 0 2026-03-10T10:19:42.915 INFO:tasks.workunit.client.0.vm02.stdout:4/664: getdents d1 0 2026-03-10T10:19:42.917 INFO:tasks.workunit.client.0.vm02.stdout:8/508: readlink d1/d1c/d23/d3e/d83/l8f 0 2026-03-10T10:19:42.917 INFO:tasks.workunit.client.0.vm02.stdout:8/509: write d1/d1c/d23/d25/f98 [114940,44495] 0 2026-03-10T10:19:42.918 INFO:tasks.workunit.client.1.vm05.stdout:3/542: dwrite dd/d15/d24/f8a [0,4194304] 0 2026-03-10T10:19:42.919 INFO:tasks.workunit.client.1.vm05.stdout:5/550: dread da/db/f6d [0,4194304] 0 2026-03-10T10:19:42.925 INFO:tasks.workunit.client.0.vm02.stdout:3/520: creat d1/d20/fa4 x:0 0 0 2026-03-10T10:19:42.941 INFO:tasks.workunit.client.0.vm02.stdout:9/495: dwrite da/f15 [0,4194304] 0 2026-03-10T10:19:42.942 INFO:tasks.workunit.client.0.vm02.stdout:9/496: stat da/d3c/d4c/d2c/d34/f81 0 2026-03-10T10:19:42.942 INFO:tasks.workunit.client.0.vm02.stdout:7/508: dwrite d1/dc/d60/f53 [0,4194304] 0 2026-03-10T10:19:42.944 INFO:tasks.workunit.client.0.vm02.stdout:7/509: readlink d1/dc/d16/d28/l82 0 2026-03-10T10:19:42.955 INFO:tasks.workunit.client.1.vm05.stdout:9/469: rename d0/df/f96 to d0/df/d74/f9e 0 2026-03-10T10:19:42.955 INFO:tasks.workunit.client.1.vm05.stdout:9/470: fsync d0/f73 0 2026-03-10T10:19:42.960 INFO:tasks.workunit.client.1.vm05.stdout:9/471: dread d0/df/f99 [0,4194304] 0 2026-03-10T10:19:42.961 
INFO:tasks.workunit.client.1.vm05.stdout:0/537: write d1/d2/d9/f98 [790540,116550] 0 2026-03-10T10:19:42.966 INFO:tasks.workunit.client.1.vm05.stdout:2/477: mkdir db/d28/d4f/d59/d94 0 2026-03-10T10:19:42.976 INFO:tasks.workunit.client.1.vm05.stdout:4/392: write d1/d31/dc/d40/d45/f50 [2912299,60853] 0 2026-03-10T10:19:42.976 INFO:tasks.workunit.client.1.vm05.stdout:4/393: chown d1/d3/f12 162234 1 2026-03-10T10:19:42.976 INFO:tasks.workunit.client.1.vm05.stdout:3/543: creat dd/d15/d24/d8e/fc1 x:0 0 0 2026-03-10T10:19:42.976 INFO:tasks.workunit.client.1.vm05.stdout:5/551: mknod da/db/d26/d5c/cb9 0 2026-03-10T10:19:42.982 INFO:tasks.workunit.client.1.vm05.stdout:6/505: mkdir dd/d36/d3f/d12/d44/d2a/d3d/d48/d8c/da3 0 2026-03-10T10:19:42.987 INFO:tasks.workunit.client.0.vm02.stdout:6/504: mknod d0/ca4 0 2026-03-10T10:19:42.991 INFO:tasks.workunit.client.0.vm02.stdout:5/677: rename d1/c81 to d1/db/d11/d13/dc9/ce9 0 2026-03-10T10:19:42.995 INFO:tasks.workunit.client.1.vm05.stdout:8/432: rename d7/f37 to d7/d14/d24/d3f/f7d 0 2026-03-10T10:19:42.997 INFO:tasks.workunit.client.0.vm02.stdout:4/665: unlink d1/d52/d53/f70 0 2026-03-10T10:19:42.998 INFO:tasks.workunit.client.1.vm05.stdout:8/433: dwrite d7/d14/d24/f42 [0,4194304] 0 2026-03-10T10:19:42.999 INFO:tasks.workunit.client.1.vm05.stdout:0/538: sync 2026-03-10T10:19:43.014 INFO:tasks.workunit.client.0.vm02.stdout:6/505: unlink d0/d8/d9/f14 0 2026-03-10T10:19:43.022 INFO:tasks.workunit.client.0.vm02.stdout:2/523: rename d0/d10/f67 to d0/d8c/fab 0 2026-03-10T10:19:43.037 INFO:tasks.workunit.client.0.vm02.stdout:8/510: rename d1/d1c/d43/d5b to d1/d1c/d43/d5b/d9b 22 2026-03-10T10:19:43.037 INFO:tasks.workunit.client.0.vm02.stdout:5/678: mknod d1/db/d11/d84/d95/cea 0 2026-03-10T10:19:43.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: Active manager daemon vm05.coparq restarted 2026-03-10T10:19:43.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: 
Activating manager daemon vm05.coparq 2026-03-10T10:19:43.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/crt"}]: dispatch 2026-03-10T10:19:43.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T10:19:43.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/key"}]: dispatch 2026-03-10T10:19:43.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T10:19:43.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: osdmap e41: 6 total, 6 up, 6 in 2026-03-10T10:19:43.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: mgrmap e21: vm05.coparq(active, starting, since 0.0438867s) 2026-03-10T10:19:43.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:19:43.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 
192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.zymcrs"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.stcvsz"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mgr metadata", "who": "vm05.coparq", "id": "vm05.coparq"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 
192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:42 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T10:19:43.038 INFO:tasks.workunit.client.1.vm05.stdout:6/506: fsync dd/d36/d3f/d12/f35 0 2026-03-10T10:19:43.038 INFO:tasks.workunit.client.1.vm05.stdout:7/539: link d5/d1d/d20/d35/f47 d5/d1d/d29/d3e/d8c/d96/fa6 0 2026-03-10T10:19:43.038 INFO:tasks.workunit.client.1.vm05.stdout:7/540: read - d5/dd/f2f zero size 2026-03-10T10:19:43.038 INFO:tasks.workunit.client.1.vm05.stdout:6/507: dwrite dd/d36/d3f/d12/d44/d2a/f84 [0,4194304] 0 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: Active manager daemon vm05.coparq restarted 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: Activating manager daemon vm05.coparq 
2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/crt"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/key"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: osdmap e41: 6 total, 6 up, 6 in 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: mgrmap e21: vm05.coparq(active, starting, since 0.0438867s) 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 
cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.zymcrs"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.stcvsz"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mgr metadata", "who": "vm05.coparq", "id": "vm05.coparq"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd 
metadata", "id": 3}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T10:19:43.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:42 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T10:19:43.038 INFO:tasks.workunit.client.0.vm02.stdout:1/544: link d4/da/d27/c82 d4/da/d1a/ca9 0 2026-03-10T10:19:43.039 INFO:tasks.workunit.client.0.vm02.stdout:6/506: dread d0/d8/d29/d2f/f38 [0,4194304] 0 2026-03-10T10:19:43.045 INFO:tasks.workunit.client.0.vm02.stdout:0/560: write d9/f28 [809156,9356] 0 2026-03-10T10:19:43.047 INFO:tasks.workunit.client.0.vm02.stdout:3/521: write d1/f90 [1101626,40350] 0 2026-03-10T10:19:43.049 INFO:tasks.workunit.client.1.vm05.stdout:0/539: dread - d1/d2/d9/d31/d54/f6b zero size 2026-03-10T10:19:43.050 INFO:tasks.workunit.client.0.vm02.stdout:9/497: rename da/d3c/d4c/d2c/d34/f36 to da/d3c/d4c/d2c/d96/f9b 0 2026-03-10T10:19:43.052 INFO:tasks.workunit.client.0.vm02.stdout:9/498: readlink da/d3c/d4c/d2c/l7e 0 2026-03-10T10:19:43.052 
INFO:tasks.workunit.client.0.vm02.stdout:9/499: readlink da/d3c/d4c/d2c/d34/d35/l69 0 2026-03-10T10:19:43.052 INFO:tasks.workunit.client.0.vm02.stdout:9/500: chown da/d3c/d53/f6a 239273 1 2026-03-10T10:19:43.059 INFO:tasks.workunit.client.1.vm05.stdout:2/478: truncate db/d28/d4f/d59/f8d 946675 0 2026-03-10T10:19:43.060 INFO:tasks.workunit.client.0.vm02.stdout:1/545: mknod d4/da/d1a/d47/caa 0 2026-03-10T10:19:43.062 INFO:tasks.workunit.client.0.vm02.stdout:7/510: getdents d1/dc/d60 0 2026-03-10T10:19:43.073 INFO:tasks.workunit.client.1.vm05.stdout:4/394: write d1/f5d [567291,95433] 0 2026-03-10T10:19:43.076 INFO:tasks.workunit.client.1.vm05.stdout:5/552: dwrite da/db/d26/d5c/f2c [0,4194304] 0 2026-03-10T10:19:43.077 INFO:tasks.workunit.client.0.vm02.stdout:4/666: dwrite d1/d10/f71 [0,4194304] 0 2026-03-10T10:19:43.085 INFO:tasks.workunit.client.0.vm02.stdout:3/522: rmdir d1/d6/d8e 39 2026-03-10T10:19:43.091 INFO:tasks.workunit.client.0.vm02.stdout:2/524: rename d0/d1a/d49/d5e/l74 to d0/d1a/d24/lac 0 2026-03-10T10:19:43.100 INFO:tasks.workunit.client.1.vm05.stdout:9/472: truncate d0/d1/f6d 2115765 0 2026-03-10T10:19:43.100 INFO:tasks.workunit.client.0.vm02.stdout:5/679: dwrite d1/db/d11/d84/d40/d4f/f57 [4194304,4194304] 0 2026-03-10T10:19:43.100 INFO:tasks.workunit.client.0.vm02.stdout:5/680: write d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fdf [2800,68305] 0 2026-03-10T10:19:43.104 INFO:tasks.workunit.client.0.vm02.stdout:8/511: unlink d1/d1c/d43/d5b/c93 0 2026-03-10T10:19:43.110 INFO:tasks.workunit.client.0.vm02.stdout:1/546: creat d4/d2c/d53/da6/fab x:0 0 0 2026-03-10T10:19:43.115 INFO:tasks.workunit.client.0.vm02.stdout:7/511: mkdir d1/dc/d55/d9c 0 2026-03-10T10:19:43.115 INFO:tasks.workunit.client.0.vm02.stdout:0/561: creat d9/d18/d1a/d46/d5d/da7/fb2 x:0 0 0 2026-03-10T10:19:43.116 INFO:tasks.workunit.client.0.vm02.stdout:0/562: write d9/d18/d1a/d46/d5d/da7/fb2 [684136,123906] 0 2026-03-10T10:19:43.129 INFO:tasks.workunit.client.0.vm02.stdout:6/507: rename d0/d8/d8c/d60 
to d0/d8/d29/d2f/d4b/da5 0 2026-03-10T10:19:43.138 INFO:tasks.workunit.client.0.vm02.stdout:9/501: mknod da/d3c/d4c/d38/d82/d8c/c9c 0 2026-03-10T10:19:43.143 INFO:tasks.workunit.client.0.vm02.stdout:0/563: read d9/d34/d3d/f69 [196037,32422] 0 2026-03-10T10:19:43.144 INFO:tasks.workunit.client.0.vm02.stdout:3/523: symlink d1/d8/d86/da2/la5 0 2026-03-10T10:19:43.149 INFO:tasks.workunit.client.0.vm02.stdout:4/667: rename d1/d32/d3e/fae to d1/d41/d5e/d78/d1a/d49/fd9 0 2026-03-10T10:19:43.151 INFO:tasks.workunit.client.0.vm02.stdout:6/508: chown d0/d8/f9b 331637 1 2026-03-10T10:19:43.155 INFO:tasks.workunit.client.1.vm05.stdout:1/588: rename d4/d3d/l81 to d4/dd/lb0 0 2026-03-10T10:19:43.162 INFO:tasks.workunit.client.1.vm05.stdout:0/540: creat d1/d2/d9/d31/d13/d17/da1/fba x:0 0 0 2026-03-10T10:19:43.163 INFO:tasks.workunit.client.0.vm02.stdout:7/512: creat d1/d1b/d8e/f9d x:0 0 0 2026-03-10T10:19:43.163 INFO:tasks.workunit.client.0.vm02.stdout:7/513: chown d1/dc/d55/f8d 35340738 1 2026-03-10T10:19:43.164 INFO:tasks.workunit.client.0.vm02.stdout:3/524: read d1/d6/f1b [1358401,32641] 0 2026-03-10T10:19:43.167 INFO:tasks.workunit.client.0.vm02.stdout:8/512: sync 2026-03-10T10:19:43.167 INFO:tasks.workunit.client.0.vm02.stdout:8/513: chown d1/d1c/d43/d6a/d7c 20067 1 2026-03-10T10:19:43.169 INFO:tasks.workunit.client.1.vm05.stdout:2/479: fdatasync db/d1c/d40/f73 0 2026-03-10T10:19:43.172 INFO:tasks.workunit.client.1.vm05.stdout:1/589: dread d4/d37/f89 [0,4194304] 0 2026-03-10T10:19:43.173 INFO:tasks.workunit.client.0.vm02.stdout:0/564: dread d9/f6c [0,4194304] 0 2026-03-10T10:19:43.173 INFO:tasks.workunit.client.0.vm02.stdout:0/565: fdatasync d9/d34/d3d/d8d/f95 0 2026-03-10T10:19:43.181 INFO:tasks.workunit.client.0.vm02.stdout:5/681: dwrite d1/db/d11/d13/d28/f31 [0,4194304] 0 2026-03-10T10:19:43.192 INFO:tasks.workunit.client.1.vm05.stdout:6/508: dwrite dd/d36/d3f/d12/d58/f65 [0,4194304] 0 2026-03-10T10:19:43.196 INFO:tasks.workunit.client.1.vm05.stdout:6/509: dwrite 
dd/d36/d3f/d12/f56 [0,4194304] 0 2026-03-10T10:19:43.199 INFO:tasks.workunit.client.0.vm02.stdout:4/668: dread d1/d52/fbd [0,4194304] 0 2026-03-10T10:19:43.200 INFO:tasks.workunit.client.1.vm05.stdout:6/510: write dd/d36/d3f/f61 [4327523,63897] 0 2026-03-10T10:19:43.201 INFO:tasks.workunit.client.0.vm02.stdout:6/509: unlink d0/d8/d9/d7a/f7d 0 2026-03-10T10:19:43.202 INFO:tasks.workunit.client.0.vm02.stdout:2/525: creat d0/d1a/d49/d5e/fad x:0 0 0 2026-03-10T10:19:43.204 INFO:tasks.workunit.client.1.vm05.stdout:7/541: mkdir d5/d1d/d20/d91/da7 0 2026-03-10T10:19:43.206 INFO:tasks.workunit.client.0.vm02.stdout:2/526: dwrite d0/d1a/f33 [0,4194304] 0 2026-03-10T10:19:43.214 INFO:tasks.workunit.client.0.vm02.stdout:9/502: mkdir da/d9d 0 2026-03-10T10:19:43.218 INFO:tasks.workunit.client.1.vm05.stdout:3/544: rename dd/d15/d24/d74/l85 to dd/d15/d24/d2c/d3b/lc2 0 2026-03-10T10:19:43.221 INFO:tasks.workunit.client.0.vm02.stdout:1/547: creat d4/d2c/fac x:0 0 0 2026-03-10T10:19:43.228 INFO:tasks.workunit.client.1.vm05.stdout:8/434: rmdir d7/d14/d24/d76 0 2026-03-10T10:19:43.229 INFO:tasks.workunit.client.1.vm05.stdout:8/435: write d7/d14/d24/d3f/d6a/f6c [213834,30693] 0 2026-03-10T10:19:43.229 INFO:tasks.workunit.client.1.vm05.stdout:8/436: dread - d7/f78 zero size 2026-03-10T10:19:43.231 INFO:tasks.workunit.client.1.vm05.stdout:5/553: dread da/db/d26/d35/f7d [0,4194304] 0 2026-03-10T10:19:43.231 INFO:tasks.workunit.client.0.vm02.stdout:8/514: rmdir d1/d1c/d43/d5b 39 2026-03-10T10:19:43.234 INFO:tasks.workunit.client.0.vm02.stdout:8/515: dread d1/d1c/d23/f3b [0,4194304] 0 2026-03-10T10:19:43.238 INFO:tasks.workunit.client.0.vm02.stdout:0/566: rmdir d9/d18/d1a/d22/d24/d8e 39 2026-03-10T10:19:43.247 INFO:tasks.workunit.client.1.vm05.stdout:1/590: fsync d4/d37/f90 0 2026-03-10T10:19:43.247 INFO:tasks.workunit.client.1.vm05.stdout:7/542: dread d5/dd/f12 [0,4194304] 0 2026-03-10T10:19:43.247 INFO:tasks.workunit.client.1.vm05.stdout:1/591: write d4/df/d1c/f9c [4942372,103737] 0 
2026-03-10T10:19:43.247 INFO:tasks.workunit.client.1.vm05.stdout:1/592: readlink d4/df/d1c/l59 0 2026-03-10T10:19:43.247 INFO:tasks.workunit.client.1.vm05.stdout:4/395: symlink d1/d3/l7c 0 2026-03-10T10:19:43.247 INFO:tasks.workunit.client.1.vm05.stdout:4/396: fdatasync d1/d31/dc/d40/f67 0 2026-03-10T10:19:43.247 INFO:tasks.workunit.client.0.vm02.stdout:5/682: dread d1/db/d11/d62/d67/fa6 [0,4194304] 0 2026-03-10T10:19:43.253 INFO:tasks.workunit.client.1.vm05.stdout:6/511: mknod dd/ca4 0 2026-03-10T10:19:43.254 INFO:tasks.workunit.client.0.vm02.stdout:2/527: dread - d0/d1a/d49/d5e/d65/f75 zero size 2026-03-10T10:19:43.259 INFO:tasks.workunit.client.1.vm05.stdout:5/554: sync 2026-03-10T10:19:43.261 INFO:tasks.workunit.client.1.vm05.stdout:3/545: mkdir dd/d15/d24/d8e/dc3 0 2026-03-10T10:19:43.263 INFO:tasks.workunit.client.0.vm02.stdout:3/525: creat d1/d6/d8e/fa6 x:0 0 0 2026-03-10T10:19:43.263 INFO:tasks.workunit.client.0.vm02.stdout:3/526: read - d1/f81 zero size 2026-03-10T10:19:43.265 INFO:tasks.workunit.client.1.vm05.stdout:8/437: write d7/f1c [1969442,35832] 0 2026-03-10T10:19:43.267 INFO:tasks.workunit.client.1.vm05.stdout:1/593: dread d4/df/d1c/d92/f9e [0,4194304] 0 2026-03-10T10:19:43.272 INFO:tasks.workunit.client.0.vm02.stdout:0/567: symlink d9/d34/d3d/d65/d89/lb3 0 2026-03-10T10:19:43.272 INFO:tasks.workunit.client.1.vm05.stdout:0/541: symlink d1/d2/d9/d31/d13/da2/dab/lbb 0 2026-03-10T10:19:43.275 INFO:tasks.workunit.client.0.vm02.stdout:0/568: dwrite d9/d18/d1a/f7e [0,4194304] 0 2026-03-10T10:19:43.279 INFO:tasks.workunit.client.1.vm05.stdout:2/480: truncate db/d12/f1a 944150 0 2026-03-10T10:19:43.280 INFO:tasks.workunit.client.0.vm02.stdout:5/683: truncate d1/db/d11/d62/f65 797532 0 2026-03-10T10:19:43.285 INFO:tasks.workunit.client.0.vm02.stdout:3/527: sync 2026-03-10T10:19:43.285 INFO:tasks.workunit.client.1.vm05.stdout:2/481: sync 2026-03-10T10:19:43.287 INFO:tasks.workunit.client.1.vm05.stdout:7/543: mknod d5/d17/d66/ca8 0 2026-03-10T10:19:43.287 
INFO:tasks.workunit.client.1.vm05.stdout:7/544: readlink d5/d17/l43 0 2026-03-10T10:19:43.288 INFO:tasks.workunit.client.1.vm05.stdout:7/545: readlink d5/d1d/d29/l2e 0 2026-03-10T10:19:43.288 INFO:tasks.workunit.client.1.vm05.stdout:7/546: fdatasync d5/d1d/d20/d2d/d68/f98 0 2026-03-10T10:19:43.294 INFO:tasks.workunit.client.0.vm02.stdout:7/514: write d1/dc/f3 [140812,8996] 0 2026-03-10T10:19:43.308 INFO:tasks.workunit.client.1.vm05.stdout:9/473: write d0/df/d11/f52 [518164,6356] 0 2026-03-10T10:19:43.309 INFO:tasks.workunit.client.1.vm05.stdout:3/546: creat dd/d20/d56/d5e/dab/fc4 x:0 0 0 2026-03-10T10:19:43.309 INFO:tasks.workunit.client.1.vm05.stdout:9/474: write d0/f73 [2271409,101034] 0 2026-03-10T10:19:43.315 INFO:tasks.workunit.client.0.vm02.stdout:6/510: creat d0/d8/d29/d2f/d4b/da5/fa6 x:0 0 0 2026-03-10T10:19:43.318 INFO:tasks.workunit.client.0.vm02.stdout:6/511: dwrite d0/d8/f8f [0,4194304] 0 2026-03-10T10:19:43.324 INFO:tasks.workunit.client.0.vm02.stdout:6/512: sync 2026-03-10T10:19:43.329 INFO:tasks.workunit.client.1.vm05.stdout:7/547: truncate d5/d1d/d29/d3e/d8c/f81 778455 0 2026-03-10T10:19:43.331 INFO:tasks.workunit.client.0.vm02.stdout:9/503: creat da/d3c/d4c/d38/f9e x:0 0 0 2026-03-10T10:19:43.335 INFO:tasks.workunit.client.0.vm02.stdout:1/548: creat d4/da/d27/d38/fad x:0 0 0 2026-03-10T10:19:43.341 INFO:tasks.workunit.client.0.vm02.stdout:4/669: getdents d1/d41/d5e/d78/d37 0 2026-03-10T10:19:43.347 INFO:tasks.workunit.client.0.vm02.stdout:3/528: symlink d1/d20/la7 0 2026-03-10T10:19:43.347 INFO:tasks.workunit.client.0.vm02.stdout:3/529: fsync d1/fa1 0 2026-03-10T10:19:43.347 INFO:tasks.workunit.client.0.vm02.stdout:3/530: chown d1/d58 10348 1 2026-03-10T10:19:43.349 INFO:tasks.workunit.client.1.vm05.stdout:2/482: mkdir db/d28/d4f/d59/d94/d95 0 2026-03-10T10:19:43.350 INFO:tasks.workunit.client.0.vm02.stdout:6/513: creat d0/d87/fa7 x:0 0 0 2026-03-10T10:19:43.353 INFO:tasks.workunit.client.1.vm05.stdout:7/548: chown d5/d1d/d29/f3a 67406701 1 
2026-03-10T10:19:43.354 INFO:tasks.workunit.client.1.vm05.stdout:7/549: readlink d5/d1d/d20/d35/l4a 0 2026-03-10T10:19:43.355 INFO:tasks.workunit.client.1.vm05.stdout:4/397: creat d1/d31/dc/d40/f7d x:0 0 0 2026-03-10T10:19:43.355 INFO:tasks.workunit.client.0.vm02.stdout:7/515: symlink d1/d1b/d49/d98/l9e 0 2026-03-10T10:19:43.358 INFO:tasks.workunit.client.0.vm02.stdout:1/549: fsync d4/f8 0 2026-03-10T10:19:43.358 INFO:tasks.workunit.client.0.vm02.stdout:1/550: chown d4/da/f9d 1 1 2026-03-10T10:19:43.359 INFO:tasks.workunit.client.0.vm02.stdout:1/551: readlink d4/l8d 0 2026-03-10T10:19:43.365 INFO:tasks.workunit.client.0.vm02.stdout:8/516: write d1/f65 [4035537,68547] 0 2026-03-10T10:19:43.366 INFO:tasks.workunit.client.1.vm05.stdout:8/438: rename d7/f2b to d7/d2f/f7e 0 2026-03-10T10:19:43.368 INFO:tasks.workunit.client.0.vm02.stdout:2/528: dwrite d0/f70 [0,4194304] 0 2026-03-10T10:19:43.369 INFO:tasks.workunit.client.0.vm02.stdout:2/529: write d0/d1a/d49/f50 [1840559,58067] 0 2026-03-10T10:19:43.369 INFO:tasks.workunit.client.0.vm02.stdout:2/530: readlink d0/d10/d69/l6f 0 2026-03-10T10:19:43.369 INFO:tasks.workunit.client.0.vm02.stdout:2/531: chown d0/d1a/d49/d5e/d65 3635 1 2026-03-10T10:19:43.377 INFO:tasks.workunit.client.0.vm02.stdout:4/670: mkdir d1/d52/d53/dda 0 2026-03-10T10:19:43.385 INFO:tasks.workunit.client.0.vm02.stdout:3/531: rename d1/d20/l30 to d1/d8/d21/la8 0 2026-03-10T10:19:43.389 INFO:tasks.workunit.client.0.vm02.stdout:0/569: dwrite d9/d18/d1a/d22/d24/d80/d74/f62 [0,4194304] 0 2026-03-10T10:19:43.412 INFO:tasks.workunit.client.0.vm02.stdout:7/516: mknod d1/dc/c9f 0 2026-03-10T10:19:43.412 INFO:tasks.workunit.client.0.vm02.stdout:7/517: fdatasync d1/dc/f26 0 2026-03-10T10:19:43.419 INFO:tasks.workunit.client.1.vm05.stdout:9/475: rename d0/d1/d16/l47 to d0/d1/d13/d26/l9f 0 2026-03-10T10:19:43.420 INFO:tasks.workunit.client.1.vm05.stdout:3/547: rename dd/d15/d24/d74/d88 to dd/d15/d24/d74/d88/dc5 22 2026-03-10T10:19:43.420 
INFO:tasks.workunit.client.0.vm02.stdout:1/552: creat d4/da/d1a/d22/fae x:0 0 0 2026-03-10T10:19:43.422 INFO:tasks.workunit.client.1.vm05.stdout:4/398: dread d1/d31/dc/f25 [0,4194304] 0 2026-03-10T10:19:43.423 INFO:tasks.workunit.client.1.vm05.stdout:8/439: rmdir d7/d14/d3a/d49 39 2026-03-10T10:19:43.423 INFO:tasks.workunit.client.0.vm02.stdout:7/518: sync 2026-03-10T10:19:43.424 INFO:tasks.workunit.client.0.vm02.stdout:7/519: dread - d1/d1b/d8f/f8c zero size 2026-03-10T10:19:43.426 INFO:tasks.workunit.client.1.vm05.stdout:5/555: write da/db/f29 [4491245,102262] 0 2026-03-10T10:19:43.427 INFO:tasks.workunit.client.1.vm05.stdout:0/542: write d1/d2/d39/d3d/f72 [2355439,99379] 0 2026-03-10T10:19:43.433 INFO:tasks.workunit.client.0.vm02.stdout:5/684: getdents d1/db/d11/d16 0 2026-03-10T10:19:43.434 INFO:tasks.workunit.client.1.vm05.stdout:5/556: sync 2026-03-10T10:19:43.435 INFO:tasks.workunit.client.1.vm05.stdout:5/557: stat da/db/d28/d32 0 2026-03-10T10:19:43.435 INFO:tasks.workunit.client.1.vm05.stdout:2/483: dwrite db/d2d/f52 [0,4194304] 0 2026-03-10T10:19:43.437 INFO:tasks.workunit.client.1.vm05.stdout:1/594: getdents d4/d37/d4e 0 2026-03-10T10:19:43.438 INFO:tasks.workunit.client.1.vm05.stdout:2/484: sync 2026-03-10T10:19:43.450 INFO:tasks.workunit.client.0.vm02.stdout:2/532: dread - d0/f88 zero size 2026-03-10T10:19:43.452 INFO:tasks.workunit.client.1.vm05.stdout:7/550: rename d5/d1d/d20/d2d/f55 to d5/dd/fa9 0 2026-03-10T10:19:43.454 INFO:tasks.workunit.client.0.vm02.stdout:4/671: symlink d1/d41/d5e/d78/d7f/ldb 0 2026-03-10T10:19:43.458 INFO:tasks.workunit.client.0.vm02.stdout:8/517: rename d1/f12 to d1/d1c/d43/d6a/f9c 0 2026-03-10T10:19:43.461 INFO:tasks.workunit.client.0.vm02.stdout:8/518: dwrite d1/d1c/d43/d6a/f87 [0,4194304] 0 2026-03-10T10:19:43.463 INFO:tasks.workunit.client.0.vm02.stdout:8/519: write d1/d1c/f14 [3284494,9017] 0 2026-03-10T10:19:43.464 INFO:tasks.workunit.client.0.vm02.stdout:8/520: chown d1/d1c/d23/d3e/d83 1918718477 1 
2026-03-10T10:19:43.468 INFO:tasks.workunit.client.0.vm02.stdout:8/521: dread d1/d1c/d24/f6b [0,4194304] 0 2026-03-10T10:19:43.478 INFO:tasks.workunit.client.0.vm02.stdout:4/672: sync 2026-03-10T10:19:43.494 INFO:tasks.workunit.client.0.vm02.stdout:6/514: mkdir d0/d8/d29/d2f/d4b/da5/d6f/da1/da8 0 2026-03-10T10:19:43.511 INFO:tasks.workunit.client.1.vm05.stdout:0/543: dread - d1/d2/d9/d31/d13/d15/f52 zero size 2026-03-10T10:19:43.511 INFO:tasks.workunit.client.0.vm02.stdout:9/504: link da/d3c/d53/l91 da/d3c/d4c/d56/l9f 0 2026-03-10T10:19:43.511 INFO:tasks.workunit.client.0.vm02.stdout:1/553: stat d4/da/d1a/c16 0 2026-03-10T10:19:43.511 INFO:tasks.workunit.client.1.vm05.stdout:1/595: mkdir d4/d39/d3e/db1 0 2026-03-10T10:19:43.512 INFO:tasks.workunit.client.0.vm02.stdout:9/505: write da/d3c/d53/f6a [2498946,124681] 0 2026-03-10T10:19:43.512 INFO:tasks.workunit.client.1.vm05.stdout:1/596: chown d4/d39/d3e/da0/fa1 68 1 2026-03-10T10:19:43.513 INFO:tasks.workunit.client.1.vm05.stdout:1/597: chown d4/df/d1c/d53/f98 24 1 2026-03-10T10:19:43.542 INFO:tasks.workunit.client.1.vm05.stdout:9/476: rename d0/d1/d13/d55/f68 to d0/df/d74/fa0 0 2026-03-10T10:19:43.544 INFO:tasks.workunit.client.0.vm02.stdout:3/532: mknod d1/d6/ca9 0 2026-03-10T10:19:43.545 INFO:tasks.workunit.client.0.vm02.stdout:3/533: truncate d1/d6/d8e/f96 431406 0 2026-03-10T10:19:43.546 INFO:tasks.workunit.client.0.vm02.stdout:5/685: write d1/db/d11/d62/fbf [365252,77300] 0 2026-03-10T10:19:43.548 INFO:tasks.workunit.client.1.vm05.stdout:3/548: symlink dd/d15/lc6 0 2026-03-10T10:19:43.551 INFO:tasks.workunit.client.1.vm05.stdout:4/399: truncate d1/d31/dc/d40/d45/f66 138473 0 2026-03-10T10:19:43.559 INFO:tasks.workunit.client.0.vm02.stdout:8/522: write d1/d1c/d24/d35/f6e [4393444,59053] 0 2026-03-10T10:19:43.563 INFO:tasks.workunit.client.0.vm02.stdout:4/673: unlink d1/d52/d53/f83 0 2026-03-10T10:19:43.565 INFO:tasks.workunit.client.0.vm02.stdout:6/515: dread - d0/d8/d8c/f75 zero size 2026-03-10T10:19:43.567 
INFO:tasks.workunit.client.1.vm05.stdout:2/485: symlink db/d12/l96 0 2026-03-10T10:19:43.569 INFO:tasks.workunit.client.1.vm05.stdout:6/512: link dd/f29 dd/d36/d3f/d12/d44/d2a/fa5 0 2026-03-10T10:19:43.569 INFO:tasks.workunit.client.0.vm02.stdout:7/520: symlink d1/dc/d16/d28/la0 0 2026-03-10T10:19:43.572 INFO:tasks.workunit.client.1.vm05.stdout:7/551: symlink d5/d1d/d20/d2d/d5d/laa 0 2026-03-10T10:19:43.572 INFO:tasks.workunit.client.0.vm02.stdout:1/554: read d4/d2c/fa2 [2266719,128184] 0 2026-03-10T10:19:43.573 INFO:tasks.workunit.client.1.vm05.stdout:6/513: dwrite dd/d36/d3f/d12/d44/d2a/d3d/d48/f82 [0,4194304] 0 2026-03-10T10:19:43.574 INFO:tasks.workunit.client.1.vm05.stdout:3/549: truncate f9 1927731 0 2026-03-10T10:19:43.576 INFO:tasks.workunit.client.1.vm05.stdout:3/550: stat dd/d39/d66/fad 0 2026-03-10T10:19:43.577 INFO:tasks.workunit.client.0.vm02.stdout:2/533: fsync d0/f8 0 2026-03-10T10:19:43.578 INFO:tasks.workunit.client.0.vm02.stdout:3/534: symlink d1/d8/d21/d73/d78/d84/laa 0 2026-03-10T10:19:43.578 INFO:tasks.workunit.client.1.vm05.stdout:8/440: creat d7/d2f/f7f x:0 0 0 2026-03-10T10:19:43.579 INFO:tasks.workunit.client.1.vm05.stdout:4/400: chown d1/d31/c16 32349 1 2026-03-10T10:19:43.579 INFO:tasks.workunit.client.1.vm05.stdout:4/401: stat d1/d31/dc/f3a 0 2026-03-10T10:19:43.581 INFO:tasks.workunit.client.0.vm02.stdout:5/686: symlink d1/db/d11/d16/leb 0 2026-03-10T10:19:43.585 INFO:tasks.workunit.client.1.vm05.stdout:5/558: creat da/fba x:0 0 0 2026-03-10T10:19:43.588 INFO:tasks.workunit.client.1.vm05.stdout:1/598: write d4/df/d1c/d53/d66/f94 [231776,116971] 0 2026-03-10T10:19:43.614 INFO:tasks.workunit.client.0.vm02.stdout:8/523: fsync d1/d1c/d24/d35/f44 0 2026-03-10T10:19:43.615 INFO:tasks.workunit.client.1.vm05.stdout:0/544: write d1/d2/d9/d31/d13/d15/d4e/f60 [884428,76867] 0 2026-03-10T10:19:43.618 INFO:tasks.workunit.client.0.vm02.stdout:4/674: truncate d1/f1d 6929905 0 2026-03-10T10:19:43.620 INFO:tasks.workunit.client.0.vm02.stdout:6/516: 
creat d0/d8/d29/d94/fa9 x:0 0 0 2026-03-10T10:19:43.624 INFO:tasks.workunit.client.1.vm05.stdout:6/514: fsync dd/d36/f71 0 2026-03-10T10:19:43.629 INFO:tasks.workunit.client.0.vm02.stdout:7/521: dwrite d1/dc/d16/d28/d2c/f8a [0,4194304] 0 2026-03-10T10:19:43.632 INFO:tasks.workunit.client.0.vm02.stdout:7/522: write d1/dc/d10/f27 [1514442,87357] 0 2026-03-10T10:19:43.633 INFO:tasks.workunit.client.1.vm05.stdout:7/552: write d5/d17/f74 [92762,38580] 0 2026-03-10T10:19:43.634 INFO:tasks.workunit.client.1.vm05.stdout:7/553: readlink d5/d17/l43 0 2026-03-10T10:19:43.635 INFO:tasks.workunit.client.1.vm05.stdout:8/441: symlink d7/l80 0 2026-03-10T10:19:43.636 INFO:tasks.workunit.client.0.vm02.stdout:9/506: symlink da/d3c/d4c/d38/la0 0 2026-03-10T10:19:43.641 INFO:tasks.workunit.client.0.vm02.stdout:3/535: rmdir d1/d20 39 2026-03-10T10:19:43.642 INFO:tasks.workunit.client.0.vm02.stdout:2/534: stat d0/d10/d69/c73 0 2026-03-10T10:19:43.642 INFO:tasks.workunit.client.1.vm05.stdout:4/402: mknod d1/d64/c7e 0 2026-03-10T10:19:43.643 INFO:tasks.workunit.client.1.vm05.stdout:5/559: symlink da/db/d28/d32/lbb 0 2026-03-10T10:19:43.648 INFO:tasks.workunit.client.1.vm05.stdout:1/599: truncate d4/d37/f90 593200 0 2026-03-10T10:19:43.652 INFO:tasks.workunit.client.1.vm05.stdout:3/551: dread dd/d15/d24/d2c/f60 [0,4194304] 0 2026-03-10T10:19:43.658 INFO:tasks.workunit.client.1.vm05.stdout:3/552: sync 2026-03-10T10:19:43.673 INFO:tasks.workunit.client.1.vm05.stdout:1/600: dread d4/d20/f49 [0,4194304] 0 2026-03-10T10:19:43.677 INFO:tasks.workunit.client.0.vm02.stdout:8/524: dread d1/d1c/d23/d3e/f50 [0,4194304] 0 2026-03-10T10:19:43.679 INFO:tasks.workunit.client.0.vm02.stdout:4/675: fsync d1/d10/db/f16 0 2026-03-10T10:19:43.683 INFO:tasks.workunit.client.1.vm05.stdout:9/477: link d0/d1/d4c/d63/f78 d0/d1/d13/de/d93/fa1 0 2026-03-10T10:19:43.691 INFO:tasks.workunit.client.0.vm02.stdout:6/517: unlink d0/d8/d29/d2f/d4b/c4d 0 2026-03-10T10:19:43.696 INFO:tasks.workunit.client.0.vm02.stdout:0/570: 
getdents d9 0 2026-03-10T10:19:43.699 INFO:tasks.workunit.client.1.vm05.stdout:0/545: write d1/d2/d9/f1d [5630266,125758] 0 2026-03-10T10:19:43.705 INFO:tasks.workunit.client.0.vm02.stdout:6/518: dread d0/d8/d9/f84 [0,4194304] 0 2026-03-10T10:19:43.710 INFO:tasks.workunit.client.0.vm02.stdout:1/555: symlink d4/laf 0 2026-03-10T10:19:43.710 INFO:tasks.workunit.client.0.vm02.stdout:7/523: rename d1/dc/c29 to d1/d1b/d8f/d67/ca1 0 2026-03-10T10:19:43.712 INFO:tasks.workunit.client.1.vm05.stdout:8/442: truncate d7/f9 8218790 0 2026-03-10T10:19:43.714 INFO:tasks.workunit.client.0.vm02.stdout:9/507: creat da/d3c/d4c/d56/fa1 x:0 0 0 2026-03-10T10:19:43.714 INFO:tasks.workunit.client.1.vm05.stdout:4/403: symlink d1/d70/l7f 0 2026-03-10T10:19:43.714 INFO:tasks.workunit.client.0.vm02.stdout:9/508: read - da/d3c/d4c/d2c/d34/f68 zero size 2026-03-10T10:19:43.715 INFO:tasks.workunit.client.0.vm02.stdout:9/509: readlink da/d3c/d4c/d2c/d34/d35/l55 0 2026-03-10T10:19:43.716 INFO:tasks.workunit.client.0.vm02.stdout:3/536: creat d1/d8/d44/fab x:0 0 0 2026-03-10T10:19:43.717 INFO:tasks.workunit.client.1.vm05.stdout:5/560: mknod da/db/d28/d97/cbc 0 2026-03-10T10:19:43.718 INFO:tasks.workunit.client.1.vm05.stdout:8/443: dwrite d7/d14/f5b [4194304,4194304] 0 2026-03-10T10:19:43.728 INFO:tasks.workunit.client.1.vm05.stdout:6/515: dread dd/d36/d3f/f6f [0,4194304] 0 2026-03-10T10:19:43.732 INFO:tasks.workunit.client.1.vm05.stdout:1/601: creat d4/d39/fb2 x:0 0 0 2026-03-10T10:19:43.735 INFO:tasks.workunit.client.1.vm05.stdout:4/404: dread d1/f5d [0,4194304] 0 2026-03-10T10:19:43.736 INFO:tasks.workunit.client.0.vm02.stdout:8/525: unlink d1/d2/f27 0 2026-03-10T10:19:43.736 INFO:tasks.workunit.client.1.vm05.stdout:4/405: dread d1/f5d [0,4194304] 0 2026-03-10T10:19:43.741 INFO:tasks.workunit.client.1.vm05.stdout:7/554: getdents d5/d1d/d20/d91/da7 0 2026-03-10T10:19:43.742 INFO:tasks.workunit.client.1.vm05.stdout:7/555: readlink d5/d1d/d29/l2e 0 2026-03-10T10:19:43.742 
INFO:tasks.workunit.client.1.vm05.stdout:0/546: dread d1/d2/d9/d31/d13/d17/f1b [4194304,4194304] 0 2026-03-10T10:19:43.747 INFO:tasks.workunit.client.1.vm05.stdout:7/556: dread d5/d1d/f56 [0,4194304] 0 2026-03-10T10:19:43.763 INFO:tasks.workunit.client.1.vm05.stdout:1/602: dread d4/df/d1c/f9c [0,4194304] 0 2026-03-10T10:19:43.764 INFO:tasks.workunit.client.1.vm05.stdout:9/478: symlink d0/d1/d13/de/d93/la2 0 2026-03-10T10:19:43.772 INFO:tasks.workunit.client.1.vm05.stdout:3/553: write dd/d15/d24/d2c/d3b/f48 [894633,62702] 0 2026-03-10T10:19:43.773 INFO:tasks.workunit.client.0.vm02.stdout:4/676: write d1/d41/d5e/d78/d37/f2e [5180326,111842] 0 2026-03-10T10:19:43.773 INFO:tasks.workunit.client.1.vm05.stdout:4/406: mknod d1/d64/c80 0 2026-03-10T10:19:43.775 INFO:tasks.workunit.client.1.vm05.stdout:5/561: dwrite da/db/d28/f8d [0,4194304] 0 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: Manager daemon vm05.coparq is now available 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: Migrating agent root cert to cert store 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: Migrating agent root key to cert store 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: Checking for cert/key for grafana.vm02 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: Migrating grafana.vm02 cert to cert store 2026-03-10T10:19:43.780 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: Migrating grafana.vm02 key to cert store 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.coparq/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:19:43.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:43 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.coparq/trash_purge_schedule"}]: dispatch 2026-03-10T10:19:43.789 INFO:tasks.workunit.client.1.vm05.stdout:8/444: symlink d7/d14/d24/l81 0 2026-03-10T10:19:43.790 INFO:tasks.workunit.client.1.vm05.stdout:2/486: link db/d28/c78 db/d12/c97 0 2026-03-10T10:19:43.792 
INFO:tasks.workunit.client.1.vm05.stdout:1/603: creat d4/df/d1c/d53/d66/fb3 x:0 0 0 2026-03-10T10:19:43.797 INFO:tasks.workunit.client.0.vm02.stdout:2/535: rename d0/f8 to d0/d10/d69/fae 0 2026-03-10T10:19:43.799 INFO:tasks.workunit.client.0.vm02.stdout:9/510: dread - da/d3c/d4c/d38/f84 zero size 2026-03-10T10:19:43.805 INFO:tasks.workunit.client.1.vm05.stdout:5/562: dread da/db/d26/f4c [0,4194304] 0 2026-03-10T10:19:43.805 INFO:tasks.workunit.client.1.vm05.stdout:6/516: creat dd/d36/d3f/d12/fa6 x:0 0 0 2026-03-10T10:19:43.805 INFO:tasks.workunit.client.1.vm05.stdout:8/445: dread d7/d14/d3a/d49/f72 [0,4194304] 0 2026-03-10T10:19:43.805 INFO:tasks.workunit.client.0.vm02.stdout:9/511: stat da/d3c/d4c/f41 0 2026-03-10T10:19:43.805 INFO:tasks.workunit.client.0.vm02.stdout:9/512: stat da/d3c/d4c/d38/d82/d8c/l94 0 2026-03-10T10:19:43.805 INFO:tasks.workunit.client.0.vm02.stdout:4/677: symlink d1/d32/da3/ldc 0 2026-03-10T10:19:43.805 INFO:tasks.workunit.client.0.vm02.stdout:4/678: chown d1/d52/d53/f5b 10060691 1 2026-03-10T10:19:43.807 INFO:tasks.workunit.client.0.vm02.stdout:2/536: creat d0/d8c/faf x:0 0 0 2026-03-10T10:19:43.808 INFO:tasks.workunit.client.0.vm02.stdout:7/524: symlink d1/dc/d99/la2 0 2026-03-10T10:19:43.808 INFO:tasks.workunit.client.0.vm02.stdout:7/525: chown d1/dc/d44/l47 118 1 2026-03-10T10:19:43.810 INFO:tasks.workunit.client.0.vm02.stdout:3/537: mkdir d1/d8/dac 0 2026-03-10T10:19:43.814 INFO:tasks.workunit.client.1.vm05.stdout:4/407: rename d1/d31/dc/f2e to d1/d31/f81 0 2026-03-10T10:19:43.817 INFO:tasks.workunit.client.1.vm05.stdout:6/517: symlink dd/d1b/la7 0 2026-03-10T10:19:43.818 INFO:tasks.workunit.client.0.vm02.stdout:2/537: mkdir d0/d1a/d49/d5e/d65/db0 0 2026-03-10T10:19:43.819 INFO:tasks.workunit.client.1.vm05.stdout:8/446: symlink d7/d14/d15/l82 0 2026-03-10T10:19:43.819 INFO:tasks.workunit.client.0.vm02.stdout:7/526: rmdir d1/d1b/d8e 39 2026-03-10T10:19:43.820 INFO:tasks.workunit.client.1.vm05.stdout:5/563: mkdir da/db/d26/d5c/d4b/db4/dbd 
0 2026-03-10T10:19:43.821 INFO:tasks.workunit.client.0.vm02.stdout:9/513: symlink da/d9d/la2 0 2026-03-10T10:19:43.821 INFO:tasks.workunit.client.1.vm05.stdout:2/487: creat db/d28/f98 x:0 0 0 2026-03-10T10:19:43.822 INFO:tasks.workunit.client.0.vm02.stdout:5/687: creat d1/db/d11/d16/d79/fec x:0 0 0 2026-03-10T10:19:43.823 INFO:tasks.workunit.client.1.vm05.stdout:1/604: rename d4/l6 to d4/d37/d4e/d82/lb4 0 2026-03-10T10:19:43.828 INFO:tasks.workunit.client.1.vm05.stdout:6/518: dwrite dd/d36/d3f/d12/d44/d2a/f84 [0,4194304] 0 2026-03-10T10:19:43.828 INFO:tasks.workunit.client.0.vm02.stdout:9/514: stat da/ld 0 2026-03-10T10:19:43.832 INFO:tasks.workunit.client.1.vm05.stdout:8/447: dwrite d7/d14/d3a/f50 [4194304,4194304] 0 2026-03-10T10:19:43.832 INFO:tasks.workunit.client.1.vm05.stdout:5/564: mkdir da/d9a/dbe 0 2026-03-10T10:19:43.837 INFO:tasks.workunit.client.1.vm05.stdout:1/605: read - d4/df/d1c/d53/daa/fa9 zero size 2026-03-10T10:19:43.840 INFO:tasks.workunit.client.1.vm05.stdout:2/488: truncate db/d12/f31 477295 0 2026-03-10T10:19:43.840 INFO:tasks.workunit.client.1.vm05.stdout:4/408: dwrite d1/d3/f12 [0,4194304] 0 2026-03-10T10:19:43.850 INFO:tasks.workunit.client.1.vm05.stdout:1/606: dwrite d4/df/d1c/d53/f98 [0,4194304] 0 2026-03-10T10:19:43.851 INFO:tasks.workunit.client.0.vm02.stdout:0/571: truncate d9/d34/d3d/f94 274054 0 2026-03-10T10:19:43.852 INFO:tasks.workunit.client.1.vm05.stdout:6/519: dwrite dd/d36/d3f/d12/d44/d2a/f98 [0,4194304] 0 2026-03-10T10:19:43.856 INFO:tasks.workunit.client.1.vm05.stdout:6/520: dread dd/d36/d3f/d12/d44/d2a/d3d/d3e/f64 [0,4194304] 0 2026-03-10T10:19:43.859 INFO:tasks.workunit.client.0.vm02.stdout:3/538: sync 2026-03-10T10:19:43.859 INFO:tasks.workunit.client.0.vm02.stdout:3/539: dread - d1/f81 zero size 2026-03-10T10:19:43.860 INFO:tasks.workunit.client.1.vm05.stdout:0/547: write d1/d2/d9/d31/d54/f6b [483411,13152] 0 2026-03-10T10:19:43.863 INFO:tasks.workunit.client.0.vm02.stdout:5/688: mknod d1/db/d11/d13/dc9/ced 0 
2026-03-10T10:19:43.863 INFO:tasks.workunit.client.0.vm02.stdout:5/689: chown d1/db/d11/cad 1221195 1 2026-03-10T10:19:43.864 INFO:tasks.workunit.client.1.vm05.stdout:9/479: write d0/d1/d16/f40 [2250464,98672] 0 2026-03-10T10:19:43.867 INFO:tasks.workunit.client.0.vm02.stdout:4/679: rmdir d1/d32/da3/dc0 0 2026-03-10T10:19:43.879 INFO:tasks.workunit.client.0.vm02.stdout:6/519: write d0/d8/d9/f4f [669274,27411] 0 2026-03-10T10:19:43.879 INFO:tasks.workunit.client.0.vm02.stdout:1/556: write d4/d2c/fa2 [3892668,61166] 0 2026-03-10T10:19:43.879 INFO:tasks.workunit.client.0.vm02.stdout:9/515: mkdir da/d3c/d4c/d38/d82/da3 0 2026-03-10T10:19:43.882 INFO:tasks.workunit.client.0.vm02.stdout:4/680: rename d1/d32/d3e to d1/d75/ddd 0 2026-03-10T10:19:43.884 INFO:tasks.workunit.client.0.vm02.stdout:1/557: unlink d4/da/d1a/d22/f62 0 2026-03-10T10:19:43.899 INFO:tasks.workunit.client.1.vm05.stdout:8/448: creat d7/d14/d3a/d49/d65/f83 x:0 0 0 2026-03-10T10:19:43.899 INFO:tasks.workunit.client.1.vm05.stdout:9/480: mknod d0/df/d74/d8c/ca3 0 2026-03-10T10:19:43.899 INFO:tasks.workunit.client.0.vm02.stdout:1/558: rmdir d4/da/d27 39 2026-03-10T10:19:43.899 INFO:tasks.workunit.client.0.vm02.stdout:1/559: chown d4/da/f71 2039 1 2026-03-10T10:19:43.910 INFO:tasks.workunit.client.1.vm05.stdout:6/521: link dd/d36/d3f/d12/d44/f2f dd/d1b/fa8 0 2026-03-10T10:19:43.913 INFO:tasks.workunit.client.1.vm05.stdout:9/481: creat d0/df/d74/d90/fa4 x:0 0 0 2026-03-10T10:19:43.913 INFO:tasks.workunit.client.1.vm05.stdout:4/409: dread d1/d3/f5f [0,4194304] 0 2026-03-10T10:19:43.914 INFO:tasks.workunit.client.0.vm02.stdout:5/690: link d1/db/d11/d16/d48/fc1 d1/db/d11/d16/d48/dcf/fee 0 2026-03-10T10:19:43.916 INFO:tasks.workunit.client.1.vm05.stdout:6/522: rename dd/d36/d3f/d12/d58/l94 to dd/d36/d3f/d12/d44/d30/la9 0 2026-03-10T10:19:43.927 INFO:tasks.workunit.client.1.vm05.stdout:8/449: creat d7/d14/d15/f84 x:0 0 0 2026-03-10T10:19:43.932 INFO:tasks.workunit.client.1.vm05.stdout:7/557: truncate d5/d26/f4d 
2470842 0 2026-03-10T10:19:43.932 INFO:tasks.workunit.client.1.vm05.stdout:3/554: truncate dd/d15/d24/f2f 141878 0 2026-03-10T10:19:43.932 INFO:tasks.workunit.client.0.vm02.stdout:8/526: truncate d1/d1c/f1e 6687094 0 2026-03-10T10:19:43.932 INFO:tasks.workunit.client.0.vm02.stdout:8/527: chown d1/d1c/d24/f6b 1924 1 2026-03-10T10:19:43.945 INFO:tasks.workunit.client.0.vm02.stdout:6/520: dread d0/f6b [0,4194304] 0 2026-03-10T10:19:43.948 INFO:tasks.workunit.client.1.vm05.stdout:8/450: readlink d7/d14/d3a/l70 0 2026-03-10T10:19:43.949 INFO:tasks.workunit.client.1.vm05.stdout:6/523: read dd/d36/d3f/d12/d44/d2a/fa5 [1577707,87584] 0 2026-03-10T10:19:43.950 INFO:tasks.workunit.client.1.vm05.stdout:6/524: write dd/d36/d3f/fa0 [994204,54193] 0 2026-03-10T10:19:43.950 INFO:tasks.workunit.client.1.vm05.stdout:6/525: fsync dd/d36/f69 0 2026-03-10T10:19:43.956 INFO:tasks.workunit.client.0.vm02.stdout:1/560: link d4/d1b/c29 d4/da/d27/d38/d3c/cb0 0 2026-03-10T10:19:43.956 INFO:tasks.workunit.client.0.vm02.stdout:5/691: dread d1/db/d11/d13/d28/d37/f3c [0,4194304] 0 2026-03-10T10:19:43.957 INFO:tasks.workunit.client.0.vm02.stdout:5/692: chown d1/db/d11/f4a 24038 1 2026-03-10T10:19:43.963 INFO:tasks.workunit.client.0.vm02.stdout:2/538: dwrite d0/f88 [0,4194304] 0 2026-03-10T10:19:43.969 INFO:tasks.workunit.client.0.vm02.stdout:7/527: dwrite d1/f80 [0,4194304] 0 2026-03-10T10:19:43.972 INFO:tasks.workunit.client.1.vm05.stdout:1/607: dwrite d4/d39/d3e/f7d [0,4194304] 0 2026-03-10T10:19:43.981 INFO:tasks.workunit.client.0.vm02.stdout:0/572: dwrite d9/d34/d3d/f41 [0,4194304] 0 2026-03-10T10:19:43.981 INFO:tasks.workunit.client.0.vm02.stdout:9/516: write da/f1b [1213435,89241] 0 2026-03-10T10:19:43.981 INFO:tasks.workunit.client.1.vm05.stdout:5/565: truncate da/db/f29 2691116 0 2026-03-10T10:19:43.982 INFO:tasks.workunit.client.1.vm05.stdout:5/566: chown da/db/d28/f8d 3253 1 2026-03-10T10:19:43.982 INFO:tasks.workunit.client.0.vm02.stdout:4/681: write d1/d41/d5e/d78/d1a/d49/d81/fb1 
[158436,26280] 0 2026-03-10T10:19:43.985 INFO:tasks.workunit.client.1.vm05.stdout:4/410: mkdir d1/d3/d82 0 2026-03-10T10:19:43.986 INFO:tasks.workunit.client.1.vm05.stdout:0/548: truncate d1/d2/d9/f1d 2599681 0 2026-03-10T10:19:43.987 INFO:tasks.workunit.client.0.vm02.stdout:8/528: creat d1/d1c/d23/f9d x:0 0 0 2026-03-10T10:19:43.989 INFO:tasks.workunit.client.0.vm02.stdout:8/529: chown d1/d1c/d24 6880127 1 2026-03-10T10:19:43.990 INFO:tasks.workunit.client.0.vm02.stdout:3/540: dwrite d1/d20/f64 [0,4194304] 0 2026-03-10T10:19:43.993 INFO:tasks.workunit.client.0.vm02.stdout:6/521: mknod d0/d8/caa 0 2026-03-10T10:19:43.995 INFO:tasks.workunit.client.0.vm02.stdout:1/561: chown d4/d2c/d53/l87 1169193 1 2026-03-10T10:19:43.999 INFO:tasks.workunit.client.1.vm05.stdout:3/555: creat dd/d15/d1f/dae/fc7 x:0 0 0 2026-03-10T10:19:44.001 INFO:tasks.workunit.client.1.vm05.stdout:3/556: dread dd/d20/d56/f7d [0,4194304] 0 2026-03-10T10:19:44.011 INFO:tasks.workunit.client.1.vm05.stdout:4/411: read d1/d31/dc/f21 [3963927,81928] 0 2026-03-10T10:19:44.015 INFO:tasks.workunit.client.0.vm02.stdout:2/539: creat d0/d8c/fb1 x:0 0 0 2026-03-10T10:19:44.015 INFO:tasks.workunit.client.1.vm05.stdout:4/412: dwrite d1/d3/f5 [0,4194304] 0 2026-03-10T10:19:44.017 INFO:tasks.workunit.client.1.vm05.stdout:7/558: mkdir d5/d1d/d20/d91/da7/dab 0 2026-03-10T10:19:44.018 INFO:tasks.workunit.client.1.vm05.stdout:7/559: stat d5/d1d/d20/d2d/d80 0 2026-03-10T10:19:44.020 INFO:tasks.workunit.client.1.vm05.stdout:4/413: dwrite d1/d31/dc/d40/f67 [4194304,4194304] 0 2026-03-10T10:19:44.024 INFO:tasks.workunit.client.0.vm02.stdout:7/528: dwrite d1/dc/f25 [0,4194304] 0 2026-03-10T10:19:44.026 INFO:tasks.workunit.client.1.vm05.stdout:5/567: symlink da/d9a/lbf 0 2026-03-10T10:19:44.026 INFO:tasks.workunit.client.1.vm05.stdout:1/608: creat d4/d37/d4e/fb5 x:0 0 0 2026-03-10T10:19:44.026 INFO:tasks.workunit.client.0.vm02.stdout:0/573: mkdir d9/d18/d1a/d22/db4 0 2026-03-10T10:19:44.027 
INFO:tasks.workunit.client.1.vm05.stdout:1/609: write d4/d39/d3e/f7d [3509318,89552] 0 2026-03-10T10:19:44.037 INFO:tasks.workunit.client.1.vm05.stdout:1/610: dwrite d4/d39/fb2 [0,4194304] 0 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: Manager daemon vm05.coparq is now available 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: Migrating agent root cert to cert store 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: Migrating agent root key to cert store 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: Checking for cert/key for grafana.vm02 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: Migrating grafana.vm02 cert to cert store 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: Migrating grafana.vm02 key to cert store 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' 
entity='mgr.vm05.coparq' 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.coparq/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:19:44.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:43 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.coparq/trash_purge_schedule"}]: dispatch 2026-03-10T10:19:44.038 INFO:tasks.workunit.client.1.vm05.stdout:1/611: chown d4/l22 1 1 2026-03-10T10:19:44.039 INFO:tasks.workunit.client.0.vm02.stdout:1/562: mknod d4/da/d27/d38/d3c/cb1 0 2026-03-10T10:19:44.040 INFO:tasks.workunit.client.0.vm02.stdout:5/693: symlink d1/db/d11/d13/d28/lef 0 2026-03-10T10:19:44.042 INFO:tasks.workunit.client.0.vm02.stdout:7/529: truncate d1/dc/d60/f88 209953 0 2026-03-10T10:19:44.047 INFO:tasks.workunit.client.0.vm02.stdout:9/517: symlink da/d3c/d4c/d38/la4 0 2026-03-10T10:19:44.057 INFO:tasks.workunit.client.1.vm05.stdout:5/568: creat da/db/d26/d5c/d4b/db4/fc0 x:0 0 0 2026-03-10T10:19:44.057 INFO:tasks.workunit.client.1.vm05.stdout:0/549: rename d1/d2/d9/d31/d12/fb2 to d1/d2/d9/fbc 0 2026-03-10T10:19:44.057 INFO:tasks.workunit.client.1.vm05.stdout:0/550: write d1/d2/d39/d3d/f72 [2716842,117511] 0 2026-03-10T10:19:44.057 
INFO:tasks.workunit.client.0.vm02.stdout:3/541: rename d1/d8/d21/la8 to d1/d8/d21/d73/lad 0 2026-03-10T10:19:44.059 INFO:tasks.workunit.client.1.vm05.stdout:1/612: dread d4/df/f73 [0,4194304] 0 2026-03-10T10:19:44.060 INFO:tasks.workunit.client.0.vm02.stdout:6/522: mknod d0/d8/d29/cab 0 2026-03-10T10:19:44.083 INFO:tasks.workunit.client.0.vm02.stdout:9/518: creat da/d3c/d4c/d38/d82/fa5 x:0 0 0 2026-03-10T10:19:44.093 INFO:tasks.workunit.client.0.vm02.stdout:2/540: rename d0/d10/d69/f97 to d0/d1a/d49/fb2 0 2026-03-10T10:19:44.100 INFO:tasks.workunit.client.0.vm02.stdout:6/523: dwrite d0/d8/d29/d52/f8b [0,4194304] 0 2026-03-10T10:19:44.104 INFO:tasks.workunit.client.0.vm02.stdout:6/524: write d0/d8/d29/d2f/d4b/da5/fa6 [810736,48440] 0 2026-03-10T10:19:44.124 INFO:tasks.workunit.client.0.vm02.stdout:4/682: link d1/d41/d5e/c6a d1/d52/cde 0 2026-03-10T10:19:44.131 INFO:tasks.workunit.client.0.vm02.stdout:4/683: dwrite d1/d32/fd3 [0,4194304] 0 2026-03-10T10:19:44.151 INFO:tasks.workunit.client.0.vm02.stdout:3/542: mknod d1/cae 0 2026-03-10T10:19:44.151 INFO:tasks.workunit.client.0.vm02.stdout:3/543: chown d1/c5a 37335758 1 2026-03-10T10:19:44.152 INFO:tasks.workunit.client.0.vm02.stdout:3/544: chown d1/d8/f7c 1 1 2026-03-10T10:19:44.152 INFO:tasks.workunit.client.0.vm02.stdout:3/545: chown d1/f14 1 1 2026-03-10T10:19:44.178 INFO:tasks.workunit.client.0.vm02.stdout:9/519: mkdir da/d3c/d4c/d38/da6 0 2026-03-10T10:19:44.179 INFO:tasks.workunit.client.0.vm02.stdout:9/520: write da/f15 [5114322,91782] 0 2026-03-10T10:19:44.185 INFO:tasks.workunit.client.0.vm02.stdout:4/684: symlink d1/d41/d5e/d78/d55/ldf 0 2026-03-10T10:19:44.190 INFO:tasks.workunit.client.0.vm02.stdout:3/546: creat d1/d8/d44/faf x:0 0 0 2026-03-10T10:19:44.191 INFO:tasks.workunit.client.0.vm02.stdout:3/547: write d1/d6/f3a [89918,46138] 0 2026-03-10T10:19:44.191 INFO:tasks.workunit.client.0.vm02.stdout:3/548: truncate d1/d20/fa4 517051 0 2026-03-10T10:19:44.243 INFO:tasks.workunit.client.0.vm02.stdout:5/694: 
getdents d1/db/d11/d13 0 2026-03-10T10:19:44.247 INFO:tasks.workunit.client.0.vm02.stdout:7/530: link d1/dc/d10/d38/l50 d1/d1b/la3 0 2026-03-10T10:19:44.255 INFO:tasks.workunit.client.1.vm05.stdout:2/489: dread db/d12/f31 [0,4194304] 0 2026-03-10T10:19:44.257 INFO:tasks.workunit.client.0.vm02.stdout:9/521: symlink da/d3c/d4c/d38/d4a/la7 0 2026-03-10T10:19:44.257 INFO:tasks.workunit.client.1.vm05.stdout:2/490: dread db/d2d/d5e/f71 [0,4194304] 0 2026-03-10T10:19:44.261 INFO:tasks.workunit.client.0.vm02.stdout:9/522: dread da/f15 [4194304,4194304] 0 2026-03-10T10:19:44.261 INFO:tasks.workunit.client.0.vm02.stdout:9/523: dread - da/d3c/d4c/d38/d4a/f54 zero size 2026-03-10T10:19:44.263 INFO:tasks.workunit.client.0.vm02.stdout:4/685: chown d1/d41/d5e/d78/d7f/c65 0 1 2026-03-10T10:19:44.265 INFO:tasks.workunit.client.1.vm05.stdout:9/482: getdents d0/d70 0 2026-03-10T10:19:44.267 INFO:tasks.workunit.client.1.vm05.stdout:9/483: truncate d0/df/d11/f52 937469 0 2026-03-10T10:19:44.267 INFO:tasks.workunit.client.1.vm05.stdout:8/451: link d7/d14/d24/l6f d7/d14/d3a/d49/l85 0 2026-03-10T10:19:44.273 INFO:tasks.workunit.client.0.vm02.stdout:5/695: creat d1/db/d11/d13/ff0 x:0 0 0 2026-03-10T10:19:44.274 INFO:tasks.workunit.client.0.vm02.stdout:5/696: read - d1/db/d11/d84/d40/fe3 zero size 2026-03-10T10:19:44.279 INFO:tasks.workunit.client.0.vm02.stdout:7/531: creat d1/dc/d60/fa4 x:0 0 0 2026-03-10T10:19:44.283 INFO:tasks.workunit.client.0.vm02.stdout:7/532: dwrite d1/dc/d55/f8b [0,4194304] 0 2026-03-10T10:19:44.287 INFO:tasks.workunit.client.1.vm05.stdout:6/526: truncate dd/d36/d3f/f41 3095517 0 2026-03-10T10:19:44.295 INFO:tasks.workunit.client.0.vm02.stdout:1/563: write d4/d1b/f44 [1872440,117454] 0 2026-03-10T10:19:44.303 INFO:tasks.workunit.client.0.vm02.stdout:8/530: dwrite d1/d1c/d43/d5b/f60 [0,4194304] 0 2026-03-10T10:19:44.306 INFO:tasks.workunit.client.1.vm05.stdout:7/560: mknod d5/cac 0 2026-03-10T10:19:44.309 INFO:tasks.workunit.client.1.vm05.stdout:7/561: write 
d5/d1d/d29/d3e/d8c/d96/f9e [121547,107188] 0 2026-03-10T10:19:44.309 INFO:tasks.workunit.client.1.vm05.stdout:4/414: rename d1/d31/c73 to d1/d31/d72/c83 0 2026-03-10T10:19:44.320 INFO:tasks.workunit.client.1.vm05.stdout:0/551: fsync d1/d2/d9/d31/d13/d17/f1b 0 2026-03-10T10:19:44.321 INFO:tasks.workunit.client.0.vm02.stdout:2/541: dwrite d0/d1a/d49/d5e/f60 [4194304,4194304] 0 2026-03-10T10:19:44.327 INFO:tasks.workunit.client.0.vm02.stdout:2/542: dwrite d0/f72 [4194304,4194304] 0 2026-03-10T10:19:44.329 INFO:tasks.workunit.client.0.vm02.stdout:2/543: dread - d0/d1a/d49/d5e/fad zero size 2026-03-10T10:19:44.339 INFO:tasks.workunit.client.1.vm05.stdout:1/613: creat d4/df/d76/fb6 x:0 0 0 2026-03-10T10:19:44.349 INFO:tasks.workunit.client.1.vm05.stdout:2/491: dread - db/d61/d67/f6e zero size 2026-03-10T10:19:44.351 INFO:tasks.workunit.client.1.vm05.stdout:9/484: chown d0/d1/d13/de/d21/l6c 1559 1 2026-03-10T10:19:44.352 INFO:tasks.workunit.client.0.vm02.stdout:1/564: rmdir d4/da/d27 39 2026-03-10T10:19:44.352 INFO:tasks.workunit.client.1.vm05.stdout:8/452: rmdir d7/d14/d24/d3f/d4f 39 2026-03-10T10:19:44.354 INFO:tasks.workunit.client.1.vm05.stdout:3/557: rmdir dd/d15/d24/d8e/dc3 0 2026-03-10T10:19:44.355 INFO:tasks.workunit.client.1.vm05.stdout:3/558: chown dd/d15/d24/d2c/f60 0 1 2026-03-10T10:19:44.361 INFO:tasks.workunit.client.0.vm02.stdout:4/686: mkdir d1/d52/d53/dda/de0 0 2026-03-10T10:19:44.365 INFO:tasks.workunit.client.0.vm02.stdout:6/525: dwrite d0/d8/f64 [0,4194304] 0 2026-03-10T10:19:44.376 INFO:tasks.workunit.client.0.vm02.stdout:7/533: mkdir d1/dc/d55/d9a/da5 0 2026-03-10T10:19:44.388 INFO:tasks.workunit.client.0.vm02.stdout:3/549: dwrite d1/d8/d21/f88 [0,4194304] 0 2026-03-10T10:19:44.388 INFO:tasks.workunit.client.1.vm05.stdout:3/559: mkdir dd/d15/d24/dc8 0 2026-03-10T10:19:44.389 INFO:tasks.workunit.client.1.vm05.stdout:6/527: mkdir dd/d36/d3f/d12/d44/daa 0 2026-03-10T10:19:44.389 INFO:tasks.workunit.client.1.vm05.stdout:6/528: dread dd/d36/d7d/f8a 
[0,4194304] 0 2026-03-10T10:19:44.390 INFO:tasks.workunit.client.1.vm05.stdout:2/492: read db/f23 [71713,3488] 0 2026-03-10T10:19:44.390 INFO:tasks.workunit.client.0.vm02.stdout:4/687: mknod d1/ce1 0 2026-03-10T10:19:44.395 INFO:tasks.workunit.client.0.vm02.stdout:6/526: creat d0/d8/d9/fac x:0 0 0 2026-03-10T10:19:44.403 INFO:tasks.workunit.client.1.vm05.stdout:7/562: mkdir d5/d1d/d20/d91/da7/dab/dad 0 2026-03-10T10:19:44.404 INFO:tasks.workunit.client.0.vm02.stdout:6/527: chown d0/d8/d29 1254 1 2026-03-10T10:19:44.404 INFO:tasks.workunit.client.0.vm02.stdout:7/534: mknod d1/dc/d16/ca6 0 2026-03-10T10:19:44.404 INFO:tasks.workunit.client.0.vm02.stdout:3/550: creat d1/d58/fb0 x:0 0 0 2026-03-10T10:19:44.404 INFO:tasks.workunit.client.0.vm02.stdout:6/528: chown d0/d8/l1f 0 1 2026-03-10T10:19:44.405 INFO:tasks.workunit.client.0.vm02.stdout:7/535: stat d1/c21 0 2026-03-10T10:19:44.407 INFO:tasks.workunit.client.0.vm02.stdout:1/565: link d4/da/f9d d4/da/fb2 0 2026-03-10T10:19:44.410 INFO:tasks.workunit.client.1.vm05.stdout:3/560: mkdir dd/d20/d56/d5e/dc9 0 2026-03-10T10:19:44.412 INFO:tasks.workunit.client.0.vm02.stdout:3/551: rename d1/d8/dac to d1/d8/d86/db1 0 2026-03-10T10:19:44.412 INFO:tasks.workunit.client.0.vm02.stdout:3/552: stat d1/d8/d21/f29 0 2026-03-10T10:19:44.418 INFO:tasks.workunit.client.0.vm02.stdout:7/536: unlink d1/dc/d16/d28/d2d/f2f 0 2026-03-10T10:19:44.425 INFO:tasks.workunit.client.1.vm05.stdout:2/493: fsync db/d61/d67/f6e 0 2026-03-10T10:19:44.427 INFO:tasks.workunit.client.1.vm05.stdout:1/614: link d4/df/d1c/d53/d66/l69 d4/d37/d4e/lb7 0 2026-03-10T10:19:44.428 INFO:tasks.workunit.client.0.vm02.stdout:2/544: sync 2026-03-10T10:19:44.431 INFO:tasks.workunit.client.0.vm02.stdout:7/537: mkdir d1/d1b/d8f/d67/da7 0 2026-03-10T10:19:44.431 INFO:tasks.workunit.client.0.vm02.stdout:7/538: fdatasync d1/d1b/d8f/d67/f76 0 2026-03-10T10:19:44.434 INFO:tasks.workunit.client.1.vm05.stdout:9/485: sync 2026-03-10T10:19:44.434 
INFO:tasks.workunit.client.1.vm05.stdout:6/529: symlink dd/d36/d3f/lab 0 2026-03-10T10:19:44.438 INFO:tasks.workunit.client.0.vm02.stdout:3/553: mkdir d1/d20/db2 0 2026-03-10T10:19:44.443 INFO:tasks.workunit.client.0.vm02.stdout:3/554: chown d1/d20/d52/l8c 24 1 2026-03-10T10:19:44.444 INFO:tasks.workunit.client.1.vm05.stdout:2/494: creat db/d61/f99 x:0 0 0 2026-03-10T10:19:44.444 INFO:tasks.workunit.client.1.vm05.stdout:1/615: sync 2026-03-10T10:19:44.449 INFO:tasks.workunit.client.1.vm05.stdout:1/616: dwrite d4/d3d/d6e/faf [0,4194304] 0 2026-03-10T10:19:44.454 INFO:tasks.workunit.client.0.vm02.stdout:3/555: rename d1/f85 to d1/d8/d21/d7d/fb3 0 2026-03-10T10:19:44.454 INFO:tasks.workunit.client.1.vm05.stdout:1/617: dwrite d4/d39/d3e/da0/fa1 [0,4194304] 0 2026-03-10T10:19:44.478 INFO:tasks.workunit.client.1.vm05.stdout:9/486: rename d0/df/d11/l6f to d0/d1/d16/la5 0 2026-03-10T10:19:44.480 INFO:tasks.workunit.client.0.vm02.stdout:5/697: write d1/db/d11/d16/d48/f92 [460475,5144] 0 2026-03-10T10:19:44.482 INFO:tasks.workunit.client.0.vm02.stdout:9/524: dwrite da/d3c/f72 [0,4194304] 0 2026-03-10T10:19:44.484 INFO:tasks.workunit.client.0.vm02.stdout:9/525: chown da/d3c/d4c/d2c/f32 0 1 2026-03-10T10:19:44.490 INFO:tasks.workunit.client.0.vm02.stdout:6/529: getdents d0/d8/d29/d2f/d4b/da5 0 2026-03-10T10:19:44.493 INFO:tasks.workunit.client.0.vm02.stdout:2/545: symlink d0/d10/da6/lb3 0 2026-03-10T10:19:44.495 INFO:tasks.workunit.client.0.vm02.stdout:3/556: sync 2026-03-10T10:19:44.497 INFO:tasks.workunit.client.0.vm02.stdout:5/698: readlink d1/l8 0 2026-03-10T10:19:44.500 INFO:tasks.workunit.client.1.vm05.stdout:6/530: symlink dd/d36/d3f/lac 0 2026-03-10T10:19:44.507 INFO:tasks.workunit.client.0.vm02.stdout:6/530: write d0/d8/d29/d6d/d96/f97 [3326426,25760] 0 2026-03-10T10:19:44.522 INFO:tasks.workunit.client.1.vm05.stdout:9/487: symlink d0/df/d74/la6 0 2026-03-10T10:19:44.571 INFO:tasks.workunit.client.0.vm02.stdout:8/531: write d1/d1c/d43/f4b [2452771,125628] 0 
2026-03-10T10:19:44.572 INFO:tasks.workunit.client.1.vm05.stdout:4/415: write d1/d31/d4b/f59 [5454324,3028] 0 2026-03-10T10:19:44.572 INFO:tasks.workunit.client.0.vm02.stdout:8/532: write d1/f65 [4568168,97797] 0 2026-03-10T10:19:44.573 INFO:tasks.workunit.client.1.vm05.stdout:4/416: stat d1/d31/dc/d40 0 2026-03-10T10:19:44.575 INFO:tasks.workunit.client.1.vm05.stdout:5/569: dwrite da/db/f3b [0,4194304] 0 2026-03-10T10:19:44.577 INFO:tasks.workunit.client.1.vm05.stdout:0/552: dwrite d1/d2/d9/d31/d54/f27 [0,4194304] 0 2026-03-10T10:19:44.578 INFO:tasks.workunit.client.1.vm05.stdout:5/570: fsync da/db/d26/d5c/f2c 0 2026-03-10T10:19:44.579 INFO:tasks.workunit.client.1.vm05.stdout:0/553: chown d1/d2/d9/d31/d54/c67 0 1 2026-03-10T10:19:44.584 INFO:tasks.workunit.client.1.vm05.stdout:5/571: dread da/db/f3b [0,4194304] 0 2026-03-10T10:19:44.588 INFO:tasks.workunit.client.1.vm05.stdout:0/554: dread d1/d2/d9/d31/d13/d15/d4e/d8a/fae [0,4194304] 0 2026-03-10T10:19:44.594 INFO:tasks.workunit.client.0.vm02.stdout:0/574: dwrite d9/d18/d1a/d22/d24/d80/d49/f5e [0,4194304] 0 2026-03-10T10:19:44.602 INFO:tasks.workunit.client.1.vm05.stdout:8/453: dwrite d7/d14/d62/f69 [0,4194304] 0 2026-03-10T10:19:44.611 INFO:tasks.workunit.client.0.vm02.stdout:0/575: sync 2026-03-10T10:19:44.627 INFO:tasks.workunit.client.0.vm02.stdout:2/546: rename d0/d10/d69/d9f/fa5 to d0/d1a/fb4 0 2026-03-10T10:19:44.632 INFO:tasks.workunit.client.0.vm02.stdout:3/557: unlink d1/d58/f60 0 2026-03-10T10:19:44.644 INFO:tasks.workunit.client.0.vm02.stdout:2/547: dread d0/f4d [0,4194304] 0 2026-03-10T10:19:44.646 INFO:tasks.workunit.client.1.vm05.stdout:4/417: creat d1/d64/f84 x:0 0 0 2026-03-10T10:19:44.651 INFO:tasks.workunit.client.1.vm05.stdout:4/418: dread d1/d3/f4a [0,4194304] 0 2026-03-10T10:19:44.668 INFO:tasks.workunit.client.0.vm02.stdout:1/566: dwrite d4/d2c/d53/f75 [0,4194304] 0 2026-03-10T10:19:44.669 INFO:tasks.workunit.client.0.vm02.stdout:1/567: chown d4/da/d1a/c16 1570 1 2026-03-10T10:19:44.670 
INFO:tasks.workunit.client.0.vm02.stdout:1/568: chown d4/da/d1a/d22/c6b 367921623 1 2026-03-10T10:19:44.680 INFO:tasks.workunit.client.0.vm02.stdout:8/533: rename d1/d1c/d23/d25/c30 to d1/d1c/d43/d6a/d7c/c9e 0 2026-03-10T10:19:44.684 INFO:tasks.workunit.client.0.vm02.stdout:8/534: dwrite d1/d2/f36 [0,4194304] 0 2026-03-10T10:19:44.696 INFO:tasks.workunit.client.1.vm05.stdout:7/563: write d5/dd/fa9 [1198727,70972] 0 2026-03-10T10:19:44.706 INFO:tasks.workunit.client.1.vm05.stdout:3/561: write dd/d15/d24/f63 [573099,58414] 0 2026-03-10T10:19:44.707 INFO:tasks.workunit.client.0.vm02.stdout:7/539: write d1/d1b/d8f/f59 [356874,39525] 0 2026-03-10T10:19:44.711 INFO:tasks.workunit.client.0.vm02.stdout:7/540: dwrite d1/dc/d60/f53 [0,4194304] 0 2026-03-10T10:19:44.713 INFO:tasks.workunit.client.0.vm02.stdout:7/541: readlink d1/dc/le 0 2026-03-10T10:19:44.743 INFO:tasks.workunit.client.0.vm02.stdout:0/576: symlink d9/d34/d3d/d65/da2/lb5 0 2026-03-10T10:19:44.749 INFO:tasks.workunit.client.0.vm02.stdout:4/688: creat d1/d75/fe2 x:0 0 0 2026-03-10T10:19:44.752 INFO:tasks.workunit.client.1.vm05.stdout:5/572: mknod da/db/d28/cc1 0 2026-03-10T10:19:44.755 INFO:tasks.workunit.client.1.vm05.stdout:0/555: unlink d1/d2/fc 0 2026-03-10T10:19:44.760 INFO:tasks.workunit.client.1.vm05.stdout:0/556: dread d1/d2/d9/f98 [0,4194304] 0 2026-03-10T10:19:44.770 INFO:tasks.workunit.client.1.vm05.stdout:7/564: mkdir d5/d17/dae 0 2026-03-10T10:19:44.774 INFO:tasks.workunit.client.0.vm02.stdout:8/535: mkdir d1/d1c/d23/d3e/d83/d9f 0 2026-03-10T10:19:44.780 INFO:tasks.workunit.client.1.vm05.stdout:4/419: creat d1/d31/d4b/d6d/f85 x:0 0 0 2026-03-10T10:19:44.785 INFO:tasks.workunit.client.1.vm05.stdout:0/557: mkdir d1/d2/d9/d31/d13/d17/da1/dbd 0 2026-03-10T10:19:44.786 INFO:tasks.workunit.client.1.vm05.stdout:0/558: stat d1/d2/d9/d31/d54/l74 0 2026-03-10T10:19:44.792 INFO:tasks.workunit.client.1.vm05.stdout:4/420: mknod d1/d31/d4b/d6d/c86 0 2026-03-10T10:19:44.794 
INFO:tasks.workunit.client.1.vm05.stdout:5/573: symlink da/db/d26/d5c/d4b/db4/dbd/lc2 0 2026-03-10T10:19:44.796 INFO:tasks.workunit.client.1.vm05.stdout:0/559: unlink d1/d2/d9/d31/d13/d15/c30 0 2026-03-10T10:19:44.799 INFO:tasks.workunit.client.1.vm05.stdout:4/421: symlink d1/d31/dc/d40/d63/l87 0 2026-03-10T10:19:44.799 INFO:tasks.workunit.client.1.vm05.stdout:4/422: stat d1/d31/l32 0 2026-03-10T10:19:44.803 INFO:tasks.workunit.client.1.vm05.stdout:4/423: dwrite d1/d3/f5 [4194304,4194304] 0 2026-03-10T10:19:44.804 INFO:tasks.workunit.client.1.vm05.stdout:5/574: symlink da/db/d28/lc3 0 2026-03-10T10:19:44.820 INFO:tasks.workunit.client.0.vm02.stdout:7/542: symlink d1/dc/d55/d9a/la8 0 2026-03-10T10:19:44.830 INFO:tasks.workunit.client.0.vm02.stdout:9/526: getdents da 0 2026-03-10T10:19:44.835 INFO:tasks.workunit.client.0.vm02.stdout:9/527: creat da/d3c/d4c/d38/d82/d8c/fa8 x:0 0 0 2026-03-10T10:19:44.835 INFO:tasks.workunit.client.0.vm02.stdout:9/528: write da/d3c/d53/f6a [210078,35848] 0 2026-03-10T10:19:44.835 INFO:tasks.workunit.client.0.vm02.stdout:9/529: chown da/f25 1045132 1 2026-03-10T10:19:44.835 INFO:tasks.workunit.client.0.vm02.stdout:1/569: creat d4/d2c/d53/fb3 x:0 0 0 2026-03-10T10:19:44.842 INFO:tasks.workunit.client.0.vm02.stdout:0/577: creat d9/d18/d1a/d22/d24/fb6 x:0 0 0 2026-03-10T10:19:44.844 INFO:tasks.workunit.client.0.vm02.stdout:9/530: mknod da/d9d/ca9 0 2026-03-10T10:19:44.844 INFO:tasks.workunit.client.0.vm02.stdout:9/531: stat da/d3c/d4c/d38/la0 0 2026-03-10T10:19:44.854 INFO:tasks.workunit.client.0.vm02.stdout:1/570: dread d4/da/d27/d38/d3c/f8f [0,4194304] 0 2026-03-10T10:19:44.857 INFO:tasks.workunit.client.0.vm02.stdout:0/578: fsync d9/d34/d3d/d65/f84 0 2026-03-10T10:19:44.868 INFO:tasks.workunit.client.0.vm02.stdout:0/579: dread d9/d18/d1a/d22/d24/f26 [0,4194304] 0 2026-03-10T10:19:44.869 INFO:tasks.workunit.client.0.vm02.stdout:1/571: creat d4/da/d1a/d47/d78/fb4 x:0 0 0 2026-03-10T10:19:44.872 
INFO:tasks.workunit.client.0.vm02.stdout:1/572: dread d4/d1b/f4c [0,4194304] 0 2026-03-10T10:19:44.876 INFO:tasks.workunit.client.0.vm02.stdout:1/573: dwrite d4/d2c/d53/da6/fab [0,4194304] 0 2026-03-10T10:19:44.887 INFO:tasks.workunit.client.0.vm02.stdout:9/532: symlink da/d3c/d4c/d38/d4a/d99/laa 0 2026-03-10T10:19:44.887 INFO:tasks.workunit.client.0.vm02.stdout:9/533: chown da/f5c 248186 1 2026-03-10T10:19:44.888 INFO:tasks.workunit.client.0.vm02.stdout:1/574: read d4/da/d27/d38/d80/f94 [2136043,105705] 0 2026-03-10T10:19:44.898 INFO:tasks.workunit.client.0.vm02.stdout:1/575: rename d4/da/f9d to d4/da/d27/d38/d80/fb5 0 2026-03-10T10:19:44.909 INFO:tasks.workunit.client.0.vm02.stdout:1/576: stat d4/da/d1a/d47/fa0 0 2026-03-10T10:19:44.909 INFO:tasks.workunit.client.0.vm02.stdout:1/577: chown d4/da/c4d 10252903 1 2026-03-10T10:19:44.910 INFO:tasks.workunit.client.0.vm02.stdout:0/580: sync 2026-03-10T10:19:44.922 INFO:tasks.workunit.client.0.vm02.stdout:1/578: dread d4/d2c/d53/f6c [0,4194304] 0 2026-03-10T10:19:44.924 INFO:tasks.workunit.client.0.vm02.stdout:1/579: dread d4/d2c/d53/f74 [0,4194304] 0 2026-03-10T10:19:44.932 INFO:tasks.workunit.client.0.vm02.stdout:1/580: dread d4/f8 [0,4194304] 0 2026-03-10T10:19:44.936 INFO:tasks.workunit.client.0.vm02.stdout:2/548: getdents d0/d1a 0 2026-03-10T10:19:44.936 INFO:tasks.workunit.client.0.vm02.stdout:2/549: chown d0/d1a/d24/c83 6458 1 2026-03-10T10:19:44.938 INFO:tasks.workunit.client.1.vm05.stdout:1/618: write d4/d3d/f77 [1410581,60638] 0 2026-03-10T10:19:44.941 INFO:tasks.workunit.client.1.vm05.stdout:9/488: dwrite d0/d1/d13/de/d93/fa1 [0,4194304] 0 2026-03-10T10:19:44.943 INFO:tasks.workunit.client.0.vm02.stdout:5/699: dwrite d1/db/d11/d84/d40/d4f/f60 [0,4194304] 0 2026-03-10T10:19:44.944 INFO:tasks.workunit.client.0.vm02.stdout:6/531: dwrite d0/d8/d9/f30 [0,4194304] 0 2026-03-10T10:19:44.948 INFO:tasks.workunit.client.0.vm02.stdout:0/581: sync 2026-03-10T10:19:44.948 INFO:tasks.workunit.client.0.vm02.stdout:1/581: 
sync 2026-03-10T10:19:44.950 INFO:tasks.workunit.client.0.vm02.stdout:0/582: write d9/d34/d3d/f41 [3400417,51387] 0 2026-03-10T10:19:44.950 INFO:tasks.workunit.client.0.vm02.stdout:0/583: chown d9/l1c 21542 1 2026-03-10T10:19:44.954 INFO:tasks.workunit.client.0.vm02.stdout:1/582: dwrite d4/d1b/f44 [0,4194304] 0 2026-03-10T10:19:44.968 INFO:tasks.workunit.client.1.vm05.stdout:1/619: dwrite d4/d20/f49 [0,4194304] 0 2026-03-10T10:19:44.972 INFO:tasks.workunit.client.1.vm05.stdout:9/489: link d0/df/d11/f24 d0/d1/fa7 0 2026-03-10T10:19:44.982 INFO:tasks.workunit.client.1.vm05.stdout:1/620: mkdir d4/d39/d3e/db1/db8 0 2026-03-10T10:19:44.982 INFO:tasks.workunit.client.0.vm02.stdout:1/583: mknod d4/d4a/cb6 0 2026-03-10T10:19:44.985 INFO:tasks.workunit.client.1.vm05.stdout:9/490: creat d0/d1/d13/d62/fa8 x:0 0 0 2026-03-10T10:19:44.986 INFO:tasks.workunit.client.1.vm05.stdout:1/621: dwrite d4/df/d1c/d53/d66/f94 [0,4194304] 0 2026-03-10T10:19:44.994 INFO:tasks.workunit.client.0.vm02.stdout:3/558: dwrite d1/d8/d21/f2a [0,4194304] 0 2026-03-10T10:19:44.995 INFO:tasks.workunit.client.0.vm02.stdout:2/550: creat d0/d1a/d49/d5e/fb5 x:0 0 0 2026-03-10T10:19:45.000 INFO:tasks.workunit.client.1.vm05.stdout:1/622: chown d4/d3d/c93 3337 1 2026-03-10T10:19:45.003 INFO:tasks.workunit.client.0.vm02.stdout:1/584: creat d4/da/d27/d38/d80/fb7 x:0 0 0 2026-03-10T10:19:45.004 INFO:tasks.workunit.client.0.vm02.stdout:1/585: fsync d4/d2c/f54 0 2026-03-10T10:19:45.008 INFO:tasks.workunit.client.1.vm05.stdout:1/623: dwrite d4/d3d/f77 [0,4194304] 0 2026-03-10T10:19:45.010 INFO:tasks.workunit.client.0.vm02.stdout:2/551: dwrite d0/d1a/d49/d5e/fa0 [0,4194304] 0 2026-03-10T10:19:45.021 INFO:tasks.workunit.client.0.vm02.stdout:5/700: creat d1/db/d11/d16/d79/ff1 x:0 0 0 2026-03-10T10:19:45.029 INFO:tasks.workunit.client.0.vm02.stdout:2/552: creat d0/d10/da6/fb6 x:0 0 0 2026-03-10T10:19:45.029 INFO:tasks.workunit.client.0.vm02.stdout:2/553: fsync d0/f72 0 2026-03-10T10:19:45.030 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:44 vm02.local ceph-mon[50200]: mgrmap e22: vm05.coparq(active, since 1.13099s) 2026-03-10T10:19:45.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:44 vm02.local ceph-mon[50200]: pgmap v3: 65 pgs: 65 active+clean; 2.3 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-10T10:19:45.036 INFO:tasks.workunit.client.0.vm02.stdout:5/701: dread d1/db/d11/d84/fb2 [0,4194304] 0 2026-03-10T10:19:45.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:44 vm05.local ceph-mon[59051]: mgrmap e22: vm05.coparq(active, since 1.13099s) 2026-03-10T10:19:45.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:44 vm05.local ceph-mon[59051]: pgmap v3: 65 pgs: 65 active+clean; 2.3 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-10T10:19:45.039 INFO:tasks.workunit.client.0.vm02.stdout:2/554: creat d0/d71/fb7 x:0 0 0 2026-03-10T10:19:45.046 INFO:tasks.workunit.client.0.vm02.stdout:2/555: fsync d0/d10/d81/f9b 0 2026-03-10T10:19:45.046 INFO:tasks.workunit.client.1.vm05.stdout:1/624: dread d4/d37/d4e/f62 [0,4194304] 0 2026-03-10T10:19:45.046 INFO:tasks.workunit.client.1.vm05.stdout:1/625: readlink d4/d3d/d6e/l8f 0 2026-03-10T10:19:45.046 INFO:tasks.workunit.client.1.vm05.stdout:1/626: stat d4/d37/l87 0 2026-03-10T10:19:45.046 INFO:tasks.workunit.client.1.vm05.stdout:1/627: dread d4/df/d1c/f63 [4194304,4194304] 0 2026-03-10T10:19:45.056 INFO:tasks.workunit.client.0.vm02.stdout:5/702: unlink d1/db/d11/d13/lab 0 2026-03-10T10:19:45.064 INFO:tasks.workunit.client.0.vm02.stdout:2/556: mknod d0/d10/cb8 0 2026-03-10T10:19:45.077 INFO:tasks.workunit.client.1.vm05.stdout:1/628: symlink d4/d39/d3e/da0/lb9 0 2026-03-10T10:19:45.110 INFO:tasks.workunit.client.1.vm05.stdout:1/629: chown d4/df/d1c/d53/f6b 1 1 2026-03-10T10:19:45.110 INFO:tasks.workunit.client.1.vm05.stdout:1/630: creat d4/d37/d4e/d82/fba x:0 0 0 2026-03-10T10:19:45.110 INFO:tasks.workunit.client.1.vm05.stdout:1/631: write d4/df/d1c/d53/d66/fb3 
[677586,61666] 0 2026-03-10T10:19:45.110 INFO:tasks.workunit.client.1.vm05.stdout:1/632: chown d4/d3d/c93 62622154 1 2026-03-10T10:19:45.110 INFO:tasks.workunit.client.1.vm05.stdout:1/633: write d4/d37/d4e/f62 [12894284,60077] 0 2026-03-10T10:19:45.110 INFO:tasks.workunit.client.1.vm05.stdout:1/634: write d4/d37/d4e/f62 [4709123,26875] 0 2026-03-10T10:19:45.110 INFO:tasks.workunit.client.1.vm05.stdout:1/635: stat d4/d39/d3e/c47 0 2026-03-10T10:19:45.110 INFO:tasks.workunit.client.1.vm05.stdout:1/636: fdatasync d4/d20/f2d 0 2026-03-10T10:19:45.110 INFO:tasks.workunit.client.1.vm05.stdout:1/637: fdatasync d4/d3d/f57 0 2026-03-10T10:19:45.120 INFO:tasks.workunit.client.1.vm05.stdout:1/638: dread d4/d79/f8b [0,4194304] 0 2026-03-10T10:19:45.121 INFO:tasks.workunit.client.1.vm05.stdout:1/639: dread d4/df/d1c/d92/f9e [0,4194304] 0 2026-03-10T10:19:45.199 INFO:tasks.workunit.client.1.vm05.stdout:7/565: write d5/d26/f92 [931947,67394] 0 2026-03-10T10:19:45.206 INFO:tasks.workunit.client.1.vm05.stdout:7/566: symlink d5/d1d/d20/d2d/d5d/d7a/laf 0 2026-03-10T10:19:45.208 INFO:tasks.workunit.client.1.vm05.stdout:7/567: creat d5/d1d/d20/d2d/fb0 x:0 0 0 2026-03-10T10:19:45.235 INFO:tasks.workunit.client.1.vm05.stdout:7/568: creat d5/d17/fb1 x:0 0 0 2026-03-10T10:19:45.243 INFO:tasks.workunit.client.0.vm02.stdout:7/543: dwrite d1/dc/d60/f79 [0,4194304] 0 2026-03-10T10:19:45.248 INFO:tasks.workunit.client.0.vm02.stdout:7/544: dwrite d1/d1b/d8f/f59 [0,4194304] 0 2026-03-10T10:19:45.263 INFO:tasks.workunit.client.0.vm02.stdout:8/536: write d1/d1c/f33 [5023631,120240] 0 2026-03-10T10:19:45.263 INFO:tasks.workunit.client.0.vm02.stdout:4/689: write d1/d41/d5e/d78/d7f/f74 [816008,38349] 0 2026-03-10T10:19:45.264 INFO:tasks.workunit.client.0.vm02.stdout:4/690: chown d1/d41/d5e/d78/d7f/lcb 1 1 2026-03-10T10:19:45.264 INFO:tasks.workunit.client.0.vm02.stdout:4/691: stat d1/d41/d5e/d78/d7f/f74 0 2026-03-10T10:19:45.273 INFO:tasks.workunit.client.0.vm02.stdout:8/537: stat d1/d1c/d43/d5b/c95 0 
2026-03-10T10:19:45.282 INFO:tasks.workunit.client.0.vm02.stdout:4/692: symlink d1/d75/ddd/le3 0 2026-03-10T10:19:45.283 INFO:tasks.workunit.client.0.vm02.stdout:8/538: dwrite d1/d1c/d23/d25/f76 [0,4194304] 0 2026-03-10T10:19:45.285 INFO:tasks.workunit.client.0.vm02.stdout:9/534: dwrite da/d3c/d4c/d56/f77 [0,4194304] 0 2026-03-10T10:19:45.317 INFO:tasks.workunit.client.1.vm05.stdout:8/454: rename d7/d14/l74 to d7/d14/d24/d3f/l86 0 2026-03-10T10:19:45.321 INFO:tasks.workunit.client.1.vm05.stdout:6/531: unlink dd/d36/d3f/d12/f56 0 2026-03-10T10:19:45.323 INFO:tasks.workunit.client.1.vm05.stdout:0/560: rename d1/d2/d39/d6e/d95/db4 to d1/d2/d9/d31/d12/d20/dbe 0 2026-03-10T10:19:45.325 INFO:tasks.workunit.client.1.vm05.stdout:6/532: fdatasync dd/d36/d3f/d12/d44/d2a/d3d/d48/f75 0 2026-03-10T10:19:45.328 INFO:tasks.workunit.client.1.vm05.stdout:4/424: rename d1/d70/l7f to d1/l88 0 2026-03-10T10:19:45.330 INFO:tasks.workunit.client.1.vm05.stdout:6/533: fdatasync dd/d36/d3f/d12/d44/d2a/fa5 0 2026-03-10T10:19:45.331 INFO:tasks.workunit.client.0.vm02.stdout:8/539: mknod d1/ca0 0 2026-03-10T10:19:45.333 INFO:tasks.workunit.client.1.vm05.stdout:6/534: symlink dd/d36/d3f/d12/d58/lad 0 2026-03-10T10:19:45.333 INFO:tasks.workunit.client.1.vm05.stdout:4/425: dwrite d1/d3/f5 [4194304,4194304] 0 2026-03-10T10:19:45.335 INFO:tasks.workunit.client.1.vm05.stdout:5/575: rename da/d9a/l9b to da/db/d26/lc4 0 2026-03-10T10:19:45.336 INFO:tasks.workunit.client.1.vm05.stdout:1/640: rename d4/df/c32 to d4/d79/cbb 0 2026-03-10T10:19:45.336 INFO:tasks.workunit.client.1.vm05.stdout:4/426: creat d1/d31/dc/d40/d63/f89 x:0 0 0 2026-03-10T10:19:45.337 INFO:tasks.workunit.client.1.vm05.stdout:1/641: mknod d4/df/d1c/d53/d66/cbc 0 2026-03-10T10:19:45.338 INFO:tasks.workunit.client.1.vm05.stdout:1/642: write d4/df/d1c/d53/d66/fb3 [130877,9282] 0 2026-03-10T10:19:45.355 INFO:tasks.workunit.client.1.vm05.stdout:5/576: rename da/f78 to da/db/d26/d5c/fc5 0 2026-03-10T10:19:45.355 
INFO:tasks.workunit.client.1.vm05.stdout:1/643: chown d4/dd/l19 234 1 2026-03-10T10:19:45.355 INFO:tasks.workunit.client.1.vm05.stdout:4/427: unlink d1/d31/dc/d40/d45/l5e 0 2026-03-10T10:19:45.355 INFO:tasks.workunit.client.1.vm05.stdout:6/535: truncate dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f6b 458714 0 2026-03-10T10:19:45.355 INFO:tasks.workunit.client.1.vm05.stdout:5/577: fsync da/db/d26/d5c/d4b/db4/fc0 0 2026-03-10T10:19:45.355 INFO:tasks.workunit.client.1.vm05.stdout:1/644: chown d4/df/d1c/d53 0 1 2026-03-10T10:19:45.355 INFO:tasks.workunit.client.1.vm05.stdout:1/645: write d4/df/d1c/d53/d66/f94 [2629948,97030] 0 2026-03-10T10:19:45.355 INFO:tasks.workunit.client.1.vm05.stdout:4/428: dread d1/d31/dc/d40/f67 [4194304,4194304] 0 2026-03-10T10:19:45.355 INFO:tasks.workunit.client.1.vm05.stdout:6/536: dwrite dd/d36/d3f/d12/d58/f5a [0,4194304] 0 2026-03-10T10:19:45.368 INFO:tasks.workunit.client.1.vm05.stdout:4/429: dwrite d1/d64/f84 [0,4194304] 0 2026-03-10T10:19:45.369 INFO:tasks.workunit.client.1.vm05.stdout:6/537: fsync dd/d1b/f1d 0 2026-03-10T10:19:45.373 INFO:tasks.workunit.client.1.vm05.stdout:6/538: dwrite dd/d36/d3f/f61 [0,4194304] 0 2026-03-10T10:19:45.385 INFO:tasks.workunit.client.1.vm05.stdout:6/539: creat dd/d36/d3f/d12/d44/daa/fae x:0 0 0 2026-03-10T10:19:45.398 INFO:tasks.workunit.client.0.vm02.stdout:8/540: dread d1/d1c/d24/d35/f6e [0,4194304] 0 2026-03-10T10:19:45.401 INFO:tasks.workunit.client.0.vm02.stdout:8/541: chown d1/d1c/c3a 273231564 1 2026-03-10T10:19:45.401 INFO:tasks.workunit.client.0.vm02.stdout:8/542: read d1/d2/f36 [2497634,45728] 0 2026-03-10T10:19:45.409 INFO:tasks.workunit.client.1.vm05.stdout:4/430: dread f0 [0,4194304] 0 2026-03-10T10:19:45.410 INFO:tasks.workunit.client.1.vm05.stdout:4/431: rmdir d1/d31/d4b 39 2026-03-10T10:19:45.411 INFO:tasks.workunit.client.1.vm05.stdout:4/432: fdatasync d1/f39 0 2026-03-10T10:19:45.414 INFO:tasks.workunit.client.1.vm05.stdout:8/455: dread f2 [0,4194304] 0 2026-03-10T10:19:45.415 
INFO:tasks.workunit.client.1.vm05.stdout:8/456: dread - d7/f44 zero size 2026-03-10T10:19:45.415 INFO:tasks.workunit.client.1.vm05.stdout:8/457: chown d7/d14/d3a/f5f 2637 1 2026-03-10T10:19:45.416 INFO:tasks.workunit.client.1.vm05.stdout:1/646: sync 2026-03-10T10:19:45.417 INFO:tasks.workunit.client.1.vm05.stdout:8/458: rename d7/d14/d3a/l70 to d7/d14/d24/d3f/d4f/l87 0 2026-03-10T10:19:45.436 INFO:tasks.workunit.client.1.vm05.stdout:1/647: sync 2026-03-10T10:19:45.436 INFO:tasks.workunit.client.1.vm05.stdout:8/459: sync 2026-03-10T10:19:45.438 INFO:tasks.workunit.client.1.vm05.stdout:8/460: fsync d7/d14/d3a/f5f 0 2026-03-10T10:19:45.439 INFO:tasks.workunit.client.1.vm05.stdout:1/648: truncate d4/d79/f8d 942448 0 2026-03-10T10:19:45.440 INFO:tasks.workunit.client.0.vm02.stdout:6/532: dwrite d0/d8/d9/f84 [0,4194304] 0 2026-03-10T10:19:45.443 INFO:tasks.workunit.client.1.vm05.stdout:8/461: symlink d7/d14/d3a/d49/d65/l88 0 2026-03-10T10:19:45.444 INFO:tasks.workunit.client.0.vm02.stdout:6/533: rename d0/d8/d9/c1b to d0/d8/d29/d52/cad 0 2026-03-10T10:19:45.447 INFO:tasks.workunit.client.1.vm05.stdout:1/649: unlink d4/d20/f49 0 2026-03-10T10:19:45.450 INFO:tasks.workunit.client.0.vm02.stdout:6/534: symlink d0/d8/d8c/lae 0 2026-03-10T10:19:45.451 INFO:tasks.workunit.client.1.vm05.stdout:1/650: mknod d4/df/d1c/d92/cbd 0 2026-03-10T10:19:45.453 INFO:tasks.workunit.client.0.vm02.stdout:6/535: symlink d0/d8/d29/d52/laf 0 2026-03-10T10:19:45.453 INFO:tasks.workunit.client.0.vm02.stdout:6/536: readlink d0/d8/d29/d6d/l3e 0 2026-03-10T10:19:45.454 INFO:tasks.workunit.client.1.vm05.stdout:1/651: write d4/d20/f2c [7540329,74025] 0 2026-03-10T10:19:45.455 INFO:tasks.workunit.client.0.vm02.stdout:6/537: read d0/f5d [5174,2378] 0 2026-03-10T10:19:45.456 INFO:tasks.workunit.client.0.vm02.stdout:6/538: write d0/d8/d29/d2f/d4b/f53 [4639035,24389] 0 2026-03-10T10:19:45.466 INFO:tasks.workunit.client.0.vm02.stdout:7/545: unlink d1/dc/d16/d28/c46 0 2026-03-10T10:19:45.473 
INFO:tasks.workunit.client.0.vm02.stdout:6/539: symlink d0/d8/d29/d6d/d96/lb0 0 2026-03-10T10:19:45.473 INFO:tasks.workunit.client.1.vm05.stdout:8/462: dread d7/f21 [0,4194304] 0 2026-03-10T10:19:45.473 INFO:tasks.workunit.client.0.vm02.stdout:7/546: symlink d1/dc/d16/d28/d2c/la9 0 2026-03-10T10:19:45.474 INFO:tasks.workunit.client.1.vm05.stdout:8/463: write d7/d14/d24/d3f/d6a/f6c [5012,90241] 0 2026-03-10T10:19:45.474 INFO:tasks.workunit.client.1.vm05.stdout:8/464: stat d7/d14/d24/f7a 0 2026-03-10T10:19:45.475 INFO:tasks.workunit.client.0.vm02.stdout:6/540: read - d0/d8/d29/d2f/f8e zero size 2026-03-10T10:19:45.477 INFO:tasks.workunit.client.1.vm05.stdout:8/465: readlink d7/d14/d3a/d49/l85 0 2026-03-10T10:19:45.477 INFO:tasks.workunit.client.0.vm02.stdout:7/547: creat d1/dc/d16/faa x:0 0 0 2026-03-10T10:19:45.477 INFO:tasks.workunit.client.0.vm02.stdout:7/548: fdatasync d1/f80 0 2026-03-10T10:19:45.478 INFO:tasks.workunit.client.0.vm02.stdout:6/541: read d0/d8/d29/d2f/d4b/f26 [1215319,127557] 0 2026-03-10T10:19:45.479 INFO:tasks.workunit.client.1.vm05.stdout:8/466: mknod d7/d14/d15/c89 0 2026-03-10T10:19:45.480 INFO:tasks.workunit.client.0.vm02.stdout:7/549: fsync d1/d1b/f72 0 2026-03-10T10:19:45.483 INFO:tasks.workunit.client.1.vm05.stdout:3/562: mknod dd/d15/cca 0 2026-03-10T10:19:45.483 INFO:tasks.workunit.client.0.vm02.stdout:7/550: rmdir d1/dc/d44 39 2026-03-10T10:19:45.484 INFO:tasks.workunit.client.0.vm02.stdout:7/551: truncate d1/dc/d10/d38/f96 1662593 0 2026-03-10T10:19:45.484 INFO:tasks.workunit.client.0.vm02.stdout:7/552: chown d1/dc/d99/la2 1309851431 1 2026-03-10T10:19:45.485 INFO:tasks.workunit.client.0.vm02.stdout:7/553: write d1/dc/d55/f64 [414219,2696] 0 2026-03-10T10:19:45.486 INFO:tasks.workunit.client.1.vm05.stdout:3/563: fdatasync dd/d15/d24/d2c/d3b/f40 0 2026-03-10T10:19:45.488 INFO:tasks.workunit.client.0.vm02.stdout:6/542: getdents d0/d8/d29/d2f 0 2026-03-10T10:19:45.488 INFO:tasks.workunit.client.1.vm05.stdout:8/467: sync 
2026-03-10T10:19:45.492 INFO:tasks.workunit.client.1.vm05.stdout:8/468: mkdir d7/d14/d24/d3f/d6a/d8a 0 2026-03-10T10:19:45.493 INFO:tasks.workunit.client.1.vm05.stdout:3/564: truncate dd/d15/d24/d2c/f3c 2669377 0 2026-03-10T10:19:45.494 INFO:tasks.workunit.client.1.vm05.stdout:8/469: creat d7/d14/d15/d3b/f8b x:0 0 0 2026-03-10T10:19:45.498 INFO:tasks.workunit.client.0.vm02.stdout:3/559: rmdir d1/d58 39 2026-03-10T10:19:45.498 INFO:tasks.workunit.client.1.vm05.stdout:9/491: write d0/f45 [4578156,48186] 0 2026-03-10T10:19:45.498 INFO:tasks.workunit.client.1.vm05.stdout:8/470: stat d7/d2f/f4b 0 2026-03-10T10:19:45.501 INFO:tasks.workunit.client.0.vm02.stdout:3/560: dwrite d1/d8/d86/f9b [0,4194304] 0 2026-03-10T10:19:45.519 INFO:tasks.workunit.client.0.vm02.stdout:3/561: creat d1/d8/d21/d73/d78/d84/fb4 x:0 0 0 2026-03-10T10:19:45.519 INFO:tasks.workunit.client.0.vm02.stdout:3/562: chown d1/d6/d8e 11837 1 2026-03-10T10:19:45.521 INFO:tasks.workunit.client.1.vm05.stdout:8/471: write d7/f11 [3503341,12304] 0 2026-03-10T10:19:45.521 INFO:tasks.workunit.client.0.vm02.stdout:3/563: creat d1/d6/d8e/fb5 x:0 0 0 2026-03-10T10:19:45.526 INFO:tasks.workunit.client.1.vm05.stdout:8/472: rename d7/d14/d24/d3f/d4f/c53 to d7/d14/d3a/c8c 0 2026-03-10T10:19:45.540 INFO:tasks.workunit.client.0.vm02.stdout:0/584: truncate d9/f6c 810720 0 2026-03-10T10:19:45.542 INFO:tasks.workunit.client.0.vm02.stdout:0/585: dread d9/d18/d1a/d22/d24/d8e/d91/fb0 [0,4194304] 0 2026-03-10T10:19:45.544 INFO:tasks.workunit.client.0.vm02.stdout:0/586: mkdir d9/d18/d1a/d46/d5d/da7/db7 0 2026-03-10T10:19:45.544 INFO:tasks.workunit.client.0.vm02.stdout:0/587: chown d9/d18/d1a/d22/d24/d80/d74/d7f 52974 1 2026-03-10T10:19:45.564 INFO:tasks.workunit.client.0.vm02.stdout:3/564: sync 2026-03-10T10:19:45.565 INFO:tasks.workunit.client.1.vm05.stdout:8/473: dread d7/d14/f5b [0,4194304] 0 2026-03-10T10:19:45.566 INFO:tasks.workunit.client.1.vm05.stdout:2/495: dread db/f26 [0,4194304] 0 2026-03-10T10:19:45.567 
INFO:tasks.workunit.client.1.vm05.stdout:8/474: dread f2 [0,4194304] 0 2026-03-10T10:19:45.568 INFO:tasks.workunit.client.1.vm05.stdout:2/496: write db/d28/d4f/d59/f7c [189527,41748] 0 2026-03-10T10:19:45.568 INFO:tasks.workunit.client.1.vm05.stdout:8/475: chown d7/d14/d24/l81 918 1 2026-03-10T10:19:45.569 INFO:tasks.workunit.client.0.vm02.stdout:3/565: creat d1/fb6 x:0 0 0 2026-03-10T10:19:45.573 INFO:tasks.workunit.client.1.vm05.stdout:8/476: sync 2026-03-10T10:19:45.573 INFO:tasks.workunit.client.1.vm05.stdout:8/477: read d7/d14/d62/f69 [790766,100114] 0 2026-03-10T10:19:45.574 INFO:tasks.workunit.client.1.vm05.stdout:8/478: readlink d7/d2f/d57/l60 0 2026-03-10T10:19:45.594 INFO:tasks.workunit.client.1.vm05.stdout:2/497: mkdir db/d28/d4f/d8b/d9a 0 2026-03-10T10:19:45.596 INFO:tasks.workunit.client.0.vm02.stdout:1/586: write d4/d2c/d53/f74 [2600936,126746] 0 2026-03-10T10:19:45.601 INFO:tasks.workunit.client.0.vm02.stdout:2/557: write d0/f1b [343131,45718] 0 2026-03-10T10:19:45.607 INFO:tasks.workunit.client.0.vm02.stdout:1/587: mkdir d4/d2c/d53/da6/db8 0 2026-03-10T10:19:45.613 INFO:tasks.workunit.client.1.vm05.stdout:2/498: rename db/f24 to db/d1c/f9b 0 2026-03-10T10:19:45.616 INFO:tasks.workunit.client.0.vm02.stdout:1/588: fsync d4/da/fb2 0 2026-03-10T10:19:45.623 INFO:tasks.workunit.client.0.vm02.stdout:1/589: creat d4/d4a/da5/fb9 x:0 0 0 2026-03-10T10:19:45.625 INFO:tasks.workunit.client.0.vm02.stdout:1/590: dread - d4/da/d1a/d47/d65/f9a zero size 2026-03-10T10:19:45.626 INFO:tasks.workunit.client.1.vm05.stdout:7/569: dwrite d5/d1d/d20/d35/f37 [0,4194304] 0 2026-03-10T10:19:45.627 INFO:tasks.workunit.client.1.vm05.stdout:8/479: getdents d7/d14/d3a/d49 0 2026-03-10T10:19:45.627 INFO:tasks.workunit.client.1.vm05.stdout:8/480: chown d7/f44 102065 1 2026-03-10T10:19:45.635 INFO:tasks.workunit.client.0.vm02.stdout:1/591: fdatasync d4/f21 0 2026-03-10T10:19:45.653 INFO:tasks.workunit.client.1.vm05.stdout:7/570: dread d5/d26/f39 [0,4194304] 0 
2026-03-10T10:19:45.658 INFO:tasks.workunit.client.1.vm05.stdout:7/571: rename d5/d1d/d20/d77 to d5/d26/db2 0 2026-03-10T10:19:45.659 INFO:tasks.workunit.client.0.vm02.stdout:1/592: sync 2026-03-10T10:19:45.662 INFO:tasks.workunit.client.1.vm05.stdout:7/572: rename d5/c6 to d5/d1d/d29/cb3 0 2026-03-10T10:19:45.663 INFO:tasks.workunit.client.0.vm02.stdout:1/593: truncate d4/da/d27/d38/f4e 3643596 0 2026-03-10T10:19:45.664 INFO:tasks.workunit.client.0.vm02.stdout:1/594: write d4/da/d27/d38/d3c/fa7 [759588,41988] 0 2026-03-10T10:19:45.672 INFO:tasks.workunit.client.1.vm05.stdout:0/561: write d1/d2/d9/d31/d13/f3e [1056448,87820] 0 2026-03-10T10:19:45.672 INFO:tasks.workunit.client.0.vm02.stdout:9/535: write da/d3c/d4c/d38/d4a/f7f [811515,18772] 0 2026-03-10T10:19:45.682 INFO:tasks.workunit.client.1.vm05.stdout:5/578: dwrite da/db/d28/d8a/fa0 [0,4194304] 0 2026-03-10T10:19:45.687 INFO:tasks.workunit.client.1.vm05.stdout:0/562: rename d1/d2/d39/d3d/f7b to d1/d2/d9/d50/d9a/fbf 0 2026-03-10T10:19:45.691 INFO:tasks.workunit.client.1.vm05.stdout:6/540: write dd/d36/d3f/f6f [2609978,15252] 0 2026-03-10T10:19:45.701 INFO:tasks.workunit.client.1.vm05.stdout:0/563: mkdir d1/d2/d39/d6e/dc0 0 2026-03-10T10:19:45.704 INFO:tasks.workunit.client.0.vm02.stdout:8/543: write d1/f6d [5069779,93723] 0 2026-03-10T10:19:45.713 INFO:tasks.workunit.client.0.vm02.stdout:8/544: dread d1/f65 [0,4194304] 0 2026-03-10T10:19:45.717 INFO:tasks.workunit.client.1.vm05.stdout:4/433: dwrite f0 [0,4194304] 0 2026-03-10T10:19:45.734 INFO:tasks.workunit.client.1.vm05.stdout:6/541: symlink dd/d36/d3f/d12/d44/d2a/laf 0 2026-03-10T10:19:45.737 INFO:tasks.workunit.client.1.vm05.stdout:4/434: read - d1/d31/d4b/f51 zero size 2026-03-10T10:19:45.751 INFO:tasks.workunit.client.1.vm05.stdout:8/481: rmdir d7/d14/d3a/d49/d65 39 2026-03-10T10:19:45.752 INFO:tasks.workunit.client.1.vm05.stdout:8/482: stat d7/d2f/d57/f66 0 2026-03-10T10:19:45.753 INFO:tasks.workunit.client.0.vm02.stdout:6/543: getdents d0/d8/d9 0 
2026-03-10T10:19:45.755 INFO:tasks.workunit.client.1.vm05.stdout:8/483: dwrite d7/f78 [0,4194304] 0 2026-03-10T10:19:45.759 INFO:tasks.workunit.client.1.vm05.stdout:6/542: creat dd/d36/d3f/d12/d44/d2a/fb0 x:0 0 0 2026-03-10T10:19:45.769 INFO:tasks.workunit.client.0.vm02.stdout:5/703: dread d1/f12 [0,4194304] 0 2026-03-10T10:19:45.771 INFO:tasks.workunit.client.0.vm02.stdout:4/693: dread d1/d41/d5e/d78/d1a/f4c [4194304,4194304] 0 2026-03-10T10:19:45.772 INFO:tasks.workunit.client.0.vm02.stdout:4/694: chown d1/ld5 27613938 1 2026-03-10T10:19:45.780 INFO:tasks.workunit.client.1.vm05.stdout:1/652: read d4/f36 [5165023,48328] 0 2026-03-10T10:19:45.785 INFO:tasks.workunit.client.0.vm02.stdout:7/554: dwrite d1/dc/f69 [0,4194304] 0 2026-03-10T10:19:45.785 INFO:tasks.workunit.client.0.vm02.stdout:7/555: read - d1/dc/d16/f95 zero size 2026-03-10T10:19:45.790 INFO:tasks.workunit.client.0.vm02.stdout:7/556: dwrite d1/dc/d16/d28/d2c/f8a [0,4194304] 0 2026-03-10T10:19:45.810 INFO:tasks.workunit.client.0.vm02.stdout:7/557: dread d1/d1b/f43 [0,4194304] 0 2026-03-10T10:19:45.824 INFO:tasks.workunit.client.1.vm05.stdout:8/484: mknod d7/d14/d15/d3b/c8d 0 2026-03-10T10:19:45.825 INFO:tasks.workunit.client.1.vm05.stdout:6/543: fsync dd/d36/d7d/f8a 0 2026-03-10T10:19:45.825 INFO:tasks.workunit.client.0.vm02.stdout:1/595: getdents d4/da/d1a/d22 0 2026-03-10T10:19:45.825 INFO:tasks.workunit.client.0.vm02.stdout:1/596: stat d4/da/d27/d38/d3c/cb1 0 2026-03-10T10:19:45.830 INFO:tasks.workunit.client.1.vm05.stdout:3/565: dwrite dd/d15/d1f/f53 [0,4194304] 0 2026-03-10T10:19:45.844 INFO:tasks.workunit.client.0.vm02.stdout:5/704: symlink d1/d6a/lf2 0 2026-03-10T10:19:45.845 INFO:tasks.workunit.client.0.vm02.stdout:5/705: chown d1/db/d11/d84/d40/d4f/l8e 645 1 2026-03-10T10:19:45.847 INFO:tasks.workunit.client.1.vm05.stdout:3/566: rmdir dd/d39/d5f 39 2026-03-10T10:19:45.849 INFO:tasks.workunit.client.0.vm02.stdout:4/695: chown d1/d52/d53/ca4 3411529 1 2026-03-10T10:19:45.850 
INFO:tasks.workunit.client.1.vm05.stdout:9/492: dwrite d0/d1/d13/d26/f43 [0,4194304] 0 2026-03-10T10:19:45.852 INFO:tasks.workunit.client.1.vm05.stdout:6/544: link dd/d36/d3f/d12/d96/f9a dd/d36/d3f/d12/d59/fb1 0 2026-03-10T10:19:45.853 INFO:tasks.workunit.client.1.vm05.stdout:6/545: truncate dd/d36/d3f/d12/d58/f5a 4412401 0 2026-03-10T10:19:45.856 INFO:tasks.workunit.client.1.vm05.stdout:3/567: mknod dd/d20/d9e/ccb 0 2026-03-10T10:19:45.856 INFO:tasks.workunit.client.1.vm05.stdout:3/568: fsync dd/d15/d1f/dae/fc7 0 2026-03-10T10:19:45.862 INFO:tasks.workunit.client.0.vm02.stdout:7/558: mknod d1/d1b/d8f/d67/cab 0 2026-03-10T10:19:45.864 INFO:tasks.workunit.client.1.vm05.stdout:6/546: creat dd/d36/d3f/d12/d44/d2a/d3d/d48/fb2 x:0 0 0 2026-03-10T10:19:45.865 INFO:tasks.workunit.client.0.vm02.stdout:9/536: getdents da/d3c/d4c/d2c/d96 0 2026-03-10T10:19:45.865 INFO:tasks.workunit.client.0.vm02.stdout:9/537: dread - da/d3c/d4c/d2c/d34/f68 zero size 2026-03-10T10:19:45.868 INFO:tasks.workunit.client.1.vm05.stdout:3/569: dread - dd/d39/d5f/fb8 zero size 2026-03-10T10:19:45.869 INFO:tasks.workunit.client.0.vm02.stdout:1/597: sync 2026-03-10T10:19:45.869 INFO:tasks.workunit.client.0.vm02.stdout:1/598: fsync d4/da/d27/d38/d80/fb7 0 2026-03-10T10:19:45.870 INFO:tasks.workunit.client.1.vm05.stdout:3/570: read f6 [4302348,94098] 0 2026-03-10T10:19:45.871 INFO:tasks.workunit.client.0.vm02.stdout:8/545: mknod d1/d1c/d23/d25/ca1 0 2026-03-10T10:19:45.872 INFO:tasks.workunit.client.0.vm02.stdout:8/546: chown d1/d1c/d43/d5b/d88 0 1 2026-03-10T10:19:45.872 INFO:tasks.workunit.client.0.vm02.stdout:8/547: chown d1/d1c/d43/d5b/c95 130868164 1 2026-03-10T10:19:45.873 INFO:tasks.workunit.client.1.vm05.stdout:3/571: dread dd/d39/d66/fad [0,4194304] 0 2026-03-10T10:19:45.874 INFO:tasks.workunit.client.1.vm05.stdout:3/572: chown dd/d15/d24/d74/fb0 8 1 2026-03-10T10:19:45.874 INFO:tasks.workunit.client.1.vm05.stdout:3/573: fsync dd/d15/d24/f8a 0 2026-03-10T10:19:45.876 
INFO:tasks.workunit.client.0.vm02.stdout:8/548: dwrite d1/d1c/d23/d25/f76 [0,4194304] 0 2026-03-10T10:19:45.883 INFO:tasks.workunit.client.1.vm05.stdout:8/485: sync 2026-03-10T10:19:45.884 INFO:tasks.workunit.client.1.vm05.stdout:8/486: write d7/d14/d15/f1f [9689416,43659] 0 2026-03-10T10:19:45.884 INFO:tasks.workunit.client.0.vm02.stdout:4/696: unlink d1/d41/d5e/d78/c50 0 2026-03-10T10:19:45.887 INFO:tasks.workunit.client.1.vm05.stdout:8/487: dread d7/d14/d3a/d49/f54 [0,4194304] 0 2026-03-10T10:19:45.897 INFO:tasks.workunit.client.0.vm02.stdout:0/588: dwrite d9/d18/d1a/d22/d24/f4f [0,4194304] 0 2026-03-10T10:19:45.900 INFO:tasks.workunit.client.0.vm02.stdout:5/706: dread d1/db/f15 [0,4194304] 0 2026-03-10T10:19:45.915 INFO:tasks.workunit.client.1.vm05.stdout:3/574: symlink dd/d39/d5f/lcc 0 2026-03-10T10:19:45.921 INFO:tasks.workunit.client.1.vm05.stdout:9/493: dread d0/d1/fb [0,4194304] 0 2026-03-10T10:19:45.921 INFO:tasks.workunit.client.1.vm05.stdout:6/547: getdents dd/d36/d3f/d12/d44/d2a/d77/d8b 0 2026-03-10T10:19:45.932 INFO:tasks.workunit.client.0.vm02.stdout:8/549: creat d1/d1c/d24/d71/fa2 x:0 0 0 2026-03-10T10:19:45.939 INFO:tasks.workunit.client.0.vm02.stdout:4/697: creat d1/d75/ddd/fe4 x:0 0 0 2026-03-10T10:19:45.939 INFO:tasks.workunit.client.0.vm02.stdout:0/589: fsync d9/d34/d3d/d65/f7a 0 2026-03-10T10:19:45.939 INFO:tasks.workunit.client.0.vm02.stdout:0/590: readlink d9/l2b 0 2026-03-10T10:19:45.940 INFO:tasks.workunit.client.1.vm05.stdout:8/488: dwrite d7/d14/d24/f2c [0,4194304] 0 2026-03-10T10:19:45.943 INFO:tasks.workunit.client.0.vm02.stdout:5/707: rename d1/db/d11/d16/d79 to d1/db/d11/d16/d79/d85/df3 22 2026-03-10T10:19:45.947 INFO:tasks.workunit.client.0.vm02.stdout:7/559: creat d1/dc/d44/d5f/fac x:0 0 0 2026-03-10T10:19:45.949 INFO:tasks.workunit.client.1.vm05.stdout:6/548: sync 2026-03-10T10:19:45.952 INFO:tasks.workunit.client.1.vm05.stdout:8/489: dread d7/d14/d62/f69 [0,4194304] 0 2026-03-10T10:19:45.952 
INFO:tasks.workunit.client.1.vm05.stdout:6/549: dread dd/d36/d3f/d12/d58/f5a [0,4194304] 0 2026-03-10T10:19:45.953 INFO:tasks.workunit.client.1.vm05.stdout:3/575: truncate dd/d39/f51 376344 0 2026-03-10T10:19:45.953 INFO:tasks.workunit.client.1.vm05.stdout:6/550: dread - dd/d36/d3f/d12/fa6 zero size 2026-03-10T10:19:45.959 INFO:tasks.workunit.client.0.vm02.stdout:8/550: dwrite d1/f91 [0,4194304] 0 2026-03-10T10:19:45.960 INFO:tasks.workunit.client.0.vm02.stdout:8/551: stat d1/d1c/l2c 0 2026-03-10T10:19:45.963 INFO:tasks.workunit.client.0.vm02.stdout:4/698: write d1/d32/fd3 [3188539,4428] 0 2026-03-10T10:19:45.966 INFO:tasks.workunit.client.1.vm05.stdout:8/490: write d7/d14/f5b [9325644,105284] 0 2026-03-10T10:19:45.966 INFO:tasks.workunit.client.1.vm05.stdout:8/491: chown d7/d2f 260 1 2026-03-10T10:19:45.971 INFO:tasks.workunit.client.1.vm05.stdout:8/492: fdatasync d7/d2f/f7f 0 2026-03-10T10:19:45.972 INFO:tasks.workunit.client.0.vm02.stdout:0/591: read d9/d18/d1a/d22/d24/f26 [935861,35452] 0 2026-03-10T10:19:45.973 INFO:tasks.workunit.client.0.vm02.stdout:0/592: truncate d9/d18/d1a/d46/d5d/da7/fb2 1534542 0 2026-03-10T10:19:45.977 INFO:tasks.workunit.client.1.vm05.stdout:6/551: unlink dd/d36/d7d/f8a 0 2026-03-10T10:19:45.983 INFO:tasks.workunit.client.1.vm05.stdout:6/552: chown dd/d1b 113 1 2026-03-10T10:19:45.989 INFO:tasks.workunit.client.1.vm05.stdout:8/493: symlink d7/d2f/d57/l8e 0 2026-03-10T10:19:45.994 INFO:tasks.workunit.client.0.vm02.stdout:3/566: dwrite d1/d8/d21/d73/f7e [0,4194304] 0 2026-03-10T10:19:45.996 INFO:tasks.workunit.client.0.vm02.stdout:2/558: dwrite d0/d1a/f25 [0,4194304] 0 2026-03-10T10:19:45.996 INFO:tasks.workunit.client.0.vm02.stdout:2/559: stat d0/d1a/d24/c83 0 2026-03-10T10:19:46.008 INFO:tasks.workunit.client.1.vm05.stdout:6/553: symlink dd/d36/d3f/d12/d44/d2a/d77/lb3 0 2026-03-10T10:19:46.012 INFO:tasks.workunit.client.1.vm05.stdout:9/494: getdents d0/d1/d13/d62 0 2026-03-10T10:19:46.014 
INFO:tasks.workunit.client.0.vm02.stdout:4/699: truncate d1/d75/ddd/fa6 829309 0 2026-03-10T10:19:46.019 INFO:tasks.workunit.client.0.vm02.stdout:0/593: fsync d9/d18/d1a/d22/d24/f26 0 2026-03-10T10:19:46.027 INFO:tasks.workunit.client.0.vm02.stdout:9/538: getdents da/d3c/d4c/d38/d82/d8c 0 2026-03-10T10:19:46.031 INFO:tasks.workunit.client.1.vm05.stdout:6/554: unlink dd/d36/d3f/d12/d44/d2a/c43 0 2026-03-10T10:19:46.034 INFO:tasks.workunit.client.1.vm05.stdout:9/495: symlink d0/d1/la9 0 2026-03-10T10:19:46.036 INFO:tasks.workunit.client.1.vm05.stdout:2/499: write db/d28/d4f/d59/f8d [803852,119676] 0 2026-03-10T10:19:46.037 INFO:tasks.workunit.client.1.vm05.stdout:7/573: write d5/dd/f1a [4390136,9945] 0 2026-03-10T10:19:46.039 INFO:tasks.workunit.client.1.vm05.stdout:0/564: write d1/d2/d9/f98 [788487,74207] 0 2026-03-10T10:19:46.042 INFO:tasks.workunit.client.1.vm05.stdout:5/579: dwrite da/db/d28/d97/f8e [0,4194304] 0 2026-03-10T10:19:46.060 INFO:tasks.workunit.client.1.vm05.stdout:4/435: write d1/d31/f36 [4068181,4071] 0 2026-03-10T10:19:46.061 INFO:tasks.workunit.client.1.vm05.stdout:4/436: dread - d1/d31/f7a zero size 2026-03-10T10:19:46.061 INFO:tasks.workunit.client.1.vm05.stdout:8/494: creat d7/d14/d3a/f8f x:0 0 0 2026-03-10T10:19:46.061 INFO:tasks.workunit.client.0.vm02.stdout:3/567: chown d1/d8/d21/d73/lad 89 1 2026-03-10T10:19:46.061 INFO:tasks.workunit.client.0.vm02.stdout:2/560: creat d0/d71/fb9 x:0 0 0 2026-03-10T10:19:46.061 INFO:tasks.workunit.client.0.vm02.stdout:8/552: mknod d1/d1c/d43/ca3 0 2026-03-10T10:19:46.061 INFO:tasks.workunit.client.0.vm02.stdout:0/594: mknod d9/d34/d3d/d65/d89/cb8 0 2026-03-10T10:19:46.061 INFO:tasks.workunit.client.0.vm02.stdout:3/568: readlink d1/l27 0 2026-03-10T10:19:46.069 INFO:tasks.workunit.client.0.vm02.stdout:2/561: symlink d0/d1a/lba 0 2026-03-10T10:19:46.070 INFO:tasks.workunit.client.0.vm02.stdout:9/539: sync 2026-03-10T10:19:46.077 INFO:tasks.workunit.client.0.vm02.stdout:5/708: link d1/f7f d1/db/d11/d7b/ff4 0 
2026-03-10T10:19:46.090 INFO:tasks.workunit.client.0.vm02.stdout:3/569: dread - d1/d6/f63 zero size 2026-03-10T10:19:46.090 INFO:tasks.workunit.client.0.vm02.stdout:3/570: chown d1/d8/d44/fab 2156 1 2026-03-10T10:19:46.090 INFO:tasks.workunit.client.1.vm05.stdout:8/495: sync 2026-03-10T10:19:46.094 INFO:tasks.workunit.client.1.vm05.stdout:8/496: sync 2026-03-10T10:19:46.099 INFO:tasks.workunit.client.0.vm02.stdout:5/709: dread d1/fe [0,4194304] 0 2026-03-10T10:19:46.104 INFO:tasks.workunit.client.0.vm02.stdout:6/544: dwrite d0/d8/d29/d2f/d50/f78 [0,4194304] 0 2026-03-10T10:19:46.109 INFO:tasks.workunit.client.0.vm02.stdout:6/545: dwrite d0/d8/d29/d2f/d4b/f53 [0,4194304] 0 2026-03-10T10:19:46.112 INFO:tasks.workunit.client.0.vm02.stdout:6/546: dread - d0/d8/d29/d2f/f8e zero size 2026-03-10T10:19:46.126 INFO:tasks.workunit.client.0.vm02.stdout:1/599: dwrite d4/da/d1a/d47/d65/f9a [0,4194304] 0 2026-03-10T10:19:46.128 INFO:tasks.workunit.client.0.vm02.stdout:3/571: creat d1/d8/d21/d73/d78/d84/fb7 x:0 0 0 2026-03-10T10:19:46.132 INFO:tasks.workunit.client.0.vm02.stdout:7/560: rename d1/dc/d44 to d1/d1b/d8f/dad 0 2026-03-10T10:19:46.136 INFO:tasks.workunit.client.0.vm02.stdout:4/700: link d1/d41/d5e/d78/d7f/caa d1/d32/ce5 0 2026-03-10T10:19:46.139 INFO:tasks.workunit.client.0.vm02.stdout:5/710: write d1/db/d11/d13/d28/da7/dd9/fe6 [391838,124410] 0 2026-03-10T10:19:46.146 INFO:tasks.workunit.client.0.vm02.stdout:6/547: creat d0/d8/d29/d2f/d50/d98/fb1 x:0 0 0 2026-03-10T10:19:46.151 INFO:tasks.workunit.client.0.vm02.stdout:3/572: creat d1/d8/d21/d73/d78/d84/fb8 x:0 0 0 2026-03-10T10:19:46.153 INFO:tasks.workunit.client.1.vm05.stdout:6/555: write dd/d36/d3f/d12/d44/d2a/d3d/d3e/f7c [159814,10264] 0 2026-03-10T10:19:46.157 INFO:tasks.workunit.client.0.vm02.stdout:0/595: rename d9/d18/d1a/d46/d5d/da1 to d9/d18/d1a/d46/d5d/da7/db9 0 2026-03-10T10:19:46.161 INFO:tasks.workunit.client.0.vm02.stdout:7/561: mkdir d1/dc/d16/d28/d2d/dae 0 2026-03-10T10:19:46.161 
INFO:tasks.workunit.client.0.vm02.stdout:7/562: write d1/dc/d16/f95 [625722,121375] 0 2026-03-10T10:19:46.165 INFO:tasks.workunit.client.1.vm05.stdout:9/496: unlink d0/df/c4d 0 2026-03-10T10:19:46.167 INFO:tasks.workunit.client.0.vm02.stdout:9/540: write da/d3c/f3e [2936944,10859] 0 2026-03-10T10:19:46.170 INFO:tasks.workunit.client.0.vm02.stdout:8/553: dwrite d1/d1c/d24/d35/d56/f81 [0,4194304] 0 2026-03-10T10:19:46.180 INFO:tasks.workunit.client.1.vm05.stdout:7/574: rmdir d5/d1d/d29/d3e 39 2026-03-10T10:19:46.183 INFO:tasks.workunit.client.0.vm02.stdout:3/573: dread - d1/d8/d21/d7d/fb3 zero size 2026-03-10T10:19:46.185 INFO:tasks.workunit.client.1.vm05.stdout:4/437: rename d1/d31/fd to d1/d31/d4b/f8a 0 2026-03-10T10:19:46.189 INFO:tasks.workunit.client.0.vm02.stdout:2/562: rename d0/d1a/f47 to d0/d1a/d49/d5e/d65/db0/fbb 0 2026-03-10T10:19:46.191 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:45 vm02.local ceph-mon[50200]: Deploying cephadm binary to vm02 2026-03-10T10:19:46.191 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:45 vm02.local ceph-mon[50200]: Deploying cephadm binary to vm05 2026-03-10T10:19:46.191 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:45 vm02.local ceph-mon[50200]: pgmap v4: 65 pgs: 65 active+clean; 2.3 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-10T10:19:46.191 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:45 vm02.local ceph-mon[50200]: mgrmap e23: vm05.coparq(active, since 2s) 2026-03-10T10:19:46.193 INFO:tasks.workunit.client.0.vm02.stdout:0/596: unlink d9/d34/d3d/d65/da2/lb5 0 2026-03-10T10:19:46.196 INFO:tasks.workunit.client.1.vm05.stdout:8/497: chown d7/l1a 47 1 2026-03-10T10:19:46.196 INFO:tasks.workunit.client.0.vm02.stdout:7/563: unlink d1/d1b/l97 0 2026-03-10T10:19:46.196 INFO:tasks.workunit.client.0.vm02.stdout:7/564: dread - d1/d1b/d8f/f93 zero size 2026-03-10T10:19:46.196 INFO:tasks.workunit.client.1.vm05.stdout:8/498: read d7/d14/d62/f69 [2335545,26025] 0 2026-03-10T10:19:46.197 
INFO:tasks.workunit.client.0.vm02.stdout:7/565: readlink d1/dc/d10/d38/l56 0 2026-03-10T10:19:46.198 INFO:tasks.workunit.client.0.vm02.stdout:7/566: truncate d1/dc/d60/fa4 62728 0 2026-03-10T10:19:46.199 INFO:tasks.workunit.client.1.vm05.stdout:8/499: dread d7/fd [0,4194304] 0 2026-03-10T10:19:46.202 INFO:tasks.workunit.client.1.vm05.stdout:8/500: dwrite d7/d14/d15/f51 [8388608,4194304] 0 2026-03-10T10:19:46.205 INFO:tasks.workunit.client.1.vm05.stdout:1/653: stat d4/f36 0 2026-03-10T10:19:46.210 INFO:tasks.workunit.client.0.vm02.stdout:7/567: dread d1/dc/d16/f95 [0,4194304] 0 2026-03-10T10:19:46.230 INFO:tasks.workunit.client.0.vm02.stdout:9/541: dwrite da/d3c/d4c/f60 [0,4194304] 0 2026-03-10T10:19:46.237 INFO:tasks.workunit.client.0.vm02.stdout:5/711: mkdir d1/db/d11/d16/d48/dcf/df5 0 2026-03-10T10:19:46.245 INFO:tasks.workunit.client.0.vm02.stdout:9/542: dread da/f5c [0,4194304] 0 2026-03-10T10:19:46.246 INFO:tasks.workunit.client.0.vm02.stdout:9/543: write da/d3c/d4c/f1d [2602477,95468] 0 2026-03-10T10:19:46.249 INFO:tasks.workunit.client.0.vm02.stdout:4/701: write d1/d10/faf [156138,60057] 0 2026-03-10T10:19:46.253 INFO:tasks.workunit.client.1.vm05.stdout:6/556: dwrite dd/d36/d7d/f97 [0,4194304] 0 2026-03-10T10:19:46.256 INFO:tasks.workunit.client.0.vm02.stdout:3/574: chown d1/c11 5031 1 2026-03-10T10:19:46.262 INFO:tasks.workunit.client.1.vm05.stdout:5/580: mknod da/db/cc6 0 2026-03-10T10:19:46.270 INFO:tasks.workunit.client.0.vm02.stdout:1/600: rename d4/f81 to d4/da/d1a/d47/d65/fba 0 2026-03-10T10:19:46.275 INFO:tasks.workunit.client.0.vm02.stdout:2/563: symlink d0/d8c/lbc 0 2026-03-10T10:19:46.279 INFO:tasks.workunit.client.1.vm05.stdout:4/438: symlink d1/d31/dc/d40/d63/l8b 0 2026-03-10T10:19:46.281 INFO:tasks.workunit.client.0.vm02.stdout:0/597: creat d9/d18/d1a/d46/d5d/da7/db9/fba x:0 0 0 2026-03-10T10:19:46.284 INFO:tasks.workunit.client.1.vm05.stdout:7/575: rename d5/d1d/d20/d2d/f95 to d5/d1d/d20/d91/da7/dab/fb4 0 2026-03-10T10:19:46.287 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:45 vm05.local ceph-mon[59051]: Deploying cephadm binary to vm02 2026-03-10T10:19:46.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:45 vm05.local ceph-mon[59051]: Deploying cephadm binary to vm05 2026-03-10T10:19:46.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:45 vm05.local ceph-mon[59051]: pgmap v4: 65 pgs: 65 active+clean; 2.3 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-10T10:19:46.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:45 vm05.local ceph-mon[59051]: mgrmap e23: vm05.coparq(active, since 2s) 2026-03-10T10:19:46.297 INFO:tasks.workunit.client.0.vm02.stdout:7/568: dread d1/dc/d16/f48 [0,4194304] 0 2026-03-10T10:19:46.298 INFO:tasks.workunit.client.0.vm02.stdout:7/569: write d1/d1b/d8f/f8c [136428,14287] 0 2026-03-10T10:19:46.299 INFO:tasks.workunit.client.0.vm02.stdout:7/570: dread d1/dc/d16/f48 [0,4194304] 0 2026-03-10T10:19:46.300 INFO:tasks.workunit.client.0.vm02.stdout:7/571: write d1/d1b/d8f/f59 [1893206,112292] 0 2026-03-10T10:19:46.303 INFO:tasks.workunit.client.0.vm02.stdout:6/548: write d0/d8/d8c/f36 [744467,52833] 0 2026-03-10T10:19:46.307 INFO:tasks.workunit.client.1.vm05.stdout:9/497: dwrite d0/df/f99 [0,4194304] 0 2026-03-10T10:19:46.309 INFO:tasks.workunit.client.1.vm05.stdout:9/498: fsync d0/d1/d16/f40 0 2026-03-10T10:19:46.333 INFO:tasks.workunit.client.1.vm05.stdout:8/501: mkdir d7/d14/d62/d90 0 2026-03-10T10:19:46.342 INFO:tasks.workunit.client.1.vm05.stdout:1/654: chown d4/d39/d3e/f96 82 1 2026-03-10T10:19:46.349 INFO:tasks.workunit.client.0.vm02.stdout:9/544: dread da/d3c/d4c/d38/f88 [0,4194304] 0 2026-03-10T10:19:46.349 INFO:tasks.workunit.client.0.vm02.stdout:4/702: symlink d1/d52/le6 0 2026-03-10T10:19:46.355 INFO:tasks.workunit.client.0.vm02.stdout:2/564: creat d0/d1a/d49/d5e/d65/db0/fbd x:0 0 0 2026-03-10T10:19:46.355 INFO:tasks.workunit.client.1.vm05.stdout:6/557: creat dd/d36/d3f/d12/d44/d2a/d77/fb4 x:0 0 0 
2026-03-10T10:19:46.355 INFO:tasks.workunit.client.0.vm02.stdout:0/598: rmdir d9/d18/d1a/d22/d24/d8e 39 2026-03-10T10:19:46.365 INFO:tasks.workunit.client.1.vm05.stdout:2/500: creat db/d12/f9c x:0 0 0 2026-03-10T10:19:46.366 INFO:tasks.workunit.client.1.vm05.stdout:2/501: write db/d12/f9c [742142,39303] 0 2026-03-10T10:19:46.368 INFO:tasks.workunit.client.1.vm05.stdout:5/581: stat da/db/f29 0 2026-03-10T10:19:46.375 INFO:tasks.workunit.client.1.vm05.stdout:4/439: creat d1/d31/dc/d40/d45/f8c x:0 0 0 2026-03-10T10:19:46.375 INFO:tasks.workunit.client.1.vm05.stdout:4/440: fsync d1/d31/dc/f3a 0 2026-03-10T10:19:46.381 INFO:tasks.workunit.client.1.vm05.stdout:4/441: dread d1/d3/f6c [0,4194304] 0 2026-03-10T10:19:46.382 INFO:tasks.workunit.client.1.vm05.stdout:3/576: link dd/d15/d24/d2c/f3c dd/d15/d1f/dae/fcd 0 2026-03-10T10:19:46.382 INFO:tasks.workunit.client.0.vm02.stdout:1/601: write d4/da/d27/d38/d80/fb5 [921971,40925] 0 2026-03-10T10:19:46.387 INFO:tasks.workunit.client.0.vm02.stdout:6/549: rmdir d0/d8/d29/d2f/d4b/da5/d6f/da1 39 2026-03-10T10:19:46.392 INFO:tasks.workunit.client.0.vm02.stdout:1/602: dread d4/da/d27/f66 [0,4194304] 0 2026-03-10T10:19:46.393 INFO:tasks.workunit.client.1.vm05.stdout:9/499: mknod d0/d1/d4c/d63/caa 0 2026-03-10T10:19:46.393 INFO:tasks.workunit.client.0.vm02.stdout:8/554: creat d1/d1c/d43/fa4 x:0 0 0 2026-03-10T10:19:46.394 INFO:tasks.workunit.client.0.vm02.stdout:8/555: fsync d1/d1c/f33 0 2026-03-10T10:19:46.394 INFO:tasks.workunit.client.0.vm02.stdout:8/556: chown d1/d2/l70 42 1 2026-03-10T10:19:46.406 INFO:tasks.workunit.client.0.vm02.stdout:3/575: symlink d1/d20/db2/lb9 0 2026-03-10T10:19:46.408 INFO:tasks.workunit.client.1.vm05.stdout:0/565: getdents d1/d2/d39 0 2026-03-10T10:19:46.414 INFO:tasks.workunit.client.0.vm02.stdout:9/545: mknod da/d3c/d53/cab 0 2026-03-10T10:19:46.416 INFO:tasks.workunit.client.1.vm05.stdout:6/558: symlink dd/d36/d3f/d12/d58/lb5 0 2026-03-10T10:19:46.417 INFO:tasks.workunit.client.0.vm02.stdout:4/703: 
mkdir d1/d41/d5e/d78/d44/de7 0 2026-03-10T10:19:46.417 INFO:tasks.workunit.client.0.vm02.stdout:7/572: write d1/dc/d60/f89 [745264,106930] 0 2026-03-10T10:19:46.417 INFO:tasks.workunit.client.0.vm02.stdout:4/704: chown d1/d52/d53/dda 35025968 1 2026-03-10T10:19:46.421 INFO:tasks.workunit.client.1.vm05.stdout:5/582: rename da/db/d26/d5c/d4b to da/d9a/dc7 0 2026-03-10T10:19:46.427 INFO:tasks.workunit.client.0.vm02.stdout:2/565: mknod d0/d1a/cbe 0 2026-03-10T10:19:46.428 INFO:tasks.workunit.client.0.vm02.stdout:5/712: write d1/d6a/faa [592152,49130] 0 2026-03-10T10:19:46.438 INFO:tasks.workunit.client.1.vm05.stdout:3/577: mknod dd/d15/d24/d2c/d6d/cce 0 2026-03-10T10:19:46.438 INFO:tasks.workunit.client.1.vm05.stdout:8/502: symlink d7/d2f/l91 0 2026-03-10T10:19:46.438 INFO:tasks.workunit.client.1.vm05.stdout:9/500: creat d0/d1/d13/de/d21/fab x:0 0 0 2026-03-10T10:19:46.439 INFO:tasks.workunit.client.1.vm05.stdout:3/578: stat dd/d15/d24/d2c/d3b/f55 0 2026-03-10T10:19:46.443 INFO:tasks.workunit.client.1.vm05.stdout:8/503: chown d7/d14/d24/l6f 2 1 2026-03-10T10:19:46.443 INFO:tasks.workunit.client.1.vm05.stdout:1/655: mkdir d4/d20/dbe 0 2026-03-10T10:19:46.444 INFO:tasks.workunit.client.1.vm05.stdout:0/566: mknod d1/d2/d9/d31/d13/d15/d4e/cc1 0 2026-03-10T10:19:46.444 INFO:tasks.workunit.client.1.vm05.stdout:8/504: chown d7/d14/d15/d3b 610 1 2026-03-10T10:19:46.444 INFO:tasks.workunit.client.0.vm02.stdout:8/557: write d1/d1c/d23/d3e/f5a [3026965,65605] 0 2026-03-10T10:19:46.444 INFO:tasks.workunit.client.1.vm05.stdout:2/502: mkdir db/d28/d4f/d8b/d9a/d9d 0 2026-03-10T10:19:46.456 INFO:tasks.workunit.client.1.vm05.stdout:0/567: dread d1/d2/d39/d3d/d9f/fb1 [0,4194304] 0 2026-03-10T10:19:46.458 INFO:tasks.workunit.client.0.vm02.stdout:3/576: rename d1/f77 to d1/d8/d21/d73/d78/d84/fba 0 2026-03-10T10:19:46.459 INFO:tasks.workunit.client.1.vm05.stdout:6/559: rmdir dd/d36/d3f/d12/d96 39 2026-03-10T10:19:46.459 INFO:tasks.workunit.client.0.vm02.stdout:9/546: rmdir 
da/d3c/d4c/d38/d4a 39 2026-03-10T10:19:46.459 INFO:tasks.workunit.client.1.vm05.stdout:5/583: creat da/d9a/dc7/db4/dbd/fc8 x:0 0 0 2026-03-10T10:19:46.462 INFO:tasks.workunit.client.1.vm05.stdout:5/584: readlink da/d9a/dc7/l8f 0 2026-03-10T10:19:46.462 INFO:tasks.workunit.client.1.vm05.stdout:7/576: creat d5/d1d/d20/fb5 x:0 0 0 2026-03-10T10:19:46.463 INFO:tasks.workunit.client.0.vm02.stdout:7/573: read - d1/dc/d16/f7a zero size 2026-03-10T10:19:46.464 INFO:tasks.workunit.client.1.vm05.stdout:9/501: chown d0/d1/d13/c5a 12118 1 2026-03-10T10:19:46.474 INFO:tasks.workunit.client.0.vm02.stdout:6/550: mkdir d0/d8/d29/d2f/d50/d7e/db2 0 2026-03-10T10:19:46.481 INFO:tasks.workunit.client.1.vm05.stdout:5/585: dwrite da/fba [0,4194304] 0 2026-03-10T10:19:46.485 INFO:tasks.workunit.client.0.vm02.stdout:6/551: dwrite d0/d8/d9/fac [0,4194304] 0 2026-03-10T10:19:46.490 INFO:tasks.workunit.client.0.vm02.stdout:2/566: dread d0/d1a/d49/f54 [0,4194304] 0 2026-03-10T10:19:46.492 INFO:tasks.workunit.client.1.vm05.stdout:5/586: read da/db/d26/d5c/fb5 [2485492,117258] 0 2026-03-10T10:19:46.506 INFO:tasks.workunit.client.0.vm02.stdout:0/599: fdatasync d9/d18/d1a/d22/f3f 0 2026-03-10T10:19:46.518 INFO:tasks.workunit.client.1.vm05.stdout:4/442: dread d1/d31/dc/f21 [0,4194304] 0 2026-03-10T10:19:46.518 INFO:tasks.workunit.client.1.vm05.stdout:3/579: dread dd/d15/f23 [0,4194304] 0 2026-03-10T10:19:46.520 INFO:tasks.workunit.client.1.vm05.stdout:4/443: write d1/d31/d4b/d6d/f85 [459564,68253] 0 2026-03-10T10:19:46.548 INFO:tasks.workunit.client.0.vm02.stdout:9/547: rename da/d3c/d4c/d38/d82/fa5 to da/d3c/d4c/d56/fac 0 2026-03-10T10:19:46.559 INFO:tasks.workunit.client.1.vm05.stdout:1/656: chown d4/dd/lb0 3 1 2026-03-10T10:19:46.560 INFO:tasks.workunit.client.1.vm05.stdout:1/657: chown d4/d20/c28 39564 1 2026-03-10T10:19:46.570 INFO:tasks.workunit.client.0.vm02.stdout:4/705: mkdir d1/de8 0 2026-03-10T10:19:46.570 INFO:tasks.workunit.client.1.vm05.stdout:5/587: dread - da/d9a/fae zero size 
2026-03-10T10:19:46.571 INFO:tasks.workunit.client.1.vm05.stdout:9/502: fsync d0/d1/fa7 0 2026-03-10T10:19:46.571 INFO:tasks.workunit.client.1.vm05.stdout:5/588: chown da/d96/la8 0 1 2026-03-10T10:19:46.575 INFO:tasks.workunit.client.1.vm05.stdout:3/580: mkdir dd/d20/d56/d5e/dab/dcf 0 2026-03-10T10:19:46.575 INFO:tasks.workunit.client.1.vm05.stdout:2/503: creat db/d1c/d40/d80/f9e x:0 0 0 2026-03-10T10:19:46.576 INFO:tasks.workunit.client.0.vm02.stdout:1/603: creat d4/da/fbb x:0 0 0 2026-03-10T10:19:46.578 INFO:tasks.workunit.client.0.vm02.stdout:3/577: truncate d1/f12 2476043 0 2026-03-10T10:19:46.579 INFO:tasks.workunit.client.1.vm05.stdout:0/568: creat d1/d2/d39/d3d/d9f/fc2 x:0 0 0 2026-03-10T10:19:46.579 INFO:tasks.workunit.client.1.vm05.stdout:7/577: creat d5/d1d/d29/d3e/d8c/d96/fb6 x:0 0 0 2026-03-10T10:19:46.582 INFO:tasks.workunit.client.1.vm05.stdout:6/560: dwrite f3 [4194304,4194304] 0 2026-03-10T10:19:46.582 INFO:tasks.workunit.client.0.vm02.stdout:3/578: dread - d1/fb6 zero size 2026-03-10T10:19:46.597 INFO:tasks.workunit.client.1.vm05.stdout:5/589: creat da/d96/fc9 x:0 0 0 2026-03-10T10:19:46.597 INFO:tasks.workunit.client.1.vm05.stdout:7/578: chown d5/d1d/d29/f3a 36535 1 2026-03-10T10:19:46.597 INFO:tasks.workunit.client.1.vm05.stdout:3/581: dwrite dd/d20/d56/f7d [0,4194304] 0 2026-03-10T10:19:46.597 INFO:tasks.workunit.client.1.vm05.stdout:2/504: dwrite db/d1c/d40/f50 [0,4194304] 0 2026-03-10T10:19:46.597 INFO:tasks.workunit.client.0.vm02.stdout:0/600: rmdir d9/d18/d1a/d22/d24/d8e/d9b 39 2026-03-10T10:19:46.597 INFO:tasks.workunit.client.0.vm02.stdout:5/713: link d1/db/d11/d16/d79/d85/fb0 d1/db/d11/d13/d28/d37/ff6 0 2026-03-10T10:19:46.597 INFO:tasks.workunit.client.0.vm02.stdout:9/548: rmdir da/d3c/d4c/d38/d4a/d99 39 2026-03-10T10:19:46.597 INFO:tasks.workunit.client.0.vm02.stdout:7/574: creat d1/d1b/faf x:0 0 0 2026-03-10T10:19:46.597 INFO:tasks.workunit.client.1.vm05.stdout:9/503: truncate d0/d1/f4a 125861 0 2026-03-10T10:19:46.597 
INFO:tasks.workunit.client.1.vm05.stdout:8/505: dwrite d7/d14/d62/f69 [0,4194304] 0 2026-03-10T10:19:46.613 INFO:tasks.workunit.client.0.vm02.stdout:3/579: rmdir d1/d6/d8b 39 2026-03-10T10:19:46.616 INFO:tasks.workunit.client.1.vm05.stdout:0/569: dread - d1/d2/d39/d6e/fac zero size 2026-03-10T10:19:46.626 INFO:tasks.workunit.client.0.vm02.stdout:3/580: dwrite d1/d20/f64 [0,4194304] 0 2026-03-10T10:19:46.626 INFO:tasks.workunit.client.0.vm02.stdout:4/706: rmdir d1/d75 39 2026-03-10T10:19:46.633 INFO:tasks.workunit.client.0.vm02.stdout:0/601: rmdir d9/d18/d1a/d22/d24/d80/d49 39 2026-03-10T10:19:46.647 INFO:tasks.workunit.client.0.vm02.stdout:5/714: rename d1/db/d11/f4a to d1/db/d11/d62/d67/ff7 0 2026-03-10T10:19:46.658 INFO:tasks.workunit.client.1.vm05.stdout:9/504: creat d0/df/d74/d8c/fac x:0 0 0 2026-03-10T10:19:46.658 INFO:tasks.workunit.client.0.vm02.stdout:9/549: readlink da/d3c/d4c/l22 0 2026-03-10T10:19:46.664 INFO:tasks.workunit.client.1.vm05.stdout:5/590: sync 2026-03-10T10:19:46.670 INFO:tasks.workunit.client.0.vm02.stdout:9/550: dread da/d3c/d53/f6a [0,4194304] 0 2026-03-10T10:19:46.670 INFO:tasks.workunit.client.0.vm02.stdout:9/551: chown da/d3c/d4c/d2c/l7e 70716437 1 2026-03-10T10:19:46.670 INFO:tasks.workunit.client.0.vm02.stdout:9/552: chown da/d3c/d4c/d38/d4a/f7f 78 1 2026-03-10T10:19:46.677 INFO:tasks.workunit.client.0.vm02.stdout:7/575: sync 2026-03-10T10:19:46.686 INFO:tasks.workunit.client.1.vm05.stdout:0/570: write d1/d2/d9/d31/d13/d15/d4e/f60 [1152384,74074] 0 2026-03-10T10:19:46.686 INFO:tasks.workunit.client.0.vm02.stdout:7/576: fdatasync d1/dc/d16/faa 0 2026-03-10T10:19:46.686 INFO:tasks.workunit.client.0.vm02.stdout:5/715: mkdir d1/db/d11/d13/d28/da7/dd9/df8 0 2026-03-10T10:19:46.710 INFO:tasks.workunit.client.1.vm05.stdout:9/505: read d0/df/d11/f52 [7502,18568] 0 2026-03-10T10:19:46.712 INFO:tasks.workunit.client.0.vm02.stdout:0/602: mknod d9/d18/d1a/d46/d5d/d9c/cbb 0 2026-03-10T10:19:46.717 INFO:tasks.workunit.client.1.vm05.stdout:2/505: 
rename db/d12/f1a to db/d1c/d40/d62/f9f 0 2026-03-10T10:19:46.722 INFO:tasks.workunit.client.0.vm02.stdout:6/552: write d0/d8/d29/d2f/d4b/f8d [1467875,114493] 0 2026-03-10T10:19:46.722 INFO:tasks.workunit.client.0.vm02.stdout:2/567: write d0/d1a/f66 [2843713,124658] 0 2026-03-10T10:19:46.722 INFO:tasks.workunit.client.1.vm05.stdout:9/506: dwrite d0/d1/d13/de/d93/fa1 [0,4194304] 0 2026-03-10T10:19:46.724 INFO:tasks.workunit.client.0.vm02.stdout:4/707: dread d1/d10/f8 [0,4194304] 0 2026-03-10T10:19:46.728 INFO:tasks.workunit.client.1.vm05.stdout:4/444: dwrite d1/d3/d65/f6a [0,4194304] 0 2026-03-10T10:19:46.728 INFO:tasks.workunit.client.0.vm02.stdout:7/577: creat d1/dc/d16/d28/d2d/fb0 x:0 0 0 2026-03-10T10:19:46.729 INFO:tasks.workunit.client.0.vm02.stdout:7/578: write d1/d1b/d8f/dad/d5f/fac [998731,125162] 0 2026-03-10T10:19:46.730 INFO:tasks.workunit.client.0.vm02.stdout:7/579: chown d1/d1b/d8f/c65 1320549 1 2026-03-10T10:19:46.730 INFO:tasks.workunit.client.0.vm02.stdout:5/716: symlink d1/db/d11/d62/d67/lf9 0 2026-03-10T10:19:46.733 INFO:tasks.workunit.client.1.vm05.stdout:3/582: mkdir dd/d15/d24/d2c/dd0 0 2026-03-10T10:19:46.736 INFO:tasks.workunit.client.0.vm02.stdout:0/603: sync 2026-03-10T10:19:46.755 INFO:tasks.workunit.client.1.vm05.stdout:5/591: mkdir da/db/d28/d8a/dca 0 2026-03-10T10:19:46.778 INFO:tasks.workunit.client.0.vm02.stdout:8/558: truncate d1/d1c/d43/d6a/f87 613421 0 2026-03-10T10:19:46.782 INFO:tasks.workunit.client.0.vm02.stdout:8/559: dread d1/d1c/d24/d35/f6e [0,4194304] 0 2026-03-10T10:19:46.786 INFO:tasks.workunit.client.1.vm05.stdout:6/561: truncate dd/d36/d3f/f6f 762813 0 2026-03-10T10:19:46.786 INFO:tasks.workunit.client.1.vm05.stdout:8/506: write d7/f44 [498948,32717] 0 2026-03-10T10:19:46.786 INFO:tasks.workunit.client.1.vm05.stdout:1/658: truncate d4/d3d/d6e/faf 1600762 0 2026-03-10T10:19:46.788 INFO:tasks.workunit.client.1.vm05.stdout:8/507: write d7/d14/d15/f3c [1915264,107927] 0 2026-03-10T10:19:46.793 
INFO:tasks.workunit.client.0.vm02.stdout:1/604: dwrite d4/da/d1a/d5b/f9f [0,4194304] 0 2026-03-10T10:19:46.799 INFO:tasks.workunit.client.1.vm05.stdout:4/445: symlink d1/d31/dc/d40/d63/l8d 0 2026-03-10T10:19:46.803 INFO:tasks.workunit.client.0.vm02.stdout:9/553: write da/d3c/d4c/d38/d82/f90 [541777,6264] 0 2026-03-10T10:19:46.812 INFO:tasks.workunit.client.1.vm05.stdout:7/579: dwrite d5/d1d/d20/d35/f47 [4194304,4194304] 0 2026-03-10T10:19:46.818 INFO:tasks.workunit.client.1.vm05.stdout:4/446: dwrite d1/d31/dc/d40/d63/f89 [0,4194304] 0 2026-03-10T10:19:46.819 INFO:tasks.workunit.client.1.vm05.stdout:8/508: dread d7/d14/d24/f34 [0,4194304] 0 2026-03-10T10:19:46.826 INFO:tasks.workunit.client.1.vm05.stdout:7/580: sync 2026-03-10T10:19:46.833 INFO:tasks.workunit.client.1.vm05.stdout:1/659: mkdir d4/d39/d3e/da0/dbf 0 2026-03-10T10:19:46.850 INFO:tasks.workunit.client.1.vm05.stdout:3/583: dread f1 [0,4194304] 0 2026-03-10T10:19:46.855 INFO:tasks.workunit.client.1.vm05.stdout:0/571: creat d1/d2/d9/d31/d12/fc3 x:0 0 0 2026-03-10T10:19:46.862 INFO:tasks.workunit.client.1.vm05.stdout:5/592: unlink da/db/f85 0 2026-03-10T10:19:46.866 INFO:tasks.workunit.client.1.vm05.stdout:9/507: creat d0/d1/fad x:0 0 0 2026-03-10T10:19:46.868 INFO:tasks.workunit.client.1.vm05.stdout:4/447: truncate d1/d31/dc/f33 421574 0 2026-03-10T10:19:46.874 INFO:tasks.workunit.client.1.vm05.stdout:8/509: symlink d7/d2f/d57/l92 0 2026-03-10T10:19:46.874 INFO:tasks.workunit.client.1.vm05.stdout:7/581: dread - d5/d1d/f7d zero size 2026-03-10T10:19:46.874 INFO:tasks.workunit.client.1.vm05.stdout:1/660: mknod d4/d20/cc0 0 2026-03-10T10:19:46.874 INFO:tasks.workunit.client.1.vm05.stdout:1/661: write d4/d39/fb2 [4729979,113224] 0 2026-03-10T10:19:46.878 INFO:tasks.workunit.client.0.vm02.stdout:2/568: mkdir d0/d1a/d24/dbf 0 2026-03-10T10:19:46.879 INFO:tasks.workunit.client.1.vm05.stdout:6/562: getdents dd/d36/d3f/d12/d44/d2a/d3d/d48/d8c/da3 0 2026-03-10T10:19:46.880 
INFO:tasks.workunit.client.1.vm05.stdout:6/563: dread - dd/d36/d3f/d12/d44/d2a/d3d/f99 zero size 2026-03-10T10:19:46.881 INFO:tasks.workunit.client.1.vm05.stdout:6/564: fsync dd/d36/d3f/d12/d44/d2a/d3d/d48/fb2 0 2026-03-10T10:19:46.886 INFO:tasks.workunit.client.0.vm02.stdout:6/553: fdatasync d0/d8/d29/d2f/f55 0 2026-03-10T10:19:46.891 INFO:tasks.workunit.client.1.vm05.stdout:3/584: read dd/d20/f50 [528851,78992] 0 2026-03-10T10:19:46.898 INFO:tasks.workunit.client.1.vm05.stdout:0/572: rename d1/d2/d9/d31/d13/f3e to d1/d2/d9/d31/d12/d20/dbe/fc4 0 2026-03-10T10:19:46.917 INFO:tasks.workunit.client.0.vm02.stdout:7/580: unlink d1/dc/d16/d28/d2c/c35 0 2026-03-10T10:19:46.918 INFO:tasks.workunit.client.1.vm05.stdout:2/506: getdents db/d1c/d40 0 2026-03-10T10:19:46.920 INFO:tasks.workunit.client.1.vm05.stdout:5/593: mknod da/db/d28/d97/ccb 0 2026-03-10T10:19:46.920 INFO:tasks.workunit.client.0.vm02.stdout:7/581: fdatasync d1/d1b/d8f/f59 0 2026-03-10T10:19:46.922 INFO:tasks.workunit.client.0.vm02.stdout:7/582: truncate d1/d1b/f86 985044 0 2026-03-10T10:19:46.922 INFO:tasks.workunit.client.0.vm02.stdout:7/583: dread - d1/dc/d16/d28/d2d/f4c zero size 2026-03-10T10:19:46.923 INFO:tasks.workunit.client.0.vm02.stdout:7/584: chown d1/d1b/d8f/dad 365749 1 2026-03-10T10:19:46.924 INFO:tasks.workunit.client.1.vm05.stdout:9/508: fdatasync d0/f1e 0 2026-03-10T10:19:46.925 INFO:tasks.workunit.client.0.vm02.stdout:0/604: rename f2 to d9/d34/d3d/d65/d89/fbc 0 2026-03-10T10:19:46.925 INFO:tasks.workunit.client.0.vm02.stdout:7/585: chown d1/dc 175 1 2026-03-10T10:19:46.927 INFO:tasks.workunit.client.1.vm05.stdout:9/509: dwrite d0/d1/d13/d62/fa8 [0,4194304] 0 2026-03-10T10:19:46.929 INFO:tasks.workunit.client.1.vm05.stdout:8/510: symlink d7/d14/d24/d3f/d6a/l93 0 2026-03-10T10:19:46.931 INFO:tasks.workunit.client.0.vm02.stdout:1/605: unlink d4/da/c4d 0 2026-03-10T10:19:46.932 INFO:tasks.workunit.client.0.vm02.stdout:8/560: creat d1/d1c/d23/d3e/fa5 x:0 0 0 2026-03-10T10:19:46.941 
INFO:tasks.workunit.client.0.vm02.stdout:3/581: getdents d1/d8 0 2026-03-10T10:19:46.942 INFO:tasks.workunit.client.0.vm02.stdout:2/569: fdatasync d0/f44 0 2026-03-10T10:19:46.946 INFO:tasks.workunit.client.0.vm02.stdout:6/554: truncate d0/d8/d9/f8a 806651 0 2026-03-10T10:19:46.953 INFO:tasks.workunit.client.1.vm05.stdout:0/573: creat d1/d2/d9/d31/d12/d20/dbe/fc5 x:0 0 0 2026-03-10T10:19:46.955 INFO:tasks.workunit.client.1.vm05.stdout:2/507: rename db/d28/d4f/c58 to db/d12/d74/ca0 0 2026-03-10T10:19:46.958 INFO:tasks.workunit.client.1.vm05.stdout:5/594: fdatasync da/db/fad 0 2026-03-10T10:19:46.967 INFO:tasks.workunit.client.1.vm05.stdout:8/511: chown d7/d14/d3a/d49/d65 2037762404 1 2026-03-10T10:19:46.968 INFO:tasks.workunit.client.1.vm05.stdout:5/595: read da/d9a/dc7/f83 [998836,99869] 0 2026-03-10T10:19:46.969 INFO:tasks.workunit.client.1.vm05.stdout:5/596: readlink da/db/d26/d35/d38/l55 0 2026-03-10T10:19:46.973 INFO:tasks.workunit.client.0.vm02.stdout:0/605: dread d9/d18/d1a/d22/d24/d80/d74/f62 [0,4194304] 0 2026-03-10T10:19:46.975 INFO:tasks.workunit.client.0.vm02.stdout:5/717: dread d1/db/d11/d13/f25 [0,4194304] 0 2026-03-10T10:19:46.977 INFO:tasks.workunit.client.1.vm05.stdout:5/597: dwrite da/d96/fc9 [0,4194304] 0 2026-03-10T10:19:46.979 INFO:tasks.workunit.client.0.vm02.stdout:8/561: mkdir d1/d1c/d43/d6a/d7c/da6 0 2026-03-10T10:19:46.986 INFO:tasks.workunit.client.1.vm05.stdout:1/662: symlink d4/d3d/d6e/dac/lc1 0 2026-03-10T10:19:46.994 INFO:tasks.workunit.client.1.vm05.stdout:6/565: creat dd/d36/d3f/d12/d44/d2a/d3d/d48/d8c/da3/fb6 x:0 0 0 2026-03-10T10:19:46.997 INFO:tasks.workunit.client.0.vm02.stdout:2/570: stat d0/l5 0 2026-03-10T10:19:46.997 INFO:tasks.workunit.client.1.vm05.stdout:3/585: getdents dd/d15/d24/d8e/dac 0 2026-03-10T10:19:47.002 INFO:tasks.workunit.client.0.vm02.stdout:9/554: dwrite da/f65 [0,4194304] 0 2026-03-10T10:19:47.003 INFO:tasks.workunit.client.1.vm05.stdout:0/574: dread d1/d2/d9/fbc [0,4194304] 0 2026-03-10T10:19:47.011 
INFO:tasks.workunit.client.1.vm05.stdout:3/586: sync 2026-03-10T10:19:47.012 INFO:tasks.workunit.client.1.vm05.stdout:3/587: sync 2026-03-10T10:19:47.024 INFO:tasks.workunit.client.0.vm02.stdout:3/582: dread d1/d20/f4b [0,4194304] 0 2026-03-10T10:19:47.025 INFO:tasks.workunit.client.0.vm02.stdout:6/555: creat d0/d8/d29/d2f/d50/d7e/fb3 x:0 0 0 2026-03-10T10:19:47.029 INFO:tasks.workunit.client.0.vm02.stdout:6/556: dread d0/d8/d29/d2f/d4b/da5/fa6 [0,4194304] 0 2026-03-10T10:19:47.033 INFO:tasks.workunit.client.1.vm05.stdout:4/448: dwrite d1/d70/f78 [0,4194304] 0 2026-03-10T10:19:47.034 INFO:tasks.workunit.client.0.vm02.stdout:4/708: rename d1/d32/f95 to d1/d41/d5e/d78/d7f/d82/fe9 0 2026-03-10T10:19:47.036 INFO:tasks.workunit.client.0.vm02.stdout:6/557: dread d0/d8/d9/fac [0,4194304] 0 2026-03-10T10:19:47.045 INFO:tasks.workunit.client.0.vm02.stdout:7/586: mknod d1/dc/cb1 0 2026-03-10T10:19:47.051 INFO:tasks.workunit.client.0.vm02.stdout:1/606: fdatasync d4/f3a 0 2026-03-10T10:19:47.055 INFO:tasks.workunit.client.1.vm05.stdout:9/510: link d0/df/f97 d0/d1/d13/d55/fae 0 2026-03-10T10:19:47.071 INFO:tasks.workunit.client.0.vm02.stdout:2/571: creat d0/d1a/d49/fc0 x:0 0 0 2026-03-10T10:19:47.072 INFO:tasks.workunit.client.0.vm02.stdout:2/572: dread - d0/d1a/d49/d5e/fad zero size 2026-03-10T10:19:47.081 INFO:tasks.workunit.client.0.vm02.stdout:9/555: mknod da/d3c/d4c/d38/d4a/d70/cad 0 2026-03-10T10:19:47.090 INFO:tasks.workunit.client.1.vm05.stdout:7/582: creat d5/d1d/d29/fb7 x:0 0 0 2026-03-10T10:19:47.093 INFO:tasks.workunit.client.1.vm05.stdout:8/512: dwrite d7/d14/f33 [0,4194304] 0 2026-03-10T10:19:47.094 INFO:tasks.workunit.client.1.vm05.stdout:8/513: chown d7/d2f 4 1 2026-03-10T10:19:47.094 INFO:tasks.workunit.client.1.vm05.stdout:7/583: read d5/d1d/f31 [2447260,81038] 0 2026-03-10T10:19:47.111 INFO:tasks.workunit.client.0.vm02.stdout:3/583: rmdir d1/d8/d21/d73/d78/d84 39 2026-03-10T10:19:47.128 INFO:tasks.workunit.client.1.vm05.stdout:6/566: rename 
dd/d36/d3f/d12/d44/d2a/d3d/d48/d8c to dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7 0 2026-03-10T10:19:47.130 INFO:tasks.workunit.client.0.vm02.stdout:0/606: rename d9/d34/d3d/d65/f7a to d9/d34/d3d/fbd 0 2026-03-10T10:19:47.146 INFO:tasks.workunit.client.1.vm05.stdout:0/575: unlink d1/c1a 0 2026-03-10T10:19:47.146 INFO:tasks.workunit.client.0.vm02.stdout:3/584: sync 2026-03-10T10:19:47.147 INFO:tasks.workunit.client.0.vm02.stdout:3/585: chown d1/d8/d21/f5e 593378 1 2026-03-10T10:19:47.148 INFO:tasks.workunit.client.0.vm02.stdout:3/586: write d1/d6/f3a [1273395,81172] 0 2026-03-10T10:19:47.162 INFO:tasks.workunit.client.1.vm05.stdout:2/508: mknod db/ca1 0 2026-03-10T10:19:47.163 INFO:tasks.workunit.client.1.vm05.stdout:2/509: read - db/d1c/d40/f5f zero size 2026-03-10T10:19:47.167 INFO:tasks.workunit.client.0.vm02.stdout:8/562: unlink d1/d1c/d24/d35/d56/c8b 0 2026-03-10T10:19:47.169 INFO:tasks.workunit.client.1.vm05.stdout:3/588: dread dd/d15/d24/f2f [0,4194304] 0 2026-03-10T10:19:47.172 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:46 vm02.local ceph-mon[50200]: [10/Mar/2026:10:19:45] ENGINE Bus STARTING 2026-03-10T10:19:47.172 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:46 vm02.local ceph-mon[50200]: [10/Mar/2026:10:19:45] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T10:19:47.172 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:46 vm02.local ceph-mon[50200]: [10/Mar/2026:10:19:45] ENGINE Client ('192.168.123.105', 50846) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T10:19:47.172 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:46 vm02.local ceph-mon[50200]: [10/Mar/2026:10:19:46] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T10:19:47.172 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:46 vm02.local ceph-mon[50200]: [10/Mar/2026:10:19:46] ENGINE Bus STARTED 2026-03-10T10:19:47.176 
INFO:tasks.workunit.client.0.vm02.stdout:2/573: symlink d0/d1a/d24/d80/lc1 0 2026-03-10T10:19:47.184 INFO:tasks.workunit.client.1.vm05.stdout:5/598: write da/db/d26/d35/f1c [3665587,60340] 0 2026-03-10T10:19:47.185 INFO:tasks.workunit.client.0.vm02.stdout:9/556: creat da/fae x:0 0 0 2026-03-10T10:19:47.194 INFO:tasks.workunit.client.0.vm02.stdout:6/558: write d0/d87/f90 [337920,21368] 0 2026-03-10T10:19:47.194 INFO:tasks.workunit.client.0.vm02.stdout:1/607: write d4/f5 [4872279,28428] 0 2026-03-10T10:19:47.199 INFO:tasks.workunit.client.0.vm02.stdout:0/607: dread - d9/d18/d1a/d22/d24/d79/d7d/fa5 zero size 2026-03-10T10:19:47.200 INFO:tasks.workunit.client.0.vm02.stdout:6/559: write d0/d8/d29/d2f/d4b/f8d [1415486,5475] 0 2026-03-10T10:19:47.206 INFO:tasks.workunit.client.0.vm02.stdout:0/608: sync 2026-03-10T10:19:47.220 INFO:tasks.workunit.client.0.vm02.stdout:6/560: dread d0/d87/f90 [0,4194304] 0 2026-03-10T10:19:47.236 INFO:tasks.workunit.client.0.vm02.stdout:6/561: dwrite d0/d8/d29/d2f/f4e [4194304,4194304] 0 2026-03-10T10:19:47.255 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:46 vm05.local ceph-mon[59051]: [10/Mar/2026:10:19:45] ENGINE Bus STARTING 2026-03-10T10:19:47.255 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:46 vm05.local ceph-mon[59051]: [10/Mar/2026:10:19:45] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T10:19:47.255 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:46 vm05.local ceph-mon[59051]: [10/Mar/2026:10:19:45] ENGINE Client ('192.168.123.105', 50846) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T10:19:47.255 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:46 vm05.local ceph-mon[59051]: [10/Mar/2026:10:19:46] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T10:19:47.255 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:46 vm05.local ceph-mon[59051]: [10/Mar/2026:10:19:46] ENGINE Bus STARTED 
2026-03-10T10:19:47.255 INFO:tasks.workunit.client.0.vm02.stdout:3/587: dread - d1/d20/f9d zero size 2026-03-10T10:19:47.256 INFO:tasks.workunit.client.0.vm02.stdout:3/588: stat d1/d6/d8e 0 2026-03-10T10:19:47.269 INFO:tasks.workunit.client.0.vm02.stdout:8/563: symlink d1/d1c/d43/d6a/d7c/la7 0 2026-03-10T10:19:47.274 INFO:tasks.workunit.client.0.vm02.stdout:2/574: mkdir d0/d1a/d49/d5e/d65/db0/dc2 0 2026-03-10T10:19:47.278 INFO:tasks.workunit.client.0.vm02.stdout:4/709: dwrite d1/d41/d5e/d78/d1a/d49/d81/fd1 [0,4194304] 0 2026-03-10T10:19:47.284 INFO:tasks.workunit.client.0.vm02.stdout:1/608: mkdir d4/da/d1a/d47/dbc 0 2026-03-10T10:19:47.285 INFO:tasks.workunit.client.0.vm02.stdout:1/609: truncate d4/d2c/f54 4625319 0 2026-03-10T10:19:47.298 INFO:tasks.workunit.client.0.vm02.stdout:5/718: rename d1/db/d11/d16/d48/fc1 to d1/db/d11/d13/d28/da7/ffa 0 2026-03-10T10:19:47.299 INFO:tasks.workunit.client.0.vm02.stdout:5/719: readlink d1/db/d11/d84/d40/d4f/l8e 0 2026-03-10T10:19:47.309 INFO:tasks.workunit.client.0.vm02.stdout:0/609: dread - d9/d18/d1a/d22/d24/d79/d7d/f85 zero size 2026-03-10T10:19:47.317 INFO:tasks.workunit.client.0.vm02.stdout:6/562: fsync d0/d8/d29/d2f/f38 0 2026-03-10T10:19:47.322 INFO:tasks.workunit.client.0.vm02.stdout:3/589: mknod d1/d20/d52/cbb 0 2026-03-10T10:19:47.349 INFO:tasks.workunit.client.0.vm02.stdout:4/710: write d1/d41/d5e/d78/d7f/f8e [679949,71655] 0 2026-03-10T10:19:47.362 INFO:tasks.workunit.client.0.vm02.stdout:7/587: rename d1/dc/d55/f64 to d1/dc/d99/fb2 0 2026-03-10T10:19:47.363 INFO:tasks.workunit.client.0.vm02.stdout:1/610: write d4/d2c/d53/f58 [1603663,49676] 0 2026-03-10T10:19:47.379 INFO:tasks.workunit.client.0.vm02.stdout:3/590: mkdir d1/d8/d86/db1/dbc 0 2026-03-10T10:19:47.379 INFO:tasks.workunit.client.0.vm02.stdout:6/563: creat d0/d8/d29/d94/fb4 x:0 0 0 2026-03-10T10:19:47.381 INFO:tasks.workunit.client.0.vm02.stdout:2/575: mknod d0/d1a/d49/d5e/d8a/cc3 0 2026-03-10T10:19:47.389 INFO:tasks.workunit.client.0.vm02.stdout:4/711: 
read - d1/d32/fb3 zero size 2026-03-10T10:19:47.390 INFO:tasks.workunit.client.1.vm05.stdout:9/511: mkdir d0/d1/d16/d6e/daf 0 2026-03-10T10:19:47.392 INFO:tasks.workunit.client.0.vm02.stdout:0/610: dread d9/d18/d1a/f88 [0,4194304] 0 2026-03-10T10:19:47.392 INFO:tasks.workunit.client.0.vm02.stdout:4/712: chown d1/d32/da3/fd7 65301 1 2026-03-10T10:19:47.407 INFO:tasks.workunit.client.0.vm02.stdout:8/564: rename d1/d1c/d24/d35 to d1/d1c/d43/d6a/da8 0 2026-03-10T10:19:47.410 INFO:tasks.workunit.client.1.vm05.stdout:7/584: symlink d5/d1d/d20/d3b/lb8 0 2026-03-10T10:19:47.414 INFO:tasks.workunit.client.1.vm05.stdout:7/585: dwrite d5/d1d/d20/fb5 [0,4194304] 0 2026-03-10T10:19:47.418 INFO:tasks.workunit.client.1.vm05.stdout:7/586: stat d5/d26/f33 0 2026-03-10T10:19:47.418 INFO:tasks.workunit.client.1.vm05.stdout:1/663: truncate d4/df/d1c/f38 3444397 0 2026-03-10T10:19:47.419 INFO:tasks.workunit.client.1.vm05.stdout:6/567: mkdir dd/d36/d3f/d12/d58/db8 0 2026-03-10T10:19:47.420 INFO:tasks.workunit.client.1.vm05.stdout:7/587: chown d5/d1d/d20/d2d/fb0 137953 1 2026-03-10T10:19:47.423 INFO:tasks.workunit.client.0.vm02.stdout:9/557: dwrite da/f28 [0,4194304] 0 2026-03-10T10:19:47.436 INFO:tasks.workunit.client.0.vm02.stdout:5/720: fsync d1/db/d11/d13/fdb 0 2026-03-10T10:19:47.438 INFO:tasks.workunit.client.0.vm02.stdout:5/721: chown d1/db/d11/d84/f82 16628 1 2026-03-10T10:19:47.441 INFO:tasks.workunit.client.1.vm05.stdout:0/576: rmdir d1/d2/d9/d31/d13/da2/dab 39 2026-03-10T10:19:47.446 INFO:tasks.workunit.client.0.vm02.stdout:3/591: truncate d1/d8/d21/d73/d78/d84/fb7 597592 0 2026-03-10T10:19:47.448 INFO:tasks.workunit.client.0.vm02.stdout:2/576: mkdir d0/d1a/d49/d5e/d65/dc4 0 2026-03-10T10:19:47.456 INFO:tasks.workunit.client.1.vm05.stdout:4/449: symlink d1/d31/dc/l8e 0 2026-03-10T10:19:47.456 INFO:tasks.workunit.client.1.vm05.stdout:5/599: creat da/db/d26/d70/fcc x:0 0 0 2026-03-10T10:19:47.459 INFO:tasks.workunit.client.1.vm05.stdout:9/512: rmdir d0/d1/d13/d62 39 
2026-03-10T10:19:47.467 INFO:tasks.workunit.client.1.vm05.stdout:8/514: unlink d7/d14/d24/c36 0 2026-03-10T10:19:47.467 INFO:tasks.workunit.client.0.vm02.stdout:6/564: rename d0/d8/d29/d2f/d4b/c9d to d0/d8/d9/cb5 0 2026-03-10T10:19:47.467 INFO:tasks.workunit.client.0.vm02.stdout:8/565: dread - d1/d1c/d23/f75 zero size 2026-03-10T10:19:47.467 INFO:tasks.workunit.client.0.vm02.stdout:8/566: write d1/f6d [5886103,78980] 0 2026-03-10T10:19:47.468 INFO:tasks.workunit.client.0.vm02.stdout:6/565: write d0/d8/d9/d7a/f99 [12156226,115422] 0 2026-03-10T10:19:47.472 INFO:tasks.workunit.client.0.vm02.stdout:6/566: readlink d0/d8/d29/d2f/d4b/l2e 0 2026-03-10T10:19:47.483 INFO:tasks.workunit.client.0.vm02.stdout:1/611: rmdir d4/da/d1a/d47/d88 39 2026-03-10T10:19:47.483 INFO:tasks.workunit.client.0.vm02.stdout:1/612: dread - d4/da/f71 zero size 2026-03-10T10:19:47.491 INFO:tasks.workunit.client.1.vm05.stdout:2/510: write db/d28/f3f [4929565,87746] 0 2026-03-10T10:19:47.500 INFO:tasks.workunit.client.0.vm02.stdout:5/722: truncate d1/db/d11/d84/f82 459922 0 2026-03-10T10:19:47.500 INFO:tasks.workunit.client.1.vm05.stdout:7/588: symlink d5/d1d/d29/d3e/d8c/d7f/lb9 0 2026-03-10T10:19:47.500 INFO:tasks.workunit.client.1.vm05.stdout:6/568: stat dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f6b 0 2026-03-10T10:19:47.501 INFO:tasks.workunit.client.1.vm05.stdout:4/450: creat d1/d64/f8f x:0 0 0 2026-03-10T10:19:47.502 INFO:tasks.workunit.client.0.vm02.stdout:6/567: sync 2026-03-10T10:19:47.502 INFO:tasks.workunit.client.0.vm02.stdout:2/577: mkdir d0/d8c/dc5 0 2026-03-10T10:19:47.503 INFO:tasks.workunit.client.0.vm02.stdout:6/568: read d0/d8/d9/f54 [2787471,27404] 0 2026-03-10T10:19:47.508 INFO:tasks.workunit.client.0.vm02.stdout:0/611: rename d9/d18/d1a/d22/f3f to d9/d18/d1a/d22/d24/d79/d7d/fbe 0 2026-03-10T10:19:47.515 INFO:tasks.workunit.client.0.vm02.stdout:0/612: dwrite d9/f28 [0,4194304] 0 2026-03-10T10:19:47.516 INFO:tasks.workunit.client.1.vm05.stdout:5/600: creat da/d96/fcd x:0 0 0 
2026-03-10T10:19:47.523 INFO:tasks.workunit.client.0.vm02.stdout:6/569: dread d0/d8/d9/f54 [0,4194304] 0 2026-03-10T10:19:47.542 INFO:tasks.workunit.client.0.vm02.stdout:7/588: link d1/dc/le d1/dc/d16/d28/d2c/lb3 0 2026-03-10T10:19:47.551 INFO:tasks.workunit.client.1.vm05.stdout:3/589: creat dd/d15/d24/d2c/fd1 x:0 0 0 2026-03-10T10:19:47.555 INFO:tasks.workunit.client.0.vm02.stdout:9/558: readlink da/d3c/d4c/d38/d82/d89/l92 0 2026-03-10T10:19:47.571 INFO:tasks.workunit.client.1.vm05.stdout:0/577: mkdir d1/d2/dc6 0 2026-03-10T10:19:47.574 INFO:tasks.workunit.client.1.vm05.stdout:0/578: chown d1/d2/d9/d50/f93 2049139 1 2026-03-10T10:19:47.585 INFO:tasks.workunit.client.0.vm02.stdout:2/578: fdatasync d0/f8f 0 2026-03-10T10:19:47.589 INFO:tasks.workunit.client.1.vm05.stdout:6/569: mknod dd/d36/d3f/d12/d44/d2a/d3d/d3e/cb9 0 2026-03-10T10:19:47.592 INFO:tasks.workunit.client.1.vm05.stdout:4/451: chown d1/d31/dc/d40/l42 11681646 1 2026-03-10T10:19:47.593 INFO:tasks.workunit.client.0.vm02.stdout:3/592: rename d1/d20/f7b to d1/d8/d21/d73/d78/d79/fbd 0 2026-03-10T10:19:47.594 INFO:tasks.workunit.client.0.vm02.stdout:2/579: dwrite d0/d1a/f66 [0,4194304] 0 2026-03-10T10:19:47.610 INFO:tasks.workunit.client.1.vm05.stdout:8/515: mknod d7/d14/d24/d3f/c94 0 2026-03-10T10:19:47.610 INFO:tasks.workunit.client.1.vm05.stdout:8/516: dread - d7/d14/d15/f84 zero size 2026-03-10T10:19:47.613 INFO:tasks.workunit.client.1.vm05.stdout:1/664: rename d4/d37/f90 to d4/df/d76/fc2 0 2026-03-10T10:19:47.613 INFO:tasks.workunit.client.0.vm02.stdout:7/589: dread - d1/d1b/d8e/f9d zero size 2026-03-10T10:19:47.620 INFO:tasks.workunit.client.0.vm02.stdout:5/723: mkdir d1/db/d11/d13/d28/d37/dce/dfb 0 2026-03-10T10:19:47.628 INFO:tasks.workunit.client.1.vm05.stdout:6/570: dread - dd/d36/d3f/d12/d44/d30/f8d zero size 2026-03-10T10:19:47.631 INFO:tasks.workunit.client.1.vm05.stdout:6/571: read dd/d36/d3f/f61 [3647060,58935] 0 2026-03-10T10:19:47.632 INFO:tasks.workunit.client.0.vm02.stdout:3/593: unlink 
d1/d8/c1f 0 2026-03-10T10:19:47.633 INFO:tasks.workunit.client.1.vm05.stdout:4/452: dwrite d1/d31/dc/d40/f67 [0,4194304] 0 2026-03-10T10:19:47.636 INFO:tasks.workunit.client.0.vm02.stdout:9/559: dread da/f25 [0,4194304] 0 2026-03-10T10:19:47.650 INFO:tasks.workunit.client.0.vm02.stdout:9/560: dwrite da/d3c/d4c/d38/f9e [0,4194304] 0 2026-03-10T10:19:47.660 INFO:tasks.workunit.client.1.vm05.stdout:9/513: creat d0/d1/fb0 x:0 0 0 2026-03-10T10:19:47.666 INFO:tasks.workunit.client.0.vm02.stdout:2/580: rename d0/d10/d69 to d0/d1a/d24/dc6 0 2026-03-10T10:19:47.667 INFO:tasks.workunit.client.1.vm05.stdout:0/579: dwrite d1/d2/d9/d31/d13/f73 [0,4194304] 0 2026-03-10T10:19:47.671 INFO:tasks.workunit.client.0.vm02.stdout:1/613: creat d4/d2c/d53/fbd x:0 0 0 2026-03-10T10:19:47.676 INFO:tasks.workunit.client.1.vm05.stdout:5/601: mkdir da/d9a/daf/dce 0 2026-03-10T10:19:47.677 INFO:tasks.workunit.client.1.vm05.stdout:1/665: rmdir d4/d37/d4e/d82 39 2026-03-10T10:19:47.687 INFO:tasks.workunit.client.1.vm05.stdout:4/453: creat d1/d31/dc/d40/d63/f90 x:0 0 0 2026-03-10T10:19:47.695 INFO:tasks.workunit.client.0.vm02.stdout:4/713: rename d1/d52/d53/f66 to d1/d75/ddd/fea 0 2026-03-10T10:19:47.695 INFO:tasks.workunit.client.0.vm02.stdout:0/613: symlink d9/d18/d1a/d22/d24/d51/lbf 0 2026-03-10T10:19:47.696 INFO:tasks.workunit.client.0.vm02.stdout:2/581: creat d0/d1a/d49/d5e/d65/db0/fc7 x:0 0 0 2026-03-10T10:19:47.700 INFO:tasks.workunit.client.1.vm05.stdout:2/511: rename db/le to db/d28/d4f/la2 0 2026-03-10T10:19:47.719 INFO:tasks.workunit.client.0.vm02.stdout:9/561: truncate da/f13 4560786 0 2026-03-10T10:19:47.720 INFO:tasks.workunit.client.0.vm02.stdout:1/614: dread d4/da/d1a/f40 [0,4194304] 0 2026-03-10T10:19:47.729 INFO:tasks.workunit.client.0.vm02.stdout:9/562: dread da/f25 [0,4194304] 0 2026-03-10T10:19:47.733 INFO:tasks.workunit.client.1.vm05.stdout:7/589: write d5/d17/d66/f9d [782379,104034] 0 2026-03-10T10:19:47.741 INFO:tasks.workunit.client.1.vm05.stdout:7/590: dwrite 
d5/d1d/d29/d3e/d8c/d96/fb6 [0,4194304] 0 2026-03-10T10:19:47.742 INFO:tasks.workunit.client.0.vm02.stdout:7/590: truncate d1/dc/d60/f79 2689125 0 2026-03-10T10:19:47.751 INFO:tasks.workunit.client.0.vm02.stdout:5/724: dwrite d1/db/f96 [0,4194304] 0 2026-03-10T10:19:47.767 INFO:tasks.workunit.client.0.vm02.stdout:7/591: dwrite d1/dc/d16/faa [0,4194304] 0 2026-03-10T10:19:47.781 INFO:tasks.workunit.client.1.vm05.stdout:1/666: write d4/df/d1c/d53/f6b [1473020,19877] 0 2026-03-10T10:19:47.788 INFO:tasks.workunit.client.1.vm05.stdout:8/517: creat d7/d14/d24/f95 x:0 0 0 2026-03-10T10:19:47.789 INFO:tasks.workunit.client.1.vm05.stdout:3/590: getdents dd/d15/d69 0 2026-03-10T10:19:47.789 INFO:tasks.workunit.client.1.vm05.stdout:3/591: chown dd/d15/d4c 519441020 1 2026-03-10T10:19:47.790 INFO:tasks.workunit.client.1.vm05.stdout:8/518: chown d7/d14/d3a/d49/f72 48886234 1 2026-03-10T10:19:47.791 INFO:tasks.workunit.client.1.vm05.stdout:8/519: write d7/d14/d15/f1f [13460323,92003] 0 2026-03-10T10:19:47.808 INFO:tasks.workunit.client.1.vm05.stdout:8/520: dread - d7/d2f/f7f zero size 2026-03-10T10:19:47.808 INFO:tasks.workunit.client.1.vm05.stdout:5/602: mknod da/db/d26/ccf 0 2026-03-10T10:19:47.821 INFO:tasks.workunit.client.1.vm05.stdout:7/591: creat d5/d1d/d20/d3b/fba x:0 0 0 2026-03-10T10:19:47.832 INFO:tasks.workunit.client.0.vm02.stdout:7/592: dread d1/dc/f25 [0,4194304] 0 2026-03-10T10:19:47.832 INFO:tasks.workunit.client.1.vm05.stdout:2/512: write db/d2d/f5d [1538531,87864] 0 2026-03-10T10:19:47.833 INFO:tasks.workunit.client.0.vm02.stdout:7/593: read - d1/dc/d16/d28/d2d/fb0 zero size 2026-03-10T10:19:47.836 INFO:tasks.workunit.client.1.vm05.stdout:6/572: link dd/d36/d3f/d12/d44/l45 dd/d36/d3f/d12/d44/daa/lba 0 2026-03-10T10:19:47.847 INFO:tasks.workunit.client.1.vm05.stdout:9/514: creat d0/df/fb1 x:0 0 0 2026-03-10T10:19:47.848 INFO:tasks.workunit.client.1.vm05.stdout:0/580: link d1/d2/d9/f6c d1/d2/d9/fc7 0 2026-03-10T10:19:47.850 
INFO:tasks.workunit.client.1.vm05.stdout:0/581: dread d1/d2/d9/fc7 [0,4194304] 0 2026-03-10T10:19:47.851 INFO:tasks.workunit.client.1.vm05.stdout:0/582: dread - d1/d2/d39/d6e/fac zero size 2026-03-10T10:19:47.852 INFO:tasks.workunit.client.1.vm05.stdout:0/583: chown d1/d2/d9/d31/d13/d15 7053 1 2026-03-10T10:19:47.853 INFO:tasks.workunit.client.1.vm05.stdout:0/584: write d1/f38 [4767956,28269] 0 2026-03-10T10:19:47.855 INFO:tasks.workunit.client.1.vm05.stdout:1/667: dread d4/d3d/d6e/faf [0,4194304] 0 2026-03-10T10:19:47.865 INFO:tasks.workunit.client.1.vm05.stdout:6/573: chown dd/d36/d3f/d12/d44/d30/c5e 221 1 2026-03-10T10:19:47.867 INFO:tasks.workunit.client.0.vm02.stdout:6/570: rename d0/d8/d29/d6d/c3b to d0/d8/d29/d6d/d96/cb6 0 2026-03-10T10:19:47.867 INFO:tasks.workunit.client.0.vm02.stdout:6/571: stat d0/d8/d29/d2f/d4b/da5/d6f/f7c 0 2026-03-10T10:19:47.879 INFO:tasks.workunit.client.1.vm05.stdout:9/515: symlink d0/d1/d13/de/d93/lb2 0 2026-03-10T10:19:47.900 INFO:tasks.workunit.client.0.vm02.stdout:2/582: unlink d0/f30 0 2026-03-10T10:19:47.900 INFO:tasks.workunit.client.1.vm05.stdout:3/592: truncate f2 85497 0 2026-03-10T10:19:47.900 INFO:tasks.workunit.client.1.vm05.stdout:8/521: mkdir d7/d14/d24/d3f/d6a/d8a/d96 0 2026-03-10T10:19:47.900 INFO:tasks.workunit.client.1.vm05.stdout:3/593: dread dd/d15/f23 [0,4194304] 0 2026-03-10T10:19:47.912 INFO:tasks.workunit.client.1.vm05.stdout:5/603: unlink da/c80 0 2026-03-10T10:19:47.915 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:47 vm02.local ceph-mon[50200]: pgmap v5: 65 pgs: 65 active+clean; 2.3 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-10T10:19:47.915 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:47 vm02.local ceph-mon[50200]: mgrmap e24: vm05.coparq(active, since 4s) 2026-03-10T10:19:47.917 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:47 vm05.local ceph-mon[59051]: pgmap v5: 65 pgs: 65 active+clean; 2.3 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-10T10:19:47.917 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:47 vm05.local ceph-mon[59051]: mgrmap e24: vm05.coparq(active, since 4s) 2026-03-10T10:19:47.923 INFO:tasks.workunit.client.0.vm02.stdout:8/567: rename d1/d1c/f2a to d1/d1c/d43/d6a/da8/d8e/fa9 0 2026-03-10T10:19:47.928 INFO:tasks.workunit.client.1.vm05.stdout:4/454: write d1/d3/f26 [482463,101810] 0 2026-03-10T10:19:47.928 INFO:tasks.workunit.client.1.vm05.stdout:7/592: write d5/fe [3716723,22241] 0 2026-03-10T10:19:47.940 INFO:tasks.workunit.client.0.vm02.stdout:5/725: dwrite d1/db/d11/d13/d28/d37/f76 [0,4194304] 0 2026-03-10T10:19:47.941 INFO:tasks.workunit.client.1.vm05.stdout:4/455: dread d1/d31/dc/f1f [4194304,4194304] 0 2026-03-10T10:19:47.943 INFO:tasks.workunit.client.1.vm05.stdout:1/668: write d4/df/d1c/d53/f98 [5183418,90414] 0 2026-03-10T10:19:47.957 INFO:tasks.workunit.client.1.vm05.stdout:0/585: dwrite d1/d2/d9/d31/fa8 [0,4194304] 0 2026-03-10T10:19:47.958 INFO:tasks.workunit.client.0.vm02.stdout:3/594: link d1/d20/la7 d1/lbe 0 2026-03-10T10:19:47.959 INFO:tasks.workunit.client.1.vm05.stdout:9/516: mknod d0/d70/cb3 0 2026-03-10T10:19:47.962 INFO:tasks.workunit.client.0.vm02.stdout:9/563: mknod da/d3c/d4c/d38/d4a/d99/caf 0 2026-03-10T10:19:47.964 INFO:tasks.workunit.client.0.vm02.stdout:2/583: creat d0/d1a/d49/fc8 x:0 0 0 2026-03-10T10:19:47.970 INFO:tasks.workunit.client.1.vm05.stdout:3/594: fdatasync fb 0 2026-03-10T10:19:47.975 INFO:tasks.workunit.client.1.vm05.stdout:5/604: rmdir da/db/d28/d8a 39 2026-03-10T10:19:47.975 INFO:tasks.workunit.client.1.vm05.stdout:5/605: write da/db/d28/f8d [1913651,46053] 0 2026-03-10T10:19:47.975 INFO:tasks.workunit.client.1.vm05.stdout:5/606: chown da/db/cc6 2040447557 1 2026-03-10T10:19:47.976 INFO:tasks.workunit.client.1.vm05.stdout:7/593: readlink d5/d1d/d20/d3b/l5b 0 2026-03-10T10:19:47.980 INFO:tasks.workunit.client.0.vm02.stdout:0/614: rename d9/d18/d1a/d22/d24/f2f to d9/d34/d3d/d7b/fc0 0 2026-03-10T10:19:47.986 
INFO:tasks.workunit.client.1.vm05.stdout:1/669: truncate d4/df/d1c/f23 3752809 0 2026-03-10T10:19:47.990 INFO:tasks.workunit.client.0.vm02.stdout:8/568: dread - d1/d1c/d23/d25/f8c zero size 2026-03-10T10:19:47.993 INFO:tasks.workunit.client.0.vm02.stdout:7/594: write d1/d1b/f72 [1527821,106793] 0 2026-03-10T10:19:48.003 INFO:tasks.workunit.client.1.vm05.stdout:9/517: fsync d0/d1/f9 0 2026-03-10T10:19:48.011 INFO:tasks.workunit.client.0.vm02.stdout:9/564: creat da/d3c/d4c/d38/d82/d89/fb0 x:0 0 0 2026-03-10T10:19:48.017 INFO:tasks.workunit.client.0.vm02.stdout:4/714: write d1/d32/f69 [2337694,27307] 0 2026-03-10T10:19:48.019 INFO:tasks.workunit.client.0.vm02.stdout:6/572: dwrite d0/f43 [4194304,4194304] 0 2026-03-10T10:19:48.023 INFO:tasks.workunit.client.0.vm02.stdout:3/595: dread d1/f54 [0,4194304] 0 2026-03-10T10:19:48.030 INFO:tasks.workunit.client.1.vm05.stdout:8/522: write d7/d14/d3a/d49/f6b [352305,34911] 0 2026-03-10T10:19:48.031 INFO:tasks.workunit.client.0.vm02.stdout:2/584: creat d0/d1a/d49/d5e/d8a/fc9 x:0 0 0 2026-03-10T10:19:48.033 INFO:tasks.workunit.client.1.vm05.stdout:8/523: write d7/d14/d3a/f50 [3682073,5101] 0 2026-03-10T10:19:48.034 INFO:tasks.workunit.client.1.vm05.stdout:8/524: fdatasync d7/f11 0 2026-03-10T10:19:48.034 INFO:tasks.workunit.client.0.vm02.stdout:0/615: dread d9/d18/d1a/f7e [0,4194304] 0 2026-03-10T10:19:48.038 INFO:tasks.workunit.client.0.vm02.stdout:5/726: write d1/d9c/fa9 [1585239,44616] 0 2026-03-10T10:19:48.048 INFO:tasks.workunit.client.0.vm02.stdout:1/615: rename d4/da/d27/d38/d3c/f96 to d4/d2c/d53/da6/fbe 0 2026-03-10T10:19:48.048 INFO:tasks.workunit.client.1.vm05.stdout:6/574: write dd/f14 [2126305,61902] 0 2026-03-10T10:19:48.048 INFO:tasks.workunit.client.1.vm05.stdout:5/607: fsync f5 0 2026-03-10T10:19:48.055 INFO:tasks.workunit.client.0.vm02.stdout:7/595: unlink d1/d1b/d8f/dad/f4a 0 2026-03-10T10:19:48.057 INFO:tasks.workunit.client.1.vm05.stdout:2/513: getdents db/d2d/d5e 0 2026-03-10T10:19:48.059 
INFO:tasks.workunit.client.1.vm05.stdout:4/456: mknod d1/d31/c91 0 2026-03-10T10:19:48.062 INFO:tasks.workunit.client.1.vm05.stdout:4/457: dwrite d1/d64/f8f [0,4194304] 0 2026-03-10T10:19:48.068 INFO:tasks.workunit.client.1.vm05.stdout:0/586: unlink d1/d2/d39/d3d/l75 0 2026-03-10T10:19:48.069 INFO:tasks.workunit.client.1.vm05.stdout:0/587: chown d1/d2/d9/f32 0 1 2026-03-10T10:19:48.071 INFO:tasks.workunit.client.0.vm02.stdout:4/715: readlink d1/d41/d5e/d78/d1a/d49/l72 0 2026-03-10T10:19:48.079 INFO:tasks.workunit.client.0.vm02.stdout:2/585: sync 2026-03-10T10:19:48.079 INFO:tasks.workunit.client.1.vm05.stdout:9/518: symlink d0/d70/lb4 0 2026-03-10T10:19:48.089 INFO:tasks.workunit.client.0.vm02.stdout:0/616: read - d9/d34/d3d/d67/f9f zero size 2026-03-10T10:19:48.105 INFO:tasks.workunit.client.1.vm05.stdout:3/595: symlink dd/d15/ld2 0 2026-03-10T10:19:48.106 INFO:tasks.workunit.client.0.vm02.stdout:2/586: dread d0/d1a/d49/d5e/f68 [0,4194304] 0 2026-03-10T10:19:48.106 INFO:tasks.workunit.client.0.vm02.stdout:2/587: write d0/d1a/d49/d5e/d8a/fc9 [772127,74476] 0 2026-03-10T10:19:48.106 INFO:tasks.workunit.client.0.vm02.stdout:8/569: symlink d1/d1c/d43/laa 0 2026-03-10T10:19:48.106 INFO:tasks.workunit.client.0.vm02.stdout:8/570: stat d1/d1c/d43/d6a/da8/f44 0 2026-03-10T10:19:48.106 INFO:tasks.workunit.client.0.vm02.stdout:8/571: fdatasync d1/d1c/d43/d5b/f60 0 2026-03-10T10:19:48.106 INFO:tasks.workunit.client.0.vm02.stdout:8/572: stat d1/f16 0 2026-03-10T10:19:48.106 INFO:tasks.workunit.client.0.vm02.stdout:2/588: dread d0/d1a/d49/f54 [0,4194304] 0 2026-03-10T10:19:48.106 INFO:tasks.workunit.client.0.vm02.stdout:8/573: readlink d1/d1c/d43/laa 0 2026-03-10T10:19:48.111 INFO:tasks.workunit.client.0.vm02.stdout:0/617: sync 2026-03-10T10:19:48.112 INFO:tasks.workunit.client.0.vm02.stdout:2/589: chown d0/f70 66051359 1 2026-03-10T10:19:48.114 INFO:tasks.workunit.client.0.vm02.stdout:2/590: write d0/d10/d81/f9b [76658,18182] 0 2026-03-10T10:19:48.130 
INFO:tasks.workunit.client.1.vm05.stdout:1/670: dwrite d4/d20/f2d [0,4194304] 0 2026-03-10T10:19:48.141 INFO:tasks.workunit.client.0.vm02.stdout:4/716: dread d1/d10/f71 [0,4194304] 0 2026-03-10T10:19:48.150 INFO:tasks.workunit.client.0.vm02.stdout:9/565: dwrite da/d3c/d4c/d38/f84 [0,4194304] 0 2026-03-10T10:19:48.165 INFO:tasks.workunit.client.0.vm02.stdout:7/596: mknod d1/cb4 0 2026-03-10T10:19:48.179 INFO:tasks.workunit.client.0.vm02.stdout:8/574: mkdir d1/d1c/d43/d5b/dab 0 2026-03-10T10:19:48.186 INFO:tasks.workunit.client.0.vm02.stdout:0/618: truncate d9/f6c 1487348 0 2026-03-10T10:19:48.195 INFO:tasks.workunit.client.0.vm02.stdout:5/727: truncate d1/d9c/fa9 1176253 0 2026-03-10T10:19:48.197 INFO:tasks.workunit.client.0.vm02.stdout:5/728: chown d1/db/d11/d16/d79/d85/fa0 897309 1 2026-03-10T10:19:48.197 INFO:tasks.workunit.client.0.vm02.stdout:3/596: symlink d1/d8/d44/lbf 0 2026-03-10T10:19:48.200 INFO:tasks.workunit.client.0.vm02.stdout:5/729: dread d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fdf [0,4194304] 0 2026-03-10T10:19:48.215 INFO:tasks.workunit.client.0.vm02.stdout:1/616: rename d4/d1b/l2e to d4/lbf 0 2026-03-10T10:19:48.220 INFO:tasks.workunit.client.1.vm05.stdout:6/575: chown dd/d36/d3f/d12/d44/l47 483441 1 2026-03-10T10:19:48.221 INFO:tasks.workunit.client.0.vm02.stdout:6/573: getdents d0/d8/d29/d2f/d4b 0 2026-03-10T10:19:48.229 INFO:tasks.workunit.client.1.vm05.stdout:7/594: rename d5/d26/f4d to d5/d1d/d20/d35/fbb 0 2026-03-10T10:19:48.231 INFO:tasks.workunit.client.1.vm05.stdout:2/514: rmdir db/d2d/d5e 39 2026-03-10T10:19:48.231 INFO:tasks.workunit.client.0.vm02.stdout:0/619: creat d9/d34/d3d/d65/d89/fc1 x:0 0 0 2026-03-10T10:19:48.231 INFO:tasks.workunit.client.0.vm02.stdout:2/591: getdents d0/d1a/d24/dbf 0 2026-03-10T10:19:48.232 INFO:tasks.workunit.client.0.vm02.stdout:3/597: creat d1/d20/db2/fc0 x:0 0 0 2026-03-10T10:19:48.238 INFO:tasks.workunit.client.0.vm02.stdout:2/592: dread d0/d1a/f25 [0,4194304] 0 2026-03-10T10:19:48.238 
INFO:tasks.workunit.client.0.vm02.stdout:2/593: write d0/d71/fb9 [998840,102791] 0 2026-03-10T10:19:48.240 INFO:tasks.workunit.client.0.vm02.stdout:2/594: chown d0/f9 125 1 2026-03-10T10:19:48.245 INFO:tasks.workunit.client.0.vm02.stdout:9/566: write da/d3c/d4c/d2c/d34/f57 [3790036,23973] 0 2026-03-10T10:19:48.246 INFO:tasks.workunit.client.0.vm02.stdout:5/730: truncate d1/db/f88 654915 0 2026-03-10T10:19:48.247 INFO:tasks.workunit.client.0.vm02.stdout:0/620: dread d9/d18/d1a/d46/d5d/da7/fb2 [0,4194304] 0 2026-03-10T10:19:48.254 INFO:tasks.workunit.client.0.vm02.stdout:8/575: rename d1/d1c/d23/d3e to d1/d1c/d43/d5b/d88/dac 0 2026-03-10T10:19:48.255 INFO:tasks.workunit.client.1.vm05.stdout:9/519: fdatasync d0/df/d11/f24 0 2026-03-10T10:19:48.259 INFO:tasks.workunit.client.0.vm02.stdout:8/576: chown d1/d1c/d43/d6a/da8/d56/f85 47939 1 2026-03-10T10:19:48.261 INFO:tasks.workunit.client.1.vm05.stdout:3/596: dread - dd/d39/d66/f7e zero size 2026-03-10T10:19:48.261 INFO:tasks.workunit.client.1.vm05.stdout:3/597: stat dd/d20/d56/d5e/dab 0 2026-03-10T10:19:48.266 INFO:tasks.workunit.client.1.vm05.stdout:2/515: sync 2026-03-10T10:19:48.266 INFO:tasks.workunit.client.1.vm05.stdout:2/516: stat db/d28/d4f/d59/f6f 0 2026-03-10T10:19:48.267 INFO:tasks.workunit.client.1.vm05.stdout:2/517: chown db/d28/d4f/l6b 0 1 2026-03-10T10:19:48.267 INFO:tasks.workunit.client.1.vm05.stdout:1/671: creat d4/d3d/d6e/fc3 x:0 0 0 2026-03-10T10:19:48.270 INFO:tasks.workunit.client.1.vm05.stdout:6/576: rmdir dd/d36/d3f/d12/d44/d2a/d77/d8b 39 2026-03-10T10:19:48.271 INFO:tasks.workunit.client.1.vm05.stdout:6/577: write dd/f14 [1815552,115323] 0 2026-03-10T10:19:48.273 INFO:tasks.workunit.client.0.vm02.stdout:3/598: truncate d1/f54 217839 0 2026-03-10T10:19:48.282 INFO:tasks.workunit.client.1.vm05.stdout:6/578: dread dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f6b [0,4194304] 0 2026-03-10T10:19:48.283 INFO:tasks.workunit.client.0.vm02.stdout:2/595: symlink d0/d1a/d49/lca 0 2026-03-10T10:19:48.290 
INFO:tasks.workunit.client.1.vm05.stdout:8/525: rename d7/l28 to d7/d14/d24/d3f/d6a/d8a/l97 0 2026-03-10T10:19:48.300 INFO:tasks.workunit.client.0.vm02.stdout:7/597: dwrite d1/dc/d16/d28/f73 [0,4194304] 0 2026-03-10T10:19:48.312 INFO:tasks.workunit.client.0.vm02.stdout:6/574: dwrite d0/d8/d29/d2f/d4b/da5/d6f/f7c [0,4194304] 0 2026-03-10T10:19:48.315 INFO:tasks.workunit.client.1.vm05.stdout:9/520: creat d0/df/d74/d8c/fb5 x:0 0 0 2026-03-10T10:19:48.319 INFO:tasks.workunit.client.0.vm02.stdout:6/575: read d0/d8/d29/d2f/f38 [993791,76450] 0 2026-03-10T10:19:48.321 INFO:tasks.workunit.client.0.vm02.stdout:6/576: chown d0/d8/d29/d2f/f77 111408 1 2026-03-10T10:19:48.335 INFO:tasks.workunit.client.1.vm05.stdout:3/598: symlink dd/d15/d24/d74/ld3 0 2026-03-10T10:19:48.344 INFO:tasks.workunit.client.1.vm05.stdout:5/608: write da/f20 [479266,127826] 0 2026-03-10T10:19:48.354 INFO:tasks.workunit.client.1.vm05.stdout:2/518: fsync db/d61/f92 0 2026-03-10T10:19:48.361 INFO:tasks.workunit.client.1.vm05.stdout:6/579: symlink dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/lbb 0 2026-03-10T10:19:48.365 INFO:tasks.workunit.client.1.vm05.stdout:6/580: write dd/d36/f69 [3722269,33592] 0 2026-03-10T10:19:48.365 INFO:tasks.workunit.client.1.vm05.stdout:8/526: creat d7/d14/d24/d3f/d4f/f98 x:0 0 0 2026-03-10T10:19:48.365 INFO:tasks.workunit.client.0.vm02.stdout:2/596: creat d0/d1a/d49/d5e/d65/db0/fcb x:0 0 0 2026-03-10T10:19:48.374 INFO:tasks.workunit.client.0.vm02.stdout:2/597: dwrite d0/d10/d81/f9b [0,4194304] 0 2026-03-10T10:19:48.386 INFO:tasks.workunit.client.1.vm05.stdout:8/527: dwrite d7/d14/d15/f3c [4194304,4194304] 0 2026-03-10T10:19:48.387 INFO:tasks.workunit.client.0.vm02.stdout:9/567: mkdir da/d3c/d4c/db1 0 2026-03-10T10:19:48.390 INFO:tasks.workunit.client.1.vm05.stdout:8/528: chown d7/d14/f40 1049910 1 2026-03-10T10:19:48.390 INFO:tasks.workunit.client.1.vm05.stdout:8/529: read d7/f21 [1941889,67599] 0 2026-03-10T10:19:48.413 INFO:tasks.workunit.client.0.vm02.stdout:7/598: mknod 
d1/dc/d16/d28/d2c/cb5 0 2026-03-10T10:19:48.416 INFO:tasks.workunit.client.0.vm02.stdout:4/717: link d1/d52/lb0 d1/d10/leb 0 2026-03-10T10:19:48.416 INFO:tasks.workunit.client.1.vm05.stdout:5/609: dwrite da/db/d26/d5c/fb5 [0,4194304] 0 2026-03-10T10:19:48.417 INFO:tasks.workunit.client.0.vm02.stdout:4/718: chown d1/d52/fcd 112310554 1 2026-03-10T10:19:48.424 INFO:tasks.workunit.client.1.vm05.stdout:5/610: chown da/db/d26/f4c 0 1 2026-03-10T10:19:48.427 INFO:tasks.workunit.client.0.vm02.stdout:1/617: write d4/da/d27/d38/d80/f94 [1798906,74021] 0 2026-03-10T10:19:48.428 INFO:tasks.workunit.client.0.vm02.stdout:1/618: chown d4/da/d1a/f40 171 1 2026-03-10T10:19:48.428 INFO:tasks.workunit.client.0.vm02.stdout:1/619: stat d4/da/d1a/fa1 0 2026-03-10T10:19:48.434 INFO:tasks.workunit.client.1.vm05.stdout:0/588: link d1/d2/d9/d31/l29 d1/d2/d9/d31/d13/d15/d4e/d8a/lc8 0 2026-03-10T10:19:48.439 INFO:tasks.workunit.client.0.vm02.stdout:8/577: mkdir d1/d1c/d24/dad 0 2026-03-10T10:19:48.442 INFO:tasks.workunit.client.1.vm05.stdout:6/581: creat dd/d36/d3f/d12/d44/daa/fbc x:0 0 0 2026-03-10T10:19:48.451 INFO:tasks.workunit.client.0.vm02.stdout:2/598: mkdir d0/d1a/d49/dcc 0 2026-03-10T10:19:48.459 INFO:tasks.workunit.client.0.vm02.stdout:5/731: creat d1/db/d11/d84/d40/d4f/d5f/ffc x:0 0 0 2026-03-10T10:19:48.459 INFO:tasks.workunit.client.0.vm02.stdout:5/732: stat d1/db/f1e 0 2026-03-10T10:19:48.460 INFO:tasks.workunit.client.1.vm05.stdout:4/458: rename d1/d31/dc/d40/d45/f8c to d1/f92 0 2026-03-10T10:19:48.461 INFO:tasks.workunit.client.0.vm02.stdout:9/568: dread da/d3c/d4c/d2c/d34/f83 [0,4194304] 0 2026-03-10T10:19:48.462 INFO:tasks.workunit.client.0.vm02.stdout:4/719: symlink d1/d41/d5e/d78/d1a/lec 0 2026-03-10T10:19:48.475 INFO:tasks.workunit.client.1.vm05.stdout:0/589: readlink d1/d2/d9/d50/lb9 0 2026-03-10T10:19:48.476 INFO:tasks.workunit.client.0.vm02.stdout:1/620: read - d4/da/d1a/fa1 zero size 2026-03-10T10:19:48.476 INFO:tasks.workunit.client.0.vm02.stdout:3/599: rename 
d1/d6/d8b/l99 to d1/d8/d21/d73/d78/d79/lc1 0 2026-03-10T10:19:48.477 INFO:tasks.workunit.client.1.vm05.stdout:5/611: sync 2026-03-10T10:19:48.483 INFO:tasks.workunit.client.1.vm05.stdout:6/582: mkdir dd/d36/d3f/dbd 0 2026-03-10T10:19:48.484 INFO:tasks.workunit.client.0.vm02.stdout:1/621: dread d4/da/d1a/d47/fa0 [0,4194304] 0 2026-03-10T10:19:48.484 INFO:tasks.workunit.client.0.vm02.stdout:1/622: chown d4/d2c/d91 0 1 2026-03-10T10:19:48.488 INFO:tasks.workunit.client.1.vm05.stdout:2/519: write db/d1c/d40/f4d [42311,44239] 0 2026-03-10T10:19:48.490 INFO:tasks.workunit.client.0.vm02.stdout:6/577: dwrite d0/d8/d29/d2f/d4b/f39 [4194304,4194304] 0 2026-03-10T10:19:48.492 INFO:tasks.workunit.client.0.vm02.stdout:0/621: dwrite d9/d18/d1a/d22/d24/d80/d49/f8b [0,4194304] 0 2026-03-10T10:19:48.492 INFO:tasks.workunit.client.1.vm05.stdout:7/595: dwrite d5/d1d/d20/fa2 [0,4194304] 0 2026-03-10T10:19:48.496 INFO:tasks.workunit.client.1.vm05.stdout:7/596: chown d5/l1b 245 1 2026-03-10T10:19:48.504 INFO:tasks.workunit.client.1.vm05.stdout:1/672: dwrite d4/f46 [0,4194304] 0 2026-03-10T10:19:48.504 INFO:tasks.workunit.client.1.vm05.stdout:9/521: dwrite d0/d1/d13/d26/f4e [4194304,4194304] 0 2026-03-10T10:19:48.505 INFO:tasks.workunit.client.0.vm02.stdout:8/578: write d1/d1c/f42 [5010418,31929] 0 2026-03-10T10:19:48.515 INFO:tasks.workunit.client.1.vm05.stdout:3/599: dwrite dd/d39/fb6 [0,4194304] 0 2026-03-10T10:19:48.526 INFO:tasks.workunit.client.0.vm02.stdout:7/599: symlink d1/dc/d16/d28/lb6 0 2026-03-10T10:19:48.533 INFO:tasks.workunit.client.1.vm05.stdout:7/597: dread d5/d17/d66/f9d [0,4194304] 0 2026-03-10T10:19:48.544 INFO:tasks.workunit.client.1.vm05.stdout:5/612: read - da/db/d26/d5c/fc5 zero size 2026-03-10T10:19:48.553 INFO:tasks.workunit.client.0.vm02.stdout:5/733: dread d1/db/d11/d84/d40/f66 [0,4194304] 0 2026-03-10T10:19:48.556 INFO:tasks.workunit.client.0.vm02.stdout:3/600: truncate d1/d20/d52/f76 274114 0 2026-03-10T10:19:48.563 
INFO:tasks.workunit.client.1.vm05.stdout:2/520: mkdir db/d28/d4f/da3 0 2026-03-10T10:19:48.592 INFO:tasks.workunit.client.1.vm05.stdout:9/522: creat d0/d70/fb6 x:0 0 0 2026-03-10T10:19:48.626 INFO:tasks.workunit.client.1.vm05.stdout:8/530: getdents d7/d14/d24/d3f/d4f 0 2026-03-10T10:19:48.642 INFO:tasks.workunit.client.0.vm02.stdout:8/579: mknod d1/d1c/d43/d5b/d88/dac/cae 0 2026-03-10T10:19:48.642 INFO:tasks.workunit.client.0.vm02.stdout:7/600: creat d1/d1b/d49/d98/fb7 x:0 0 0 2026-03-10T10:19:48.642 INFO:tasks.workunit.client.0.vm02.stdout:7/601: dwrite d1/f6b [0,4194304] 0 2026-03-10T10:19:48.642 INFO:tasks.workunit.client.0.vm02.stdout:7/602: stat d1/dc/d16/d28/c51 0 2026-03-10T10:19:48.642 INFO:tasks.workunit.client.0.vm02.stdout:4/720: creat d1/d41/d5e/d78/d44/de7/fed x:0 0 0 2026-03-10T10:19:48.642 INFO:tasks.workunit.client.1.vm05.stdout:4/459: rmdir d1/d3/d82 0 2026-03-10T10:19:48.644 INFO:tasks.workunit.client.1.vm05.stdout:7/598: symlink d5/d1d/d20/d2d/d5d/d7a/lbc 0 2026-03-10T10:19:48.647 INFO:tasks.workunit.client.1.vm05.stdout:5/613: creat da/d9a/dc7/db4/dbd/fd0 x:0 0 0 2026-03-10T10:19:48.663 INFO:tasks.workunit.client.0.vm02.stdout:3/601: symlink d1/d6/d8b/lc2 0 2026-03-10T10:19:48.663 INFO:tasks.workunit.client.0.vm02.stdout:1/623: mknod d4/da/d1a/d47/cc0 0 2026-03-10T10:19:48.663 INFO:tasks.workunit.client.0.vm02.stdout:1/624: chown d4/da/d27/d38/l42 197 1 2026-03-10T10:19:48.666 INFO:tasks.workunit.client.0.vm02.stdout:2/599: creat d0/fcd x:0 0 0 2026-03-10T10:19:48.668 INFO:tasks.workunit.client.1.vm05.stdout:2/521: dwrite db/d1c/f69 [0,4194304] 0 2026-03-10T10:19:48.676 INFO:tasks.workunit.client.0.vm02.stdout:9/569: creat da/d3c/d4c/d38/fb2 x:0 0 0 2026-03-10T10:19:48.679 INFO:tasks.workunit.client.0.vm02.stdout:3/602: rmdir d1/d8/d44 39 2026-03-10T10:19:48.684 INFO:tasks.workunit.client.1.vm05.stdout:3/600: creat dd/d20/d56/d5e/dab/d9c/fd4 x:0 0 0 2026-03-10T10:19:48.685 INFO:tasks.workunit.client.0.vm02.stdout:1/625: unlink d4/da/c50 0 
2026-03-10T10:19:48.685 INFO:tasks.workunit.client.1.vm05.stdout:8/531: symlink d7/d2f/d57/l99 0 2026-03-10T10:19:48.685 INFO:tasks.workunit.client.1.vm05.stdout:7/599: rename d5/d26/f2c to d5/d1d/d20/d91/fbd 0 2026-03-10T10:19:48.687 INFO:tasks.workunit.client.1.vm05.stdout:8/532: stat d7/d14/d3a/d49/d65/f83 0 2026-03-10T10:19:48.689 INFO:tasks.workunit.client.0.vm02.stdout:0/622: rename d9/d34/d3d/fbd to d9/d18/d1a/d22/d24/d8e/d9b/fc2 0 2026-03-10T10:19:48.690 INFO:tasks.workunit.client.1.vm05.stdout:6/583: creat dd/d36/d3f/fbe x:0 0 0 2026-03-10T10:19:48.691 INFO:tasks.workunit.client.1.vm05.stdout:4/460: symlink d1/d31/d76/l93 0 2026-03-10T10:19:48.701 INFO:tasks.workunit.client.1.vm05.stdout:4/461: dwrite d1/d3/f26 [0,4194304] 0 2026-03-10T10:19:48.704 INFO:tasks.workunit.client.0.vm02.stdout:5/734: creat d1/db/d11/ffd x:0 0 0 2026-03-10T10:19:48.709 INFO:tasks.workunit.client.1.vm05.stdout:4/462: dwrite d1/d31/dc/d40/d45/f48 [0,4194304] 0 2026-03-10T10:19:48.711 INFO:tasks.workunit.client.1.vm05.stdout:4/463: chown d1/d3 1271049 1 2026-03-10T10:19:48.712 INFO:tasks.workunit.client.0.vm02.stdout:3/603: rmdir d1/d6/d8b 39 2026-03-10T10:19:48.714 INFO:tasks.workunit.client.0.vm02.stdout:1/626: symlink d4/da/d27/d38/d3c/lc1 0 2026-03-10T10:19:48.717 INFO:tasks.workunit.client.0.vm02.stdout:7/603: sync 2026-03-10T10:19:48.721 INFO:tasks.workunit.client.0.vm02.stdout:4/721: rename d1/d41/d5e/d78/d1a/d49/d81/dc6/lcc to d1/d10/db/lee 0 2026-03-10T10:19:48.727 INFO:tasks.workunit.client.1.vm05.stdout:4/464: dwrite d1/d31/dc/d40/d63/f90 [0,4194304] 0 2026-03-10T10:19:48.732 INFO:tasks.workunit.client.1.vm05.stdout:2/522: rename db/d4e to db/d28/d4f/d59/da4 0 2026-03-10T10:19:48.733 INFO:tasks.workunit.client.1.vm05.stdout:8/533: symlink d7/d14/d15/d3b/l9a 0 2026-03-10T10:19:48.733 INFO:tasks.workunit.client.1.vm05.stdout:6/584: symlink dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/lbf 0 2026-03-10T10:19:48.733 INFO:tasks.workunit.client.1.vm05.stdout:2/523: readlink 
db/d2d/l3e 0 2026-03-10T10:19:48.743 INFO:tasks.workunit.client.1.vm05.stdout:7/600: getdents d5/d17/d85 0 2026-03-10T10:19:48.743 INFO:tasks.workunit.client.0.vm02.stdout:2/600: mknod d0/d8c/dc5/cce 0 2026-03-10T10:19:48.743 INFO:tasks.workunit.client.0.vm02.stdout:9/570: symlink da/d3c/d4c/db1/lb3 0 2026-03-10T10:19:48.744 INFO:tasks.workunit.client.0.vm02.stdout:2/601: chown d0/d1a/d24/l3a 1891022437 1 2026-03-10T10:19:48.749 INFO:tasks.workunit.client.1.vm05.stdout:8/534: creat d7/d14/f9b x:0 0 0 2026-03-10T10:19:48.769 INFO:tasks.workunit.client.1.vm05.stdout:6/585: rename dd/d36/d3f/d12/d58/lb5 to dd/d36/d3f/d12/d44/d2a/d3d/lc0 0 2026-03-10T10:19:48.785 INFO:tasks.workunit.client.1.vm05.stdout:7/601: mkdir d5/d1d/d29/dbe 0 2026-03-10T10:19:48.795 INFO:tasks.workunit.client.1.vm05.stdout:7/602: read - d5/d1d/d20/d35/f78 zero size 2026-03-10T10:19:48.795 INFO:tasks.workunit.client.1.vm05.stdout:8/535: creat d7/d14/d24/f9c x:0 0 0 2026-03-10T10:19:48.796 INFO:tasks.workunit.client.1.vm05.stdout:6/586: read dd/d1b/f1d [589738,124263] 0 2026-03-10T10:19:48.797 INFO:tasks.workunit.client.1.vm05.stdout:8/536: dread - d7/d14/d3a/d49/d65/f83 zero size 2026-03-10T10:19:48.797 INFO:tasks.workunit.client.1.vm05.stdout:6/587: chown dd/d1b/c95 21737310 1 2026-03-10T10:19:48.819 INFO:tasks.workunit.client.1.vm05.stdout:6/588: symlink dd/d36/d3f/d12/d44/lc1 0 2026-03-10T10:19:48.832 INFO:tasks.workunit.client.0.vm02.stdout:1/627: rename d4/d1b/f6f to d4/da/d1a/d47/d78/fc2 0 2026-03-10T10:19:48.833 INFO:tasks.workunit.client.0.vm02.stdout:9/571: rmdir da/d3c/d4c/d75 39 2026-03-10T10:19:48.844 INFO:tasks.workunit.client.0.vm02.stdout:4/722: mkdir d1/def 0 2026-03-10T10:19:48.844 INFO:tasks.workunit.client.0.vm02.stdout:1/628: dwrite d4/d2c/f54 [0,4194304] 0 2026-03-10T10:19:48.857 INFO:tasks.workunit.client.0.vm02.stdout:7/604: rename d1/d1b/d8f/f77 to d1/dc/d16/fb8 0 2026-03-10T10:19:48.867 INFO:tasks.workunit.client.1.vm05.stdout:6/589: rename dd/d36/d3f/d12/d44/d30/d4a/c7a 
to dd/d36/d3f/d12/d44/d30/d4a/d6e/cc2 0 2026-03-10T10:19:48.872 INFO:tasks.workunit.client.1.vm05.stdout:6/590: chown dd/d36/d3f/d12/d44/fa1 288260362 1 2026-03-10T10:19:48.872 INFO:tasks.workunit.client.1.vm05.stdout:6/591: dread f2 [4194304,4194304] 0 2026-03-10T10:19:48.872 INFO:tasks.workunit.client.0.vm02.stdout:9/572: dwrite da/d3c/d4c/d38/d82/d8c/f98 [0,4194304] 0 2026-03-10T10:19:48.879 INFO:tasks.workunit.client.1.vm05.stdout:6/592: dwrite dd/f14 [4194304,4194304] 0 2026-03-10T10:19:48.885 INFO:tasks.workunit.client.0.vm02.stdout:2/602: dread d0/f9 [0,4194304] 0 2026-03-10T10:19:48.890 INFO:tasks.workunit.client.0.vm02.stdout:3/604: creat d1/fc3 x:0 0 0 2026-03-10T10:19:48.892 INFO:tasks.workunit.client.0.vm02.stdout:5/735: getdents d1/db/d11/d62/d67 0 2026-03-10T10:19:48.895 INFO:tasks.workunit.client.0.vm02.stdout:1/629: dread d4/d1b/f44 [0,4194304] 0 2026-03-10T10:19:48.898 INFO:tasks.workunit.client.0.vm02.stdout:2/603: rename d0/d1a/d49/d5e/d65/f9d to d0/d1a/d24/dc6/fcf 0 2026-03-10T10:19:48.903 INFO:tasks.workunit.client.0.vm02.stdout:5/736: dread - d1/db/d11/d84/d40/d4f/f6e zero size 2026-03-10T10:19:48.921 INFO:tasks.workunit.client.0.vm02.stdout:1/630: write d4/d2c/fac [940373,65147] 0 2026-03-10T10:19:48.929 INFO:tasks.workunit.client.0.vm02.stdout:1/631: dwrite d4/ff [8388608,4194304] 0 2026-03-10T10:19:48.929 INFO:tasks.workunit.client.0.vm02.stdout:3/605: symlink d1/d8/d21/lc4 0 2026-03-10T10:19:48.929 INFO:tasks.workunit.client.0.vm02.stdout:5/737: mknod d1/db/d11/d13/d28/d37/d3d/da3/cfe 0 2026-03-10T10:19:48.929 INFO:tasks.workunit.client.0.vm02.stdout:9/573: link da/d3c/d4c/d2c/d34/l4e da/d3c/d4c/lb4 0 2026-03-10T10:19:48.929 INFO:tasks.workunit.client.0.vm02.stdout:2/604: link d0/d10/c82 d0/d1a/d49/dcc/cd0 0 2026-03-10T10:19:48.932 INFO:tasks.workunit.client.0.vm02.stdout:5/738: truncate d1/db/d11/d84/d40/d4f/d5f/f6b 1801464 0 2026-03-10T10:19:48.934 INFO:tasks.workunit.client.0.vm02.stdout:9/574: read da/f13 [865835,61566] 0 
2026-03-10T10:19:48.950 INFO:tasks.workunit.client.0.vm02.stdout:3/606: rename d1/d8/d86/l9a to d1/d6/d8e/lc5 0 2026-03-10T10:19:48.950 INFO:tasks.workunit.client.0.vm02.stdout:9/575: creat da/d3c/d4c/d38/d82/d89/fb5 x:0 0 0 2026-03-10T10:19:48.950 INFO:tasks.workunit.client.0.vm02.stdout:3/607: fdatasync d1/d8/d21/d73/d78/d79/fbd 0 2026-03-10T10:19:48.950 INFO:tasks.workunit.client.0.vm02.stdout:3/608: symlink d1/d8/d21/d7d/lc6 0 2026-03-10T10:19:48.950 INFO:tasks.workunit.client.0.vm02.stdout:3/609: read - d1/d20/f9d zero size 2026-03-10T10:19:48.952 INFO:tasks.workunit.client.0.vm02.stdout:3/610: truncate d1/d8/d86/f87 1051960 0 2026-03-10T10:19:48.953 INFO:tasks.workunit.client.0.vm02.stdout:3/611: fdatasync d1/d6/f3a 0 2026-03-10T10:19:48.956 INFO:tasks.workunit.client.0.vm02.stdout:3/612: creat d1/d6/d8e/fc7 x:0 0 0 2026-03-10T10:19:48.964 INFO:tasks.workunit.client.0.vm02.stdout:3/613: creat d1/d20/fc8 x:0 0 0 2026-03-10T10:19:48.964 INFO:tasks.workunit.client.1.vm05.stdout:6/593: dread dd/d1b/f1d [0,4194304] 0 2026-03-10T10:19:48.966 INFO:tasks.workunit.client.0.vm02.stdout:3/614: mkdir d1/d58/dc9 0 2026-03-10T10:19:48.966 INFO:tasks.workunit.client.0.vm02.stdout:3/615: dread - d1/d6/d8e/fc7 zero size 2026-03-10T10:19:48.970 INFO:tasks.workunit.client.0.vm02.stdout:3/616: unlink d1/d58/fb0 0 2026-03-10T10:19:48.987 INFO:tasks.workunit.client.0.vm02.stdout:3/617: fdatasync d1/d8/f7c 0 2026-03-10T10:19:48.989 INFO:tasks.workunit.client.0.vm02.stdout:1/632: sync 2026-03-10T10:19:48.996 INFO:tasks.workunit.client.0.vm02.stdout:1/633: mkdir d4/dc3 0 2026-03-10T10:19:49.005 INFO:tasks.workunit.client.0.vm02.stdout:1/634: read d4/da/d27/d38/f3b [1059201,94589] 0 2026-03-10T10:19:49.013 INFO:tasks.workunit.client.0.vm02.stdout:1/635: rename d4/da/d1a/c16 to d4/da/d27/d38/cc4 0 2026-03-10T10:19:49.024 INFO:tasks.workunit.client.0.vm02.stdout:1/636: link d4/d2c/d53/fbd d4/d2c/d53/fc5 0 2026-03-10T10:19:49.070 INFO:tasks.workunit.client.1.vm05.stdout:0/590: truncate 
d1/d2/d39/d3d/f72 2170772 0 2026-03-10T10:19:49.070 INFO:tasks.workunit.client.1.vm05.stdout:1/673: write d4/d39/f67 [529087,74339] 0 2026-03-10T10:19:49.077 INFO:tasks.workunit.client.0.vm02.stdout:6/578: dwrite d0/d8/d29/d2f/d4b/da5/d6f/fa2 [0,4194304] 0 2026-03-10T10:19:49.082 INFO:tasks.workunit.client.1.vm05.stdout:9/523: write d0/df/d11/f2c [3900474,21484] 0 2026-03-10T10:19:49.096 INFO:tasks.workunit.client.1.vm05.stdout:5/614: dwrite da/db/d26/d35/f7d [0,4194304] 0 2026-03-10T10:19:49.109 INFO:tasks.workunit.client.0.vm02.stdout:8/580: truncate d1/d1c/d43/d6a/da8/f6e 1327800 0 2026-03-10T10:19:49.111 INFO:tasks.workunit.client.0.vm02.stdout:6/579: creat d0/d8/d29/d2f/d50/d7e/fb7 x:0 0 0 2026-03-10T10:19:49.111 INFO:tasks.workunit.client.0.vm02.stdout:6/580: chown d0/c95 103 1 2026-03-10T10:19:49.114 INFO:tasks.workunit.client.1.vm05.stdout:3/601: write dd/d39/d5c/fb9 [320866,57316] 0 2026-03-10T10:19:49.127 INFO:tasks.workunit.client.0.vm02.stdout:6/581: write d0/fa3 [717343,3700] 0 2026-03-10T10:19:49.138 INFO:tasks.workunit.client.0.vm02.stdout:6/582: link d0/d8/d29/d52/c57 d0/d8/d29/d2f/d4b/cb8 0 2026-03-10T10:19:49.138 INFO:tasks.workunit.client.0.vm02.stdout:0/623: dwrite d9/d34/d3d/f58 [0,4194304] 0 2026-03-10T10:19:49.154 INFO:tasks.workunit.client.1.vm05.stdout:1/674: mknod d4/df/d1c/cc4 0 2026-03-10T10:19:49.156 INFO:tasks.workunit.client.1.vm05.stdout:1/675: stat d4/df/d1c/d53/l8e 0 2026-03-10T10:19:49.156 INFO:tasks.workunit.client.0.vm02.stdout:6/583: mkdir d0/db9 0 2026-03-10T10:19:49.157 INFO:tasks.workunit.client.1.vm05.stdout:4/465: dwrite d1/d31/dc/d40/d63/f74 [0,4194304] 0 2026-03-10T10:19:49.157 INFO:tasks.workunit.client.1.vm05.stdout:1/676: fsync d4/df/d1c/d53/d66/f94 0 2026-03-10T10:19:49.157 INFO:tasks.workunit.client.0.vm02.stdout:6/584: chown d0/d8/d9/fac 15305829 1 2026-03-10T10:19:49.166 INFO:tasks.workunit.client.1.vm05.stdout:2/524: dwrite db/d28/d4f/f8a [0,4194304] 0 2026-03-10T10:19:49.172 
INFO:tasks.workunit.client.0.vm02.stdout:6/585: mknod d0/d8/d29/d2f/d50/cba 0 2026-03-10T10:19:49.175 INFO:tasks.workunit.client.1.vm05.stdout:0/591: rename d1/d2/d9/d31/d54/l58 to d1/d2/d9/d31/d12/d20/lc9 0 2026-03-10T10:19:49.177 INFO:tasks.workunit.client.1.vm05.stdout:8/537: write d7/d2f/f4b [3816999,83127] 0 2026-03-10T10:19:49.177 INFO:tasks.workunit.client.1.vm05.stdout:7/603: dwrite d5/d1d/f7c [0,4194304] 0 2026-03-10T10:19:49.180 INFO:tasks.workunit.client.0.vm02.stdout:6/586: chown d0/d8/d29/d2f/d4b/da5/d6f/c89 22190 1 2026-03-10T10:19:49.184 INFO:tasks.workunit.client.1.vm05.stdout:9/524: read d0/df/d11/f52 [501044,120562] 0 2026-03-10T10:19:49.190 INFO:tasks.workunit.client.0.vm02.stdout:0/624: link d9/d34/d3d/d67/f9f d9/d34/d3d/d67/fc3 0 2026-03-10T10:19:49.190 INFO:tasks.workunit.client.0.vm02.stdout:0/625: chown d9/d34/c3b 784 1 2026-03-10T10:19:49.197 INFO:tasks.workunit.client.1.vm05.stdout:5/615: creat da/db/d26/d70/fd1 x:0 0 0 2026-03-10T10:19:49.201 INFO:tasks.workunit.client.0.vm02.stdout:1/637: rmdir d4/d1b 39 2026-03-10T10:19:49.202 INFO:tasks.workunit.client.0.vm02.stdout:1/638: fsync d4/da/d27/d38/d3c/fa7 0 2026-03-10T10:19:49.202 INFO:tasks.workunit.client.0.vm02.stdout:1/639: fsync d4/ff 0 2026-03-10T10:19:49.204 INFO:tasks.workunit.client.1.vm05.stdout:3/602: write dd/d15/d24/d74/fb0 [1201384,86234] 0 2026-03-10T10:19:49.208 INFO:tasks.workunit.client.0.vm02.stdout:0/626: creat d9/d34/d3d/d65/d89/fc4 x:0 0 0 2026-03-10T10:19:49.213 INFO:tasks.workunit.client.0.vm02.stdout:1/640: dwrite d4/da/d1a/d47/d78/fb4 [0,4194304] 0 2026-03-10T10:19:49.220 INFO:tasks.workunit.client.1.vm05.stdout:1/677: unlink d4/df/d1c/d53/l8e 0 2026-03-10T10:19:49.220 INFO:tasks.workunit.client.1.vm05.stdout:4/466: fdatasync d1/d3/f10 0 2026-03-10T10:19:49.220 INFO:tasks.workunit.client.0.vm02.stdout:0/627: truncate d9/d18/d1a/d3c/f92 2395489 0 2026-03-10T10:19:49.222 INFO:tasks.workunit.client.0.vm02.stdout:4/723: dwrite d1/d10/d88/db2/fca [0,4194304] 0 
2026-03-10T10:19:49.228 INFO:tasks.workunit.client.0.vm02.stdout:1/641: unlink d4/da/d27/d38/f4e 0 2026-03-10T10:19:49.236 INFO:tasks.workunit.client.1.vm05.stdout:8/538: creat d7/d14/d62/f9d x:0 0 0 2026-03-10T10:19:49.236 INFO:tasks.workunit.client.0.vm02.stdout:0/628: creat d9/d18/d1a/d22/d24/d8e/d9b/fc5 x:0 0 0 2026-03-10T10:19:49.239 INFO:tasks.workunit.client.1.vm05.stdout:7/604: mknod d5/d1d/d29/d3e/d8c/d96/cbf 0 2026-03-10T10:19:49.245 INFO:tasks.workunit.client.0.vm02.stdout:7/605: write d1/dc/d10/f7d [752107,6519] 0 2026-03-10T10:19:49.254 INFO:tasks.workunit.client.1.vm05.stdout:9/525: readlink d0/d1/d13/de/l2b 0 2026-03-10T10:19:49.255 INFO:tasks.workunit.client.1.vm05.stdout:1/678: sync 2026-03-10T10:19:49.259 INFO:tasks.workunit.client.1.vm05.stdout:4/467: sync 2026-03-10T10:19:49.263 INFO:tasks.workunit.client.0.vm02.stdout:0/629: creat d9/d18/d1a/d46/d5d/da7/fc6 x:0 0 0 2026-03-10T10:19:49.264 INFO:tasks.workunit.client.0.vm02.stdout:0/630: readlink d9/d34/d3d/d7b/l82 0 2026-03-10T10:19:49.267 INFO:tasks.workunit.client.0.vm02.stdout:0/631: dwrite d9/d18/d1a/d22/d24/d80/d49/f8b [0,4194304] 0 2026-03-10T10:19:49.275 INFO:tasks.workunit.client.0.vm02.stdout:0/632: write d9/d18/d1a/d22/d24/fb6 [211740,89470] 0 2026-03-10T10:19:49.276 INFO:tasks.workunit.client.0.vm02.stdout:0/633: dread - d9/d18/d1a/d46/d5d/da7/db9/fba zero size 2026-03-10T10:19:49.285 INFO:tasks.workunit.client.1.vm05.stdout:3/603: creat dd/d39/d66/fd5 x:0 0 0 2026-03-10T10:19:49.289 INFO:tasks.workunit.client.0.vm02.stdout:1/642: creat d4/d1b/fc6 x:0 0 0 2026-03-10T10:19:49.293 INFO:tasks.workunit.client.1.vm05.stdout:2/525: creat db/d28/d4f/d59/d94/d95/fa5 x:0 0 0 2026-03-10T10:19:49.295 INFO:tasks.workunit.client.1.vm05.stdout:0/592: creat d1/d2/d9/d31/daa/fca x:0 0 0 2026-03-10T10:19:49.303 INFO:tasks.workunit.client.1.vm05.stdout:2/526: sync 2026-03-10T10:19:49.306 INFO:tasks.workunit.client.1.vm05.stdout:1/679: unlink d4/d39/f67 0 2026-03-10T10:19:49.306 
INFO:tasks.workunit.client.0.vm02.stdout:0/634: mkdir d9/d18/dc7 0 2026-03-10T10:19:49.308 INFO:tasks.workunit.client.1.vm05.stdout:2/527: stat db/d61 0 2026-03-10T10:19:49.309 INFO:tasks.workunit.client.1.vm05.stdout:4/468: unlink d1/f92 0 2026-03-10T10:19:49.313 INFO:tasks.workunit.client.0.vm02.stdout:0/635: mknod d9/d34/d3d/cc8 0 2026-03-10T10:19:49.318 INFO:tasks.workunit.client.0.vm02.stdout:1/643: creat d4/d2c/fc7 x:0 0 0 2026-03-10T10:19:49.327 INFO:tasks.workunit.client.0.vm02.stdout:0/636: truncate d9/d34/d3d/fae 870560 0 2026-03-10T10:19:49.331 INFO:tasks.workunit.client.1.vm05.stdout:0/593: dread d1/d2/d9/d31/f8c [0,4194304] 0 2026-03-10T10:19:49.334 INFO:tasks.workunit.client.0.vm02.stdout:0/637: mkdir d9/d18/d1a/d46/d5d/da7/db9/dc9 0 2026-03-10T10:19:49.335 INFO:tasks.workunit.client.0.vm02.stdout:0/638: readlink d9/d34/d3d/d67/l75 0 2026-03-10T10:19:49.342 INFO:tasks.workunit.client.0.vm02.stdout:2/605: write d0/d1a/f52 [4185933,12269] 0 2026-03-10T10:19:49.364 INFO:tasks.workunit.client.1.vm05.stdout:4/469: creat d1/d31/dc/d40/d63/f94 x:0 0 0 2026-03-10T10:19:49.368 INFO:tasks.workunit.client.1.vm05.stdout:3/604: symlink dd/d15/d24/d2c/dd0/ld6 0 2026-03-10T10:19:49.368 INFO:tasks.workunit.client.0.vm02.stdout:0/639: truncate d9/d34/d3d/f69 187587 0 2026-03-10T10:19:49.368 INFO:tasks.workunit.client.0.vm02.stdout:1/644: creat d4/da/d1a/fc8 x:0 0 0 2026-03-10T10:19:49.375 INFO:tasks.workunit.client.1.vm05.stdout:2/528: mknod db/d28/d4f/d59/d94/d95/ca6 0 2026-03-10T10:19:49.378 INFO:tasks.workunit.client.0.vm02.stdout:2/606: mknod d0/d1a/cd1 0 2026-03-10T10:19:49.378 INFO:tasks.workunit.client.0.vm02.stdout:9/576: write da/d3c/d4c/f3b [94405,24062] 0 2026-03-10T10:19:49.380 INFO:tasks.workunit.client.0.vm02.stdout:5/739: dwrite d1/db/d11/d13/d28/f91 [4194304,4194304] 0 2026-03-10T10:19:49.380 INFO:tasks.workunit.client.0.vm02.stdout:2/607: write d0/d10/da6/fb6 [112463,42028] 0 2026-03-10T10:19:49.403 INFO:tasks.workunit.client.1.vm05.stdout:8/539: link 
d7/d2f/d57/l99 d7/d14/d62/l9e 0 2026-03-10T10:19:49.410 INFO:tasks.workunit.client.1.vm05.stdout:6/594: write dd/d36/d3f/d12/d44/d2a/fa5 [2140604,54159] 0 2026-03-10T10:19:49.411 INFO:tasks.workunit.client.1.vm05.stdout:1/680: rename d4/d39/d3e/da0/dbf to d4/d79/d83/dc5 0 2026-03-10T10:19:49.413 INFO:tasks.workunit.client.0.vm02.stdout:2/608: read - d0/d10/f93 zero size 2026-03-10T10:19:49.416 INFO:tasks.workunit.client.0.vm02.stdout:2/609: dwrite d0/f1b [0,4194304] 0 2026-03-10T10:19:49.416 INFO:tasks.workunit.client.0.vm02.stdout:2/610: fsync d0/f91 0 2026-03-10T10:19:49.419 INFO:tasks.workunit.client.1.vm05.stdout:4/470: creat d1/d31/d76/f95 x:0 0 0 2026-03-10T10:19:49.427 INFO:tasks.workunit.client.0.vm02.stdout:3/618: dwrite d1/d6/d8b/f95 [0,4194304] 0 2026-03-10T10:19:49.427 INFO:tasks.workunit.client.1.vm05.stdout:4/471: dread - d1/d31/d76/f95 zero size 2026-03-10T10:19:49.427 INFO:tasks.workunit.client.1.vm05.stdout:4/472: readlink d1/d31/dc/d40/d63/l8d 0 2026-03-10T10:19:49.427 INFO:tasks.workunit.client.1.vm05.stdout:0/594: mknod d1/d2/d9/d50/d99/ccb 0 2026-03-10T10:19:49.427 INFO:tasks.workunit.client.1.vm05.stdout:2/529: dread db/d28/f7f [0,4194304] 0 2026-03-10T10:19:49.429 INFO:tasks.workunit.client.1.vm05.stdout:3/605: dwrite dd/d20/d56/d5e/dab/fc4 [0,4194304] 0 2026-03-10T10:19:49.430 INFO:tasks.workunit.client.1.vm05.stdout:3/606: readlink dd/d39/d5f/l87 0 2026-03-10T10:19:49.434 INFO:tasks.workunit.client.1.vm05.stdout:6/595: fsync dd/d36/d3f/d12/d58/f7b 0 2026-03-10T10:19:49.434 INFO:tasks.workunit.client.1.vm05.stdout:8/540: sync 2026-03-10T10:19:49.439 INFO:tasks.workunit.client.0.vm02.stdout:2/611: stat d0/l57 0 2026-03-10T10:19:49.440 INFO:tasks.workunit.client.0.vm02.stdout:2/612: chown d0/d1a/d49/d5e/d8a/cc3 277555760 1 2026-03-10T10:19:49.452 INFO:tasks.workunit.client.1.vm05.stdout:1/681: unlink d4/d39/d3e/c71 0 2026-03-10T10:19:49.453 INFO:tasks.workunit.client.1.vm05.stdout:1/682: chown d4/f46 0 1 2026-03-10T10:19:49.454 
INFO:tasks.workunit.client.1.vm05.stdout:4/473: creat d1/d70/f96 x:0 0 0 2026-03-10T10:19:49.455 INFO:tasks.workunit.client.0.vm02.stdout:1/645: symlink d4/da/d1a/d47/d88/da8/lc9 0 2026-03-10T10:19:49.463 INFO:tasks.workunit.client.0.vm02.stdout:8/581: write d1/d1c/d23/f75 [862371,72805] 0 2026-03-10T10:19:49.487 INFO:tasks.workunit.client.0.vm02.stdout:2/613: creat d0/d1a/d49/d5e/d8a/fd2 x:0 0 0 2026-03-10T10:19:49.496 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:49 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:49.496 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:49 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:49.496 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:49 vm05.local ceph-mon[59051]: pgmap v6: 65 pgs: 65 active+clean; 2.3 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-10T10:19:49.496 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:49 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:49.496 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:49 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:49.496 INFO:tasks.workunit.client.0.vm02.stdout:1/646: rmdir d4/da/d1a/d47/d88 39 2026-03-10T10:19:49.505 INFO:tasks.workunit.client.1.vm05.stdout:8/541: fsync d7/d2f/d57/f66 0 2026-03-10T10:19:49.506 INFO:tasks.workunit.client.0.vm02.stdout:9/577: link da/d3c/d4c/c1a da/d3c/d4c/d2c/d34/cb6 0 2026-03-10T10:19:49.512 INFO:tasks.workunit.client.0.vm02.stdout:6/587: dwrite d0/f5d [0,4194304] 0 2026-03-10T10:19:49.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:49 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:49.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:49 
vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:49.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:49 vm02.local ceph-mon[50200]: pgmap v6: 65 pgs: 65 active+clean; 2.3 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-10T10:19:49.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:49 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:49.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:49 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:49.582 INFO:tasks.workunit.client.1.vm05.stdout:4/474: creat d1/d31/d4b/f97 x:0 0 0 2026-03-10T10:19:49.583 INFO:tasks.workunit.client.1.vm05.stdout:2/530: unlink db/d2d/d5e/f71 0 2026-03-10T10:19:49.586 INFO:tasks.workunit.client.0.vm02.stdout:9/578: symlink da/d3c/d4c/d56/lb7 0 2026-03-10T10:19:49.600 INFO:tasks.workunit.client.0.vm02.stdout:6/588: mkdir d0/d8/d29/d2f/d50/d7e/db2/dbb 0 2026-03-10T10:19:49.603 INFO:tasks.workunit.client.1.vm05.stdout:8/542: mknod d7/d2f/d57/c9f 0 2026-03-10T10:19:49.605 INFO:tasks.workunit.client.1.vm05.stdout:6/596: mkdir dd/d36/d3f/d12/d44/d30/d4a/d6e/dc3 0 2026-03-10T10:19:49.605 INFO:tasks.workunit.client.0.vm02.stdout:4/724: write d1/d32/f46 [730972,128953] 0 2026-03-10T10:19:49.607 INFO:tasks.workunit.client.0.vm02.stdout:8/582: link d1/d1c/d43/d5b/d88/dac/d83/l8f d1/d1c/d43/d6a/d7c/laf 0 2026-03-10T10:19:49.614 INFO:tasks.workunit.client.1.vm05.stdout:8/543: dwrite d7/d14/d62/f69 [4194304,4194304] 0 2026-03-10T10:19:49.647 INFO:tasks.workunit.client.1.vm05.stdout:5/616: dwrite da/db/fa1 [0,4194304] 0 2026-03-10T10:19:49.648 INFO:tasks.workunit.client.1.vm05.stdout:4/475: truncate d1/d3/f10 4112486 0 2026-03-10T10:19:49.651 INFO:tasks.workunit.client.1.vm05.stdout:5/617: chown da/d9a/dc7/f95 74 1 2026-03-10T10:19:49.654 
INFO:tasks.workunit.client.0.vm02.stdout:7/606: truncate d1/d1b/d8f/d67/f76 621178 0 2026-03-10T10:19:49.655 INFO:tasks.workunit.client.0.vm02.stdout:7/607: fsync d1/d1b/f72 0 2026-03-10T10:19:49.655 INFO:tasks.workunit.client.1.vm05.stdout:7/605: dwrite d5/dd/f62 [0,4194304] 0 2026-03-10T10:19:49.692 INFO:tasks.workunit.client.1.vm05.stdout:0/595: link d1/d2/d9/d31/d13/d15/l2c d1/d2/dc6/lcc 0 2026-03-10T10:19:49.700 INFO:tasks.workunit.client.1.vm05.stdout:8/544: mkdir d7/d14/d15/d3b/da0 0 2026-03-10T10:19:49.703 INFO:tasks.workunit.client.0.vm02.stdout:9/579: dread da/d3c/f3e [0,4194304] 0 2026-03-10T10:19:49.703 INFO:tasks.workunit.client.0.vm02.stdout:9/580: chown da/d3c/d4c/d38/d82/da3 1693 1 2026-03-10T10:19:49.711 INFO:tasks.workunit.client.1.vm05.stdout:4/476: read d1/d3/f5f [2130236,125370] 0 2026-03-10T10:19:49.712 INFO:tasks.workunit.client.1.vm05.stdout:9/526: write d0/d1/f6d [1624282,45126] 0 2026-03-10T10:19:49.717 INFO:tasks.workunit.client.1.vm05.stdout:7/606: mknod d5/d1d/d20/d2d/d68/cc0 0 2026-03-10T10:19:49.723 INFO:tasks.workunit.client.1.vm05.stdout:4/477: dwrite d1/d31/f36 [0,4194304] 0 2026-03-10T10:19:49.726 INFO:tasks.workunit.client.0.vm02.stdout:0/640: dwrite d9/d18/d1a/d46/d5d/f9a [0,4194304] 0 2026-03-10T10:19:49.730 INFO:tasks.workunit.client.0.vm02.stdout:0/641: chown d9/d34/l5a 1702725 1 2026-03-10T10:19:49.730 INFO:tasks.workunit.client.0.vm02.stdout:0/642: chown d9/d18/d1a/d46/d5d/da7/fc6 0 1 2026-03-10T10:19:49.750 INFO:tasks.workunit.client.0.vm02.stdout:5/740: write d1/db/d11/d13/fdb [305783,77882] 0 2026-03-10T10:19:49.760 INFO:tasks.workunit.client.1.vm05.stdout:0/596: creat d1/d2/d39/d6e/dc0/fcd x:0 0 0 2026-03-10T10:19:49.764 INFO:tasks.workunit.client.1.vm05.stdout:0/597: chown d1/d2/d9/d31/d12/d41/f6d 24873895 1 2026-03-10T10:19:49.767 INFO:tasks.workunit.client.1.vm05.stdout:8/545: mknod d7/d14/d62/d90/ca1 0 2026-03-10T10:19:49.781 INFO:tasks.workunit.client.1.vm05.stdout:9/527: mkdir d0/d1/d16/d6e/daf/db7 0 
2026-03-10T10:19:49.781 INFO:tasks.workunit.client.1.vm05.stdout:0/598: fdatasync d1/d2/d9/fc7 0 2026-03-10T10:19:49.782 INFO:tasks.workunit.client.0.vm02.stdout:3/619: write d1/d6/f1b [2432725,75037] 0 2026-03-10T10:19:49.783 INFO:tasks.workunit.client.1.vm05.stdout:0/599: write d1/d2/d39/d6e/dc0/fcd [956387,102687] 0 2026-03-10T10:19:49.796 INFO:tasks.workunit.client.1.vm05.stdout:3/607: dwrite dd/d15/d4c/f73 [0,4194304] 0 2026-03-10T10:19:49.807 INFO:tasks.workunit.client.1.vm05.stdout:8/546: creat d7/d14/d24/d3f/d6a/d8a/d96/fa2 x:0 0 0 2026-03-10T10:19:49.818 INFO:tasks.workunit.client.1.vm05.stdout:0/600: fsync d1/d2/d9/f1d 0 2026-03-10T10:19:49.825 INFO:tasks.workunit.client.0.vm02.stdout:9/581: fsync da/f65 0 2026-03-10T10:19:49.826 INFO:tasks.workunit.client.1.vm05.stdout:4/478: link d1/d31/dc/l37 d1/d31/dc/d40/l98 0 2026-03-10T10:19:49.832 INFO:tasks.workunit.client.0.vm02.stdout:7/608: symlink d1/lb9 0 2026-03-10T10:19:49.838 INFO:tasks.workunit.client.1.vm05.stdout:8/547: mkdir d7/d2f/da3 0 2026-03-10T10:19:49.839 INFO:tasks.workunit.client.0.vm02.stdout:0/643: rename d9/d34/d3d/d8d to d9/d18/dc7/dca 0 2026-03-10T10:19:49.839 INFO:tasks.workunit.client.1.vm05.stdout:4/479: creat d1/d64/f99 x:0 0 0 2026-03-10T10:19:49.840 INFO:tasks.workunit.client.1.vm05.stdout:8/548: chown d7/d14/d24/f42 12509053 1 2026-03-10T10:19:49.841 INFO:tasks.workunit.client.1.vm05.stdout:8/549: write d7/d14/d24/f34 [4976589,64520] 0 2026-03-10T10:19:49.844 INFO:tasks.workunit.client.1.vm05.stdout:4/480: dread d1/d64/f8f [0,4194304] 0 2026-03-10T10:19:49.847 INFO:tasks.workunit.client.0.vm02.stdout:7/609: dread d1/dc/d60/fa4 [0,4194304] 0 2026-03-10T10:19:49.848 INFO:tasks.workunit.client.0.vm02.stdout:7/610: write d1/dc/d16/faa [5001720,117413] 0 2026-03-10T10:19:49.858 INFO:tasks.workunit.client.0.vm02.stdout:0/644: dread d9/d18/dc7/dca/f95 [0,4194304] 0 2026-03-10T10:19:49.865 INFO:tasks.workunit.client.1.vm05.stdout:8/550: dread f6 [0,4194304] 0 2026-03-10T10:19:49.865 
INFO:tasks.workunit.client.0.vm02.stdout:7/611: rename d1/d1b/d8f/dad/d5f to d1/d1b/d8f/dad/d7e/dba 0 2026-03-10T10:19:49.866 INFO:tasks.workunit.client.0.vm02.stdout:7/612: stat d1/d1b/l54 0 2026-03-10T10:19:49.874 INFO:tasks.workunit.client.0.vm02.stdout:0/645: mknod d9/d18/d1a/d3c/ccb 0 2026-03-10T10:19:49.882 INFO:tasks.workunit.client.1.vm05.stdout:8/551: getdents d7/d14/d24/d3f/d6a 0 2026-03-10T10:19:49.883 INFO:tasks.workunit.client.1.vm05.stdout:8/552: truncate d7/d14/d24/f95 527559 0 2026-03-10T10:19:49.887 INFO:tasks.workunit.client.1.vm05.stdout:1/683: dwrite d4/df/d1c/d53/daa/fab [0,4194304] 0 2026-03-10T10:19:49.892 INFO:tasks.workunit.client.0.vm02.stdout:0/646: mkdir d9/d18/d1a/d22/d24/d80/dcc 0 2026-03-10T10:19:49.897 INFO:tasks.workunit.client.0.vm02.stdout:0/647: creat d9/d34/d3d/d65/d89/fcd x:0 0 0 2026-03-10T10:19:49.912 INFO:tasks.workunit.client.0.vm02.stdout:0/648: creat d9/d18/d1a/d22/d24/d8e/fce x:0 0 0 2026-03-10T10:19:49.913 INFO:tasks.workunit.client.0.vm02.stdout:0/649: dread - d9/d18/d1a/d46/d5d/f66 zero size 2026-03-10T10:19:49.919 INFO:tasks.workunit.client.1.vm05.stdout:1/684: link d4/d37/d4e/d82/lb4 d4/d20/lc6 0 2026-03-10T10:19:49.920 INFO:tasks.workunit.client.1.vm05.stdout:1/685: dread d4/d3d/d6e/faf [0,4194304] 0 2026-03-10T10:19:49.930 INFO:tasks.workunit.client.1.vm05.stdout:2/531: unlink db/d28/d4f/d59/c82 0 2026-03-10T10:19:49.931 INFO:tasks.workunit.client.1.vm05.stdout:2/532: rmdir db/d2d/d5e 39 2026-03-10T10:19:49.934 INFO:tasks.workunit.client.1.vm05.stdout:2/533: mkdir db/d28/d4f/d59/da4/d81/da7 0 2026-03-10T10:19:49.939 INFO:tasks.workunit.client.0.vm02.stdout:6/589: write d0/d8/d29/d2f/d50/d98/f9f [399500,117748] 0 2026-03-10T10:19:49.943 INFO:tasks.workunit.client.0.vm02.stdout:6/590: rename d0/d8/f5a to d0/d8/d29/d52/fbc 0 2026-03-10T10:19:49.948 INFO:tasks.workunit.client.0.vm02.stdout:6/591: creat d0/d8/d29/d52/fbd x:0 0 0 2026-03-10T10:19:49.952 INFO:tasks.workunit.client.0.vm02.stdout:6/592: getdents 
d0/d8/d29/d2f/d50 0 2026-03-10T10:19:49.958 INFO:tasks.workunit.client.1.vm05.stdout:6/597: write dd/d36/d3f/d12/d44/d2a/f98 [2185903,18467] 0 2026-03-10T10:19:49.961 INFO:tasks.workunit.client.1.vm05.stdout:6/598: chown dd/d36/d3f/d12/d44/d30/d4a/c60 6 1 2026-03-10T10:19:49.965 INFO:tasks.workunit.client.0.vm02.stdout:8/583: dwrite d1/d1c/d43/d6a/da8/d56/f85 [0,4194304] 0 2026-03-10T10:19:49.967 INFO:tasks.workunit.client.0.vm02.stdout:4/725: dwrite d1/d41/d5e/d78/d7f/fb9 [0,4194304] 0 2026-03-10T10:19:49.977 INFO:tasks.workunit.client.0.vm02.stdout:4/726: creat d1/d41/d5e/d78/d44/de7/ff0 x:0 0 0 2026-03-10T10:19:50.000 INFO:tasks.workunit.client.0.vm02.stdout:4/727: creat d1/d41/d7e/ff1 x:0 0 0 2026-03-10T10:19:50.005 INFO:tasks.workunit.client.0.vm02.stdout:4/728: mkdir d1/d41/d5e/d78/d1a/d49/d81/dc6/df2 0 2026-03-10T10:19:50.014 INFO:tasks.workunit.client.0.vm02.stdout:4/729: dread d1/d41/d5e/d78/d7f/f74 [0,4194304] 0 2026-03-10T10:19:50.018 INFO:tasks.workunit.client.1.vm05.stdout:5/618: write da/db/d26/d35/d38/fab [616614,117161] 0 2026-03-10T10:19:50.035 INFO:tasks.workunit.client.1.vm05.stdout:5/619: truncate da/db/d26/d5c/f92 61602 0 2026-03-10T10:19:50.035 INFO:tasks.workunit.client.1.vm05.stdout:5/620: stat da/db/d26/d35/d38/f65 0 2026-03-10T10:19:50.036 INFO:tasks.workunit.client.1.vm05.stdout:0/601: write d1/d2/d9/f6c [1277342,90339] 0 2026-03-10T10:19:50.038 INFO:tasks.workunit.client.1.vm05.stdout:5/621: creat da/db/d26/d35/db3/fd2 x:0 0 0 2026-03-10T10:19:50.049 INFO:tasks.workunit.client.1.vm05.stdout:0/602: mkdir d1/d2/d9/d31/d13/da2/dab/dce 0 2026-03-10T10:19:50.106 INFO:tasks.workunit.client.1.vm05.stdout:0/603: dwrite d1/d2/d9/f98 [0,4194304] 0 2026-03-10T10:19:50.106 INFO:tasks.workunit.client.1.vm05.stdout:9/528: creat d0/df/fb8 x:0 0 0 2026-03-10T10:19:50.106 INFO:tasks.workunit.client.1.vm05.stdout:9/529: readlink d0/d1/d13/d26/l6a 0 2026-03-10T10:19:50.106 INFO:tasks.workunit.client.1.vm05.stdout:9/530: fdatasync d0/d1/d13/d62/fa8 0 
2026-03-10T10:19:50.106 INFO:tasks.workunit.client.1.vm05.stdout:9/531: dwrite d0/df/fb1 [0,4194304] 0 2026-03-10T10:19:50.125 INFO:tasks.workunit.client.1.vm05.stdout:9/532: dread d0/f1e [0,4194304] 0 2026-03-10T10:19:50.161 INFO:tasks.workunit.client.1.vm05.stdout:9/533: write d0/d1/d13/d26/f43 [373981,127554] 0 2026-03-10T10:19:50.161 INFO:tasks.workunit.client.1.vm05.stdout:9/534: chown d0/d1/d16/c82 28 1 2026-03-10T10:19:50.161 INFO:tasks.workunit.client.1.vm05.stdout:9/535: dwrite d0/d1/d13/d62/fa8 [0,4194304] 0 2026-03-10T10:19:50.161 INFO:tasks.workunit.client.1.vm05.stdout:9/536: dwrite d0/df/d74/d8c/fac [0,4194304] 0 2026-03-10T10:19:50.161 INFO:tasks.workunit.client.1.vm05.stdout:9/537: truncate d0/d1/d13/d26/f58 4464177 0 2026-03-10T10:19:50.162 INFO:tasks.workunit.client.1.vm05.stdout:9/538: read - d0/df/d74/d8c/fb5 zero size 2026-03-10T10:19:50.169 INFO:tasks.workunit.client.0.vm02.stdout:4/730: sync 2026-03-10T10:19:50.186 INFO:tasks.workunit.client.1.vm05.stdout:9/539: dread d0/df/d11/f52 [0,4194304] 0 2026-03-10T10:19:50.202 INFO:tasks.workunit.client.0.vm02.stdout:4/731: getdents d1/d10/d88 0 2026-03-10T10:19:50.222 INFO:tasks.workunit.client.0.vm02.stdout:5/741: mknod d1/db/d11/d16/cff 0 2026-03-10T10:19:50.228 INFO:tasks.workunit.client.0.vm02.stdout:5/742: fdatasync d1/db/d11/d62/f74 0 2026-03-10T10:19:50.251 INFO:tasks.workunit.client.1.vm05.stdout:7/607: rename d5/d1d/d20/d3b/f70 to d5/d1d/d20/d91/fc1 0 2026-03-10T10:19:50.251 INFO:tasks.workunit.client.0.vm02.stdout:2/614: mkdir d0/d1a/d24/dd3 0 2026-03-10T10:19:50.251 INFO:tasks.workunit.client.0.vm02.stdout:5/743: truncate d1/db/d11/f47 729771 0 2026-03-10T10:19:50.263 INFO:tasks.workunit.client.0.vm02.stdout:2/615: sync 2026-03-10T10:19:50.264 INFO:tasks.workunit.client.0.vm02.stdout:3/620: dwrite d1/d8/d21/f4c [0,4194304] 0 2026-03-10T10:19:50.265 INFO:tasks.workunit.client.0.vm02.stdout:5/744: dwrite d1/db/d11/d13/d28/d37/f76 [4194304,4194304] 0 2026-03-10T10:19:50.270 
INFO:tasks.workunit.client.0.vm02.stdout:5/745: stat d1/d6a/lf2 0 2026-03-10T10:19:50.295 INFO:tasks.workunit.client.1.vm05.stdout:3/608: rename dd/d20/faa to dd/d15/d24/d8e/dac/fd7 0 2026-03-10T10:19:50.296 INFO:tasks.workunit.client.0.vm02.stdout:4/732: dread d1/d41/d5e/d78/d1a/d49/f5c [0,4194304] 0 2026-03-10T10:19:50.307 INFO:tasks.workunit.client.1.vm05.stdout:7/608: creat d5/d1d/d20/d2d/d68/fc2 x:0 0 0 2026-03-10T10:19:50.307 INFO:tasks.workunit.client.1.vm05.stdout:7/609: chown d5/d1d/d20/d3b/l45 7 1 2026-03-10T10:19:50.308 INFO:tasks.workunit.client.1.vm05.stdout:4/481: truncate d1/d31/dc/d40/f67 2281498 0 2026-03-10T10:19:50.309 INFO:tasks.workunit.client.0.vm02.stdout:2/616: dread - d0/d1a/d49/fb2 zero size 2026-03-10T10:19:50.313 INFO:tasks.workunit.client.1.vm05.stdout:7/610: read d5/dd/f1a [3006781,60700] 0 2026-03-10T10:19:50.317 INFO:tasks.workunit.client.0.vm02.stdout:5/746: creat d1/db/d11/d16/d48/dcf/f100 x:0 0 0 2026-03-10T10:19:50.321 INFO:tasks.workunit.client.1.vm05.stdout:8/553: link d7/d14/d3a/c5d d7/d14/d24/ca4 0 2026-03-10T10:19:50.324 INFO:tasks.workunit.client.0.vm02.stdout:5/747: chown d1/db/d11/d16/ld5 21294 1 2026-03-10T10:19:50.324 INFO:tasks.workunit.client.1.vm05.stdout:7/611: dread d5/d1d/d20/d2d/d5d/f67 [0,4194304] 0 2026-03-10T10:19:50.325 INFO:tasks.workunit.client.0.vm02.stdout:7/613: truncate d1/dc/f3 874175 0 2026-03-10T10:19:50.326 INFO:tasks.workunit.client.0.vm02.stdout:7/614: stat d1/d1b/f72 0 2026-03-10T10:19:50.328 INFO:tasks.workunit.client.1.vm05.stdout:6/599: rename dd/d36/d3f/d12/d44/f2f to dd/d36/d3f/d12/d44/d63/fc4 0 2026-03-10T10:19:50.329 INFO:tasks.workunit.client.0.vm02.stdout:1/647: symlink d4/da/d27/d38/lca 0 2026-03-10T10:19:50.331 INFO:tasks.workunit.client.0.vm02.stdout:2/617: mkdir d0/dd4 0 2026-03-10T10:19:50.332 INFO:tasks.workunit.client.0.vm02.stdout:2/618: write d0/d71/fb7 [178356,28294] 0 2026-03-10T10:19:50.345 INFO:tasks.workunit.client.1.vm05.stdout:8/554: rmdir d7/d14/d3a/d49 39 
2026-03-10T10:19:50.346 INFO:tasks.workunit.client.1.vm05.stdout:8/555: write d7/d14/d24/d3f/d6a/d8a/d96/fa2 [866083,102676] 0 2026-03-10T10:19:50.347 INFO:tasks.workunit.client.1.vm05.stdout:7/612: symlink d5/d1d/d20/d2d/d80/lc3 0 2026-03-10T10:19:50.348 INFO:tasks.workunit.client.1.vm05.stdout:7/613: chown d5/d1d/d20/d3b/c61 180680 1 2026-03-10T10:19:50.357 INFO:tasks.workunit.client.0.vm02.stdout:0/650: write d9/d34/d3d/d65/f6d [1077561,91408] 0 2026-03-10T10:19:50.357 INFO:tasks.workunit.client.0.vm02.stdout:0/651: readlink d9/d18/l8c 0 2026-03-10T10:19:50.360 INFO:tasks.workunit.client.1.vm05.stdout:1/686: dwrite d4/d79/f8b [0,4194304] 0 2026-03-10T10:19:50.360 INFO:tasks.workunit.client.0.vm02.stdout:7/615: read d1/dc/d16/f95 [394680,27066] 0 2026-03-10T10:19:50.361 INFO:tasks.workunit.client.1.vm05.stdout:3/609: rename l0 to dd/d15/d24/d2c/d6d/da7/dbb/dbd/ld8 0 2026-03-10T10:19:50.362 INFO:tasks.workunit.client.0.vm02.stdout:0/652: dread d9/d18/d1a/d22/d24/d8e/d91/fb0 [0,4194304] 0 2026-03-10T10:19:50.365 INFO:tasks.workunit.client.1.vm05.stdout:6/600: rmdir dd/d36/d3f/d12/d44/d2a/d3d/d3e 39 2026-03-10T10:19:50.373 INFO:tasks.workunit.client.1.vm05.stdout:2/534: dwrite db/d28/d4f/d59/f6f [0,4194304] 0 2026-03-10T10:19:50.377 INFO:tasks.workunit.client.1.vm05.stdout:2/535: read db/d28/d4f/f8a [4159486,122694] 0 2026-03-10T10:19:50.385 INFO:tasks.workunit.client.1.vm05.stdout:8/556: creat d7/d14/fa5 x:0 0 0 2026-03-10T10:19:50.392 INFO:tasks.workunit.client.1.vm05.stdout:7/614: creat d5/d1d/d20/d2d/d68/fc4 x:0 0 0 2026-03-10T10:19:50.394 INFO:tasks.workunit.client.0.vm02.stdout:6/593: dwrite d0/d87/fa7 [0,4194304] 0 2026-03-10T10:19:50.404 INFO:tasks.workunit.client.0.vm02.stdout:8/584: dwrite d1/d1c/d43/d6a/da8/f4f [0,4194304] 0 2026-03-10T10:19:50.410 INFO:tasks.workunit.client.0.vm02.stdout:1/648: mkdir d4/da/d1a/d47/dbc/dcb 0 2026-03-10T10:19:50.415 INFO:tasks.workunit.client.0.vm02.stdout:2/619: getdents d0/dd4 0 2026-03-10T10:19:50.415 
INFO:tasks.workunit.client.0.vm02.stdout:2/620: chown d0/d71 3450713 1 2026-03-10T10:19:50.420 INFO:tasks.workunit.client.1.vm05.stdout:1/687: fsync d4/d39/d88/fa4 0 2026-03-10T10:19:50.420 INFO:tasks.workunit.client.1.vm05.stdout:3/610: write dd/d15/d1f/f53 [2028152,3029] 0 2026-03-10T10:19:50.428 INFO:tasks.workunit.client.0.vm02.stdout:9/582: creat da/d3c/d4c/d2c/fb8 x:0 0 0 2026-03-10T10:19:50.435 INFO:tasks.workunit.client.1.vm05.stdout:5/622: dwrite da/db/d26/d70/f82 [0,4194304] 0 2026-03-10T10:19:50.437 INFO:tasks.workunit.client.1.vm05.stdout:5/623: chown da/db/d26/d35/d38/c53 253914 1 2026-03-10T10:19:50.443 INFO:tasks.workunit.client.0.vm02.stdout:3/621: creat d1/d20/fca x:0 0 0 2026-03-10T10:19:50.444 INFO:tasks.workunit.client.0.vm02.stdout:3/622: chown d1/d20/d52/f92 483633 1 2026-03-10T10:19:50.454 INFO:tasks.workunit.client.1.vm05.stdout:8/557: mknod d7/d14/d15/d3b/ca6 0 2026-03-10T10:19:50.455 INFO:tasks.workunit.client.0.vm02.stdout:0/653: mknod d9/d18/ccf 0 2026-03-10T10:19:50.468 INFO:tasks.workunit.client.0.vm02.stdout:8/585: truncate d1/d1c/d43/f7e 4656247 0 2026-03-10T10:19:50.469 INFO:tasks.workunit.client.1.vm05.stdout:0/604: dwrite d1/d2/d9/d31/d13/d2f/d49/f5c [0,4194304] 0 2026-03-10T10:19:50.472 INFO:tasks.workunit.client.1.vm05.stdout:6/601: dread dd/d1b/fa8 [0,4194304] 0 2026-03-10T10:19:50.472 INFO:tasks.workunit.client.1.vm05.stdout:6/602: dread - dd/d36/d3f/d12/d44/daa/fae zero size 2026-03-10T10:19:50.480 INFO:tasks.workunit.client.1.vm05.stdout:9/540: truncate d0/f45 1376535 0 2026-03-10T10:19:50.482 INFO:tasks.workunit.client.0.vm02.stdout:2/621: rename d0/d1a/d49/d5e/d8a/l8e to d0/d71/ld5 0 2026-03-10T10:19:50.483 INFO:tasks.workunit.client.0.vm02.stdout:2/622: fsync d0/d1a/d49/d5e/fb5 0 2026-03-10T10:19:50.494 INFO:tasks.workunit.client.1.vm05.stdout:3/611: mkdir dd/d15/d24/d2c/dd0/dd9 0 2026-03-10T10:19:50.496 INFO:tasks.workunit.client.0.vm02.stdout:5/748: getdents d1/db/d11/d16 0 2026-03-10T10:19:50.505 
INFO:tasks.workunit.client.0.vm02.stdout:9/583: unlink da/d3c/d4c/d38/d82/d8c/c9c 0 2026-03-10T10:19:50.505 INFO:tasks.workunit.client.0.vm02.stdout:9/584: readlink da/d3c/d4c/d2c/d34/d35/l55 0 2026-03-10T10:19:50.508 INFO:tasks.workunit.client.1.vm05.stdout:0/605: dread d1/f38 [0,4194304] 0 2026-03-10T10:19:50.520 INFO:tasks.workunit.client.0.vm02.stdout:5/749: sync 2026-03-10T10:19:50.521 INFO:tasks.workunit.client.0.vm02.stdout:4/733: dwrite d1/d41/d5e/d78/d1a/fad [0,4194304] 0 2026-03-10T10:19:50.526 INFO:tasks.workunit.client.1.vm05.stdout:7/615: mknod d5/d17/d85/cc5 0 2026-03-10T10:19:50.528 INFO:tasks.workunit.client.1.vm05.stdout:4/482: dwrite d1/d31/f2d [0,4194304] 0 2026-03-10T10:19:50.531 INFO:tasks.workunit.client.1.vm05.stdout:4/483: dwrite d1/d31/f7a [0,4194304] 0 2026-03-10T10:19:50.549 INFO:tasks.workunit.client.0.vm02.stdout:6/594: creat d0/d7f/fbe x:0 0 0 2026-03-10T10:19:50.553 INFO:tasks.workunit.client.0.vm02.stdout:8/586: mknod d1/d1c/d43/d5b/d88/dac/d83/cb0 0 2026-03-10T10:19:50.580 INFO:tasks.workunit.client.0.vm02.stdout:7/616: getdents d1/dc/d10/d38 0 2026-03-10T10:19:50.586 INFO:tasks.workunit.client.0.vm02.stdout:9/585: unlink da/d3c/d4c/d38/d82/c86 0 2026-03-10T10:19:50.586 INFO:tasks.workunit.client.1.vm05.stdout:2/536: write db/d2d/f65 [842026,24363] 0 2026-03-10T10:19:50.586 INFO:tasks.workunit.client.1.vm05.stdout:9/541: rmdir d0/d1/d4c/d63 39 2026-03-10T10:19:50.588 INFO:tasks.workunit.client.1.vm05.stdout:9/542: fsync d0/d1/f6d 0 2026-03-10T10:19:50.590 INFO:tasks.workunit.client.1.vm05.stdout:1/688: fdatasync d4/df/d1c/f23 0 2026-03-10T10:19:50.590 INFO:tasks.workunit.client.1.vm05.stdout:1/689: dread - d4/d3d/d6e/fc3 zero size 2026-03-10T10:19:50.592 INFO:tasks.workunit.client.0.vm02.stdout:1/649: dwrite d4/d2c/d53/f99 [4194304,4194304] 0 2026-03-10T10:19:50.606 INFO:tasks.workunit.client.0.vm02.stdout:4/734: rmdir d1/d41/d5e/d78 39 2026-03-10T10:19:50.615 INFO:tasks.workunit.client.0.vm02.stdout:6/595: fsync d0/d8/f64 0 
2026-03-10T10:19:50.616 INFO:tasks.workunit.client.0.vm02.stdout:4/735: dread d1/d32/f69 [0,4194304] 0 2026-03-10T10:19:50.616 INFO:tasks.workunit.client.1.vm05.stdout:7/616: mknod d5/d1d/d20/d91/da7/cc6 0 2026-03-10T10:19:50.622 INFO:tasks.workunit.client.1.vm05.stdout:8/558: mkdir d7/d14/d15/da7 0 2026-03-10T10:19:50.622 INFO:tasks.workunit.client.1.vm05.stdout:4/484: sync 2026-03-10T10:19:50.623 INFO:tasks.workunit.client.0.vm02.stdout:9/586: write da/d3c/d4c/d56/fac [310460,63663] 0 2026-03-10T10:19:50.627 INFO:tasks.workunit.client.0.vm02.stdout:0/654: creat d9/d18/d1a/d22/d24/d8e/fd0 x:0 0 0 2026-03-10T10:19:50.628 INFO:tasks.workunit.client.1.vm05.stdout:6/603: getdents dd/d36/d3f/d12/d44/d30/d4a/d6e/dc3 0 2026-03-10T10:19:50.629 INFO:tasks.workunit.client.1.vm05.stdout:6/604: write dd/f14 [7322101,68385] 0 2026-03-10T10:19:50.643 INFO:tasks.workunit.client.0.vm02.stdout:5/750: link d1/db/d11/d16/d48/dcf/f100 d1/db/d11/d13/d28/d37/dce/f101 0 2026-03-10T10:19:50.660 INFO:tasks.workunit.client.1.vm05.stdout:9/543: mknod d0/d1/d13/d62/cb9 0 2026-03-10T10:19:50.660 INFO:tasks.workunit.client.1.vm05.stdout:9/544: dread d0/df/fb1 [0,4194304] 0 2026-03-10T10:19:50.664 INFO:tasks.workunit.client.0.vm02.stdout:6/596: creat d0/d8/d29/d94/fbf x:0 0 0 2026-03-10T10:19:50.664 INFO:tasks.workunit.client.0.vm02.stdout:6/597: stat d0/d8/d29/d94/fbf 0 2026-03-10T10:19:50.677 INFO:tasks.workunit.client.0.vm02.stdout:8/587: creat d1/d1c/d43/d6a/d7c/da6/fb1 x:0 0 0 2026-03-10T10:19:50.686 INFO:tasks.workunit.client.0.vm02.stdout:7/617: rename d1/d1b/la3 to d1/dc/d55/d9a/da5/lbb 0 2026-03-10T10:19:50.693 INFO:tasks.workunit.client.0.vm02.stdout:9/587: dread da/d3c/d4c/d38/f88 [0,4194304] 0 2026-03-10T10:19:50.697 INFO:tasks.workunit.client.1.vm05.stdout:5/624: creat da/db/d26/d35/fd3 x:0 0 0 2026-03-10T10:19:50.712 INFO:tasks.workunit.client.1.vm05.stdout:5/625: chown da/db/fa1 1325088582 1 2026-03-10T10:19:50.713 INFO:tasks.workunit.client.1.vm05.stdout:7/617: mknod 
d5/d1d/d29/d3e/d8c/d82/cc7 0 2026-03-10T10:19:50.713 INFO:tasks.workunit.client.1.vm05.stdout:7/618: chown d5/d26/f5a 273264 1 2026-03-10T10:19:50.713 INFO:tasks.workunit.client.0.vm02.stdout:0/655: dread d9/d18/d1a/d22/d24/d80/f72 [0,4194304] 0 2026-03-10T10:19:50.725 INFO:tasks.workunit.client.1.vm05.stdout:8/559: read d7/d14/d3a/d49/f6b [119406,82570] 0 2026-03-10T10:19:50.749 INFO:tasks.workunit.client.0.vm02.stdout:5/751: truncate d1/db/d11/d84/d40/d4f/d5f/d6d/fb8 58287 0 2026-03-10T10:19:50.760 INFO:tasks.workunit.client.0.vm02.stdout:3/623: write d1/d20/d52/f6f [1590329,111748] 0 2026-03-10T10:19:50.770 INFO:tasks.workunit.client.0.vm02.stdout:6/598: fdatasync d0/d8/d9/f54 0 2026-03-10T10:19:50.774 INFO:tasks.workunit.client.0.vm02.stdout:8/588: fsync d1/d1c/d43/d6a/f9c 0 2026-03-10T10:19:50.783 INFO:tasks.workunit.client.0.vm02.stdout:2/623: write d0/f44 [1819689,109350] 0 2026-03-10T10:19:50.788 INFO:tasks.workunit.client.0.vm02.stdout:7/618: unlink d1/dc/d16/d28/d2c/f8a 0 2026-03-10T10:19:50.797 INFO:tasks.workunit.client.1.vm05.stdout:0/606: rename d1/d2/d9/d31/d12/d20/l43 to d1/lcf 0 2026-03-10T10:19:50.804 INFO:tasks.workunit.client.0.vm02.stdout:9/588: mknod da/d3c/d4c/d38/d82/cb9 0 2026-03-10T10:19:50.823 INFO:tasks.workunit.client.1.vm05.stdout:7/619: symlink d5/d1d/d29/d3e/d8c/lc8 0 2026-03-10T10:19:50.824 INFO:tasks.workunit.client.1.vm05.stdout:7/620: chown d5/ff 8 1 2026-03-10T10:19:50.824 INFO:tasks.workunit.client.1.vm05.stdout:3/612: dwrite dd/d20/d56/d5e/dab/f9b [0,4194304] 0 2026-03-10T10:19:50.825 INFO:tasks.workunit.client.1.vm05.stdout:2/537: write db/d61/d67/f6e [538702,44722] 0 2026-03-10T10:19:50.826 INFO:tasks.workunit.client.0.vm02.stdout:7/619: sync 2026-03-10T10:19:50.826 INFO:tasks.workunit.client.0.vm02.stdout:9/589: sync 2026-03-10T10:19:50.826 INFO:tasks.workunit.client.0.vm02.stdout:9/590: chown da/d3c/d4c/d56/fac 103151155 1 2026-03-10T10:19:50.843 INFO:tasks.workunit.client.1.vm05.stdout:1/690: rename d4/dd/l26 to 
d4/d39/lc7 0 2026-03-10T10:19:50.844 INFO:tasks.workunit.client.1.vm05.stdout:1/691: stat d4/df/d1c/l59 0 2026-03-10T10:19:50.845 INFO:tasks.workunit.client.1.vm05.stdout:0/607: truncate d1/d2/d9/d31/d54/f4 2544853 0 2026-03-10T10:19:50.848 INFO:tasks.workunit.client.1.vm05.stdout:6/605: dwrite dd/d36/d3f/d12/d58/f5a [0,4194304] 0 2026-03-10T10:19:50.859 INFO:tasks.workunit.client.1.vm05.stdout:6/606: dwrite dd/d36/d3f/d12/fa6 [0,4194304] 0 2026-03-10T10:19:50.873 INFO:tasks.workunit.client.1.vm05.stdout:3/613: symlink dd/d39/d5f/lda 0 2026-03-10T10:19:50.873 INFO:tasks.workunit.client.1.vm05.stdout:2/538: creat db/d28/d4f/d8b/fa8 x:0 0 0 2026-03-10T10:19:50.875 INFO:tasks.workunit.client.1.vm05.stdout:4/485: creat d1/d31/f9a x:0 0 0 2026-03-10T10:19:50.876 INFO:tasks.workunit.client.1.vm05.stdout:9/545: creat d0/d1/fba x:0 0 0 2026-03-10T10:19:50.880 INFO:tasks.workunit.client.1.vm05.stdout:9/546: chown d0/df/d74/d8c/fb5 351293 1 2026-03-10T10:19:50.889 INFO:tasks.workunit.client.0.vm02.stdout:5/752: rmdir d1/db/d11/d62 39 2026-03-10T10:19:50.895 INFO:tasks.workunit.client.0.vm02.stdout:4/736: getdents d1/d41/d5e/d78/d37 0 2026-03-10T10:19:50.895 INFO:tasks.workunit.client.1.vm05.stdout:8/560: creat d7/d14/d15/fa8 x:0 0 0 2026-03-10T10:19:50.895 INFO:tasks.workunit.client.1.vm05.stdout:2/539: dread - db/d61/f99 zero size 2026-03-10T10:19:50.898 INFO:tasks.workunit.client.1.vm05.stdout:7/621: dread d5/dd/f1a [0,4194304] 0 2026-03-10T10:19:50.901 INFO:tasks.workunit.client.1.vm05.stdout:3/614: unlink dd/d15/l2e 0 2026-03-10T10:19:50.903 INFO:tasks.workunit.client.0.vm02.stdout:3/624: dread d1/d20/d52/f92 [0,4194304] 0 2026-03-10T10:19:50.904 INFO:tasks.workunit.client.0.vm02.stdout:0/656: dread d9/d18/d1a/f6f [0,4194304] 0 2026-03-10T10:19:50.904 INFO:tasks.workunit.client.0.vm02.stdout:0/657: chown d9/d34/d3d/d65/d89/cb8 13278342 1 2026-03-10T10:19:50.905 INFO:tasks.workunit.client.1.vm05.stdout:4/486: fsync d1/d31/dc/d40/d45/f52 0 2026-03-10T10:19:50.910 
INFO:tasks.workunit.client.1.vm05.stdout:1/692: creat d4/d20/dbe/fc8 x:0 0 0 2026-03-10T10:19:50.916 INFO:tasks.workunit.client.0.vm02.stdout:8/589: dread d1/f16 [0,4194304] 0 2026-03-10T10:19:50.921 INFO:tasks.workunit.client.1.vm05.stdout:9/547: unlink d0/d1/d13/c5d 0 2026-03-10T10:19:50.931 INFO:tasks.workunit.client.1.vm05.stdout:5/626: write da/db/f7b [8246,112865] 0 2026-03-10T10:19:50.938 INFO:tasks.workunit.client.1.vm05.stdout:2/540: dread - db/d28/f7d zero size 2026-03-10T10:19:50.938 INFO:tasks.workunit.client.1.vm05.stdout:6/607: dwrite dd/d36/d3f/d12/d58/f65 [0,4194304] 0 2026-03-10T10:19:50.942 INFO:tasks.workunit.client.1.vm05.stdout:2/541: write db/d28/d4f/d59/d94/d95/fa5 [115117,64116] 0 2026-03-10T10:19:50.949 INFO:tasks.workunit.client.1.vm05.stdout:3/615: dwrite dd/d39/d5c/f6b [0,4194304] 0 2026-03-10T10:19:50.971 INFO:tasks.workunit.client.1.vm05.stdout:2/542: chown db/d28/c78 68352237 1 2026-03-10T10:19:50.975 INFO:tasks.workunit.client.0.vm02.stdout:1/650: rename d4/c1f to d4/da/d27/d38/ccc 0 2026-03-10T10:19:50.980 INFO:tasks.workunit.client.1.vm05.stdout:2/543: dwrite db/d28/d4f/d59/f7c [0,4194304] 0 2026-03-10T10:19:50.981 INFO:tasks.workunit.client.1.vm05.stdout:3/616: truncate dd/d15/fa3 960265 0 2026-03-10T10:19:50.984 INFO:tasks.workunit.client.1.vm05.stdout:0/608: getdents d1/d2/d39/d6e/d95 0 2026-03-10T10:19:50.987 INFO:tasks.workunit.client.1.vm05.stdout:4/487: dread d1/d3/d65/f6a [0,4194304] 0 2026-03-10T10:19:50.989 INFO:tasks.workunit.client.0.vm02.stdout:0/658: mknod d9/d18/d1a/d22/d24/d8e/cd1 0 2026-03-10T10:19:50.990 INFO:tasks.workunit.client.0.vm02.stdout:0/659: dread - d9/d18/d1a/d22/d24/d79/d7d/f85 zero size 2026-03-10T10:19:50.993 INFO:tasks.workunit.client.1.vm05.stdout:9/548: rename d0/d1/fba to d0/df/d74/fbb 0 2026-03-10T10:19:50.994 INFO:tasks.workunit.client.1.vm05.stdout:9/549: write d0/d1/fad [210975,11514] 0 2026-03-10T10:19:50.994 INFO:tasks.workunit.client.1.vm05.stdout:9/550: truncate d0/f1e 4804182 0 
2026-03-10T10:19:50.997 INFO:tasks.workunit.client.0.vm02.stdout:3/625: dread d1/d8/d21/f47 [0,4194304] 0 2026-03-10T10:19:51.001 INFO:tasks.workunit.client.1.vm05.stdout:5/627: mknod da/db/d26/cd4 0 2026-03-10T10:19:51.003 INFO:tasks.workunit.client.0.vm02.stdout:1/651: symlink d4/d1b/lcd 0 2026-03-10T10:19:51.006 INFO:tasks.workunit.client.0.vm02.stdout:7/620: creat d1/dc/fbc x:0 0 0 2026-03-10T10:19:51.008 INFO:tasks.workunit.client.1.vm05.stdout:8/561: link d7/d14/d15/l82 d7/d14/d24/d3f/d6a/la9 0 2026-03-10T10:19:51.010 INFO:tasks.workunit.client.0.vm02.stdout:5/753: rmdir d1/db/d11/d7b/de4 0 2026-03-10T10:19:51.012 INFO:tasks.workunit.client.1.vm05.stdout:6/608: rmdir dd/d36/d3f/d12/d44/d2a/d3d/d3e 39 2026-03-10T10:19:51.014 INFO:tasks.workunit.client.1.vm05.stdout:7/622: dwrite d5/d1d/d20/d2d/f3d [0,4194304] 0 2026-03-10T10:19:51.018 INFO:tasks.workunit.client.1.vm05.stdout:2/544: mknod db/d1c/d40/d62/ca9 0 2026-03-10T10:19:51.027 INFO:tasks.workunit.client.0.vm02.stdout:3/626: mkdir d1/d20/db2/dcb 0 2026-03-10T10:19:51.030 INFO:tasks.workunit.client.0.vm02.stdout:9/591: dwrite da/d3c/d4c/d2c/d34/f81 [0,4194304] 0 2026-03-10T10:19:51.032 INFO:tasks.workunit.client.1.vm05.stdout:1/693: dwrite d4/df/d1c/d53/f65 [0,4194304] 0 2026-03-10T10:19:51.032 INFO:tasks.workunit.client.0.vm02.stdout:2/624: write d0/d8c/fab [829844,60084] 0 2026-03-10T10:19:51.032 INFO:tasks.workunit.client.0.vm02.stdout:2/625: dread - d0/d1a/d49/d5e/d8a/fd2 zero size 2026-03-10T10:19:51.037 INFO:tasks.workunit.client.0.vm02.stdout:1/652: symlink d4/d4a/lce 0 2026-03-10T10:19:51.037 INFO:tasks.workunit.client.0.vm02.stdout:1/653: dread - d4/da/d1a/fa1 zero size 2026-03-10T10:19:51.037 INFO:tasks.workunit.client.1.vm05.stdout:2/545: dread db/d28/f30 [0,4194304] 0 2026-03-10T10:19:51.039 INFO:tasks.workunit.client.1.vm05.stdout:0/609: read - d1/d2/d9/d31/d54/f86 zero size 2026-03-10T10:19:51.053 INFO:tasks.workunit.client.0.vm02.stdout:7/621: fdatasync d1/dc/d60/fa4 0 2026-03-10T10:19:51.056 
INFO:tasks.workunit.client.0.vm02.stdout:6/599: dread d0/d8/d9/f84 [0,4194304] 0 2026-03-10T10:19:51.059 INFO:tasks.workunit.client.0.vm02.stdout:8/590: dread d1/d1c/f33 [0,4194304] 0 2026-03-10T10:19:51.062 INFO:tasks.workunit.client.1.vm05.stdout:4/488: rename d1/d31/d4b/f59 to d1/d31/d4b/f9b 0 2026-03-10T10:19:51.065 INFO:tasks.workunit.client.0.vm02.stdout:5/754: symlink d1/db/d11/d1a/l102 0 2026-03-10T10:19:51.069 INFO:tasks.workunit.client.1.vm05.stdout:9/551: truncate d0/d1/d16/f3d 1983031 0 2026-03-10T10:19:51.072 INFO:tasks.workunit.client.0.vm02.stdout:4/737: write d1/d41/d5e/d78/d7f/d82/fe9 [561066,18570] 0 2026-03-10T10:19:51.076 INFO:tasks.workunit.client.0.vm02.stdout:3/627: mknod d1/d8/d86/da2/ccc 0 2026-03-10T10:19:51.084 INFO:tasks.workunit.client.0.vm02.stdout:9/592: fdatasync da/d3c/d4c/d2c/f32 0 2026-03-10T10:19:51.101 INFO:tasks.workunit.client.1.vm05.stdout:6/609: unlink dd/d36/d3f/d12/d58/f7b 0 2026-03-10T10:19:51.103 INFO:tasks.workunit.client.0.vm02.stdout:0/660: write d9/d18/d1a/d22/d24/d80/d49/f53 [4133082,55139] 0 2026-03-10T10:19:51.104 INFO:tasks.workunit.client.0.vm02.stdout:1/654: creat d4/da/d1a/d47/d78/fcf x:0 0 0 2026-03-10T10:19:51.112 INFO:tasks.workunit.client.0.vm02.stdout:7/622: dread d1/dc/d10/f27 [0,4194304] 0 2026-03-10T10:19:51.115 INFO:tasks.workunit.client.1.vm05.stdout:6/610: dwrite dd/d36/d7d/f97 [4194304,4194304] 0 2026-03-10T10:19:51.116 INFO:tasks.workunit.client.0.vm02.stdout:6/600: chown d0/d8/d29/d52/cad 4 1 2026-03-10T10:19:51.120 INFO:tasks.workunit.client.0.vm02.stdout:8/591: symlink d1/d1c/d43/d5b/d88/lb2 0 2026-03-10T10:19:51.139 INFO:tasks.workunit.client.1.vm05.stdout:6/611: dread - dd/d36/d3f/d12/d44/d2a/d3d/d48/fb2 zero size 2026-03-10T10:19:51.139 INFO:tasks.workunit.client.0.vm02.stdout:5/755: symlink d1/db/d11/d16/d79/d85/d93/l103 0 2026-03-10T10:19:51.139 INFO:tasks.workunit.client.0.vm02.stdout:9/593: rename da/d3c/d4c/c2a to da/d3c/d4c/d2c/d96/cba 0 2026-03-10T10:19:51.165 
INFO:tasks.workunit.client.1.vm05.stdout:3/617: write dd/dbe/faf [843191,73365] 0 2026-03-10T10:19:51.175 INFO:tasks.workunit.client.0.vm02.stdout:8/592: rmdir d1/d1c/d24/d71 39 2026-03-10T10:19:51.176 INFO:tasks.workunit.client.0.vm02.stdout:5/756: dread - d1/db/d11/d13/ff0 zero size 2026-03-10T10:19:51.177 INFO:tasks.workunit.client.0.vm02.stdout:5/757: readlink d1/db/d11/d1a/l102 0 2026-03-10T10:19:51.181 INFO:tasks.workunit.client.0.vm02.stdout:4/738: getdents d1/def 0 2026-03-10T10:19:51.183 INFO:tasks.workunit.client.0.vm02.stdout:2/626: rmdir d0/d1a/d49/d5e/d65/db0/dc2 0 2026-03-10T10:19:51.185 INFO:tasks.workunit.client.0.vm02.stdout:1/655: creat d4/dc3/fd0 x:0 0 0 2026-03-10T10:19:51.190 INFO:tasks.workunit.client.0.vm02.stdout:4/739: dwrite d1/d10/d88/db2/fca [0,4194304] 0 2026-03-10T10:19:51.190 INFO:tasks.workunit.client.1.vm05.stdout:5/628: write da/d9a/fae [157664,20310] 0 2026-03-10T10:19:51.212 INFO:tasks.workunit.client.1.vm05.stdout:1/694: write d4/df/f73 [87313,30090] 0 2026-03-10T10:19:51.216 INFO:tasks.workunit.client.0.vm02.stdout:3/628: write d1/d6/f63 [778383,121311] 0 2026-03-10T10:19:51.220 INFO:tasks.workunit.client.0.vm02.stdout:3/629: read d1/fe [1570088,107439] 0 2026-03-10T10:19:51.227 INFO:tasks.workunit.client.0.vm02.stdout:0/661: mknod d9/d18/d1a/d46/d5d/da7/db7/cd2 0 2026-03-10T10:19:51.231 INFO:tasks.workunit.client.0.vm02.stdout:0/662: dread d9/d18/d1a/d22/d24/fb6 [0,4194304] 0 2026-03-10T10:19:51.232 INFO:tasks.workunit.client.0.vm02.stdout:6/601: write d0/f4c [5142887,117970] 0 2026-03-10T10:19:51.233 INFO:tasks.workunit.client.0.vm02.stdout:0/663: readlink d9/d18/d1a/d22/d24/d51/lbf 0 2026-03-10T10:19:51.234 INFO:tasks.workunit.client.0.vm02.stdout:8/593: creat d1/d1c/d43/d5b/fb3 x:0 0 0 2026-03-10T10:19:51.235 INFO:tasks.workunit.client.0.vm02.stdout:8/594: chown d1/d1c/d43/d6a/d7c 7057831 1 2026-03-10T10:19:51.241 INFO:tasks.workunit.client.0.vm02.stdout:9/594: creat da/d3c/d4c/d75/fbb x:0 0 0 2026-03-10T10:19:51.249 
INFO:tasks.workunit.client.0.vm02.stdout:9/595: chown da/d3c/d4c/d56/fa1 1552 1 2026-03-10T10:19:51.252 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:50 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:51.253 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:50 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:51.253 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:50 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:51.253 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:50 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:51.253 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:50 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:19:51.253 INFO:tasks.workunit.client.0.vm02.stdout:2/627: unlink d0/d1a/d49/d5e/l94 0 2026-03-10T10:19:51.253 INFO:tasks.workunit.client.0.vm02.stdout:2/628: readlink d0/d1a/l18 0 2026-03-10T10:19:51.255 INFO:tasks.workunit.client.0.vm02.stdout:1/656: mkdir d4/da/d27/d38/d3c/dd1 0 2026-03-10T10:19:51.256 INFO:tasks.workunit.client.0.vm02.stdout:1/657: readlink d4/d2c/d53/da6/l9b 0 2026-03-10T10:19:51.262 INFO:tasks.workunit.client.0.vm02.stdout:0/664: sync 2026-03-10T10:19:51.262 INFO:tasks.workunit.client.0.vm02.stdout:3/630: sync 2026-03-10T10:19:51.270 INFO:tasks.workunit.client.0.vm02.stdout:4/740: dread d1/d41/d5e/d78/d1a/d49/f7a [0,4194304] 0 2026-03-10T10:19:51.272 INFO:tasks.workunit.client.0.vm02.stdout:4/741: sync 2026-03-10T10:19:51.280 INFO:tasks.workunit.client.0.vm02.stdout:7/623: write d1/dc/d16/fb8 [544840,114072] 0 2026-03-10T10:19:51.287 
INFO:tasks.workunit.client.0.vm02.stdout:6/602: mkdir d0/d8/d9/d7a/dc0 0 2026-03-10T10:19:51.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:50 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:51.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:50 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:51.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:50 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:51.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:50 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:51.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:50 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:19:51.289 INFO:tasks.workunit.client.0.vm02.stdout:8/595: chown d1/d1c/d43/f52 15 1 2026-03-10T10:19:51.292 INFO:tasks.workunit.client.0.vm02.stdout:9/596: mkdir da/d3c/d4c/d56/dbc 0 2026-03-10T10:19:51.292 INFO:tasks.workunit.client.1.vm05.stdout:7/623: rename d5/d26/f41 to d5/d1d/d20/d91/fc9 0 2026-03-10T10:19:51.293 INFO:tasks.workunit.client.1.vm05.stdout:3/618: truncate dd/d15/d24/d2c/f3f 4846131 0 2026-03-10T10:19:51.295 INFO:tasks.workunit.client.0.vm02.stdout:9/597: sync 2026-03-10T10:19:51.295 INFO:tasks.workunit.client.1.vm05.stdout:5/629: stat da/db/d28/d8a/fa0 0 2026-03-10T10:19:51.295 INFO:tasks.workunit.client.0.vm02.stdout:2/629: rename d0/f72 to d0/d1a/d49/d5e/d65/dc4/fd6 0 2026-03-10T10:19:51.304 INFO:tasks.workunit.client.0.vm02.stdout:4/742: truncate d1/d75/f85 1000933 0 2026-03-10T10:19:51.305 INFO:tasks.workunit.client.1.vm05.stdout:4/489: unlink d1/d31/f81 0 
2026-03-10T10:19:51.306 INFO:tasks.workunit.client.0.vm02.stdout:7/624: mknod d1/dc/d16/d28/d2c/cbd 0 2026-03-10T10:19:51.307 INFO:tasks.workunit.client.1.vm05.stdout:9/552: mknod d0/d1/d4c/d63/cbc 0 2026-03-10T10:19:51.309 INFO:tasks.workunit.client.1.vm05.stdout:8/562: creat d7/d14/d15/faa x:0 0 0 2026-03-10T10:19:51.309 INFO:tasks.workunit.client.1.vm05.stdout:4/490: fdatasync d1/d31/d76/f95 0 2026-03-10T10:19:51.313 INFO:tasks.workunit.client.0.vm02.stdout:5/758: creat d1/db/d11/d13/d28/d37/f104 x:0 0 0 2026-03-10T10:19:51.314 INFO:tasks.workunit.client.1.vm05.stdout:2/546: dwrite db/f36 [0,4194304] 0 2026-03-10T10:19:51.317 INFO:tasks.workunit.client.0.vm02.stdout:0/665: write d9/d18/d1a/d22/d24/d79/d7d/f85 [642211,56827] 0 2026-03-10T10:19:51.321 INFO:tasks.workunit.client.0.vm02.stdout:1/658: dwrite d4/da/f71 [0,4194304] 0 2026-03-10T10:19:51.323 INFO:tasks.workunit.client.0.vm02.stdout:0/666: truncate d9/d18/d1a/d22/d24/d80/d49/f53 4400060 0 2026-03-10T10:19:51.330 INFO:tasks.workunit.client.1.vm05.stdout:2/547: fdatasync db/d28/d4f/d59/f6f 0 2026-03-10T10:19:51.335 INFO:tasks.workunit.client.1.vm05.stdout:9/553: dwrite d0/d1/d16/f36 [0,4194304] 0 2026-03-10T10:19:51.341 INFO:tasks.workunit.client.1.vm05.stdout:9/554: truncate d0/df/fb8 241972 0 2026-03-10T10:19:51.344 INFO:tasks.workunit.client.1.vm05.stdout:9/555: chown d0/df/c30 12 1 2026-03-10T10:19:51.354 INFO:tasks.workunit.client.1.vm05.stdout:1/695: dread d4/d20/f2c [0,4194304] 0 2026-03-10T10:19:51.367 INFO:tasks.workunit.client.1.vm05.stdout:5/630: mknod da/d9a/dc7/db4/cd5 0 2026-03-10T10:19:51.371 INFO:tasks.workunit.client.1.vm05.stdout:5/631: write da/d9a/fae [422529,95395] 0 2026-03-10T10:19:51.381 INFO:tasks.workunit.client.0.vm02.stdout:9/598: symlink da/d3c/d4c/d38/d82/lbd 0 2026-03-10T10:19:51.381 INFO:tasks.workunit.client.1.vm05.stdout:8/563: dread d7/d14/d15/f1f [4194304,4194304] 0 2026-03-10T10:19:51.387 INFO:tasks.workunit.client.0.vm02.stdout:9/599: dwrite da/d3c/d4c/d2c/d34/f57 
[0,4194304] 0 2026-03-10T10:19:51.400 INFO:tasks.workunit.client.1.vm05.stdout:2/548: dread - db/d61/f92 zero size 2026-03-10T10:19:51.401 INFO:tasks.workunit.client.1.vm05.stdout:2/549: read - db/d12/f3c zero size 2026-03-10T10:19:51.412 INFO:tasks.workunit.client.1.vm05.stdout:0/610: rename d1/d2/d39/d3d/f72 to d1/d2/d39/fd0 0 2026-03-10T10:19:51.413 INFO:tasks.workunit.client.0.vm02.stdout:7/625: rmdir d1 39 2026-03-10T10:19:51.414 INFO:tasks.workunit.client.0.vm02.stdout:5/759: symlink d1/db/d11/d16/d79/d85/d93/l105 0 2026-03-10T10:19:51.415 INFO:tasks.workunit.client.1.vm05.stdout:2/550: dwrite db/f36 [8388608,4194304] 0 2026-03-10T10:19:51.420 INFO:tasks.workunit.client.1.vm05.stdout:1/696: creat d4/d39/d3e/da0/fc9 x:0 0 0 2026-03-10T10:19:51.422 INFO:tasks.workunit.client.1.vm05.stdout:1/697: readlink d4/df/d1c/d53/l99 0 2026-03-10T10:19:51.434 INFO:tasks.workunit.client.0.vm02.stdout:6/603: creat d0/db9/fc1 x:0 0 0 2026-03-10T10:19:51.434 INFO:tasks.workunit.client.0.vm02.stdout:3/631: rename d1/d20/c2b to d1/d8/d86/ccd 0 2026-03-10T10:19:51.437 INFO:tasks.workunit.client.1.vm05.stdout:3/619: rename dd/d15/d24/d2c/d6d/cce to dd/dbe/cdb 0 2026-03-10T10:19:51.438 INFO:tasks.workunit.client.1.vm05.stdout:2/551: creat db/d1c/d40/d80/faa x:0 0 0 2026-03-10T10:19:51.440 INFO:tasks.workunit.client.0.vm02.stdout:2/630: creat d0/dd4/fd7 x:0 0 0 2026-03-10T10:19:51.442 INFO:tasks.workunit.client.0.vm02.stdout:2/631: write d0/d71/fb9 [86973,26996] 0 2026-03-10T10:19:51.443 INFO:tasks.workunit.client.0.vm02.stdout:1/659: unlink d4/da/d1a/ca9 0 2026-03-10T10:19:51.445 INFO:tasks.workunit.client.1.vm05.stdout:4/491: creat d1/d31/dc/d40/f9c x:0 0 0 2026-03-10T10:19:51.448 INFO:tasks.workunit.client.0.vm02.stdout:9/600: read da/f5c [1648264,21365] 0 2026-03-10T10:19:51.452 INFO:tasks.workunit.client.0.vm02.stdout:0/667: rename d9/d18/d1a/d46/d5d to d9/d34/d3d/d65/d89/dd3 0 2026-03-10T10:19:51.462 INFO:tasks.workunit.client.1.vm05.stdout:9/556: rename d0/d1/fad to 
d0/d1/d13/de/d93/fbd 0 2026-03-10T10:19:51.462 INFO:tasks.workunit.client.1.vm05.stdout:6/612: write dd/d36/d3f/f41 [1598595,58834] 0 2026-03-10T10:19:51.462 INFO:tasks.workunit.client.1.vm05.stdout:7/624: getdents d5/d1d/d29/d3e/d8c 0 2026-03-10T10:19:51.462 INFO:tasks.workunit.client.0.vm02.stdout:0/668: chown d9/d18/d1a/d22/d24/f26 274332 1 2026-03-10T10:19:51.462 INFO:tasks.workunit.client.0.vm02.stdout:8/596: write d1/d1c/f9a [35240,23072] 0 2026-03-10T10:19:51.462 INFO:tasks.workunit.client.0.vm02.stdout:8/597: write d1/d1c/f9a [141851,116969] 0 2026-03-10T10:19:51.466 INFO:tasks.workunit.client.0.vm02.stdout:2/632: sync 2026-03-10T10:19:51.481 INFO:tasks.workunit.client.1.vm05.stdout:8/564: creat d7/d14/d24/d3f/fab x:0 0 0 2026-03-10T10:19:51.482 INFO:tasks.workunit.client.0.vm02.stdout:1/660: truncate d4/f7a 1358660 0 2026-03-10T10:19:51.482 INFO:tasks.workunit.client.0.vm02.stdout:4/743: dwrite d1/d41/d5e/d78/d37/f48 [0,4194304] 0 2026-03-10T10:19:51.487 INFO:tasks.workunit.client.0.vm02.stdout:4/744: sync 2026-03-10T10:19:51.497 INFO:tasks.workunit.client.1.vm05.stdout:5/632: rename da/db/d26/d35/d38/c86 to da/d9a/dc7/cd6 0 2026-03-10T10:19:51.497 INFO:tasks.workunit.client.1.vm05.stdout:9/557: mknod d0/df/d74/cbe 0 2026-03-10T10:19:51.497 INFO:tasks.workunit.client.0.vm02.stdout:6/604: mkdir d0/d8/d29/d94/d9a/dc2 0 2026-03-10T10:19:51.500 INFO:tasks.workunit.client.0.vm02.stdout:5/760: write d1/db/f56 [3285837,60591] 0 2026-03-10T10:19:51.502 INFO:tasks.workunit.client.1.vm05.stdout:7/625: chown d5/d17/f3c 846736 1 2026-03-10T10:19:51.503 INFO:tasks.workunit.client.0.vm02.stdout:2/633: truncate d0/d1a/d49/d5e/f60 8926906 0 2026-03-10T10:19:51.506 INFO:tasks.workunit.client.1.vm05.stdout:2/552: creat db/d28/d4f/d8b/d9a/d9d/fab x:0 0 0 2026-03-10T10:19:51.517 INFO:tasks.workunit.client.0.vm02.stdout:3/632: link d1/d6/l57 d1/d58/dc9/lce 0 2026-03-10T10:19:51.520 INFO:tasks.workunit.client.1.vm05.stdout:6/613: getdents dd/d36/d3f/d12/d58/db8 0 
2026-03-10T10:19:51.520 INFO:tasks.workunit.client.1.vm05.stdout:6/614: readlink dd/d36/d3f/d12/d44/l47 0 2026-03-10T10:19:51.526 INFO:tasks.workunit.client.1.vm05.stdout:9/558: creat d0/d1/d57/fbf x:0 0 0 2026-03-10T10:19:51.528 INFO:tasks.workunit.client.0.vm02.stdout:8/598: creat d1/d1c/d24/d71/fb4 x:0 0 0 2026-03-10T10:19:51.533 INFO:tasks.workunit.client.1.vm05.stdout:7/626: mknod d5/d1d/d29/d3e/d8c/d82/d90/cca 0 2026-03-10T10:19:51.535 INFO:tasks.workunit.client.1.vm05.stdout:1/698: getdents d4/d37/d4e/d82 0 2026-03-10T10:19:51.537 INFO:tasks.workunit.client.0.vm02.stdout:2/634: mkdir d0/d71/dd8 0 2026-03-10T10:19:51.541 INFO:tasks.workunit.client.1.vm05.stdout:3/620: write dd/d20/d94/fa9 [910991,128607] 0 2026-03-10T10:19:51.542 INFO:tasks.workunit.client.0.vm02.stdout:9/601: dwrite da/d3c/d4c/d38/f88 [0,4194304] 0 2026-03-10T10:19:51.543 INFO:tasks.workunit.client.0.vm02.stdout:9/602: chown da/d3c/d4c/d2c/d34/d35/l69 6098 1 2026-03-10T10:19:51.544 INFO:tasks.workunit.client.1.vm05.stdout:3/621: readlink dd/d39/d5f/lda 0 2026-03-10T10:19:51.548 INFO:tasks.workunit.client.0.vm02.stdout:7/626: creat d1/fbe x:0 0 0 2026-03-10T10:19:51.562 INFO:tasks.workunit.client.1.vm05.stdout:4/492: creat d1/d31/f9d x:0 0 0 2026-03-10T10:19:51.564 INFO:tasks.workunit.client.0.vm02.stdout:3/633: creat d1/d8/d21/d73/fcf x:0 0 0 2026-03-10T10:19:51.569 INFO:tasks.workunit.client.1.vm05.stdout:6/615: creat dd/d36/d3f/d12/d44/d63/fc5 x:0 0 0 2026-03-10T10:19:51.575 INFO:tasks.workunit.client.1.vm05.stdout:9/559: mknod d0/d1/d13/d55/cc0 0 2026-03-10T10:19:51.578 INFO:tasks.workunit.client.1.vm05.stdout:9/560: write d0/d1/d13/de/d93/fa1 [4106425,32795] 0 2026-03-10T10:19:51.578 INFO:tasks.workunit.client.1.vm05.stdout:9/561: stat d0/df/d74 0 2026-03-10T10:19:51.581 INFO:tasks.workunit.client.1.vm05.stdout:3/622: sync 2026-03-10T10:19:51.583 INFO:tasks.workunit.client.1.vm05.stdout:3/623: write dd/d20/d94/fa9 [834164,83556] 0 2026-03-10T10:19:51.587 
INFO:tasks.workunit.client.1.vm05.stdout:3/624: read dd/dbe/faf [656855,117841] 0 2026-03-10T10:19:51.593 INFO:tasks.workunit.client.0.vm02.stdout:8/599: dread d1/d1c/f34 [0,4194304] 0 2026-03-10T10:19:51.596 INFO:tasks.workunit.client.1.vm05.stdout:1/699: mknod d4/d39/cca 0 2026-03-10T10:19:51.622 INFO:tasks.workunit.client.1.vm05.stdout:9/562: dread d0/df/d11/f2c [0,4194304] 0 2026-03-10T10:19:51.631 INFO:tasks.workunit.client.0.vm02.stdout:6/605: write d0/d8/d29/d2f/f8e [292262,44753] 0 2026-03-10T10:19:51.635 INFO:tasks.workunit.client.0.vm02.stdout:1/661: write d4/da/d1a/d47/d65/fba [1609763,71803] 0 2026-03-10T10:19:51.636 INFO:tasks.workunit.client.1.vm05.stdout:8/565: truncate d7/d14/d24/f7a 11772006 0 2026-03-10T10:19:51.640 INFO:tasks.workunit.client.1.vm05.stdout:7/627: write d5/d1d/d29/d3e/d8c/f81 [1487423,93813] 0 2026-03-10T10:19:51.641 INFO:tasks.workunit.client.1.vm05.stdout:2/553: creat db/d2d/d5e/fac x:0 0 0 2026-03-10T10:19:51.642 INFO:tasks.workunit.client.0.vm02.stdout:9/603: write da/d3c/f3e [4785096,112295] 0 2026-03-10T10:19:51.644 INFO:tasks.workunit.client.1.vm05.stdout:4/493: creat d1/d31/d4b/f9e x:0 0 0 2026-03-10T10:19:51.644 INFO:tasks.workunit.client.1.vm05.stdout:7/628: write d5/d1d/d29/fb7 [325152,42947] 0 2026-03-10T10:19:51.647 INFO:tasks.workunit.client.1.vm05.stdout:4/494: write d1/d31/f9a [642189,49115] 0 2026-03-10T10:19:51.647 INFO:tasks.workunit.client.1.vm05.stdout:7/629: chown d5/d1d/f7d 7 1 2026-03-10T10:19:51.650 INFO:tasks.workunit.client.1.vm05.stdout:0/611: creat d1/d2/d39/fd1 x:0 0 0 2026-03-10T10:19:51.653 INFO:tasks.workunit.client.1.vm05.stdout:2/554: dwrite db/d28/d4f/d8b/fa8 [0,4194304] 0 2026-03-10T10:19:51.655 INFO:tasks.workunit.client.0.vm02.stdout:4/745: rename d1/d10/d88/cab to d1/d41/d5e/cf3 0 2026-03-10T10:19:51.672 INFO:tasks.workunit.client.0.vm02.stdout:3/634: rmdir d1/d8/d21/d73 39 2026-03-10T10:19:51.673 INFO:tasks.workunit.client.1.vm05.stdout:6/616: mkdir dd/d36/d3f/d12/d44/d2a/d3d/d48/dc6 0 
2026-03-10T10:19:51.679 INFO:tasks.workunit.client.0.vm02.stdout:0/669: link d9/d18/d1a/d22/d24/d80/ca0 d9/d18/d1a/d22/d24/d8e/d9b/cd4 0 2026-03-10T10:19:51.679 INFO:tasks.workunit.client.1.vm05.stdout:6/617: write dd/d36/d3f/d12/d44/daa/fae [692240,130551] 0 2026-03-10T10:19:51.681 INFO:tasks.workunit.client.0.vm02.stdout:8/600: read d1/d1c/d24/f31 [634370,98008] 0 2026-03-10T10:19:51.686 INFO:tasks.workunit.client.1.vm05.stdout:6/618: dwrite dd/d36/f69 [0,4194304] 0 2026-03-10T10:19:51.706 INFO:tasks.workunit.client.0.vm02.stdout:6/606: unlink d0/d8/d29/l37 0 2026-03-10T10:19:51.707 INFO:tasks.workunit.client.0.vm02.stdout:6/607: write d0/d8/d29/d2f/d4b/da5/d6f/fa2 [2514946,104538] 0 2026-03-10T10:19:51.708 INFO:tasks.workunit.client.0.vm02.stdout:6/608: write d0/d8/d29/d2f/f38 [1452196,24078] 0 2026-03-10T10:19:51.728 INFO:tasks.workunit.client.1.vm05.stdout:8/566: dread - d7/d14/d24/f79 zero size 2026-03-10T10:19:51.728 INFO:tasks.workunit.client.1.vm05.stdout:9/563: write d0/d1/d13/de/d93/fbd [227451,14980] 0 2026-03-10T10:19:51.730 INFO:tasks.workunit.client.0.vm02.stdout:1/662: creat d4/d4a/fd2 x:0 0 0 2026-03-10T10:19:51.730 INFO:tasks.workunit.client.0.vm02.stdout:7/627: rename d1/dc/d10/d38/f83 to d1/d1b/d49/fbf 0 2026-03-10T10:19:51.731 INFO:tasks.workunit.client.0.vm02.stdout:1/663: chown d4/d2c/d53/l89 246983865 1 2026-03-10T10:19:51.733 INFO:tasks.workunit.client.1.vm05.stdout:7/630: unlink d5/d17/d66/f8b 0 2026-03-10T10:19:51.740 INFO:tasks.workunit.client.0.vm02.stdout:5/761: getdents d1/db/d11/d13/d28/da7 0 2026-03-10T10:19:51.767 INFO:tasks.workunit.client.1.vm05.stdout:6/619: dread dd/f14 [4194304,4194304] 0 2026-03-10T10:19:51.773 INFO:tasks.workunit.client.0.vm02.stdout:0/670: creat d9/d34/fd5 x:0 0 0 2026-03-10T10:19:51.779 INFO:tasks.workunit.client.0.vm02.stdout:9/604: write da/d3c/f8b [998346,90467] 0 2026-03-10T10:19:51.780 INFO:tasks.workunit.client.0.vm02.stdout:4/746: write d1/d52/fcd [637258,18850] 0 2026-03-10T10:19:51.780 
INFO:tasks.workunit.client.1.vm05.stdout:4/495: write d1/d31/f1b [1031312,81200] 0 2026-03-10T10:19:51.780 INFO:tasks.workunit.client.0.vm02.stdout:4/747: readlink d1/d32/da3/ldc 0 2026-03-10T10:19:51.783 INFO:tasks.workunit.client.0.vm02.stdout:4/748: read d1/d41/d5e/d78/f34 [2962939,2771] 0 2026-03-10T10:19:51.786 INFO:tasks.workunit.client.0.vm02.stdout:3/635: dwrite d1/d20/fa4 [0,4194304] 0 2026-03-10T10:19:51.792 INFO:tasks.workunit.client.0.vm02.stdout:8/601: mkdir d1/d1c/d43/d6a/da8/d56/db5 0 2026-03-10T10:19:51.792 INFO:tasks.workunit.client.0.vm02.stdout:2/635: link d0/f8f d0/d1a/d24/dbf/fd9 0 2026-03-10T10:19:51.792 INFO:tasks.workunit.client.0.vm02.stdout:8/602: stat d1/d1c/d43/fa4 0 2026-03-10T10:19:51.796 INFO:tasks.workunit.client.1.vm05.stdout:5/633: link da/db/f9f da/db/d28/fd7 0 2026-03-10T10:19:51.800 INFO:tasks.workunit.client.1.vm05.stdout:5/634: dread da/db/d26/d70/f82 [0,4194304] 0 2026-03-10T10:19:51.800 INFO:tasks.workunit.client.1.vm05.stdout:3/625: rename dd/dbe/faf to dd/d20/d56/d5e/dab/d9c/fdc 0 2026-03-10T10:19:51.803 INFO:tasks.workunit.client.1.vm05.stdout:1/700: getdents d4/d39/d3e/db1/db8 0 2026-03-10T10:19:51.804 INFO:tasks.workunit.client.1.vm05.stdout:8/567: chown d7/d14/d15/l82 5 1 2026-03-10T10:19:51.813 INFO:tasks.workunit.client.1.vm05.stdout:1/701: mkdir d4/d79/d83/dc5/dcb 0 2026-03-10T10:19:51.814 INFO:tasks.workunit.client.1.vm05.stdout:8/568: rmdir d7/d2f/d57 39 2026-03-10T10:19:51.820 INFO:tasks.workunit.client.1.vm05.stdout:0/612: creat d1/d2/d9/d31/d13/fd2 x:0 0 0 2026-03-10T10:19:51.824 INFO:tasks.workunit.client.1.vm05.stdout:3/626: mkdir dd/d20/d9e/dc0/ddd 0 2026-03-10T10:19:51.828 INFO:tasks.workunit.client.1.vm05.stdout:6/620: symlink dd/d36/d3f/d12/lc7 0 2026-03-10T10:19:51.836 INFO:tasks.workunit.client.1.vm05.stdout:4/496: link d1/d31/dc/f53 d1/d31/d4b/d6d/f9f 0 2026-03-10T10:19:51.836 INFO:tasks.workunit.client.1.vm05.stdout:8/569: dread d7/d14/f4c [0,4194304] 0 2026-03-10T10:19:51.839 
INFO:tasks.workunit.client.1.vm05.stdout:7/631: rename d5/d1d/d20/c21 to d5/d1d/d29/ccb 0 2026-03-10T10:19:51.840 INFO:tasks.workunit.client.1.vm05.stdout:6/621: symlink dd/d36/d3f/d12/d44/d2a/d3d/d48/lc8 0 2026-03-10T10:19:51.840 INFO:tasks.workunit.client.1.vm05.stdout:7/632: chown d5/dd 15711405 1 2026-03-10T10:19:51.846 INFO:tasks.workunit.client.1.vm05.stdout:8/570: mkdir d7/d14/d62/d90/dac 0 2026-03-10T10:19:51.853 INFO:tasks.workunit.client.1.vm05.stdout:7/633: sync 2026-03-10T10:19:51.853 INFO:tasks.workunit.client.1.vm05.stdout:2/555: dwrite db/d2d/f52 [4194304,4194304] 0 2026-03-10T10:19:51.858 INFO:tasks.workunit.client.1.vm05.stdout:5/635: rename da/db/d26/d5c/l4a to da/d63/ld8 0 2026-03-10T10:19:51.868 INFO:tasks.workunit.client.1.vm05.stdout:6/622: dread dd/d36/d3f/d12/d44/d2a/f98 [0,4194304] 0 2026-03-10T10:19:51.874 INFO:tasks.workunit.client.1.vm05.stdout:9/564: dwrite d0/f28 [0,4194304] 0 2026-03-10T10:19:51.902 INFO:tasks.workunit.client.0.vm02.stdout:7/628: write d1/dc/d16/f1f [5151905,82853] 0 2026-03-10T10:19:51.902 INFO:tasks.workunit.client.0.vm02.stdout:7/629: dread - d1/dc/d16/d28/d2d/f4c zero size 2026-03-10T10:19:51.909 INFO:tasks.workunit.client.1.vm05.stdout:3/627: rmdir dd/d20/d56/d5e/dab/dcf 0 2026-03-10T10:19:51.909 INFO:tasks.workunit.client.1.vm05.stdout:8/571: mknod d7/d14/d24/d3f/d6a/d8a/d96/cad 0 2026-03-10T10:19:51.910 INFO:tasks.workunit.client.0.vm02.stdout:9/605: dread da/f1f [0,4194304] 0 2026-03-10T10:19:51.910 INFO:tasks.workunit.client.1.vm05.stdout:8/572: chown d7/d14/d3a/l63 4435 1 2026-03-10T10:19:51.910 INFO:tasks.workunit.client.0.vm02.stdout:9/606: chown da/d3c/d4c/f8e 336135190 1 2026-03-10T10:19:51.918 INFO:tasks.workunit.client.1.vm05.stdout:5/636: mkdir da/d96/dd9 0 2026-03-10T10:19:51.919 INFO:tasks.workunit.client.1.vm05.stdout:1/702: rename d4/d37/l5a to d4/d3d/d6e/lcc 0 2026-03-10T10:19:51.921 INFO:tasks.workunit.client.1.vm05.stdout:6/623: creat dd/d36/d3f/d12/d44/d30/d4a/fc9 x:0 0 0 
2026-03-10T10:19:51.921 INFO:tasks.workunit.client.0.vm02.stdout:5/762: mknod d1/db/d11/d13/d28/d37/c106 0 2026-03-10T10:19:51.923 INFO:tasks.workunit.client.1.vm05.stdout:9/565: write d0/df/d74/f9e [87265,65986] 0 2026-03-10T10:19:51.923 INFO:tasks.workunit.client.1.vm05.stdout:0/613: getdents d1/d2/d9/d50/d9a 0 2026-03-10T10:19:51.926 INFO:tasks.workunit.client.1.vm05.stdout:0/614: sync 2026-03-10T10:19:51.943 INFO:tasks.workunit.client.0.vm02.stdout:7/630: dread d1/f15 [0,4194304] 0 2026-03-10T10:19:51.947 INFO:tasks.workunit.client.1.vm05.stdout:3/628: chown dd/d15/f6a 0 1 2026-03-10T10:19:51.948 INFO:tasks.workunit.client.0.vm02.stdout:3/636: mknod d1/d58/dc9/cd0 0 2026-03-10T10:19:51.948 INFO:tasks.workunit.client.0.vm02.stdout:4/749: write d1/d41/d5e/d78/d1a/f93 [1702040,113191] 0 2026-03-10T10:19:51.949 INFO:tasks.workunit.client.1.vm05.stdout:3/629: write dd/d20/d56/d5e/dab/d9c/fd4 [693131,78015] 0 2026-03-10T10:19:51.949 INFO:tasks.workunit.client.0.vm02.stdout:4/750: readlink d1/d41/d5e/d78/d55/lbf 0 2026-03-10T10:19:51.952 INFO:tasks.workunit.client.1.vm05.stdout:7/634: dwrite d5/dd/f2f [0,4194304] 0 2026-03-10T10:19:51.954 INFO:tasks.workunit.client.0.vm02.stdout:9/607: symlink da/d3c/d4c/d38/d4a/d70/lbe 0 2026-03-10T10:19:51.955 INFO:tasks.workunit.client.0.vm02.stdout:1/664: dwrite d4/da/d1a/d47/d78/fc2 [0,4194304] 0 2026-03-10T10:19:51.956 INFO:tasks.workunit.client.0.vm02.stdout:2/636: dwrite d0/d1a/d49/d5e/f68 [0,4194304] 0 2026-03-10T10:19:51.972 INFO:tasks.workunit.client.1.vm05.stdout:4/497: rename d1/d64/c79 to d1/d70/ca0 0 2026-03-10T10:19:51.972 INFO:tasks.workunit.client.1.vm05.stdout:4/498: chown d1/d3/f5f 2290 1 2026-03-10T10:19:51.972 INFO:tasks.workunit.client.0.vm02.stdout:6/609: creat d0/d8/d29/d2f/d4b/da5/d6f/fc3 x:0 0 0 2026-03-10T10:19:51.973 INFO:tasks.workunit.client.1.vm05.stdout:4/499: dread - d1/d31/d4b/f51 zero size 2026-03-10T10:19:51.984 INFO:tasks.workunit.client.1.vm05.stdout:9/566: truncate d0/df/d11/f50 1575147 0 
2026-03-10T10:19:51.984 INFO:tasks.workunit.client.0.vm02.stdout:0/671: creat d9/d18/d1a/d22/fd6 x:0 0 0 2026-03-10T10:19:51.988 INFO:tasks.workunit.client.1.vm05.stdout:0/615: creat d1/d2/d9/d31/d13/d17/fd3 x:0 0 0 2026-03-10T10:19:51.991 INFO:tasks.workunit.client.0.vm02.stdout:7/631: creat d1/d1b/d8f/dad/d7e/fc0 x:0 0 0 2026-03-10T10:19:51.999 INFO:tasks.workunit.client.1.vm05.stdout:8/573: creat d7/d2f/d57/fae x:0 0 0 2026-03-10T10:19:52.000 INFO:tasks.workunit.client.1.vm05.stdout:5/637: dread da/db/d28/f44 [0,4194304] 0 2026-03-10T10:19:52.011 INFO:tasks.workunit.client.1.vm05.stdout:3/630: creat dd/d15/d24/d2c/d3b/fde x:0 0 0 2026-03-10T10:19:52.012 INFO:tasks.workunit.client.1.vm05.stdout:2/556: link db/d2d/l3e db/d28/d4f/d59/d94/d95/lad 0 2026-03-10T10:19:52.025 INFO:tasks.workunit.client.0.vm02.stdout:6/610: sync 2026-03-10T10:19:52.026 INFO:tasks.workunit.client.0.vm02.stdout:6/611: chown d0/d8/d9/f30 7 1 2026-03-10T10:19:52.031 INFO:tasks.workunit.client.0.vm02.stdout:1/665: mknod d4/da/d27/cd3 0 2026-03-10T10:19:52.032 INFO:tasks.workunit.client.1.vm05.stdout:5/638: sync 2026-03-10T10:19:52.033 INFO:tasks.workunit.client.1.vm05.stdout:5/639: fsync da/db/d26/d35/f1c 0 2026-03-10T10:19:52.042 INFO:tasks.workunit.client.0.vm02.stdout:6/612: dwrite d0/fa3 [0,4194304] 0 2026-03-10T10:19:52.053 INFO:tasks.workunit.client.0.vm02.stdout:8/603: dwrite d1/f40 [0,4194304] 0 2026-03-10T10:19:52.053 INFO:tasks.workunit.client.1.vm05.stdout:5/640: dread da/db/d26/d5c/fb5 [0,4194304] 0 2026-03-10T10:19:52.067 INFO:tasks.workunit.client.0.vm02.stdout:2/637: creat d0/d1a/d49/d5e/d65/db0/fda x:0 0 0 2026-03-10T10:19:52.068 INFO:tasks.workunit.client.0.vm02.stdout:2/638: stat d0/d1a/d24/dbf 0 2026-03-10T10:19:52.069 INFO:tasks.workunit.client.1.vm05.stdout:9/567: rmdir d0/d1/d13/d55 39 2026-03-10T10:19:52.069 INFO:tasks.workunit.client.1.vm05.stdout:0/616: symlink d1/d2/d9/d31/d12/d41/ld4 0 2026-03-10T10:19:52.069 INFO:tasks.workunit.client.0.vm02.stdout:5/763: getdents 
d1/d9c 0 2026-03-10T10:19:52.079 INFO:tasks.workunit.client.1.vm05.stdout:7/635: truncate d5/d1d/d20/d91/fc1 676321 0 2026-03-10T10:19:52.079 INFO:tasks.workunit.client.0.vm02.stdout:0/672: read - d9/d34/d3d/d67/fc3 zero size 2026-03-10T10:19:52.090 INFO:tasks.workunit.client.0.vm02.stdout:7/632: creat d1/d1b/d8f/d67/fc1 x:0 0 0 2026-03-10T10:19:52.115 INFO:tasks.workunit.client.0.vm02.stdout:1/666: read d4/fe [3341727,111980] 0 2026-03-10T10:19:52.115 INFO:tasks.workunit.client.0.vm02.stdout:1/667: write d4/d2c/fa2 [9610370,46800] 0 2026-03-10T10:19:52.125 INFO:tasks.workunit.client.0.vm02.stdout:3/637: dwrite d1/f3 [0,4194304] 0 2026-03-10T10:19:52.126 INFO:tasks.workunit.client.0.vm02.stdout:3/638: chown d1/d6/f63 3 1 2026-03-10T10:19:52.126 INFO:tasks.workunit.client.0.vm02.stdout:3/639: stat d1/d58/dc9/cd0 0 2026-03-10T10:19:52.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.130+0000 7f24558cf700 1 -- 192.168.123.102:0/3166414738 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2450075a40 msgr2=0x7f2450077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:52.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.130+0000 7f24558cf700 1 --2- 192.168.123.102:0/3166414738 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2450075a40 0x7f2450077ed0 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7f244800cd40 tx=0x7f244800a320 comp rx=0 tx=0).stop 2026-03-10T10:19:52.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.130+0000 7f24558cf700 1 -- 192.168.123.102:0/3166414738 shutdown_connections 2026-03-10T10:19:52.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.130+0000 7f24558cf700 1 --2- 192.168.123.102:0/3166414738 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2450075a40 0x7f2450077ed0 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.132 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.130+0000 7f24558cf700 1 --2- 192.168.123.102:0/3166414738 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2450072b50 0x7f2450072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.130+0000 7f24558cf700 1 -- 192.168.123.102:0/3166414738 >> 192.168.123.102:0/3166414738 conn(0x7f245006dae0 msgr2=0x7f245006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:52.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.130+0000 7f24558cf700 1 -- 192.168.123.102:0/3166414738 shutdown_connections 2026-03-10T10:19:52.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.130+0000 7f24558cf700 1 -- 192.168.123.102:0/3166414738 wait complete. 2026-03-10T10:19:52.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.131+0000 7f24558cf700 1 Processor -- start 2026-03-10T10:19:52.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.131+0000 7f24558cf700 1 -- start start 2026-03-10T10:19:52.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.131+0000 7f24558cf700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2450072b50 0x7f2450082fd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:52.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.131+0000 7f24558cf700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2450083510 0x7f24501b3080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:52.132 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.131+0000 7f24558cf700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2450083990 con 0x7f2450083510 2026-03-10T10:19:52.132 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.131+0000 7f24558cf700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2450083b00 con 0x7f2450072b50 2026-03-10T10:19:52.132 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:52 vm02.local ceph-mon[50200]: pgmap v7: 65 pgs: 65 active+clean; 2.5 GiB data, 8.8 GiB used, 111 GiB / 120 GiB avail; 28 MiB/s rd, 74 MiB/s wr, 187 op/s 2026-03-10T10:19:52.132 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:52 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:52.132 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:52 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:52.132 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:52 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:19:52.132 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:52 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:52.132 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:52 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:19:52.132 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:52 vm02.local ceph-mon[50200]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T10:19:52.132 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:52 vm02.local ceph-mon[50200]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T10:19:52.135 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.134+0000 7f244effd700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2450072b50 0x7f2450082fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:52.135 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.134+0000 7f244effd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2450072b50 0x7f2450082fd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:40506/0 (socket says 192.168.123.102:40506) 2026-03-10T10:19:52.135 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.134+0000 7f244effd700 1 -- 192.168.123.102:0/3861993960 learned_addr learned my addr 192.168.123.102:0/3861993960 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:19:52.135 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.135+0000 7f244e7fc700 1 --2- 192.168.123.102:0/3861993960 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2450083510 0x7f24501b3080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:52.135 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.135+0000 7f244effd700 1 -- 192.168.123.102:0/3861993960 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2450083510 msgr2=0x7f24501b3080 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:52.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.135+0000 7f244effd700 1 --2- 192.168.123.102:0/3861993960 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2450083510 0x7f24501b3080 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.135+0000 7f244effd700 1 -- 
192.168.123.102:0/3861993960 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f244800c9f0 con 0x7f2450072b50 2026-03-10T10:19:52.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.135+0000 7f244effd700 1 --2- 192.168.123.102:0/3861993960 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2450072b50 0x7f2450082fd0 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f244000c8a0 tx=0x7f244000cbb0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:52.137 INFO:tasks.workunit.client.1.vm05.stdout:3/631: write dd/d15/d24/d2c/f3e [1622263,98523] 0 2026-03-10T10:19:52.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.135+0000 7f24548cd700 1 -- 192.168.123.102:0/3861993960 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f24400078c0 con 0x7f2450072b50 2026-03-10T10:19:52.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.136+0000 7f24548cd700 1 -- 192.168.123.102:0/3861993960 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f244000f450 con 0x7f2450072b50 2026-03-10T10:19:52.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.136+0000 7f24558cf700 1 -- 192.168.123.102:0/3861993960 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f24501b3620 con 0x7f2450072b50 2026-03-10T10:19:52.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.136+0000 7f24548cd700 1 -- 192.168.123.102:0/3861993960 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f244000e5c0 con 0x7f2450072b50 2026-03-10T10:19:52.137 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.136+0000 7f24558cf700 1 -- 192.168.123.102:0/3861993960 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 
0x7f24501b3b20 con 0x7f2450072b50 2026-03-10T10:19:52.138 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.137+0000 7f24558cf700 1 -- 192.168.123.102:0/3861993960 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f243c005320 con 0x7f2450072b50 2026-03-10T10:19:52.138 INFO:tasks.workunit.client.1.vm05.stdout:3/632: fdatasync dd/d15/d1f/f53 0 2026-03-10T10:19:52.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.139+0000 7f24548cd700 1 -- 192.168.123.102:0/3861993960 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 24) v1 ==== 50383+0+0 (secure 0 0 0) 0x7f244000fa80 con 0x7f2450072b50 2026-03-10T10:19:52.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.139+0000 7f24548cd700 1 --2- 192.168.123.102:0/3861993960 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f2438040140 0x7f2438042600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:52.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.139+0000 7f24548cd700 1 -- 192.168.123.102:0/3861993960 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 5534+0+0 (secure 0 0 0) 0x7f24400530a0 con 0x7f2450072b50 2026-03-10T10:19:52.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.139+0000 7f244e7fc700 1 --2- 192.168.123.102:0/3861993960 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f2438040140 0x7f2438042600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:52.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.140+0000 7f244e7fc700 1 --2- 192.168.123.102:0/3861993960 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f2438040140 0x7f2438042600 secure :-1 s=READY pgs=18 cs=0 l=1 
rev1=1 crypto rx=0x7f244800a7a0 tx=0x7f2448008fe0 comp rx=0 tx=0).ready entity=mgr.14674 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:52.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.143+0000 7f24548cd700 1 -- 192.168.123.102:0/3861993960 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7f244000f5c0 con 0x7f2450072b50 2026-03-10T10:19:52.152 INFO:tasks.workunit.client.0.vm02.stdout:8/604: dread d1/d2/f36 [0,4194304] 0 2026-03-10T10:19:52.158 INFO:tasks.workunit.client.0.vm02.stdout:2/639: mkdir d0/d1a/d24/d80/ddb 0 2026-03-10T10:19:52.165 INFO:tasks.workunit.client.0.vm02.stdout:8/605: read d1/f6d [5510853,56482] 0 2026-03-10T10:19:52.196 INFO:tasks.workunit.client.0.vm02.stdout:7/633: creat d1/d1b/d8f/d67/fc2 x:0 0 0 2026-03-10T10:19:52.201 INFO:tasks.workunit.client.0.vm02.stdout:4/751: rename d1/d32/c47 to d1/d10/cf4 0 2026-03-10T10:19:52.201 INFO:tasks.workunit.client.0.vm02.stdout:4/752: chown d1/d41/d5e/d78/d1a/la1 3335 1 2026-03-10T10:19:52.205 INFO:tasks.workunit.client.0.vm02.stdout:4/753: dwrite d1/d41/d5e/d78/d1a/fad [4194304,4194304] 0 2026-03-10T10:19:52.213 INFO:tasks.workunit.client.0.vm02.stdout:9/608: creat da/d3c/d4c/fbf x:0 0 0 2026-03-10T10:19:52.218 INFO:tasks.workunit.client.0.vm02.stdout:1/668: mkdir d4/dc3/dd4 0 2026-03-10T10:19:52.234 INFO:tasks.workunit.client.0.vm02.stdout:1/669: dread d4/da/d27/f35 [0,4194304] 0 2026-03-10T10:19:52.234 INFO:tasks.workunit.client.0.vm02.stdout:1/670: stat d4/da/d1a/d47/cc0 0 2026-03-10T10:19:52.235 INFO:tasks.workunit.client.0.vm02.stdout:1/671: chown d4/da/d27/d38/d3c/c48 4877023 1 2026-03-10T10:19:52.244 INFO:tasks.workunit.client.0.vm02.stdout:2/640: creat d0/d1a/d49/d5e/d65/dc4/fdc x:0 0 0 2026-03-10T10:19:52.245 INFO:tasks.workunit.client.0.vm02.stdout:2/641: dread - d0/dd4/fd7 zero size 2026-03-10T10:19:52.251 INFO:tasks.workunit.client.0.vm02.stdout:8/606: 
readlink d1/d1c/l41 0 2026-03-10T10:19:52.252 INFO:tasks.workunit.client.0.vm02.stdout:8/607: truncate d1/d1c/d43/f52 868641 0 2026-03-10T10:19:52.262 INFO:tasks.workunit.client.0.vm02.stdout:5/764: truncate d1/db/d11/d62/fe8 571050 0 2026-03-10T10:19:52.271 INFO:tasks.workunit.client.0.vm02.stdout:0/673: creat d9/d34/d3d/d65/d89/dd3/da8/fd7 x:0 0 0 2026-03-10T10:19:52.275 INFO:tasks.workunit.client.0.vm02.stdout:7/634: mknod d1/d1b/d49/cc3 0 2026-03-10T10:19:52.292 INFO:tasks.workunit.client.0.vm02.stdout:4/754: unlink d1/d32/f69 0 2026-03-10T10:19:52.302 INFO:tasks.workunit.client.0.vm02.stdout:9/609: rename da/d3c/d4c/f33 to da/d3c/fc0 0 2026-03-10T10:19:52.316 INFO:tasks.workunit.client.0.vm02.stdout:6/613: creat d0/d8/d29/fc4 x:0 0 0 2026-03-10T10:19:52.318 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.317+0000 7f24558cf700 1 -- 192.168.123.102:0/3861993960 --> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f243c000bf0 con 0x7f2438040140 2026-03-10T10:19:52.320 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.319+0000 7f24548cd700 1 -- 192.168.123.102:0/3861993960 <== mgr.14674 v2:192.168.123.105:6828/1021252581 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+318 (secure 0 0 0) 0x7f243c000bf0 con 0x7f2438040140 2026-03-10T10:19:52.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.322+0000 7f24367fc700 1 -- 192.168.123.102:0/3861993960 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f2438040140 msgr2=0x7f2438042600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:52.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.322+0000 7f24367fc700 1 --2- 192.168.123.102:0/3861993960 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f2438040140 0x7f2438042600 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 
crypto rx=0x7f244800a7a0 tx=0x7f2448008fe0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.323+0000 7f24367fc700 1 -- 192.168.123.102:0/3861993960 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2450072b50 msgr2=0x7f2450082fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:52.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.323+0000 7f24367fc700 1 --2- 192.168.123.102:0/3861993960 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2450072b50 0x7f2450082fd0 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f244000c8a0 tx=0x7f244000cbb0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.326+0000 7f24367fc700 1 -- 192.168.123.102:0/3861993960 shutdown_connections 2026-03-10T10:19:52.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.326+0000 7f24367fc700 1 --2- 192.168.123.102:0/3861993960 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f2438040140 0x7f2438042600 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.326+0000 7f24367fc700 1 --2- 192.168.123.102:0/3861993960 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2450072b50 0x7f2450082fd0 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.326+0000 7f24367fc700 1 --2- 192.168.123.102:0/3861993960 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2450083510 0x7f24501b3080 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.326+0000 7f24367fc700 1 -- 192.168.123.102:0/3861993960 >> 
192.168.123.102:0/3861993960 conn(0x7f245006dae0 msgr2=0x7f245006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:52.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.326+0000 7f24367fc700 1 -- 192.168.123.102:0/3861993960 shutdown_connections 2026-03-10T10:19:52.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.326+0000 7f24367fc700 1 -- 192.168.123.102:0/3861993960 wait complete. 2026-03-10T10:19:52.332 INFO:tasks.workunit.client.0.vm02.stdout:2/642: unlink d0/d1a/d49/l96 0 2026-03-10T10:19:52.337 INFO:tasks.workunit.client.0.vm02.stdout:5/765: fsync d1/db/d11/d84/d40/f66 0 2026-03-10T10:19:52.339 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:19:52.344 INFO:tasks.workunit.client.1.vm05.stdout:5/641: creat da/d9a/fda x:0 0 0 2026-03-10T10:19:52.352 INFO:tasks.workunit.client.0.vm02.stdout:7/635: creat d1/d1b/d49/fc4 x:0 0 0 2026-03-10T10:19:52.355 INFO:tasks.workunit.client.0.vm02.stdout:4/755: truncate d1/d75/ddd/f7d 1630946 0 2026-03-10T10:19:52.361 INFO:tasks.workunit.client.1.vm05.stdout:9/568: truncate d0/d1/d13/de/d21/f76 658996 0 2026-03-10T10:19:52.367 INFO:tasks.workunit.client.0.vm02.stdout:7/636: dread d1/dc/d16/faa [0,4194304] 0 2026-03-10T10:19:52.374 INFO:tasks.workunit.client.1.vm05.stdout:8/574: creat d7/d14/d15/da7/faf x:0 0 0 2026-03-10T10:19:52.375 INFO:tasks.workunit.client.1.vm05.stdout:8/575: chown d7/d14/d24/d3f/d4f/c5e 725188123 1 2026-03-10T10:19:52.375 INFO:tasks.workunit.client.1.vm05.stdout:8/576: write d7/d2f/f7f [89059,110138] 0 2026-03-10T10:19:52.390 INFO:tasks.workunit.client.0.vm02.stdout:6/614: unlink d0/l3 0 2026-03-10T10:19:52.393 INFO:tasks.workunit.client.1.vm05.stdout:2/557: write db/f2e [3659677,39494] 0 2026-03-10T10:19:52.398 INFO:tasks.workunit.client.1.vm05.stdout:1/703: rename d4/d20/l48 to d4/lcd 0 2026-03-10T10:19:52.407 INFO:tasks.workunit.client.0.vm02.stdout:2/643: creat d0/dd4/fdd x:0 0 0 2026-03-10T10:19:52.432 
INFO:tasks.workunit.client.0.vm02.stdout:4/756: creat d1/d41/d5e/d78/d7f/ff5 x:0 0 0 2026-03-10T10:19:52.432 INFO:tasks.workunit.client.1.vm05.stdout:3/633: dwrite dd/d15/d24/d2c/f3c [0,4194304] 0 2026-03-10T10:19:52.432 INFO:tasks.workunit.client.0.vm02.stdout:4/757: chown d1/d10/db/f16 10 1 2026-03-10T10:19:52.444 INFO:tasks.workunit.client.1.vm05.stdout:5/642: chown da/d63/cb0 23207 1 2026-03-10T10:19:52.458 INFO:tasks.workunit.client.0.vm02.stdout:8/608: rename d1/d1c/d23/d25/c3c to d1/d1c/d43/d5b/d88/cb6 0 2026-03-10T10:19:52.460 INFO:tasks.workunit.client.1.vm05.stdout:6/624: getdents dd/d36/d3f/d12/d44/d30 0 2026-03-10T10:19:52.464 INFO:tasks.workunit.client.1.vm05.stdout:5/643: sync 2026-03-10T10:19:52.470 INFO:tasks.workunit.client.1.vm05.stdout:0/617: symlink d1/d2/d39/ld5 0 2026-03-10T10:19:52.473 INFO:tasks.workunit.client.0.vm02.stdout:3/640: link d1/d6/c71 d1/d8/d21/d73/d78/d79/cd1 0 2026-03-10T10:19:52.477 INFO:tasks.workunit.client.0.vm02.stdout:7/637: dwrite d1/dc/d55/f8b [4194304,4194304] 0 2026-03-10T10:19:52.488 INFO:tasks.workunit.client.0.vm02.stdout:6/615: creat d0/db9/fc5 x:0 0 0 2026-03-10T10:19:52.493 INFO:tasks.workunit.client.1.vm05.stdout:8/577: mkdir d7/d14/d24/d3f/d6a/db0 0 2026-03-10T10:19:52.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.497+0000 7f5a7a647700 1 -- 192.168.123.102:0/2281675702 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a6c0968f0 msgr2=0x7f5a6c098ce0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.497+0000 7f5a7a647700 1 --2- 192.168.123.102:0/2281675702 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a6c0968f0 0x7f5a6c098ce0 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f5a7404ed60 tx=0x7f5a7406a320 comp rx=0 tx=0).stop 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.497+0000 7f5a7a647700 1 -- 
192.168.123.102:0/2281675702 shutdown_connections 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.497+0000 7f5a7a647700 1 --2- 192.168.123.102:0/2281675702 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a6c099220 0x7f5a6c09b610 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.497+0000 7f5a7a647700 1 --2- 192.168.123.102:0/2281675702 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a6c0968f0 0x7f5a6c098ce0 secure :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f5a7404ed60 tx=0x7f5a7406a320 comp rx=0 tx=0).stop 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.497+0000 7f5a7a647700 1 -- 192.168.123.102:0/2281675702 >> 192.168.123.102:0/2281675702 conn(0x7f5a6c090260 msgr2=0x7f5a6c0926a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.497+0000 7f5a7a647700 1 -- 192.168.123.102:0/2281675702 shutdown_connections 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.497+0000 7f5a7a647700 1 -- 192.168.123.102:0/2281675702 wait complete. 
2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.498+0000 7f5a7a647700 1 Processor -- start 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.498+0000 7f5a7a647700 1 -- start start 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.498+0000 7f5a7a647700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a6c099220 0x7f5a6c132190 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.498+0000 7f5a7a647700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a6c1326d0 0x7f5a6c137740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.498+0000 7f5a7a647700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5a6c132be0 con 0x7f5a6c099220 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.498+0000 7f5a7a647700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5a6c132d50 con 0x7f5a6c1326d0 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.499+0000 7f5a737fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a6c1326d0 0x7f5a6c137740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.500+0000 7f5a73fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a6c099220 0x7f5a6c132190 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.500+0000 7f5a737fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a6c1326d0 0x7f5a6c137740 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:40524/0 (socket says 192.168.123.102:40524) 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.500+0000 7f5a737fe700 1 -- 192.168.123.102:0/1212409693 learned_addr learned my addr 192.168.123.102:0/1212409693 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.500+0000 7f5a73fff700 1 -- 192.168.123.102:0/1212409693 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a6c1326d0 msgr2=0x7f5a6c137740 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:52.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.500+0000 7f5a73fff700 1 --2- 192.168.123.102:0/1212409693 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a6c1326d0 0x7f5a6c137740 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.500+0000 7f5a73fff700 1 -- 192.168.123.102:0/1212409693 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5a74077040 con 0x7f5a6c099220 2026-03-10T10:19:52.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.500+0000 7f5a73fff700 1 --2- 192.168.123.102:0/1212409693 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a6c099220 0x7f5a6c132190 secure :-1 s=READY pgs=322 cs=0 l=1 rev1=1 crypto rx=0x7f5a6c006270 tx=0x7f5a740678d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:52.506 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.501+0000 7f5a717fa700 1 -- 192.168.123.102:0/1212409693 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5a74069a90 con 0x7f5a6c099220 2026-03-10T10:19:52.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.501+0000 7f5a717fa700 1 -- 192.168.123.102:0/1212409693 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5a74075050 con 0x7f5a6c099220 2026-03-10T10:19:52.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.501+0000 7f5a717fa700 1 -- 192.168.123.102:0/1212409693 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5a74087610 con 0x7f5a6c099220 2026-03-10T10:19:52.506 INFO:tasks.workunit.client.0.vm02.stdout:2/644: stat d0/d1a/d24/dc6/la4 0 2026-03-10T10:19:52.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.503+0000 7f5a7a647700 1 -- 192.168.123.102:0/1212409693 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5a6c137c80 con 0x7f5a6c099220 2026-03-10T10:19:52.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.503+0000 7f5a7a647700 1 -- 192.168.123.102:0/1212409693 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5a6c1380f0 con 0x7f5a6c099220 2026-03-10T10:19:52.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.503+0000 7f5a7a647700 1 -- 192.168.123.102:0/1212409693 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5a6c09e960 con 0x7f5a6c099220 2026-03-10T10:19:52.507 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.506+0000 7f5a717fa700 1 -- 192.168.123.102:0/1212409693 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 24) v1 ==== 50383+0+0 (secure 0 0 0) 0x7f5a74085040 con 0x7f5a6c099220 
2026-03-10T10:19:52.507 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.506+0000 7f5a717fa700 1 --2- 192.168.123.102:0/1212409693 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f5a6403dd20 0x7f5a640401e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:52.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.507+0000 7f5a717fa700 1 -- 192.168.123.102:0/1212409693 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 5534+0+0 (secure 0 0 0) 0x7f5a7408f3b0 con 0x7f5a6c099220 2026-03-10T10:19:52.509 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.509+0000 7f5a737fe700 1 --2- 192.168.123.102:0/1212409693 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f5a6403dd20 0x7f5a640401e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:52.516 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.509+0000 7f5a717fa700 1 -- 192.168.123.102:0/1212409693 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7f5a7407c8d0 con 0x7f5a6c099220 2026-03-10T10:19:52.517 INFO:tasks.workunit.client.1.vm05.stdout:7/636: write d5/d1d/d29/f3a [1101387,8545] 0 2026-03-10T10:19:52.521 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.521+0000 7f5a737fe700 1 --2- 192.168.123.102:0/1212409693 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f5a6403dd20 0x7f5a640401e0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f5a68005950 tx=0x7f5a68009500 comp rx=0 tx=0).ready entity=mgr.14674 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:52.524 INFO:tasks.workunit.client.1.vm05.stdout:4/500: rename d1/d31/f6f to d1/d64/fa1 0 2026-03-10T10:19:52.524 
INFO:tasks.workunit.client.1.vm05.stdout:4/501: stat d1/d31/dc/c6b 0 2026-03-10T10:19:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:52 vm05.local ceph-mon[59051]: pgmap v7: 65 pgs: 65 active+clean; 2.5 GiB data, 8.8 GiB used, 111 GiB / 120 GiB avail; 28 MiB/s rd, 74 MiB/s wr, 187 op/s 2026-03-10T10:19:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:52 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:52 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:52 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:19:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:52 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:52 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:19:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:52 vm05.local ceph-mon[59051]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T10:19:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:52 vm05.local ceph-mon[59051]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T10:19:52.545 INFO:tasks.workunit.client.1.vm05.stdout:1/704: dread d4/dd/f64 [0,4194304] 0 2026-03-10T10:19:52.563 INFO:tasks.workunit.client.1.vm05.stdout:6/625: unlink dd/d36/d3f/c92 0 2026-03-10T10:19:52.564 INFO:tasks.workunit.client.0.vm02.stdout:4/758: unlink 
d1/d52/d53/f5b 0 2026-03-10T10:19:52.574 INFO:tasks.workunit.client.0.vm02.stdout:9/610: rename da/d3c/d4c/d38/f9e to da/d3c/d4c/d2c/d34/d35/fc1 0 2026-03-10T10:19:52.576 INFO:tasks.workunit.client.0.vm02.stdout:8/609: creat d1/d1c/d43/d5b/d88/dac/fb7 x:0 0 0 2026-03-10T10:19:52.581 INFO:tasks.workunit.client.1.vm05.stdout:9/569: symlink d0/df/d74/d8c/d8f/lc1 0 2026-03-10T10:19:52.583 INFO:tasks.workunit.client.0.vm02.stdout:3/641: creat d1/d8/d86/da2/fd2 x:0 0 0 2026-03-10T10:19:52.585 INFO:tasks.workunit.client.1.vm05.stdout:8/578: mknod d7/d14/cb1 0 2026-03-10T10:19:52.603 INFO:tasks.workunit.client.1.vm05.stdout:7/637: fsync d5/d1d/d29/d3e/d8c/d96/fa6 0 2026-03-10T10:19:52.610 INFO:tasks.workunit.client.1.vm05.stdout:5/644: write da/db/d26/d70/f7c [288217,58531] 0 2026-03-10T10:19:52.614 INFO:tasks.workunit.client.1.vm05.stdout:2/558: rename db/d1c/d40/f5f to db/d2d/d5e/fae 0 2026-03-10T10:19:52.621 INFO:tasks.workunit.client.0.vm02.stdout:1/672: link d4/da/d27/d38/d3c/l5c d4/da/d1a/d47/ld5 0 2026-03-10T10:19:52.654 INFO:tasks.workunit.client.0.vm02.stdout:0/674: getdents d9/d18/d1a/d22/d24/d51 0 2026-03-10T10:19:52.655 INFO:tasks.workunit.client.0.vm02.stdout:7/638: write d1/d1b/f61 [6864352,15169] 0 2026-03-10T10:19:52.671 INFO:tasks.workunit.client.0.vm02.stdout:0/675: dwrite d9/d34/d3d/d65/d89/dd3/da8/fd7 [0,4194304] 0 2026-03-10T10:19:52.673 INFO:tasks.workunit.client.1.vm05.stdout:8/579: symlink d7/lb2 0 2026-03-10T10:19:52.676 INFO:tasks.workunit.client.1.vm05.stdout:4/502: write d1/d31/f34 [2508534,85352] 0 2026-03-10T10:19:52.677 INFO:tasks.workunit.client.0.vm02.stdout:5/766: rename d1/db/d11/d13/d28/d37/f76 to d1/db/d11/d16/d79/d85/d93/f107 0 2026-03-10T10:19:52.682 INFO:tasks.workunit.client.0.vm02.stdout:4/759: dwrite d1/d32/fc4 [0,4194304] 0 2026-03-10T10:19:52.688 INFO:tasks.workunit.client.0.vm02.stdout:2/645: write d0/d10/f46 [4720532,125787] 0 2026-03-10T10:19:52.692 INFO:tasks.workunit.client.0.vm02.stdout:2/646: readlink d0/d10/l79 0 
2026-03-10T10:19:52.710 INFO:tasks.workunit.client.0.vm02.stdout:2/647: chown d0/d10/f5f 0 1 2026-03-10T10:19:52.719 INFO:tasks.workunit.client.1.vm05.stdout:7/638: rmdir d5 39 2026-03-10T10:19:52.728 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.728+0000 7f5a7a647700 1 -- 192.168.123.102:0/1212409693 --> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f5a6c002900 con 0x7f5a6403dd20 2026-03-10T10:19:52.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.732+0000 7f5a717fa700 1 -- 192.168.123.102:0/1212409693 <== mgr.14674 v2:192.168.123.105:6828/1021252581 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+318 (secure 0 0 0) 0x7f5a6c002900 con 0x7f5a6403dd20 2026-03-10T10:19:52.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.736+0000 7f5a5effd700 1 -- 192.168.123.102:0/1212409693 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f5a6403dd20 msgr2=0x7f5a640401e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:52.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.736+0000 7f5a5effd700 1 --2- 192.168.123.102:0/1212409693 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f5a6403dd20 0x7f5a640401e0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f5a68005950 tx=0x7f5a68009500 comp rx=0 tx=0).stop 2026-03-10T10:19:52.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.736+0000 7f5a5effd700 1 -- 192.168.123.102:0/1212409693 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a6c099220 msgr2=0x7f5a6c132190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:52.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.736+0000 7f5a5effd700 1 --2- 192.168.123.102:0/1212409693 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7f5a6c099220 0x7f5a6c132190 secure :-1 s=READY pgs=322 cs=0 l=1 rev1=1 crypto rx=0x7f5a6c006270 tx=0x7f5a740678d0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.736+0000 7f5a5effd700 1 -- 192.168.123.102:0/1212409693 shutdown_connections 2026-03-10T10:19:52.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.736+0000 7f5a5effd700 1 --2- 192.168.123.102:0/1212409693 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f5a6403dd20 0x7f5a640401e0 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.736+0000 7f5a5effd700 1 --2- 192.168.123.102:0/1212409693 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a6c099220 0x7f5a6c132190 unknown :-1 s=CLOSED pgs=322 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.736+0000 7f5a5effd700 1 --2- 192.168.123.102:0/1212409693 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a6c1326d0 0x7f5a6c137740 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.736+0000 7f5a5effd700 1 -- 192.168.123.102:0/1212409693 >> 192.168.123.102:0/1212409693 conn(0x7f5a6c090260 msgr2=0x7f5a6c0926a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:52.738 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.738+0000 7f5a5effd700 1 -- 192.168.123.102:0/1212409693 shutdown_connections 2026-03-10T10:19:52.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.740+0000 7f5a5effd700 1 -- 192.168.123.102:0/1212409693 wait complete. 
2026-03-10T10:19:52.759 INFO:tasks.workunit.client.1.vm05.stdout:2/559: fdatasync db/d28/f30 0 2026-03-10T10:19:52.776 INFO:tasks.workunit.client.0.vm02.stdout:1/673: mkdir d4/dc3/dd6 0 2026-03-10T10:19:52.786 INFO:tasks.workunit.client.1.vm05.stdout:0/618: creat d1/d2/d9/d31/d13/da2/fd6 x:0 0 0 2026-03-10T10:19:52.788 INFO:tasks.workunit.client.0.vm02.stdout:3/642: dwrite d1/d8/d21/f5e [0,4194304] 0 2026-03-10T10:19:52.798 INFO:tasks.workunit.client.0.vm02.stdout:7/639: symlink d1/dc/d10/d38/lc5 0 2026-03-10T10:19:52.802 INFO:tasks.workunit.client.0.vm02.stdout:9/611: mkdir da/d3c/d4c/d2c/d34/dc2 0 2026-03-10T10:19:52.811 INFO:tasks.workunit.client.0.vm02.stdout:5/767: creat d1/db/d11/d13/d28/d37/d3d/da3/f108 x:0 0 0 2026-03-10T10:19:52.814 INFO:tasks.workunit.client.1.vm05.stdout:2/560: creat db/d28/d4f/d59/da4/faf x:0 0 0 2026-03-10T10:19:52.814 INFO:tasks.workunit.client.1.vm05.stdout:2/561: fdatasync db/d28/d4f/d8b/fa8 0 2026-03-10T10:19:52.814 INFO:tasks.workunit.client.1.vm05.stdout:1/705: link d4/df/d1c/l59 d4/df/d1c/d92/lce 0 2026-03-10T10:19:52.815 INFO:tasks.workunit.client.0.vm02.stdout:5/768: dread - d1/db/d11/d84/d40/d4f/d5f/ffc zero size 2026-03-10T10:19:52.817 INFO:tasks.workunit.client.1.vm05.stdout:3/634: link dd/d20/f50 dd/d15/fdf 0 2026-03-10T10:19:52.820 INFO:tasks.workunit.client.1.vm05.stdout:0/619: truncate d1/d2/d9/d50/f93 739457 0 2026-03-10T10:19:52.831 INFO:tasks.workunit.client.1.vm05.stdout:3/635: dwrite dd/d39/d5c/fb9 [0,4194304] 0 2026-03-10T10:19:52.842 INFO:tasks.workunit.client.1.vm05.stdout:7/639: mknod d5/d1d/d29/d3e/d8c/d82/ccc 0 2026-03-10T10:19:52.842 INFO:tasks.workunit.client.1.vm05.stdout:7/640: fdatasync d5/fe 0 2026-03-10T10:19:52.855 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.855+0000 7f52ae6ac700 1 -- 192.168.123.102:0/1089256447 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f52a8072b50 msgr2=0x7f52a8072f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-10T10:19:52.855 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.855+0000 7f52ae6ac700 1 --2- 192.168.123.102:0/1089256447 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f52a8072b50 0x7f52a8072f70 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f52a000d3e0 tx=0x7f52a000d6f0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.855+0000 7f52ae6ac700 1 -- 192.168.123.102:0/1089256447 shutdown_connections 2026-03-10T10:19:52.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.855+0000 7f52ae6ac700 1 --2- 192.168.123.102:0/1089256447 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52a8075a40 0x7f52a8077ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.855+0000 7f52ae6ac700 1 --2- 192.168.123.102:0/1089256447 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f52a8072b50 0x7f52a8072f70 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.855+0000 7f52ae6ac700 1 -- 192.168.123.102:0/1089256447 >> 192.168.123.102:0/1089256447 conn(0x7f52a806dae0 msgr2=0x7f52a806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:52.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.855+0000 7f52ae6ac700 1 -- 192.168.123.102:0/1089256447 shutdown_connections 2026-03-10T10:19:52.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.855+0000 7f52ae6ac700 1 -- 192.168.123.102:0/1089256447 wait complete. 
2026-03-10T10:19:52.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.856+0000 7f52ae6ac700 1 Processor -- start 2026-03-10T10:19:52.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.856+0000 7f52ae6ac700 1 -- start start 2026-03-10T10:19:52.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.856+0000 7f52ae6ac700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f52a8075a40 0x7f52a8082ff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:52.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.856+0000 7f52ae6ac700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52a8083530 0x7f52a81b3080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:52.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.856+0000 7f52ae6ac700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f52a8083a40 con 0x7f52a8083530 2026-03-10T10:19:52.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.856+0000 7f52ae6ac700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f52a8083bb0 con 0x7f52a8075a40 2026-03-10T10:19:52.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.856+0000 7f52a77fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52a8083530 0x7f52a81b3080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:52.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.856+0000 7f52a7fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f52a8075a40 0x7f52a8082ff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:19:52.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.856+0000 7f52a77fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52a8083530 0x7f52a81b3080 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:47444/0 (socket says 192.168.123.102:47444) 2026-03-10T10:19:52.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.856+0000 7f52a77fe700 1 -- 192.168.123.102:0/2168910441 learned_addr learned my addr 192.168.123.102:0/2168910441 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:19:52.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.857+0000 7f52a77fe700 1 -- 192.168.123.102:0/2168910441 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f52a8075a40 msgr2=0x7f52a8082ff0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:52.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.857+0000 7f52a77fe700 1 --2- 192.168.123.102:0/2168910441 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f52a8075a40 0x7f52a8082ff0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:52.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.857+0000 7f52a77fe700 1 -- 192.168.123.102:0/2168910441 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f52a000d090 con 0x7f52a8083530 2026-03-10T10:19:52.862 INFO:tasks.workunit.client.1.vm05.stdout:2/562: dwrite db/d28/f7d [0,4194304] 0 2026-03-10T10:19:52.863 INFO:tasks.workunit.client.1.vm05.stdout:2/563: chown db/d28/f30 8 1 2026-03-10T10:19:52.864 INFO:tasks.workunit.client.1.vm05.stdout:8/580: creat d7/d14/d24/d3f/fb3 x:0 0 0 2026-03-10T10:19:52.865 INFO:tasks.workunit.client.1.vm05.stdout:8/581: write d7/d14/d3a/d49/d65/f83 [963303,80527] 0 2026-03-10T10:19:52.865 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.864+0000 7f52a77fe700 1 --2- 192.168.123.102:0/2168910441 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52a8083530 0x7f52a81b3080 secure :-1 s=READY pgs=323 cs=0 l=1 rev1=1 crypto rx=0x7f5298008ca0 tx=0x7f529800e410 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:52.865 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.864+0000 7f52a57fa700 1 -- 192.168.123.102:0/2168910441 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f529800f040 con 0x7f52a8083530 2026-03-10T10:19:52.867 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.864+0000 7f52ae6ac700 1 -- 192.168.123.102:0/2168910441 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f52a81b3620 con 0x7f52a8083530 2026-03-10T10:19:52.867 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.864+0000 7f52ae6ac700 1 -- 192.168.123.102:0/2168910441 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f52a81b3b70 con 0x7f52a8083530 2026-03-10T10:19:52.867 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.866+0000 7f52a57fa700 1 -- 192.168.123.102:0/2168910441 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f529801b070 con 0x7f52a8083530 2026-03-10T10:19:52.867 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.866+0000 7f52a57fa700 1 -- 192.168.123.102:0/2168910441 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f529802bc40 con 0x7f52a8083530 2026-03-10T10:19:52.867 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.866+0000 7f52a57fa700 1 -- 192.168.123.102:0/2168910441 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 24) v1 ==== 50383+0+0 (secure 0 0 0) 0x7f5298021430 con 0x7f52a8083530 
2026-03-10T10:19:52.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.868+0000 7f52a57fa700 1 --2- 192.168.123.102:0/2168910441 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f529003df80 0x7f5290040440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:52.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.868+0000 7f52a57fa700 1 -- 192.168.123.102:0/2168910441 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 5534+0+0 (secure 0 0 0) 0x7f529801d070 con 0x7f52a8083530 2026-03-10T10:19:52.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.868+0000 7f52a7fff700 1 --2- 192.168.123.102:0/2168910441 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f529003df80 0x7f5290040440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:52.870 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.870+0000 7f52a7fff700 1 --2- 192.168.123.102:0/2168910441 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f529003df80 0x7f5290040440 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f52a00095a0 tx=0x7f52a000da40 comp rx=0 tx=0).ready entity=mgr.14674 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:52.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.870+0000 7f52ae6ac700 1 -- 192.168.123.102:0/2168910441 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5294005320 con 0x7f52a8083530 2026-03-10T10:19:52.879 INFO:tasks.workunit.client.1.vm05.stdout:4/503: link d1/d31/d76/l93 d1/d31/d4b/la2 0 2026-03-10T10:19:52.883 INFO:tasks.workunit.client.1.vm05.stdout:4/504: readlink d1/d3/l7b 0 2026-03-10T10:19:52.890 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:52.887+0000 7f52a57fa700 1 -- 192.168.123.102:0/2168910441 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7f52980282e0 con 0x7f52a8083530 2026-03-10T10:19:52.905 INFO:tasks.workunit.client.1.vm05.stdout:6/626: rename dd/d36/d3f/c17 to dd/d36/d3f/d12/d44/cca 0 2026-03-10T10:19:52.920 INFO:tasks.workunit.client.1.vm05.stdout:0/620: mkdir d1/dd7 0 2026-03-10T10:19:52.925 INFO:tasks.workunit.client.1.vm05.stdout:7/641: dwrite d5/d1d/d29/d3e/d8c/d96/fa6 [4194304,4194304] 0 2026-03-10T10:19:52.925 INFO:tasks.workunit.client.1.vm05.stdout:2/564: truncate db/d28/d4f/d59/f7e 103901 0 2026-03-10T10:19:52.934 INFO:tasks.workunit.client.1.vm05.stdout:2/565: dread db/f23 [0,4194304] 0 2026-03-10T10:19:52.940 INFO:tasks.workunit.client.0.vm02.stdout:8/610: creat d1/d1c/d43/d5b/d88/dac/d83/d9f/fb8 x:0 0 0 2026-03-10T10:19:52.942 INFO:tasks.workunit.client.0.vm02.stdout:8/611: chown d1/d1c/d43/d6a/f82 3861 1 2026-03-10T10:19:52.942 INFO:tasks.workunit.client.0.vm02.stdout:8/612: chown d1/d1c/d43/d5b/d88/dac/fb7 70 1 2026-03-10T10:19:52.958 INFO:tasks.workunit.client.0.vm02.stdout:2/648: fsync d0/d1a/d49/d5e/d65/dc4/fd6 0 2026-03-10T10:19:52.962 INFO:tasks.workunit.client.1.vm05.stdout:9/570: rename d0/d1/d13/de/d21/l6c to d0/d1/d57/lc2 0 2026-03-10T10:19:52.967 INFO:tasks.workunit.client.1.vm05.stdout:0/621: dread - d1/d2/d9/d31/d12/d41/f6d zero size 2026-03-10T10:19:52.970 INFO:tasks.workunit.client.0.vm02.stdout:3/643: mkdir d1/d20/d52/dd3 0 2026-03-10T10:19:52.978 INFO:tasks.workunit.client.1.vm05.stdout:2/566: creat db/d28/d4f/fb0 x:0 0 0 2026-03-10T10:19:52.988 INFO:tasks.workunit.client.0.vm02.stdout:0/676: fsync d9/d34/d3d/f94 0 2026-03-10T10:19:52.988 INFO:tasks.workunit.client.0.vm02.stdout:0/677: read - d9/d34/d3d/d65/d89/dd3/f66 zero size 2026-03-10T10:19:52.988 INFO:tasks.workunit.client.1.vm05.stdout:4/505: mknod 
d1/d31/dc/ca3 0 2026-03-10T10:19:52.988 INFO:tasks.workunit.client.1.vm05.stdout:5/645: rename da/d96/fc9 to da/db/d26/d35/d38/fdb 0 2026-03-10T10:19:52.989 INFO:tasks.workunit.client.1.vm05.stdout:9/571: dread - d0/d1/d13/de/d21/fab zero size 2026-03-10T10:19:52.994 INFO:tasks.workunit.client.1.vm05.stdout:1/706: link d4/l86 d4/d79/d83/dc5/lcf 0 2026-03-10T10:19:52.995 INFO:tasks.workunit.client.0.vm02.stdout:1/674: symlink d4/d2c/d91/ld7 0 2026-03-10T10:19:52.995 INFO:tasks.workunit.client.0.vm02.stdout:8/613: sync 2026-03-10T10:19:53.006 INFO:tasks.workunit.client.0.vm02.stdout:7/640: write d1/dc/f2e [1361409,59121] 0 2026-03-10T10:19:53.007 INFO:tasks.workunit.client.1.vm05.stdout:8/582: creat d7/d2f/fb4 x:0 0 0 2026-03-10T10:19:53.008 INFO:tasks.workunit.client.1.vm05.stdout:7/642: creat d5/d1d/d29/d3e/d8c/d82/d90/d9a/fcd x:0 0 0 2026-03-10T10:19:53.008 INFO:tasks.workunit.client.1.vm05.stdout:8/583: read - d7/d14/d24/d3f/fab zero size 2026-03-10T10:19:53.008 INFO:tasks.workunit.client.0.vm02.stdout:9/612: symlink da/d3c/d4c/d38/d82/da3/lc3 0 2026-03-10T10:19:53.013 INFO:tasks.workunit.client.0.vm02.stdout:5/769: dwrite d1/db/d11/d84/d95/fd6 [4194304,4194304] 0 2026-03-10T10:19:53.022 INFO:tasks.workunit.client.1.vm05.stdout:3/636: rename dd/d20/d56/la6 to dd/d15/d1f/le0 0 2026-03-10T10:19:53.023 INFO:tasks.workunit.client.1.vm05.stdout:9/572: creat d0/df/d74/fc3 x:0 0 0 2026-03-10T10:19:53.024 INFO:tasks.workunit.client.1.vm05.stdout:6/627: link dd/d36/d3f/d12/d44/d30/la9 dd/d36/d3f/d12/d44/d30/d4a/lcb 0 2026-03-10T10:19:53.024 INFO:tasks.workunit.client.1.vm05.stdout:6/628: dread - dd/d36/d3f/d12/d44/d30/f8d zero size 2026-03-10T10:19:53.025 INFO:tasks.workunit.client.1.vm05.stdout:6/629: readlink dd/d36/d3f/d12/l1a 0 2026-03-10T10:19:53.029 INFO:tasks.workunit.client.1.vm05.stdout:0/622: creat d1/d2/d9/d31/d13/d15/d4e/d8a/fd8 x:0 0 0 2026-03-10T10:19:53.030 INFO:tasks.workunit.client.0.vm02.stdout:6/616: creat d0/d8/d29/d2f/d4b/da5/d6f/fc6 x:0 0 0 
2026-03-10T10:19:53.039 INFO:tasks.workunit.client.0.vm02.stdout:5/770: dread d1/db/f56 [0,4194304] 0 2026-03-10T10:19:53.044 INFO:tasks.workunit.client.1.vm05.stdout:7/643: dread d5/dd/f28 [0,4194304] 0 2026-03-10T10:19:53.046 INFO:tasks.workunit.client.0.vm02.stdout:3/644: dwrite d1/d8/d86/f9b [0,4194304] 0 2026-03-10T10:19:53.054 INFO:tasks.workunit.client.0.vm02.stdout:8/614: rename d1/d1c/d43/f4b to d1/d1c/d43/d5b/d88/fb9 0 2026-03-10T10:19:53.060 INFO:tasks.workunit.client.0.vm02.stdout:8/615: chown d1/d1c/d43/f7e 4110817 1 2026-03-10T10:19:53.063 INFO:tasks.workunit.client.0.vm02.stdout:8/616: read d1/f40 [1545615,130149] 0 2026-03-10T10:19:53.068 INFO:tasks.workunit.client.1.vm05.stdout:2/567: symlink db/d12/lb1 0 2026-03-10T10:19:53.068 INFO:tasks.workunit.client.1.vm05.stdout:4/506: mknod d1/d31/ca4 0 2026-03-10T10:19:53.068 INFO:tasks.workunit.client.1.vm05.stdout:5/646: rename da/d9a/faa to da/d9a/dc7/db4/fdc 0 2026-03-10T10:19:53.068 INFO:tasks.workunit.client.1.vm05.stdout:4/507: stat d1/d31/d76 0 2026-03-10T10:19:53.069 INFO:tasks.workunit.client.1.vm05.stdout:3/637: creat dd/d15/d24/d74/d88/fe1 x:0 0 0 2026-03-10T10:19:53.071 INFO:tasks.workunit.client.0.vm02.stdout:0/678: getdents d9/d34/d3d/d65/d89/dd3/da7/db9/dc9 0 2026-03-10T10:19:53.072 INFO:tasks.workunit.client.0.vm02.stdout:4/760: link d1/d10/cf4 d1/d41/d5e/d78/d1a/d49/d81/dc6/df2/cf6 0 2026-03-10T10:19:53.074 INFO:tasks.workunit.client.1.vm05.stdout:1/707: creat d4/d79/d83/dc5/dcb/fd0 x:0 0 0 2026-03-10T10:19:53.075 INFO:tasks.workunit.client.0.vm02.stdout:2/649: rmdir d0/d1a/d24/dc6/d9f 0 2026-03-10T10:19:53.075 INFO:tasks.workunit.client.1.vm05.stdout:6/630: dread dd/d36/d3f/d12/d44/d2a/f98 [0,4194304] 0 2026-03-10T10:19:53.076 INFO:tasks.workunit.client.1.vm05.stdout:0/623: read - d1/d2/d9/d31/d13/f9c zero size 2026-03-10T10:19:53.078 INFO:tasks.workunit.client.1.vm05.stdout:0/624: chown d1/d2/d9/d31/d13/d17/c3a 3882503 1 2026-03-10T10:19:53.083 
INFO:tasks.workunit.client.1.vm05.stdout:5/647: sync 2026-03-10T10:19:53.084 INFO:tasks.workunit.client.0.vm02.stdout:5/771: dread d1/db/d11/d84/d40/d4f/d5f/d6d/fb8 [0,4194304] 0 2026-03-10T10:19:53.090 INFO:tasks.workunit.client.1.vm05.stdout:7/644: mknod d5/d1d/d29/d3e/d8c/d82/d90/d9a/cce 0 2026-03-10T10:19:53.106 INFO:tasks.workunit.client.1.vm05.stdout:9/573: dwrite d0/d1/d13/f27 [0,4194304] 0 2026-03-10T10:19:53.109 INFO:tasks.workunit.client.0.vm02.stdout:6/617: write d0/d8/d8c/f75 [433920,112851] 0 2026-03-10T10:19:53.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:19:53.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:19:53.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:19:53.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:19:53.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:19:53.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:19:53.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:53.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 
2026-03-10T10:19:53.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:53.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:53.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:53.112 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: Standby manager daemon vm02.zmavgl started 2026-03-10T10:19:53.113 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm02.zmavgl/crt"}]: dispatch 2026-03-10T10:19:53.113 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T10:19:53.113 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm02.zmavgl/key"}]: dispatch 2026-03-10T10:19:53.113 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:53 vm02.local ceph-mon[50200]: from='mgr.? 
192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T10:19:53.126 INFO:tasks.workunit.client.0.vm02.stdout:0/679: mknod d9/d34/d3d/d65/da2/cd8 0 2026-03-10T10:19:53.127 INFO:tasks.workunit.client.0.vm02.stdout:4/761: mkdir d1/d52/d53/dda/df7 0 2026-03-10T10:19:53.139 INFO:tasks.workunit.client.0.vm02.stdout:5/772: mknod d1/db/d11/d16/c109 0 2026-03-10T10:19:53.140 INFO:tasks.workunit.client.0.vm02.stdout:5/773: chown d1/db/d11/d84/d40/c7c 62248474 1 2026-03-10T10:19:53.152 INFO:tasks.workunit.client.0.vm02.stdout:9/613: link da/d3c/d4c/d38/d82/da3/lc3 da/d3c/d4c/d75/lc4 0 2026-03-10T10:19:53.153 INFO:tasks.workunit.client.0.vm02.stdout:6/618: rename d0/d8/d29/d2f/d4b/da5/l72 to d0/d8/d29/d2f/d4b/da5/lc7 0 2026-03-10T10:19:53.154 INFO:tasks.workunit.client.0.vm02.stdout:6/619: read d0/d87/fa7 [1326358,99612] 0 2026-03-10T10:19:53.157 INFO:tasks.workunit.client.0.vm02.stdout:8/617: dwrite d1/d1c/d23/d25/f98 [0,4194304] 0 2026-03-10T10:19:53.157 INFO:tasks.workunit.client.0.vm02.stdout:7/641: dread d1/d1b/d8f/f5c [4194304,4194304] 0 2026-03-10T10:19:53.167 INFO:tasks.workunit.client.0.vm02.stdout:7/642: dwrite d1/d1b/f61 [4194304,4194304] 0 2026-03-10T10:19:53.176 INFO:tasks.workunit.client.0.vm02.stdout:1/675: dwrite d4/da/d27/d38/d80/f94 [0,4194304] 0 2026-03-10T10:19:53.186 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.180+0000 7f52ae6ac700 1 -- 192.168.123.102:0/2168910441 --> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f5294000bf0 con 0x7f529003df80 2026-03-10T10:19:53.186 INFO:tasks.workunit.client.0.vm02.stdout:0/680: creat d9/d34/d3d/d65/d89/fd9 x:0 0 0 2026-03-10T10:19:53.186 INFO:tasks.workunit.client.0.vm02.stdout:0/681: dread - d9/d18/d1a/d22/fd6 zero size 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.187+0000 
7f52a57fa700 1 -- 192.168.123.102:0/2168910441 <== mgr.14674 v2:192.168.123.105:6828/1021252581 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3552 (secure 0 0 0) 0x7f5294000bf0 con 0x7f529003df80 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (4m) 3s ago 5m 22.5M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (5m) 3s ago 5m 8446k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (4m) 5s ago 4m 11.0M - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (5m) 3s ago 5m 7415k - 18.2.1 5be31c24972a 51802fb57170 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (4m) 5s ago 4m 7407k - 18.2.1 5be31c24972a f275982dc269 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (4m) 3s ago 4m 82.1M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (2m) 3s ago 2m 14.3M - 18.2.1 5be31c24972a e97c369450c8 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (2m) 3s ago 2m 218M - 18.2.1 5be31c24972a 56b76ae59bcb 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (2m) 5s ago 2m 14.3M - 18.2.1 5be31c24972a 02b882918ab0 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (2m) 5s ago 2m 128M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:9283,8765,8443 
running (5m) 3s ago 5m 270M - 18.2.1 5be31c24972a 8bea583521d3 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (20s) 5s ago 4m 541M - 19.2.3-678-ge911bdeb 654f31e6858e e97b68181f5c 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (5m) 3s ago 5m 51.6M 2048M 18.2.1 5be31c24972a ab92d831cc1d 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (4m) 5s ago 4m 37.6M 2048M 18.2.1 5be31c24972a cea7d23f93a6 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (5m) 3s ago 5m 16.3M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (4m) 5s ago 4m 14.9M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (4m) 3s ago 4m 306M 4096M 18.2.1 5be31c24972a 9d7f135a3f3b 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (4m) 3s ago 4m 307M 4096M 18.2.1 5be31c24972a 1b0a42d8ac01 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (3m) 3s ago 3m 263M 4096M 18.2.1 5be31c24972a 567f579c058e 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (3m) 5s ago 3m 388M 4096M 18.2.1 5be31c24972a 80ac26035893 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (3m) 5s ago 3m 329M 4096M 18.2.1 5be31c24972a c8a0a41b6654 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (3m) 5s ago 3m 333M 4096M 18.2.1 5be31c24972a e9be055e12ba 2026-03-10T10:19:53.188 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (4m) 3s ago 4m 48.2M - 2.43.0 a07b618ecd1d a607fd039cb6 2026-03-10T10:19:53.191 INFO:tasks.workunit.client.0.vm02.stdout:2/650: creat d0/d1a/d24/dd3/fde x:0 0 0 
2026-03-10T10:19:53.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.195+0000 7f528effd700 1 -- 192.168.123.102:0/2168910441 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f529003df80 msgr2=0x7f5290040440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:53.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.195+0000 7f528effd700 1 --2- 192.168.123.102:0/2168910441 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f529003df80 0x7f5290040440 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f52a00095a0 tx=0x7f52a000da40 comp rx=0 tx=0).stop 2026-03-10T10:19:53.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.195+0000 7f528effd700 1 -- 192.168.123.102:0/2168910441 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52a8083530 msgr2=0x7f52a81b3080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:53.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.195+0000 7f528effd700 1 --2- 192.168.123.102:0/2168910441 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52a8083530 0x7f52a81b3080 secure :-1 s=READY pgs=323 cs=0 l=1 rev1=1 crypto rx=0x7f5298008ca0 tx=0x7f529800e410 comp rx=0 tx=0).stop 2026-03-10T10:19:53.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.195+0000 7f528effd700 1 -- 192.168.123.102:0/2168910441 shutdown_connections 2026-03-10T10:19:53.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.195+0000 7f528effd700 1 --2- 192.168.123.102:0/2168910441 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f529003df80 0x7f5290040440 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:53.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.195+0000 7f528effd700 1 --2- 192.168.123.102:0/2168910441 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f52a8075a40 0x7f52a8082ff0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:53.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.195+0000 7f528effd700 1 --2- 192.168.123.102:0/2168910441 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f52a8083530 0x7f52a81b3080 unknown :-1 s=CLOSED pgs=323 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:53.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.195+0000 7f528effd700 1 -- 192.168.123.102:0/2168910441 >> 192.168.123.102:0/2168910441 conn(0x7f52a806dae0 msgr2=0x7f52a806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:53.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.195+0000 7f528effd700 1 -- 192.168.123.102:0/2168910441 shutdown_connections 2026-03-10T10:19:53.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.195+0000 7f528effd700 1 -- 192.168.123.102:0/2168910441 wait complete. 
2026-03-10T10:19:53.205 INFO:tasks.workunit.client.1.vm05.stdout:8/584: getdents d7/d14/d15/d3b/da0 0 2026-03-10T10:19:53.206 INFO:tasks.workunit.client.1.vm05.stdout:8/585: chown d7/d2f/fb4 24132 1 2026-03-10T10:19:53.207 INFO:tasks.workunit.client.0.vm02.stdout:2/651: dwrite d0/dd4/fdd [0,4194304] 0 2026-03-10T10:19:53.217 INFO:tasks.workunit.client.0.vm02.stdout:3/645: link d1/d6/c71 d1/d20/db2/cd4 0 2026-03-10T10:19:53.251 INFO:tasks.workunit.client.1.vm05.stdout:2/568: fsync db/d12/f9c 0 2026-03-10T10:19:53.255 INFO:tasks.workunit.client.0.vm02.stdout:5/774: rename d1/db/f56 to d1/db/d11/d62/f10a 0 2026-03-10T10:19:53.299 INFO:tasks.workunit.client.1.vm05.stdout:1/708: fdatasync d4/df/d1c/d92/f9e 0 2026-03-10T10:19:53.300 INFO:tasks.workunit.client.1.vm05.stdout:1/709: readlink d4/d37/d4e/l5b 0 2026-03-10T10:19:53.334 INFO:tasks.workunit.client.0.vm02.stdout:7/643: symlink d1/d1b/d8f/dad/d7e/lc6 0 2026-03-10T10:19:53.336 INFO:tasks.workunit.client.0.vm02.stdout:7/644: chown d1/dc/d60/f89 220 1 2026-03-10T10:19:53.336 INFO:tasks.workunit.client.1.vm05.stdout:3/638: dwrite dd/d15/d24/f42 [0,4194304] 0 2026-03-10T10:19:53.337 INFO:tasks.workunit.client.0.vm02.stdout:7/645: stat d1/dc/d16/f95 0 2026-03-10T10:19:53.337 INFO:tasks.workunit.client.0.vm02.stdout:7/646: fdatasync d1/f80 0 2026-03-10T10:19:53.348 INFO:tasks.workunit.client.1.vm05.stdout:8/586: creat d7/fb5 x:0 0 0 2026-03-10T10:19:53.354 INFO:tasks.workunit.client.0.vm02.stdout:9/614: write da/d3c/d4c/f17 [6090286,61961] 0 2026-03-10T10:19:53.358 INFO:tasks.workunit.client.0.vm02.stdout:3/646: mknod d1/d8/d86/da2/cd5 0 2026-03-10T10:19:53.364 INFO:tasks.workunit.client.1.vm05.stdout:4/508: fdatasync d1/d64/fa1 0 2026-03-10T10:19:53.365 INFO:tasks.workunit.client.0.vm02.stdout:8/618: dwrite d1/d1c/d24/f8a [0,4194304] 0 2026-03-10T10:19:53.374 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.372+0000 7ff5922f1700 1 -- 192.168.123.102:0/426467354 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff58c0759e0 msgr2=0x7ff58c077e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:53.375 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.372+0000 7ff5922f1700 1 --2- 192.168.123.102:0/426467354 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff58c0759e0 0x7ff58c077e70 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7ff58400d3f0 tx=0x7ff58400d700 comp rx=0 tx=0).stop 2026-03-10T10:19:53.375 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.373+0000 7ff5922f1700 1 -- 192.168.123.102:0/426467354 shutdown_connections 2026-03-10T10:19:53.375 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.373+0000 7ff5922f1700 1 --2- 192.168.123.102:0/426467354 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff58c0759e0 0x7ff58c077e70 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:53.375 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.373+0000 7ff5922f1700 1 --2- 192.168.123.102:0/426467354 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff58c072af0 0x7ff58c072f10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:53.375 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.373+0000 7ff5922f1700 1 -- 192.168.123.102:0/426467354 >> 192.168.123.102:0/426467354 conn(0x7ff58c06dad0 msgr2=0x7ff58c06ff30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.373+0000 7ff5922f1700 1 -- 192.168.123.102:0/426467354 shutdown_connections 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.373+0000 7ff5922f1700 1 -- 192.168.123.102:0/426467354 wait complete. 
2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.373+0000 7ff5922f1700 1 Processor -- start 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.373+0000 7ff5922f1700 1 -- start start 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.373+0000 7ff5922f1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff58c072af0 0x7ff58c0830b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.373+0000 7ff5922f1700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff58c0835f0 0x7ff58c12e490 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.373+0000 7ff5922f1700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff58c083a70 con 0x7ff58c0835f0 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.373+0000 7ff5922f1700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff58c083be0 con 0x7ff58c072af0 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.374+0000 7ff58bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff58c072af0 0x7ff58c0830b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.374+0000 7ff58bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff58c072af0 0x7ff58c0830b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:40584/0 (socket says 192.168.123.102:40584) 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.374+0000 7ff58bfff700 1 -- 192.168.123.102:0/1865691008 learned_addr learned my addr 192.168.123.102:0/1865691008 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.374+0000 7ff58bfff700 1 -- 192.168.123.102:0/1865691008 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff58c0835f0 msgr2=0x7ff58c12e490 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.374+0000 7ff58bfff700 1 --2- 192.168.123.102:0/1865691008 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff58c0835f0 0x7ff58c12e490 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.374+0000 7ff58bfff700 1 -- 192.168.123.102:0/1865691008 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff584007ed0 con 0x7ff58c072af0 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.375+0000 7ff58bfff700 1 --2- 192.168.123.102:0/1865691008 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff58c072af0 0x7ff58c0830b0 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7ff57c00d8d0 tx=0x7ff57c00dc90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:53.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.375+0000 7ff5897fa700 1 -- 192.168.123.102:0/1865691008 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff57c009940 con 0x7ff58c072af0 2026-03-10T10:19:53.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.375+0000 7ff5922f1700 1 -- 
192.168.123.102:0/1865691008 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff58c12e9d0 con 0x7ff58c072af0 2026-03-10T10:19:53.385 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.375+0000 7ff5922f1700 1 -- 192.168.123.102:0/1865691008 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff58c12eed0 con 0x7ff58c072af0 2026-03-10T10:19:53.385 INFO:tasks.workunit.client.0.vm02.stdout:0/682: dread d9/f6c [0,4194304] 0 2026-03-10T10:19:53.385 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.381+0000 7ff5897fa700 1 -- 192.168.123.102:0/1865691008 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff57c010460 con 0x7ff58c072af0 2026-03-10T10:19:53.385 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.381+0000 7ff5897fa700 1 -- 192.168.123.102:0/1865691008 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff57c00f5d0 con 0x7ff58c072af0 2026-03-10T10:19:53.385 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.382+0000 7ff5897fa700 1 -- 192.168.123.102:0/1865691008 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 25) v1 ==== 95238+0+0 (secure 0 0 0) 0x7ff57c00f7e0 con 0x7ff58c072af0 2026-03-10T10:19:53.385 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.382+0000 7ff5897fa700 1 --2- 192.168.123.102:0/1865691008 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7ff574071ea0 0x7ff574074360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:53.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.386+0000 7ff5897fa700 1 -- 192.168.123.102:0/1865691008 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 5534+0+0 (secure 0 0 0) 0x7ff57c093820 con 0x7ff58c072af0 2026-03-10T10:19:53.387 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.386+0000 7ff572ffd700 1 -- 192.168.123.102:0/1865691008 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff578005320 con 0x7ff58c072af0 2026-03-10T10:19:53.389 INFO:tasks.workunit.client.1.vm05.stdout:7/645: dwrite d5/d1d/d20/d2d/fb0 [0,4194304] 0 2026-03-10T10:19:53.392 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.390+0000 7ff58b7fe700 1 --2- 192.168.123.102:0/1865691008 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7ff574071ea0 0x7ff574074360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:53.392 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.391+0000 7ff5897fa700 1 -- 192.168.123.102:0/1865691008 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7ff57c05bd40 con 0x7ff58c072af0 2026-03-10T10:19:53.392 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.391+0000 7ff58b7fe700 1 --2- 192.168.123.102:0/1865691008 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7ff574071ea0 0x7ff574074360 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7ff58400db80 tx=0x7ff584006040 comp rx=0 tx=0).ready entity=mgr.14674 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:53.407 INFO:tasks.workunit.client.1.vm05.stdout:1/710: truncate d4/d20/f31 2385738 0 2026-03-10T10:19:53.409 INFO:tasks.workunit.client.0.vm02.stdout:1/676: write d4/d2c/d53/f97 [333263,115240] 0 2026-03-10T10:19:53.410 INFO:tasks.workunit.client.0.vm02.stdout:1/677: chown d4/d2c/d53/f74 18 1 2026-03-10T10:19:53.422 INFO:tasks.workunit.client.1.vm05.stdout:2/569: dwrite db/d12/f31 [0,4194304] 0 2026-03-10T10:19:53.432 
INFO:tasks.workunit.client.1.vm05.stdout:3/639: truncate dd/d39/f96 126445 0 2026-03-10T10:19:53.433 INFO:tasks.workunit.client.0.vm02.stdout:0/683: sync 2026-03-10T10:19:53.434 INFO:tasks.workunit.client.1.vm05.stdout:1/711: sync 2026-03-10T10:19:53.434 INFO:tasks.workunit.client.0.vm02.stdout:0/684: readlink d9/d18/d1a/l1d 0 2026-03-10T10:19:53.437 INFO:tasks.workunit.client.0.vm02.stdout:7/647: rmdir d1/dc/d60 39 2026-03-10T10:19:53.439 INFO:tasks.workunit.client.0.vm02.stdout:6/620: write d0/d8/d29/d2f/d4b/da5/f73 [1201486,2383] 0 2026-03-10T10:19:53.444 INFO:tasks.workunit.client.1.vm05.stdout:8/587: symlink d7/d14/d62/d90/lb6 0 2026-03-10T10:19:53.449 INFO:tasks.workunit.client.1.vm05.stdout:4/509: mknod d1/d31/dc/d40/d63/ca5 0 2026-03-10T10:19:53.453 INFO:tasks.workunit.client.1.vm05.stdout:7/646: mknod d5/d1d/d29/d3e/d8c/d82/ccf 0 2026-03-10T10:19:53.454 INFO:tasks.workunit.client.0.vm02.stdout:7/648: sync 2026-03-10T10:19:53.459 INFO:tasks.workunit.client.1.vm05.stdout:4/510: dread d1/d3/f5f [0,4194304] 0 2026-03-10T10:19:53.466 INFO:tasks.workunit.client.0.vm02.stdout:8/619: dwrite d1/d1c/d43/d6a/da8/f44 [0,4194304] 0 2026-03-10T10:19:53.467 INFO:tasks.workunit.client.0.vm02.stdout:8/620: chown d1/f73 100493 1 2026-03-10T10:19:53.470 INFO:tasks.workunit.client.1.vm05.stdout:0/625: creat d1/d2/d9/d31/d54/fd9 x:0 0 0 2026-03-10T10:19:53.471 INFO:tasks.workunit.client.0.vm02.stdout:5/775: chown d1/db/d11/d16/ld5 2 1 2026-03-10T10:19:53.473 INFO:tasks.workunit.client.1.vm05.stdout:0/626: sync 2026-03-10T10:19:53.475 INFO:tasks.workunit.client.0.vm02.stdout:1/678: creat d4/dc3/fd8 x:0 0 0 2026-03-10T10:19:53.476 INFO:tasks.workunit.client.0.vm02.stdout:1/679: write d4/da/d1a/d5b/f79 [4431080,104476] 0 2026-03-10T10:19:53.476 INFO:tasks.workunit.client.0.vm02.stdout:1/680: readlink d4/da/d1a/d47/d65/l8b 0 2026-03-10T10:19:53.482 INFO:tasks.workunit.client.1.vm05.stdout:5/648: rename da/d96/f9d to da/db/d26/fdd 0 2026-03-10T10:19:53.483 
INFO:tasks.workunit.client.0.vm02.stdout:4/762: link d1/d75/ddd/fa6 d1/def/ff8 0 2026-03-10T10:19:53.485 INFO:tasks.workunit.client.1.vm05.stdout:3/640: unlink dd/d39/d5f/fb8 0 2026-03-10T10:19:53.487 INFO:tasks.workunit.client.0.vm02.stdout:0/685: mknod d9/d34/d3d/d65/cda 0 2026-03-10T10:19:53.489 INFO:tasks.workunit.client.1.vm05.stdout:1/712: fsync d4/d39/d3e/da0/fa1 0 2026-03-10T10:19:53.490 INFO:tasks.workunit.client.1.vm05.stdout:1/713: read d4/d20/f2d [1690359,76091] 0 2026-03-10T10:19:53.491 INFO:tasks.workunit.client.1.vm05.stdout:1/714: chown d4/d3d/d6e/faf 6 1 2026-03-10T10:19:53.493 INFO:tasks.workunit.client.1.vm05.stdout:8/588: mkdir d7/d14/d24/d3f/d6a/d8a/d96/db7 0 2026-03-10T10:19:53.497 INFO:tasks.workunit.client.1.vm05.stdout:4/511: symlink d1/d31/d72/la6 0 2026-03-10T10:19:53.505 INFO:tasks.workunit.client.0.vm02.stdout:5/776: creat d1/db/d11/d13/d28/d37/dce/f10b x:0 0 0 2026-03-10T10:19:53.512 INFO:tasks.workunit.client.1.vm05.stdout:6/631: getdents dd/d36/d3f/d12/d44/d30/d4a 0 2026-03-10T10:19:53.513 INFO:tasks.workunit.client.1.vm05.stdout:6/632: readlink dd/d1b/la7 0 2026-03-10T10:19:53.515 INFO:tasks.workunit.client.0.vm02.stdout:2/652: truncate d0/dd4/fdd 4084115 0 2026-03-10T10:19:53.526 INFO:tasks.workunit.client.0.vm02.stdout:4/763: creat d1/d10/d88/db2/ff9 x:0 0 0 2026-03-10T10:19:53.533 INFO:tasks.workunit.client.0.vm02.stdout:0/686: rename d9/d18/d1a/d22/d24/d79/d7d/f85 to d9/d34/d3d/d7b/fdb 0 2026-03-10T10:19:53.533 INFO:tasks.workunit.client.0.vm02.stdout:0/687: chown d9/d34/d3d/d65/d89/dd3/da7 0 1 2026-03-10T10:19:53.536 INFO:tasks.workunit.client.1.vm05.stdout:5/649: chown da/db/d26/c88 3081 1 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: Updating 
vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: Standby manager daemon vm02.zmavgl started 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: from='mgr.? 
192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm02.zmavgl/crt"}]: dispatch 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm02.zmavgl/key"}]: dispatch 2026-03-10T10:19:53.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:53 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.102:0/2' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T10:19:53.538 INFO:tasks.workunit.client.0.vm02.stdout:6/621: symlink d0/d8/d9/d7a/dc0/lc8 0 2026-03-10T10:19:53.546 INFO:tasks.workunit.client.0.vm02.stdout:9/615: getdents da 0 2026-03-10T10:19:53.551 INFO:tasks.workunit.client.0.vm02.stdout:9/616: dread da/d3c/d4c/f23 [0,4194304] 0 2026-03-10T10:19:53.560 INFO:tasks.workunit.client.1.vm05.stdout:3/641: write dd/d20/d56/f68 [2177851,10827] 0 2026-03-10T10:19:53.567 INFO:tasks.workunit.client.0.vm02.stdout:7/649: mknod d1/dc/d60/cc7 0 2026-03-10T10:19:53.569 INFO:tasks.workunit.client.1.vm05.stdout:1/715: creat d4/d3d/d6e/fd1 x:0 0 0 2026-03-10T10:19:53.573 INFO:tasks.workunit.client.0.vm02.stdout:8/621: mknod d1/d1c/d43/d5b/dab/cba 0 2026-03-10T10:19:53.574 INFO:tasks.workunit.client.0.vm02.stdout:8/622: rename d1/d1c/d43 to d1/d1c/d43/d5b/d88/dac/d83/d9f/dbb 22 2026-03-10T10:19:53.576 INFO:tasks.workunit.client.0.vm02.stdout:5/777: creat d1/db/d11/d13/d28/d37/dce/f10c x:0 0 0 2026-03-10T10:19:53.583 INFO:tasks.workunit.client.0.vm02.stdout:1/681: mkdir d4/d2c/d53/da6/db8/dd9 0 2026-03-10T10:19:53.584 INFO:tasks.workunit.client.1.vm05.stdout:4/512: read 
- d1/d64/fa1 zero size 2026-03-10T10:19:53.587 INFO:tasks.workunit.client.1.vm05.stdout:8/589: write d7/f59 [260912,63267] 0 2026-03-10T10:19:53.589 INFO:tasks.workunit.client.0.vm02.stdout:2/653: dwrite d0/d10/fa1 [0,4194304] 0 2026-03-10T10:19:53.590 INFO:tasks.workunit.client.0.vm02.stdout:2/654: chown d0/d8c 747690976 1 2026-03-10T10:19:53.600 INFO:tasks.workunit.client.1.vm05.stdout:0/627: creat d1/d2/d9/d31/d13/d17/da1/dbd/fda x:0 0 0 2026-03-10T10:19:53.601 INFO:tasks.workunit.client.0.vm02.stdout:0/688: dread d9/d34/d3d/f94 [0,4194304] 0 2026-03-10T10:19:53.605 INFO:tasks.workunit.client.1.vm05.stdout:2/570: creat db/d12/fb2 x:0 0 0 2026-03-10T10:19:53.607 INFO:tasks.workunit.client.0.vm02.stdout:9/617: symlink da/d3c/lc5 0 2026-03-10T10:19:53.609 INFO:tasks.workunit.client.1.vm05.stdout:5/650: write da/db/d26/d5c/f68 [264768,85208] 0 2026-03-10T10:19:53.621 INFO:tasks.workunit.client.0.vm02.stdout:3/647: link d1/d8/d21/f47 d1/d8/fd6 0 2026-03-10T10:19:53.622 INFO:tasks.workunit.client.0.vm02.stdout:3/648: dread - d1/d8/d21/d73/fcf zero size 2026-03-10T10:19:53.627 INFO:tasks.workunit.client.1.vm05.stdout:6/633: dwrite dd/d36/d3f/d12/d44/d2a/d3d/fa2 [0,4194304] 0 2026-03-10T10:19:53.638 INFO:tasks.workunit.client.0.vm02.stdout:7/650: creat d1/dc/d99/fc8 x:0 0 0 2026-03-10T10:19:53.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.656+0000 7ff572ffd700 1 -- 192.168.123.102:0/1865691008 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7ff578005cc0 con 0x7ff58c072af0 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.660+0000 7ff5897fa700 1 -- 192.168.123.102:0/1865691008 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+770 (secure 0 0 0) 0x7ff57c020070 con 0x7ff58c072af0 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:19:53.661 
INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 2 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 1, 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 6 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 13, 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:19:53.661 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:19:53.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.664+0000 7ff5922f1700 1 -- 192.168.123.102:0/1865691008 >> 
[v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7ff574071ea0 msgr2=0x7ff574074360 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:53.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.664+0000 7ff5922f1700 1 --2- 192.168.123.102:0/1865691008 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7ff574071ea0 0x7ff574074360 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7ff58400db80 tx=0x7ff584006040 comp rx=0 tx=0).stop 2026-03-10T10:19:53.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.664+0000 7ff5922f1700 1 -- 192.168.123.102:0/1865691008 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff58c072af0 msgr2=0x7ff58c0830b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:53.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.664+0000 7ff5922f1700 1 --2- 192.168.123.102:0/1865691008 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff58c072af0 0x7ff58c0830b0 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7ff57c00d8d0 tx=0x7ff57c00dc90 comp rx=0 tx=0).stop 2026-03-10T10:19:53.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.664+0000 7ff5922f1700 1 -- 192.168.123.102:0/1865691008 shutdown_connections 2026-03-10T10:19:53.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.664+0000 7ff5922f1700 1 --2- 192.168.123.102:0/1865691008 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7ff574071ea0 0x7ff574074360 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:53.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.664+0000 7ff5922f1700 1 --2- 192.168.123.102:0/1865691008 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff58c072af0 0x7ff58c0830b0 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:19:53.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.664+0000 7ff5922f1700 1 --2- 192.168.123.102:0/1865691008 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff58c0835f0 0x7ff58c12e490 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:53.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.664+0000 7ff5922f1700 1 -- 192.168.123.102:0/1865691008 >> 192.168.123.102:0/1865691008 conn(0x7ff58c06dad0 msgr2=0x7ff58c077230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:53.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.665+0000 7ff5922f1700 1 -- 192.168.123.102:0/1865691008 shutdown_connections 2026-03-10T10:19:53.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.666+0000 7ff5922f1700 1 -- 192.168.123.102:0/1865691008 wait complete. 2026-03-10T10:19:53.672 INFO:tasks.workunit.client.0.vm02.stdout:8/623: dread d1/d2/f29 [0,4194304] 0 2026-03-10T10:19:53.673 INFO:tasks.workunit.client.0.vm02.stdout:8/624: write d1/d1c/d23/f75 [803690,5132] 0 2026-03-10T10:19:53.675 INFO:tasks.workunit.client.0.vm02.stdout:4/764: mknod d1/cfa 0 2026-03-10T10:19:53.680 INFO:tasks.workunit.client.1.vm05.stdout:7/647: rmdir d5/d1d/d20/d91/da7/dab/dad 0 2026-03-10T10:19:53.681 INFO:tasks.workunit.client.1.vm05.stdout:7/648: readlink d5/dd/l10 0 2026-03-10T10:19:53.682 INFO:tasks.workunit.client.1.vm05.stdout:8/590: mkdir d7/d14/d3a/d49/d65/db8 0 2026-03-10T10:19:53.685 INFO:tasks.workunit.client.0.vm02.stdout:2/655: unlink d0/d71/l84 0 2026-03-10T10:19:53.691 INFO:tasks.workunit.client.1.vm05.stdout:1/716: write d4/d37/f89 [4750974,107578] 0 2026-03-10T10:19:53.694 INFO:tasks.workunit.client.1.vm05.stdout:4/513: dwrite d1/d64/f84 [0,4194304] 0 2026-03-10T10:19:53.695 INFO:tasks.workunit.client.0.vm02.stdout:3/649: creat d1/d20/d52/fd7 x:0 0 0 2026-03-10T10:19:53.696 INFO:tasks.workunit.client.1.vm05.stdout:9/574: rename d0/d1/d4c to d0/dc4 0 
2026-03-10T10:19:53.696 INFO:tasks.workunit.client.1.vm05.stdout:2/571: creat db/d28/d4f/d59/da4/fb3 x:0 0 0 2026-03-10T10:19:53.697 INFO:tasks.workunit.client.1.vm05.stdout:9/575: stat d0/df/d11 0 2026-03-10T10:19:53.706 INFO:tasks.workunit.client.1.vm05.stdout:5/651: creat da/d9a/fde x:0 0 0 2026-03-10T10:19:53.707 INFO:tasks.workunit.client.0.vm02.stdout:5/778: getdents d1/d9c 0 2026-03-10T10:19:53.721 INFO:tasks.workunit.client.0.vm02.stdout:1/682: symlink d4/d2c/d53/lda 0 2026-03-10T10:19:53.729 INFO:tasks.workunit.client.0.vm02.stdout:6/622: dwrite d0/d8/d29/d2f/d4b/f26 [0,4194304] 0 2026-03-10T10:19:53.732 INFO:tasks.workunit.client.0.vm02.stdout:8/625: creat d1/d1c/d43/d5b/fbc x:0 0 0 2026-03-10T10:19:53.755 INFO:tasks.workunit.client.1.vm05.stdout:7/649: write d5/d1d/d29/d3e/d8c/d7f/f93 [687905,80698] 0 2026-03-10T10:19:53.755 INFO:tasks.workunit.client.1.vm05.stdout:7/650: fsync d5/d1d/f7c 0 2026-03-10T10:19:53.756 INFO:tasks.workunit.client.1.vm05.stdout:7/651: readlink d5/d1d/d20/d35/l6b 0 2026-03-10T10:19:53.761 INFO:tasks.workunit.client.1.vm05.stdout:0/628: symlink d1/d2/d9/d31/d13/da2/dab/dce/ldb 0 2026-03-10T10:19:53.766 INFO:tasks.workunit.client.1.vm05.stdout:1/717: symlink d4/d3d/d6e/dac/ld2 0 2026-03-10T10:19:53.767 INFO:tasks.workunit.client.0.vm02.stdout:9/618: mknod da/d3c/d4c/cc6 0 2026-03-10T10:19:53.783 INFO:tasks.workunit.client.1.vm05.stdout:4/514: fdatasync d1/d31/dc/d40/d63/f89 0 2026-03-10T10:19:53.783 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.779+0000 7f7061796700 1 -- 192.168.123.102:0/1783533250 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f705c072b50 msgr2=0x7f705c072f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.779+0000 7f7061796700 1 --2- 192.168.123.102:0/1783533250 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f705c072b50 0x7f705c072f70 secure :-1 s=READY pgs=324 cs=0 
l=1 rev1=1 crypto rx=0x7f704c007780 tx=0x7f704c00c050 comp rx=0 tx=0).stop 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.779+0000 7f7061796700 1 -- 192.168.123.102:0/1783533250 shutdown_connections 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.779+0000 7f7061796700 1 --2- 192.168.123.102:0/1783533250 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f705c075a40 0x7f705c077ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.779+0000 7f7061796700 1 --2- 192.168.123.102:0/1783533250 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f705c072b50 0x7f705c072f70 unknown :-1 s=CLOSED pgs=324 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.779+0000 7f7061796700 1 -- 192.168.123.102:0/1783533250 >> 192.168.123.102:0/1783533250 conn(0x7f705c06dae0 msgr2=0x7f705c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.780+0000 7f7061796700 1 -- 192.168.123.102:0/1783533250 shutdown_connections 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.780+0000 7f7061796700 1 -- 192.168.123.102:0/1783533250 wait complete. 
2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.780+0000 7f7061796700 1 Processor -- start 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.780+0000 7f7061796700 1 -- start start 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.780+0000 7f7061796700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f705c075a40 0x7f705c083160 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.780+0000 7f7061796700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f705c0836a0 0x7f705c1b31b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.780+0000 7f7061796700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f705c083b20 con 0x7f705c075a40 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.780+0000 7f7061796700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f705c083c90 con 0x7f705c0836a0 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.780+0000 7f705affd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f705c075a40 0x7f705c083160 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.780+0000 7f705affd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f705c075a40 0x7f705c083160 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:47484/0 (socket says 192.168.123.102:47484) 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.780+0000 7f705affd700 1 -- 192.168.123.102:0/1026868304 learned_addr learned my addr 192.168.123.102:0/1026868304 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.781+0000 7f705a7fc700 1 --2- 192.168.123.102:0/1026868304 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f705c0836a0 0x7f705c1b31b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.781+0000 7f705affd700 1 -- 192.168.123.102:0/1026868304 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f705c0836a0 msgr2=0x7f705c1b31b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.781+0000 7f705affd700 1 --2- 192.168.123.102:0/1026868304 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f705c0836a0 0x7f705c1b31b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.781+0000 7f705affd700 1 -- 192.168.123.102:0/1026868304 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f704c007430 con 0x7f705c075a40 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.781+0000 7f705affd700 1 --2- 192.168.123.102:0/1026868304 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f705c075a40 0x7f705c083160 secure :-1 s=READY pgs=325 cs=0 l=1 rev1=1 crypto rx=0x7f704c007cc0 tx=0x7f704c007cf0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.781+0000 7f7043fff700 1 -- 192.168.123.102:0/1026868304 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f704c00f050 con 0x7f705c075a40 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.782+0000 7f7061796700 1 -- 192.168.123.102:0/1026868304 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f705c1b36f0 con 0x7f705c075a40 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.782+0000 7f7061796700 1 -- 192.168.123.102:0/1026868304 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f705c1b3c10 con 0x7f705c075a40 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.782+0000 7f7043fff700 1 -- 192.168.123.102:0/1026868304 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f704c0072b0 con 0x7f705c075a40 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.782+0000 7f7043fff700 1 -- 192.168.123.102:0/1026868304 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f704c00a4a0 con 0x7f705c075a40 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.783+0000 7f7061796700 1 -- 192.168.123.102:0/1026868304 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7048005320 con 0x7f705c075a40 2026-03-10T10:19:53.784 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.784+0000 7f7043fff700 1 -- 192.168.123.102:0/1026868304 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 25) v1 ==== 95238+0+0 (secure 0 0 0) 0x7f704c01a040 con 0x7f705c075a40 2026-03-10T10:19:53.785 INFO:tasks.workunit.client.1.vm05.stdout:0/629: sync 
2026-03-10T10:19:53.785 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.785+0000 7f7043fff700 1 --2- 192.168.123.102:0/1026868304 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f7044071f50 0x7f7044074410 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:19:53.786 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.785+0000 7f705a7fc700 1 --2- 192.168.123.102:0/1026868304 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f7044071f50 0x7f7044074410 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:19:53.786 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.785+0000 7f7043fff700 1 -- 192.168.123.102:0/1026868304 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 5534+0+0 (secure 0 0 0) 0x7f704c092cc0 con 0x7f705c075a40 2026-03-10T10:19:53.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.787+0000 7f705a7fc700 1 --2- 192.168.123.102:0/1026868304 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f7044071f50 0x7f7044074410 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f7054010a40 tx=0x7f7054012040 comp rx=0 tx=0).ready entity=mgr.14674 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:19:53.791 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:53.791+0000 7f7043fff700 1 -- 192.168.123.102:0/1026868304 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7f704c05b9f0 con 0x7f705c075a40 2026-03-10T10:19:53.792 INFO:tasks.workunit.client.1.vm05.stdout:2/572: fsync db/d1c/f3d 0 2026-03-10T10:19:53.793 INFO:tasks.workunit.client.1.vm05.stdout:2/573: dread - db/d28/d4f/d8b/d9a/d9d/fab zero size 2026-03-10T10:19:53.800 
INFO:tasks.workunit.client.1.vm05.stdout:5/652: creat da/d9a/daf/fdf x:0 0 0 2026-03-10T10:19:53.811 INFO:tasks.workunit.client.1.vm05.stdout:9/576: dwrite d0/d1/f7b [0,4194304] 0 2026-03-10T10:19:53.816 INFO:tasks.workunit.client.0.vm02.stdout:3/650: dread d1/d8/f3d [0,4194304] 0 2026-03-10T10:19:53.824 INFO:tasks.workunit.client.1.vm05.stdout:6/634: symlink dd/d36/d3f/lcc 0 2026-03-10T10:19:53.824 INFO:tasks.workunit.client.1.vm05.stdout:3/642: link dd/d15/d24/d2c/c4b dd/d20/d56/d5e/ce2 0 2026-03-10T10:19:53.824 INFO:tasks.workunit.client.1.vm05.stdout:8/591: creat d7/d14/d24/d3f/d6a/d8a/d96/db7/fb9 x:0 0 0 2026-03-10T10:19:53.825 INFO:tasks.workunit.client.1.vm05.stdout:7/652: chown d5/d1d/c6e 83831120 1 2026-03-10T10:19:53.826 INFO:tasks.workunit.client.1.vm05.stdout:7/653: chown d5/dd/l27 176215 1 2026-03-10T10:19:53.827 INFO:tasks.workunit.client.1.vm05.stdout:7/654: chown d5/d1d/d20/fb5 45 1 2026-03-10T10:19:53.840 INFO:tasks.workunit.client.1.vm05.stdout:0/630: rmdir d1/d2/d9/d31/d13/d2f/d49 39 2026-03-10T10:19:53.840 INFO:tasks.workunit.client.1.vm05.stdout:0/631: dread - d1/d2/d39/fd1 zero size 2026-03-10T10:19:53.853 INFO:tasks.workunit.client.1.vm05.stdout:2/574: rmdir db/d1c/d40/d62/d85 39 2026-03-10T10:19:53.861 INFO:tasks.workunit.client.1.vm05.stdout:1/718: write d4/d39/f54 [1003093,109982] 0 2026-03-10T10:19:53.866 INFO:tasks.workunit.client.1.vm05.stdout:8/592: creat d7/d14/fba x:0 0 0 2026-03-10T10:19:53.867 INFO:tasks.workunit.client.1.vm05.stdout:6/635: dread - dd/d36/d3f/d12/d44/d2a/d3d/f76 zero size 2026-03-10T10:19:53.873 INFO:tasks.workunit.client.1.vm05.stdout:3/643: creat dd/d15/d24/d8e/dac/fe3 x:0 0 0 2026-03-10T10:19:53.876 INFO:tasks.workunit.client.1.vm05.stdout:9/577: dwrite d0/df/d74/fbb [0,4194304] 0 2026-03-10T10:19:53.877 INFO:tasks.workunit.client.1.vm05.stdout:9/578: chown d0/f7 40398270 1 2026-03-10T10:19:53.879 INFO:tasks.workunit.client.1.vm05.stdout:7/655: symlink d5/d17/d85/ld0 0 2026-03-10T10:19:53.899 
INFO:tasks.workunit.client.1.vm05.stdout:0/632: mknod d1/d2/d9/d31/d13/d15/d4e/cdc 0 2026-03-10T10:19:53.900 INFO:tasks.workunit.client.1.vm05.stdout:0/633: read - d1/d2/d9/d31/d13/f9c zero size 2026-03-10T10:19:53.900 INFO:tasks.workunit.client.0.vm02.stdout:5/779: write d1/db/d11/d1a/f27 [2151500,27573] 0 2026-03-10T10:19:53.904 INFO:tasks.workunit.client.1.vm05.stdout:2/575: stat db/d28/d4f/d59/f7e 0 2026-03-10T10:19:53.904 INFO:tasks.workunit.client.0.vm02.stdout:6/623: stat d0/d8/d29/d2f/d4b/l24 0 2026-03-10T10:19:53.905 INFO:tasks.workunit.client.1.vm05.stdout:2/576: write db/d2d/d5e/fac [544684,104339] 0 2026-03-10T10:19:53.906 INFO:tasks.workunit.client.1.vm05.stdout:1/719: truncate d4/d3d/d6e/faf 1147866 0 2026-03-10T10:19:53.907 INFO:tasks.workunit.client.0.vm02.stdout:8/626: truncate d1/d1c/d43/d5b/d88/dac/fa5 36234 0 2026-03-10T10:19:53.911 INFO:tasks.workunit.client.0.vm02.stdout:8/627: dwrite d1/d1c/d24/d71/fb4 [0,4194304] 0 2026-03-10T10:19:53.936 INFO:tasks.workunit.client.0.vm02.stdout:0/689: rename d9/d34/d3d/c64 to d9/d18/cdc 0 2026-03-10T10:19:53.946 INFO:tasks.workunit.client.1.vm05.stdout:8/593: dread d7/d2f/f7e [0,4194304] 0 2026-03-10T10:19:53.947 INFO:tasks.workunit.client.1.vm05.stdout:3/644: creat dd/d20/d9e/dc0/fe4 x:0 0 0 2026-03-10T10:19:53.948 INFO:tasks.workunit.client.1.vm05.stdout:3/645: stat dd/d15/d1f/l46 0 2026-03-10T10:19:53.955 INFO:tasks.workunit.client.0.vm02.stdout:9/619: dwrite da/d3c/d4c/d38/fb2 [0,4194304] 0 2026-03-10T10:19:53.968 INFO:tasks.workunit.client.0.vm02.stdout:1/683: mkdir d4/da/d27/d38/d80/ddb 0 2026-03-10T10:19:53.981 INFO:tasks.workunit.client.1.vm05.stdout:2/577: creat db/d28/d4f/d59/d94/d95/fb4 x:0 0 0 2026-03-10T10:19:53.982 INFO:tasks.workunit.client.0.vm02.stdout:4/765: link d1/d10/d88/db2/lc7 d1/d41/d5e/d78/d1a/d49/d81/dc6/lfb 0 2026-03-10T10:19:53.987 INFO:tasks.workunit.client.0.vm02.stdout:0/690: creat d9/fdd x:0 0 0 2026-03-10T10:19:53.989 INFO:tasks.workunit.client.1.vm05.stdout:5/653: rmdir 
da/d9a/daf/dce 0 2026-03-10T10:19:53.992 INFO:tasks.workunit.client.0.vm02.stdout:2/656: rename d0/d1a/d24/dc6/la4 to d0/d1a/d24/d80/ddb/ldf 0 2026-03-10T10:19:53.992 INFO:tasks.workunit.client.0.vm02.stdout:2/657: dread - d0/f8f zero size 2026-03-10T10:19:53.997 INFO:tasks.workunit.client.1.vm05.stdout:9/579: dwrite d0/df/d74/d90/fa4 [0,4194304] 0 2026-03-10T10:19:53.997 INFO:tasks.workunit.client.1.vm05.stdout:0/634: dwrite d1/d2/d9/d31/d12/d41/fa9 [0,4194304] 0 2026-03-10T10:19:53.997 INFO:tasks.workunit.client.1.vm05.stdout:6/636: creat dd/d36/d3f/d12/d96/fcd x:0 0 0 2026-03-10T10:19:53.998 INFO:tasks.workunit.client.0.vm02.stdout:3/651: mknod d1/d20/cd8 0 2026-03-10T10:19:53.999 INFO:tasks.workunit.client.0.vm02.stdout:3/652: read - d1/d6/d8e/fc7 zero size 2026-03-10T10:19:54.005 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.004+0000 7f7061796700 1 -- 192.168.123.102:0/1026868304 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f7048005cc0 con 0x7f705c075a40 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.005+0000 7f7043fff700 1 -- 192.168.123.102:0/1026868304 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1854 (secure 0 0 0) 0x7f704c02a750 con 0x7f705c075a40 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:e15 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:19:54.006 
INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:19:54.006 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464} 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:19:54.007 
INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:19:54.007 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:19:54.008 INFO:tasks.workunit.client.0.vm02.stdout:7/651: link d1/dc/d55/d9a/da5/lbb d1/dc/d16/lc9 0 
2026-03-10T10:19:54.009 INFO:tasks.workunit.client.1.vm05.stdout:8/594: mknod d7/d14/cbb 0 2026-03-10T10:19:54.011 INFO:tasks.workunit.client.1.vm05.stdout:0/635: dwrite d1/d2/d9/d31/d54/fd9 [0,4194304] 0 2026-03-10T10:19:54.013 INFO:tasks.workunit.client.0.vm02.stdout:9/620: truncate da/d3c/d53/f73 2152985 0 2026-03-10T10:19:54.015 INFO:tasks.workunit.client.1.vm05.stdout:0/636: stat d1/d2/d9/d31/d13/d15/d4e/d8a/fd8 0 2026-03-10T10:19:54.016 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.015+0000 7f7061796700 1 -- 192.168.123.102:0/1026868304 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f7044071f50 msgr2=0x7f7044074410 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:54.016 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.015+0000 7f7061796700 1 --2- 192.168.123.102:0/1026868304 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f7044071f50 0x7f7044074410 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f7054010a40 tx=0x7f7054012040 comp rx=0 tx=0).stop 2026-03-10T10:19:54.016 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.015+0000 7f7061796700 1 -- 192.168.123.102:0/1026868304 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f705c075a40 msgr2=0x7f705c083160 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:54.016 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.015+0000 7f7061796700 1 --2- 192.168.123.102:0/1026868304 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f705c075a40 0x7f705c083160 secure :-1 s=READY pgs=325 cs=0 l=1 rev1=1 crypto rx=0x7f704c007cc0 tx=0x7f704c007cf0 comp rx=0 tx=0).stop 2026-03-10T10:19:54.016 INFO:tasks.workunit.client.1.vm05.stdout:4/515: link d1/d31/dc/d40/d63/ca5 d1/ca7 0 2026-03-10T10:19:54.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.017+0000 7f7061796700 1 -- 192.168.123.102:0/1026868304 
shutdown_connections 2026-03-10T10:19:54.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.017+0000 7f7061796700 1 --2- 192.168.123.102:0/1026868304 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7f7044071f50 0x7f7044074410 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:54.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.017+0000 7f7061796700 1 --2- 192.168.123.102:0/1026868304 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f705c075a40 0x7f705c083160 unknown :-1 s=CLOSED pgs=325 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:54.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.017+0000 7f7061796700 1 --2- 192.168.123.102:0/1026868304 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f705c0836a0 0x7f705c1b31b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:54.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.017+0000 7f7061796700 1 -- 192.168.123.102:0/1026868304 >> 192.168.123.102:0/1026868304 conn(0x7f705c06dae0 msgr2=0x7f705c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:54.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.018+0000 7f7061796700 1 -- 192.168.123.102:0/1026868304 shutdown_connections 2026-03-10T10:19:54.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.021+0000 7f7061796700 1 -- 192.168.123.102:0/1026868304 wait complete. 
2026-03-10T10:19:54.024 INFO:tasks.workunit.client.1.vm05.stdout:1/720: rename d4/d20/c3c to d4/d79/cd3 0
2026-03-10T10:19:54.033 INFO:tasks.workunit.client.0.vm02.stdout:1/684: fdatasync d4/da/d27/f66 0
2026-03-10T10:19:54.034 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15
2026-03-10T10:19:54.034 INFO:tasks.workunit.client.0.vm02.stdout:5/780: read d1/db/d11/d16/d79/d85/fa0 [783502,20478] 0
2026-03-10T10:19:54.035 INFO:tasks.workunit.client.0.vm02.stdout:5/781: chown d1/db/d11/d62/d67/lae 3 1
2026-03-10T10:19:54.039 INFO:tasks.workunit.client.0.vm02.stdout:0/691: read d9/d18/d1a/d22/d24/d80/d74/f96 [1140238,127610] 0
2026-03-10T10:19:54.047 INFO:tasks.workunit.client.1.vm05.stdout:5/654: fsync da/db/d26/d35/d38/fa2 0
2026-03-10T10:19:54.056 INFO:tasks.workunit.client.0.vm02.stdout:7/652: rmdir d1/dc/d16/d28/d2d 39
2026-03-10T10:19:54.056 INFO:tasks.workunit.client.1.vm05.stdout:6/637: symlink dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/lce 0
2026-03-10T10:19:54.059 INFO:tasks.workunit.client.1.vm05.stdout:6/638: dread dd/d36/d3f/d12/d44/daa/fae [0,4194304] 0
2026-03-10T10:19:54.063 INFO:tasks.workunit.client.1.vm05.stdout:6/639: dread dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f6b [0,4194304] 0
2026-03-10T10:19:54.066 INFO:tasks.workunit.client.1.vm05.stdout:2/578: dread db/d2d/f47 [0,4194304] 0
2026-03-10T10:19:54.066 INFO:tasks.workunit.client.1.vm05.stdout:2/579: read - db/d28/d4f/fb0 zero size
2026-03-10T10:19:54.067 INFO:tasks.workunit.client.1.vm05.stdout:3/646: unlink dd/d20/d9e/dc0/fe4 0
2026-03-10T10:19:54.067 INFO:tasks.workunit.client.1.vm05.stdout:3/647: stat dd 0
2026-03-10T10:19:54.069 INFO:tasks.workunit.client.0.vm02.stdout:4/766: mkdir d1/d10/dfc 0
2026-03-10T10:19:54.080 INFO:tasks.workunit.client.0.vm02.stdout:1/685: sync
2026-03-10T10:19:54.083 INFO:tasks.workunit.client.0.vm02.stdout:3/653: creat d1/d20/d52/dd3/fd9 x:0 0 0
2026-03-10T10:19:54.087 INFO:tasks.workunit.client.0.vm02.stdout:3/654: dwrite d1/d8/d21/f4c [0,4194304] 0
2026-03-10T10:19:54.099 INFO:tasks.workunit.client.0.vm02.stdout:9/621: unlink da/d3c/d4c/d38/d82/d89/c95 0
2026-03-10T10:19:54.107 INFO:tasks.workunit.client.1.vm05.stdout:8/595: write d7/d14/d24/d3f/f7d [2404510,19836] 0
2026-03-10T10:19:54.119 INFO:tasks.workunit.client.0.vm02.stdout:5/782: dwrite d1/db/d11/d84/d40/d4f/d5f/d6d/d71/f80 [0,4194304] 0
2026-03-10T10:19:54.129 INFO:tasks.workunit.client.1.vm05.stdout:4/516: write d1/d3/f26 [4362133,102379] 0
2026-03-10T10:19:54.130 INFO:tasks.workunit.client.0.vm02.stdout:0/692: write d9/d34/d3d/d67/fc3 [893222,128853] 0
2026-03-10T10:19:54.131 INFO:tasks.workunit.client.0.vm02.stdout:2/658: write d0/f70 [1033617,31097] 0
2026-03-10T10:19:54.132 INFO:tasks.workunit.client.0.vm02.stdout:2/659: chown d0/f91 130081457 1
2026-03-10T10:19:54.132 INFO:tasks.workunit.client.0.vm02.stdout:2/660: write d0/d71/fb9 [1785432,109543] 0
2026-03-10T10:19:54.134 INFO:tasks.workunit.client.1.vm05.stdout:4/517: dwrite d1/d31/dc/d40/d63/f94 [0,4194304] 0
2026-03-10T10:19:54.137 INFO:tasks.workunit.client.1.vm05.stdout:5/655: truncate da/db/f9f 384755 0
2026-03-10T10:19:54.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.151+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1283185540 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7d8100cc0 msgr2=0x7fa7d81010e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:19:54.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.151+0000 7fa7dfd28700 1 --2- 192.168.123.102:0/1283185540 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7d8100cc0 0x7fa7d81010e0 secure :-1 s=READY pgs=326 cs=0 l=1 rev1=1 crypto rx=0x7fa7d4009b00 tx=0x7fa7d4009e10 comp rx=0 tx=0).stop
2026-03-10T10:19:54.153 INFO:tasks.workunit.client.1.vm05.stdout:1/721: write d4/df/d1c/d92/f97 [412997,100076] 0
2026-03-10T10:19:54.154 INFO:tasks.workunit.client.0.vm02.stdout:6/624: getdents d0/d8 0
2026-03-10T10:19:54.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.154+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1283185540 shutdown_connections
2026-03-10T10:19:54.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.154+0000 7fa7dfd28700 1 --2- 192.168.123.102:0/1283185540 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7d8101ec0 0x7fa7d8102320 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:54.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.154+0000 7fa7dfd28700 1 --2- 192.168.123.102:0/1283185540 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7d8100cc0 0x7fa7d81010e0 unknown :-1 s=CLOSED pgs=326 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:54.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.154+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1283185540 >> 192.168.123.102:0/1283185540 conn(0x7fa7d80fc240 msgr2=0x7fa7d80fe6a0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:19:54.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.155+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1283185540 shutdown_connections
2026-03-10T10:19:54.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.155+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1283185540 wait complete.
2026-03-10T10:19:54.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.156+0000 7fa7dfd28700 1 Processor -- start
2026-03-10T10:19:54.160 INFO:tasks.workunit.client.0.vm02.stdout:8/628: link d1/c13 d1/d1c/d23/d25/cbd 0
2026-03-10T10:19:54.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.161+0000 7fa7dfd28700 1 -- start start
2026-03-10T10:19:54.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.162+0000 7fa7dfd28700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7d8100cc0 0x7fa7d8196570 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:19:54.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.162+0000 7fa7dfd28700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7d8101ec0 0x7fa7d8196ab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:19:54.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.162+0000 7fa7dfd28700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa7d81970d0 con 0x7fa7d8101ec0
2026-03-10T10:19:54.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.162+0000 7fa7dfd28700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa7d8197210 con 0x7fa7d8100cc0
2026-03-10T10:19:54.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.162+0000 7fa7ddac4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7d8100cc0 0x7fa7d8196570 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:19:54.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.162+0000 7fa7ddac4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7d8100cc0 0x7fa7d8196570 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:40630/0 (socket says 192.168.123.102:40630)
2026-03-10T10:19:54.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.162+0000 7fa7ddac4700 1 -- 192.168.123.102:0/1357429524 learned_addr learned my addr 192.168.123.102:0/1357429524 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:19:54.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.163+0000 7fa7dd2c3700 1 --2- 192.168.123.102:0/1357429524 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7d8101ec0 0x7fa7d8196ab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:19:54.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.165+0000 7fa7dd2c3700 1 -- 192.168.123.102:0/1357429524 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7d8100cc0 msgr2=0x7fa7d8196570 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:19:54.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.165+0000 7fa7dd2c3700 1 --2- 192.168.123.102:0/1357429524 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7d8100cc0 0x7fa7d8196570 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:54.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.165+0000 7fa7dd2c3700 1 -- 192.168.123.102:0/1357429524 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa7d40097e0 con 0x7fa7d8101ec0
2026-03-10T10:19:54.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.165+0000 7fa7dd2c3700 1 --2- 192.168.123.102:0/1357429524 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7d8101ec0 0x7fa7d8196ab0 secure :-1 s=READY pgs=327 cs=0 l=1 rev1=1 crypto rx=0x7fa7c800d8d0 tx=0x7fa7c800dbe0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:19:54.168 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.167+0000 7fa7ceffd700 1 -- 192.168.123.102:0/1357429524 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa7c800f840 con 0x7fa7d8101ec0
2026-03-10T10:19:54.168 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.167+0000 7fa7ceffd700 1 -- 192.168.123.102:0/1357429524 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa7c800fe80 con 0x7fa7d8101ec0
2026-03-10T10:19:54.169 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.167+0000 7fa7ceffd700 1 -- 192.168.123.102:0/1357429524 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa7c800e5c0 con 0x7fa7d8101ec0
2026-03-10T10:19:54.169 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.167+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1357429524 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa7d819bcc0 con 0x7fa7d8101ec0
2026-03-10T10:19:54.169 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.167+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1357429524 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa7d819c210 con 0x7fa7d8101ec0
2026-03-10T10:19:54.169 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.169+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1357429524 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa7d804ea90 con 0x7fa7d8101ec0
2026-03-10T10:19:54.174 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.172+0000 7fa7ceffd700 1 -- 192.168.123.102:0/1357429524 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 25) v1 ==== 95238+0+0 (secure 0 0 0) 0x7fa7c8010460 con 0x7fa7d8101ec0
2026-03-10T10:19:54.174 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.173+0000 7fa7ceffd700 1 --2- 192.168.123.102:0/1357429524 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7fa7c4071f00 0x7fa7c40743c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:19:54.174 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.173+0000 7fa7ceffd700 1 -- 192.168.123.102:0/1357429524 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 5534+0+0 (secure 0 0 0) 0x7fa7c80927d0 con 0x7fa7d8101ec0
2026-03-10T10:19:54.174 INFO:tasks.workunit.client.0.vm02.stdout:4/767: creat d1/d41/d5e/d78/d7f/ffd x:0 0 0
2026-03-10T10:19:54.174 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.174+0000 7fa7ddac4700 1 --2- 192.168.123.102:0/1357429524 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7fa7c4071f00 0x7fa7c40743c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:19:54.174 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.174+0000 7fa7ddac4700 1 --2- 192.168.123.102:0/1357429524 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7fa7c4071f00 0x7fa7c40743c0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fa7d4006010 tx=0x7fa7d400b540 comp rx=0 tx=0).ready entity=mgr.14674 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:19:54.176 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.176+0000 7fa7ceffd700 1 -- 192.168.123.102:0/1357429524 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7fa7c805b440 con 0x7fa7d8101ec0
2026-03-10T10:19:54.188 INFO:tasks.workunit.client.0.vm02.stdout:1/686: creat d4/da/d1a/d47/d78/fdc x:0 0 0
2026-03-10T10:19:54.216 INFO:tasks.workunit.client.0.vm02.stdout:3/655: dread d1/d6/d8e/f96 [0,4194304] 0
2026-03-10T10:19:54.229 INFO:tasks.workunit.client.0.vm02.stdout:5/783: dread d1/db/d11/d13/d28/da7/dd9/fe6 [0,4194304] 0
2026-03-10T10:19:54.236 INFO:tasks.workunit.client.0.vm02.stdout:5/784: dread d1/f12 [0,4194304] 0
2026-03-10T10:19:54.278 INFO:tasks.workunit.client.0.vm02.stdout:1/687: fdatasync d4/d1b/f5d 0
2026-03-10T10:19:54.278 INFO:tasks.workunit.client.0.vm02.stdout:3/656: unlink d1/d8/d21/d7d/fb3 0
2026-03-10T10:19:54.278 INFO:tasks.workunit.client.0.vm02.stdout:7/653: creat d1/dc/d16/d28/fca x:0 0 0
2026-03-10T10:19:54.278 INFO:tasks.workunit.client.0.vm02.stdout:0/693: getdents d9/d18/d1a/d22/d24/d8e/d9b/daa 0
2026-03-10T10:19:54.278 INFO:tasks.workunit.client.0.vm02.stdout:6/625: symlink d0/d8/d29/d2f/d50/d7e/db2/dbb/lc9 0
2026-03-10T10:19:54.278 INFO:tasks.workunit.client.1.vm05.stdout:6/640: rmdir dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d 39
2026-03-10T10:19:54.278 INFO:tasks.workunit.client.1.vm05.stdout:2/580: unlink db/d28/f35 0
2026-03-10T10:19:54.278 INFO:tasks.workunit.client.1.vm05.stdout:3/648: dread dd/d39/d5f/fa2 [0,4194304] 0
2026-03-10T10:19:54.278 INFO:tasks.workunit.client.1.vm05.stdout:0/637: creat d1/d2/d39/d6e/fdd x:0 0 0
2026-03-10T10:19:54.278 INFO:tasks.workunit.client.1.vm05.stdout:7/656: getdents d5/dd 0
2026-03-10T10:19:54.278 INFO:tasks.workunit.client.1.vm05.stdout:8/596: mknod d7/d14/d24/d3f/d4f/cbc 0
2026-03-10T10:19:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:54 vm02.local ceph-mon[50200]: Reconfiguring prometheus.vm02 (dependencies changed)...
2026-03-10T10:19:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:54 vm02.local ceph-mon[50200]: from='client.24471 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:19:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:54 vm02.local ceph-mon[50200]: Reconfiguring daemon prometheus.vm02 on vm02
2026-03-10T10:19:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:54 vm02.local ceph-mon[50200]: pgmap v8: 65 pgs: 65 active+clean; 2.5 GiB data, 8.8 GiB used, 111 GiB / 120 GiB avail; 22 MiB/s rd, 57 MiB/s wr, 145 op/s
2026-03-10T10:19:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:54 vm02.local ceph-mon[50200]: from='client.14698 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:19:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:54 vm02.local ceph-mon[50200]: mgrmap e25: vm05.coparq(active, since 10s), standbys: vm02.zmavgl
2026-03-10T10:19:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:54 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mgr metadata", "who": "vm02.zmavgl", "id": "vm02.zmavgl"}]: dispatch
2026-03-10T10:19:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:54 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/1865691008' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:19:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:54 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/1026868304' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-10T10:19:54.286 INFO:tasks.workunit.client.0.vm02.stdout:8/629: mkdir d1/d1c/d24/dad/dbe 0
2026-03-10T10:19:54.292 INFO:tasks.workunit.client.1.vm05.stdout:5/656: symlink da/db/d28/d97/le0 0
2026-03-10T10:19:54.297 INFO:tasks.workunit.client.0.vm02.stdout:1/688: dread d4/da/f12 [0,4194304] 0
2026-03-10T10:19:54.299 INFO:tasks.workunit.client.1.vm05.stdout:4/518: dread d1/d31/f13 [0,4194304] 0
2026-03-10T10:19:54.303 INFO:tasks.workunit.client.1.vm05.stdout:9/580: link d0/d1/d13/d26/l6a d0/d1/d13/d26/lc5 0
2026-03-10T10:19:54.305 INFO:tasks.workunit.client.0.vm02.stdout:3/657: symlink d1/d8/d86/db1/lda 0
2026-03-10T10:19:54.308 INFO:tasks.workunit.client.0.vm02.stdout:7/654: rename d1/dc/d16/d28/la0 to d1/d1b/d49/lcb 0
2026-03-10T10:19:54.348 INFO:tasks.workunit.client.0.vm02.stdout:9/622: mknod da/d3c/d4c/d38/d7c/cc7 0
2026-03-10T10:19:54.348 INFO:tasks.workunit.client.0.vm02.stdout:0/694: chown d9/d34/d3d/f69 4969 1
2026-03-10T10:19:54.348 INFO:tasks.workunit.client.0.vm02.stdout:3/658: creat d1/d8/d86/fdb x:0 0 0
2026-03-10T10:19:54.348 INFO:tasks.workunit.client.0.vm02.stdout:3/659: readlink d1/l27 0
2026-03-10T10:19:54.348 INFO:tasks.workunit.client.0.vm02.stdout:9/623: fdatasync da/d3c/d4c/f49 0
2026-03-10T10:19:54.348 INFO:tasks.workunit.client.1.vm05.stdout:6/641: read - dd/d36/d3f/fbe zero size
2026-03-10T10:19:54.348 INFO:tasks.workunit.client.1.vm05.stdout:6/642: readlink dd/d1b/l54 0
2026-03-10T10:19:54.348 INFO:tasks.workunit.client.1.vm05.stdout:3/649: unlink dd/d39/fb6 0
2026-03-10T10:19:54.348 INFO:tasks.workunit.client.1.vm05.stdout:3/650: dread dd/d20/d56/d5e/dab/d9c/fdc [0,4194304] 0
2026-03-10T10:19:54.348 INFO:tasks.workunit.client.1.vm05.stdout:0/638: truncate d1/d2/d9/d31/d13/d15/f62 4437250 0
2026-03-10T10:19:54.349 INFO:tasks.workunit.client.1.vm05.stdout:7/657: truncate d5/d1d/d20/d3b/fba 702213 0
2026-03-10T10:19:54.349 INFO:tasks.workunit.client.1.vm05.stdout:7/658: dread d5/f22 [0,4194304] 0
2026-03-10T10:19:54.349 INFO:tasks.workunit.client.1.vm05.stdout:9/581: rmdir d0/d1/d16 39
2026-03-10T10:19:54.349 INFO:tasks.workunit.client.1.vm05.stdout:6/643: mkdir dd/d36/d3f/d12/d58/dcf 0
2026-03-10T10:19:54.349 INFO:tasks.workunit.client.1.vm05.stdout:5/657: read da/db/f1e [2025914,98280] 0
2026-03-10T10:19:54.351 INFO:tasks.workunit.client.0.vm02.stdout:3/660: mknod d1/d6/d8b/cdc 0
2026-03-10T10:19:54.354 INFO:tasks.workunit.client.1.vm05.stdout:3/651: creat dd/d15/d24/d2c/d6d/da7/dbb/fe5 x:0 0 0
2026-03-10T10:19:54.355 INFO:tasks.workunit.client.1.vm05.stdout:0/639: dread - d1/d2/d9/d31/d12/d20/f81 zero size
2026-03-10T10:19:54.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.356+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1357429524 --> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fa7d819c4c0 con 0x7fa7c4071f00
2026-03-10T10:19:54.358 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.357+0000 7fa7ceffd700 1 -- 192.168.123.102:0/1357429524 <== mgr.14674 v2:192.168.123.105:6828/1021252581 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+318 (secure 0 0 0) 0x7fa7d819c4c0 con 0x7fa7c4071f00
2026-03-10T10:19:54.358 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T10:19:54.358 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-10T10:19:54.358 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true,
2026-03-10T10:19:54.358 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-10T10:19:54.358 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [],
2026-03-10T10:19:54.358 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "1/23 daemons upgraded",
2026-03-10T10:19:54.358 INFO:teuthology.orchestra.run.vm02.stdout: "message": "",
2026-03-10T10:19:54.358 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false
2026-03-10T10:19:54.358 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T10:19:54.360 INFO:tasks.workunit.client.0.vm02.stdout:9/624: creat da/d3c/d4c/d38/d7c/fc8 x:0 0 0
2026-03-10T10:19:54.362 INFO:tasks.workunit.client.1.vm05.stdout:8/597: rename d7/d14/d3a/c8c to d7/d14/d62/d90/cbd 0
2026-03-10T10:19:54.362 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.362+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1357429524 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7fa7c4071f00 msgr2=0x7fa7c40743c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:19:54.403 INFO:tasks.workunit.client.1.vm05.stdout:8/598: dwrite d7/d14/d15/da7/faf [0,4194304] 0
2026-03-10T10:19:54.403 INFO:tasks.workunit.client.1.vm05.stdout:8/599: truncate d7/d2f/fb4 489918 0
2026-03-10T10:19:54.403 INFO:tasks.workunit.client.1.vm05.stdout:4/519: mknod d1/d31/ca8 0
2026-03-10T10:19:54.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.362+0000 7fa7dfd28700 1 --2- 192.168.123.102:0/1357429524 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7fa7c4071f00 0x7fa7c40743c0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fa7d4006010 tx=0x7fa7d400b540 comp rx=0 tx=0).stop
2026-03-10T10:19:54.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.362+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1357429524 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7d8101ec0 msgr2=0x7fa7d8196ab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:19:54.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.362+0000 7fa7dfd28700 1 --2- 192.168.123.102:0/1357429524 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7d8101ec0 0x7fa7d8196ab0 secure :-1 s=READY pgs=327 cs=0 l=1 rev1=1 crypto rx=0x7fa7c800d8d0 tx=0x7fa7c800dbe0 comp rx=0 tx=0).stop
2026-03-10T10:19:54.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.362+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1357429524 shutdown_connections
2026-03-10T10:19:54.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.362+0000 7fa7dfd28700 1 --2- 192.168.123.102:0/1357429524 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7fa7c4071f00 0x7fa7c40743c0 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:54.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.362+0000 7fa7dfd28700 1 --2- 192.168.123.102:0/1357429524 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa7d8100cc0 0x7fa7d8196570 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:54.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.362+0000 7fa7dfd28700 1 --2- 192.168.123.102:0/1357429524 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa7d8101ec0 0x7fa7d8196ab0 unknown :-1 s=CLOSED pgs=327 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:54.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.362+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1357429524 >> 192.168.123.102:0/1357429524 conn(0x7fa7d80fc240 msgr2=0x7fa7d81050f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:19:54.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.362+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1357429524 shutdown_connections
2026-03-10T10:19:54.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.362+0000 7fa7dfd28700 1 -- 192.168.123.102:0/1357429524 wait complete.
2026-03-10T10:19:54.404 INFO:tasks.workunit.client.0.vm02.stdout:1/689: rmdir d4/da/d27/d38/d80/ddb 0
2026-03-10T10:19:54.404 INFO:tasks.workunit.client.0.vm02.stdout:1/690: fsync d4/dc3/fd8 0
2026-03-10T10:19:54.404 INFO:tasks.workunit.client.0.vm02.stdout:3/661: rename d1/d8/d86/f9b to d1/d8/d21/d7d/fdd 0
2026-03-10T10:19:54.404 INFO:tasks.workunit.client.0.vm02.stdout:3/662: write d1/d20/d52/f6f [3415607,77342] 0
2026-03-10T10:19:54.404 INFO:tasks.workunit.client.0.vm02.stdout:9/625: fdatasync da/d3c/d4c/d38/f84 0
2026-03-10T10:19:54.404 INFO:tasks.workunit.client.0.vm02.stdout:6/626: getdents d0/d8/d29/d6d 0
2026-03-10T10:19:54.413 INFO:tasks.workunit.client.1.vm05.stdout:2/581: creat db/d12/fb5 x:0 0 0
2026-03-10T10:19:54.430 INFO:tasks.workunit.client.0.vm02.stdout:6/627: truncate d0/d87/f90 362195 0
2026-03-10T10:19:54.432 INFO:tasks.workunit.client.0.vm02.stdout:3/663: rename d1/d6/d8e/c98 to d1/d20/db2/dcb/cde 0
2026-03-10T10:19:54.432 INFO:tasks.workunit.client.1.vm05.stdout:7/659: mknod d5/cd1 0
2026-03-10T10:19:54.433 INFO:tasks.workunit.client.1.vm05.stdout:1/722: link d4/d3d/c93 d4/d37/d4e/cd4 0
2026-03-10T10:19:54.434 INFO:tasks.workunit.client.1.vm05.stdout:1/723: write d4/d20/dbe/fc8 [896133,20729] 0
2026-03-10T10:19:54.434 INFO:tasks.workunit.client.1.vm05.stdout:6/644: creat dd/d36/d3f/dbd/fd0 x:0 0 0
2026-03-10T10:19:54.439 INFO:tasks.workunit.client.0.vm02.stdout:6/628: fdatasync d0/d8/d9/f84 0
2026-03-10T10:19:54.480 INFO:tasks.workunit.client.0.vm02.stdout:6/629: rename d0/d8/d29/d6d/d96/lb0 to d0/d8/d9/d7a/dc0/lca 0
2026-03-10T10:19:54.480 INFO:tasks.workunit.client.0.vm02.stdout:6/630: readlink d0/d8/d9/d7a/l88 0
2026-03-10T10:19:54.480 INFO:tasks.workunit.client.0.vm02.stdout:6/631: readlink d0/d8/d8c/l5b 0
2026-03-10T10:19:54.480 INFO:tasks.workunit.client.1.vm05.stdout:3/652: truncate f2 748604 0
2026-03-10T10:19:54.480 INFO:tasks.workunit.client.1.vm05.stdout:3/653: stat dd/d15/c43 0
2026-03-10T10:19:54.480 INFO:tasks.workunit.client.1.vm05.stdout:4/520: dread d1/d31/dc/f2a [0,4194304] 0
2026-03-10T10:19:54.480 INFO:tasks.workunit.client.1.vm05.stdout:0/640: symlink d1/d2/d39/d3d/d9f/lde 0
2026-03-10T10:19:54.480 INFO:tasks.workunit.client.1.vm05.stdout:2/582: mknod db/d2d/cb6 0
2026-03-10T10:19:54.480 INFO:tasks.workunit.client.1.vm05.stdout:0/641: creat d1/d2/d9/d31/d13/da2/dab/dce/fdf x:0 0 0
2026-03-10T10:19:54.480 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.472+0000 7ff20909e700 1 -- 192.168.123.102:0/3335308497 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff204072b20 msgr2=0x7ff204072f40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:19:54.480 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.472+0000 7ff20909e700 1 --2- 192.168.123.102:0/3335308497 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff204072b20 0x7ff204072f40 secure :-1 s=READY pgs=328 cs=0 l=1 rev1=1 crypto rx=0x7ff1f4007780 tx=0x7ff1f400c050 comp rx=0 tx=0).stop
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.473+0000 7ff20909e700 1 -- 192.168.123.102:0/3335308497 shutdown_connections
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.473+0000 7ff20909e700 1 --2- 192.168.123.102:0/3335308497 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff204075a10 0x7ff204077ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.473+0000 7ff20909e700 1 --2- 192.168.123.102:0/3335308497 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff204072b20 0x7ff204072f40 unknown :-1 s=CLOSED pgs=328 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.473+0000 7ff20909e700 1 -- 192.168.123.102:0/3335308497 >> 192.168.123.102:0/3335308497 conn(0x7ff20406daa0 msgr2=0x7ff20406ff00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.473+0000 7ff20909e700 1 -- 192.168.123.102:0/3335308497 shutdown_connections
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.473+0000 7ff20909e700 1 -- 192.168.123.102:0/3335308497 wait complete.
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.473+0000 7ff20909e700 1 Processor -- start
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.473+0000 7ff20909e700 1 -- start start
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.473+0000 7ff20909e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff204075a10 0x7ff204083130 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.474+0000 7ff20909e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff204083670 0x7ff2041b3120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.474+0000 7ff20909e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff204083b80 con 0x7ff204075a10
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.474+0000 7ff20909e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff204083cf0 con 0x7ff204083670
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.474+0000 7ff2037fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff204083670 0x7ff2041b3120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.474+0000 7ff2037fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff204083670 0x7ff2041b3120 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:40652/0 (socket says 192.168.123.102:40652)
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.474+0000 7ff2037fe700 1 -- 192.168.123.102:0/2494167367 learned_addr learned my addr 192.168.123.102:0/2494167367 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.474+0000 7ff203fff700 1 --2- 192.168.123.102:0/2494167367 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff204075a10 0x7ff204083130 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.474+0000 7ff203fff700 1 -- 192.168.123.102:0/2494167367 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff204083670 msgr2=0x7ff2041b3120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.474+0000 7ff203fff700 1 --2- 192.168.123.102:0/2494167367 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff204083670 0x7ff2041b3120 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.474+0000 7ff203fff700 1 -- 192.168.123.102:0/2494167367 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff1f4007430 con 0x7ff204075a10
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.475+0000 7ff203fff700 1 --2- 192.168.123.102:0/2494167367 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff204075a10 0x7ff204083130 secure :-1 s=READY pgs=329 cs=0 l=1 rev1=1 crypto rx=0x7ff1f4000c00 tx=0x7ff1f400a300 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.475+0000 7ff2017fa700 1 -- 192.168.123.102:0/2494167367 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff1f400f040 con 0x7ff204075a10
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.475+0000 7ff20909e700 1 -- 192.168.123.102:0/2494167367 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff2041b36c0 con 0x7ff204075a10
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.475+0000 7ff20909e700 1 -- 192.168.123.102:0/2494167367 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff2041b3b80 con 0x7ff204075a10
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.476+0000 7ff2017fa700 1 -- 192.168.123.102:0/2494167367 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff1f4007ca0 con 0x7ff204075a10
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.476+0000 7ff2017fa700 1 -- 192.168.123.102:0/2494167367 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff1f4003890 con 0x7ff204075a10
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.477+0000 7ff2017fa700 1 -- 192.168.123.102:0/2494167367 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 25) v1 ==== 95238+0+0 (secure 0 0 0) 0x7ff1f40039f0 con 0x7ff204075a10
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.478+0000 7ff2017fa700 1 --2- 192.168.123.102:0/2494167367 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7ff1ec071e80 0x7ff1ec074340 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.478+0000 7ff2017fa700 1 -- 192.168.123.102:0/2494167367 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 5534+0+0 (secure 0 0 0) 0x7ff1f40979a0 con 0x7ff204075a10
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.478+0000 7ff2037fe700 1 --2- 192.168.123.102:0/2494167367 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7ff1ec071e80 0x7ff1ec074340 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.479+0000 7ff2037fe700 1 --2- 192.168.123.102:0/2494167367 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7ff1ec071e80 0x7ff1ec074340 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7ff1fc0060b0 tx=0x7ff1fc009040 comp rx=0 tx=0).ready entity=mgr.14674 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:19:54.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.479+0000 7ff20909e700 1 -- 192.168.123.102:0/2494167367 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff1f0005320 con 0x7ff204075a10
2026-03-10T10:19:54.481 INFO:tasks.workunit.client.1.vm05.stdout:3/654: dread dd/d15/d1f/f2a [0,4194304] 0
2026-03-10T10:19:54.483 INFO:tasks.workunit.client.1.vm05.stdout:7/660: fsync d5/d1d/d20/d91/fbd 0
2026-03-10T10:19:54.483 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.483+0000 7ff2017fa700 1 -- 192.168.123.102:0/2494167367 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7ff1f4067e60 con 0x7ff204075a10 2026-03-10T10:19:54.488 INFO:tasks.workunit.client.1.vm05.stdout:6/645: mknod dd/d36/d3f/cd1 0 2026-03-10T10:19:54.492 INFO:tasks.workunit.client.1.vm05.stdout:0/642: fdatasync d1/d2/d9/d31/d12/d20/f71 0 2026-03-10T10:19:54.506 INFO:tasks.workunit.client.1.vm05.stdout:3/655: creat dd/dbe/fe6 x:0 0 0 2026-03-10T10:19:54.507 INFO:tasks.workunit.client.1.vm05.stdout:7/661: mkdir d5/d1d/d20/d35/dd2 0 2026-03-10T10:19:54.507 INFO:tasks.workunit.client.1.vm05.stdout:9/582: dread d0/d1/d16/f40 [0,4194304] 0 2026-03-10T10:19:54.507 INFO:tasks.workunit.client.1.vm05.stdout:4/521: dread d1/d31/d4b/f8a [0,4194304] 0 2026-03-10T10:19:54.528 INFO:tasks.workunit.client.1.vm05.stdout:6/646: dread dd/d36/d3f/d12/d44/d2a/f84 [0,4194304] 0 2026-03-10T10:19:54.566 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:54 vm05.local ceph-mon[59051]: Reconfiguring prometheus.vm02 (dependencies changed)... 
2026-03-10T10:19:54.566 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:54 vm05.local ceph-mon[59051]: from='client.24471 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:54.566 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:54 vm05.local ceph-mon[59051]: Reconfiguring daemon prometheus.vm02 on vm02 2026-03-10T10:19:54.566 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:54 vm05.local ceph-mon[59051]: pgmap v8: 65 pgs: 65 active+clean; 2.5 GiB data, 8.8 GiB used, 111 GiB / 120 GiB avail; 22 MiB/s rd, 57 MiB/s wr, 145 op/s 2026-03-10T10:19:54.566 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:54 vm05.local ceph-mon[59051]: from='client.14698 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:54.566 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:54 vm05.local ceph-mon[59051]: mgrmap e25: vm05.coparq(active, since 10s), standbys: vm02.zmavgl 2026-03-10T10:19:54.566 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:54 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mgr metadata", "who": "vm02.zmavgl", "id": "vm02.zmavgl"}]: dispatch 2026-03-10T10:19:54.566 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:54 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/1865691008' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:19:54.566 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:54 vm05.local ceph-mon[59051]: from='client.? 
192.168.123.102:0/1026868304' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:19:54.581 INFO:tasks.workunit.client.1.vm05.stdout:0/643: fdatasync d1/d2/d9/d31/fa8 0 2026-03-10T10:19:54.582 INFO:tasks.workunit.client.1.vm05.stdout:3/656: mknod dd/d15/d1f/ce7 0 2026-03-10T10:19:54.582 INFO:tasks.workunit.client.1.vm05.stdout:4/522: mkdir d1/d64/da9 0 2026-03-10T10:19:54.582 INFO:tasks.workunit.client.1.vm05.stdout:7/662: rename d5/d1d/d29/d3e/d8c/d82/ccf to d5/d1d/d29/d3e/cd3 0 2026-03-10T10:19:54.592 INFO:tasks.workunit.client.1.vm05.stdout:6/647: unlink dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/f6b 0 2026-03-10T10:19:54.592 INFO:tasks.workunit.client.1.vm05.stdout:3/657: fsync dd/d15/d24/d8e/dac/fd7 0 2026-03-10T10:19:54.594 INFO:tasks.workunit.client.1.vm05.stdout:6/648: symlink dd/d36/d3f/d12/d44/d63/ld2 0 2026-03-10T10:19:54.594 INFO:tasks.workunit.client.1.vm05.stdout:0/644: link d1/d2/d9/d31/d13/d15/d4e/cdc d1/d2/d39/d6e/dc0/ce0 0 2026-03-10T10:19:54.595 INFO:tasks.workunit.client.1.vm05.stdout:7/663: getdents d5/d1d/d20 0 2026-03-10T10:19:54.596 INFO:tasks.workunit.client.1.vm05.stdout:6/649: creat dd/d36/d3f/d12/d44/d2a/d7f/fd3 x:0 0 0 2026-03-10T10:19:54.598 INFO:tasks.workunit.client.1.vm05.stdout:7/664: rmdir d5/d1d/d20/d2d/d5d/d7a 39 2026-03-10T10:19:54.608 INFO:tasks.workunit.client.1.vm05.stdout:3/658: link dd/d20/d56/d5e/dab/lb1 dd/d15/d24/d2c/d6d/da7/le8 0 2026-03-10T10:19:54.682 INFO:tasks.workunit.client.0.vm02.stdout:2/661: write d0/d8c/faf [36146,27814] 0 2026-03-10T10:19:54.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.698+0000 7ff20909e700 1 -- 192.168.123.102:0/2494167367 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7ff1f0005190 con 0x7ff204075a10 2026-03-10T10:19:54.701 INFO:tasks.workunit.client.0.vm02.stdout:5/785: dwrite d1/fdd [0,4194304] 0 2026-03-10T10:19:54.703 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.701+0000 7ff2017fa700 1 -- 192.168.123.102:0/2494167367 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7ff1f4020090 con 0x7ff204075a10 2026-03-10T10:19:54.703 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK 2026-03-10T10:19:54.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.709+0000 7ff20909e700 1 -- 192.168.123.102:0/2494167367 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7ff1ec071e80 msgr2=0x7ff1ec074340 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:54.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.709+0000 7ff20909e700 1 --2- 192.168.123.102:0/2494167367 >> [v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7ff1ec071e80 0x7ff1ec074340 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7ff1fc0060b0 tx=0x7ff1fc009040 comp rx=0 tx=0).stop 2026-03-10T10:19:54.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.709+0000 7ff20909e700 1 -- 192.168.123.102:0/2494167367 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff204075a10 msgr2=0x7ff204083130 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:19:54.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.709+0000 7ff20909e700 1 --2- 192.168.123.102:0/2494167367 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff204075a10 0x7ff204083130 secure :-1 s=READY pgs=329 cs=0 l=1 rev1=1 crypto rx=0x7ff1f4000c00 tx=0x7ff1f400a300 comp rx=0 tx=0).stop 2026-03-10T10:19:54.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.709+0000 7ff20909e700 1 -- 192.168.123.102:0/2494167367 shutdown_connections 2026-03-10T10:19:54.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.709+0000 7ff20909e700 1 --2- 192.168.123.102:0/2494167367 >> 
[v2:192.168.123.105:6828/1021252581,v1:192.168.123.105:6829/1021252581] conn(0x7ff1ec071e80 0x7ff1ec074340 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:54.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.709+0000 7ff20909e700 1 --2- 192.168.123.102:0/2494167367 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff204075a10 0x7ff204083130 unknown :-1 s=CLOSED pgs=329 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:54.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.709+0000 7ff20909e700 1 --2- 192.168.123.102:0/2494167367 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff204083670 0x7ff2041b3120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:19:54.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.709+0000 7ff20909e700 1 -- 192.168.123.102:0/2494167367 >> 192.168.123.102:0/2494167367 conn(0x7ff20406daa0 msgr2=0x7ff20406ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:19:54.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.709+0000 7ff20909e700 1 -- 192.168.123.102:0/2494167367 shutdown_connections 2026-03-10T10:19:54.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:19:54.709+0000 7ff20909e700 1 -- 192.168.123.102:0/2494167367 wait complete. 
2026-03-10T10:19:54.718 INFO:tasks.workunit.client.0.vm02.stdout:2/662: mkdir d0/d1a/d49/d5e/d65/dc4/de0 0 2026-03-10T10:19:54.720 INFO:tasks.workunit.client.0.vm02.stdout:4/768: write d1/d75/ddd/f7d [2528795,61553] 0 2026-03-10T10:19:54.740 INFO:tasks.workunit.client.0.vm02.stdout:4/769: dread d1/d41/d5e/d78/d7f/fb9 [0,4194304] 0 2026-03-10T10:19:54.740 INFO:tasks.workunit.client.0.vm02.stdout:4/770: chown d1/d41/d7e/ld2 19 1 2026-03-10T10:19:54.744 INFO:tasks.workunit.client.0.vm02.stdout:4/771: dwrite d1/d75/ddd/f7d [0,4194304] 0 2026-03-10T10:19:54.745 INFO:tasks.workunit.client.0.vm02.stdout:4/772: chown d1/d52/lb0 334168 1 2026-03-10T10:19:54.746 INFO:tasks.workunit.client.0.vm02.stdout:4/773: chown d1/d52/d53/dda/df7 0 1 2026-03-10T10:19:54.766 INFO:tasks.workunit.client.0.vm02.stdout:8/630: dwrite d1/d1c/d23/d25/f64 [0,4194304] 0 2026-03-10T10:19:54.767 INFO:tasks.workunit.client.0.vm02.stdout:8/631: readlink d1/d1c/d43/laa 0 2026-03-10T10:19:54.776 INFO:tasks.workunit.client.0.vm02.stdout:7/655: write d1/dc/d10/d38/f9b [375681,72117] 0 2026-03-10T10:19:54.784 INFO:tasks.workunit.client.0.vm02.stdout:0/695: dwrite d9/d34/d3d/f41 [0,4194304] 0 2026-03-10T10:19:54.821 INFO:tasks.workunit.client.0.vm02.stdout:1/691: dwrite d4/da/d27/d38/f3b [4194304,4194304] 0 2026-03-10T10:19:54.862 INFO:tasks.workunit.client.1.vm05.stdout:5/658: write da/db/d26/d70/fcc [212968,68489] 0 2026-03-10T10:19:54.868 INFO:tasks.workunit.client.0.vm02.stdout:2/663: creat d0/d1a/d49/d5e/d65/dc4/fe1 x:0 0 0 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.1.vm05.stdout:5/659: truncate da/f41 1185525 0 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm02.stdout:9/626: write da/d3c/d53/f6a [3501709,61569] 0 2026-03-10T10:19:54.890 INFO:tasks.workunit.client.1.vm05.stdout:8/600: dwrite d7/f11 [4194304,4194304] 0 2026-03-10T10:19:54.903 INFO:tasks.workunit.client.1.vm05.stdout:8/601: creat d7/d14/d15/da7/fbe x:0 0 0 2026-03-10T10:19:54.915 
INFO:tasks.workunit.client.0.vm02.stdout:0/696: creat d9/d34/d3d/d7b/fde x:0 0 0 2026-03-10T10:19:54.923 INFO:tasks.workunit.client.1.vm05.stdout:8/602: creat d7/d14/d24/d3f/d4f/fbf x:0 0 0 2026-03-10T10:19:54.930 INFO:tasks.workunit.client.0.vm02.stdout:3/664: dwrite d1/d8/d86/f87 [0,4194304] 0 2026-03-10T10:19:54.931 INFO:tasks.workunit.client.1.vm05.stdout:1/724: write d4/d37/d4e/f62 [5119491,83883] 0 2026-03-10T10:19:54.942 INFO:tasks.workunit.client.0.vm02.stdout:6/632: dwrite d0/d8/d29/d2f/d50/d98/fb1 [0,4194304] 0 2026-03-10T10:19:54.942 INFO:tasks.workunit.client.1.vm05.stdout:5/660: rmdir da/db/d28/d8a/dca 0 2026-03-10T10:19:54.959 INFO:tasks.workunit.client.1.vm05.stdout:1/725: unlink d4/d3d/f4c 0 2026-03-10T10:19:54.960 INFO:tasks.workunit.client.1.vm05.stdout:2/583: dwrite db/f26 [0,4194304] 0 2026-03-10T10:19:54.967 INFO:tasks.workunit.client.1.vm05.stdout:5/661: creat da/d63/fe1 x:0 0 0 2026-03-10T10:19:54.967 INFO:tasks.workunit.client.1.vm05.stdout:5/662: readlink da/db/d26/d5c/l19 0 2026-03-10T10:19:54.968 INFO:tasks.workunit.client.1.vm05.stdout:5/663: chown da/db/d26/f7e 49234 1 2026-03-10T10:19:54.989 INFO:tasks.workunit.client.1.vm05.stdout:8/603: creat d7/d14/d62/d90/dac/fc0 x:0 0 0 2026-03-10T10:19:54.997 INFO:tasks.workunit.client.1.vm05.stdout:1/726: rmdir d4/d79/d83 39 2026-03-10T10:19:55.009 INFO:tasks.workunit.client.1.vm05.stdout:2/584: dread - db/d28/d4f/f75 zero size 2026-03-10T10:19:55.037 INFO:tasks.workunit.client.0.vm02.stdout:3/665: truncate d1/d6/f42 1370124 0 2026-03-10T10:19:55.041 INFO:tasks.workunit.client.1.vm05.stdout:2/585: symlink db/d28/d4f/d8b/d9a/lb7 0 2026-03-10T10:19:55.053 INFO:tasks.workunit.client.1.vm05.stdout:2/586: creat db/d61/fb8 x:0 0 0 2026-03-10T10:19:55.063 INFO:tasks.workunit.client.0.vm02.stdout:9/627: creat da/d3c/d4c/d38/da6/fc9 x:0 0 0 2026-03-10T10:19:55.080 INFO:tasks.workunit.client.0.vm02.stdout:8/632: creat d1/d1c/d43/d6a/da8/fbf x:0 0 0 2026-03-10T10:19:55.080 
INFO:tasks.workunit.client.0.vm02.stdout:0/697: symlink d9/d18/d1a/d22/d24/d80/dcc/ldf 0 2026-03-10T10:19:55.097 INFO:tasks.workunit.client.0.vm02.stdout:5/786: getdents d1/db/d11/d16/d48/dcf 0 2026-03-10T10:19:55.114 INFO:tasks.workunit.client.0.vm02.stdout:8/633: truncate d1/d1c/d24/d71/fa2 483655 0 2026-03-10T10:19:55.121 INFO:tasks.workunit.client.1.vm05.stdout:1/727: creat d4/d3d/d6e/fd5 x:0 0 0 2026-03-10T10:19:55.131 INFO:tasks.workunit.client.0.vm02.stdout:8/634: creat d1/d1c/d43/d5b/d88/dac/d83/d9f/fc0 x:0 0 0 2026-03-10T10:19:55.134 INFO:tasks.workunit.client.1.vm05.stdout:7/665: mknod d5/d1d/d20/d2d/cd4 0 2026-03-10T10:19:55.154 INFO:tasks.workunit.client.0.vm02.stdout:5/787: link d1/lf d1/db/d11/d13/d28/da7/dd9/l10d 0 2026-03-10T10:19:55.159 INFO:tasks.workunit.client.1.vm05.stdout:4/523: write d1/f19 [719280,130992] 0 2026-03-10T10:19:55.162 INFO:tasks.workunit.client.1.vm05.stdout:9/583: dwrite d0/d1/d57/f91 [0,4194304] 0 2026-03-10T10:19:55.164 INFO:tasks.workunit.client.0.vm02.stdout:5/788: link d1/db/d11/d16/d79/d85/fa0 d1/db/d11/d16/d48/dcf/f10e 0 2026-03-10T10:19:55.175 INFO:tasks.workunit.client.1.vm05.stdout:6/650: truncate dd/d36/d3f/f41 2015464 0 2026-03-10T10:19:55.176 INFO:tasks.workunit.client.1.vm05.stdout:0/645: dwrite d1/d2/d9/d31/d12/f1e [0,4194304] 0 2026-03-10T10:19:55.197 INFO:tasks.workunit.client.1.vm05.stdout:2/587: truncate db/f4a 4518712 0 2026-03-10T10:19:55.211 INFO:tasks.workunit.client.1.vm05.stdout:5/664: getdents da/db/d26/d5c 0 2026-03-10T10:19:55.212 INFO:tasks.workunit.client.0.vm02.stdout:5/789: rmdir d1/db/d11/d13/d28/d37/dce/dfb 0 2026-03-10T10:19:55.220 INFO:tasks.workunit.client.1.vm05.stdout:1/728: dread d4/d39/fb2 [0,4194304] 0 2026-03-10T10:19:55.229 INFO:tasks.workunit.client.1.vm05.stdout:4/524: mkdir d1/d31/dc/d40/d45/daa 0 2026-03-10T10:19:55.229 INFO:tasks.workunit.client.0.vm02.stdout:5/790: truncate d1/f12 3510211 0 2026-03-10T10:19:55.229 INFO:tasks.workunit.client.1.vm05.stdout:4/525: chown 
d1/d31/dc/d40/d45/f50 28 1 2026-03-10T10:19:55.233 INFO:tasks.workunit.client.1.vm05.stdout:9/584: mkdir d0/df/d11/dc6 0 2026-03-10T10:19:55.254 INFO:tasks.workunit.client.1.vm05.stdout:5/665: fsync da/db/d26/d35/f7d 0 2026-03-10T10:19:55.260 INFO:tasks.workunit.client.1.vm05.stdout:9/585: fdatasync d0/d1/d13/de/d21/f53 0 2026-03-10T10:19:55.260 INFO:tasks.workunit.client.1.vm05.stdout:9/586: fdatasync d0/df/d74/f9e 0 2026-03-10T10:19:55.262 INFO:tasks.workunit.client.1.vm05.stdout:0/646: unlink d1/d2/d9/d31/d13/d15/f62 0 2026-03-10T10:19:55.272 INFO:tasks.workunit.client.1.vm05.stdout:5/666: creat da/d9a/dc7/db4/fe2 x:0 0 0 2026-03-10T10:19:55.272 INFO:tasks.workunit.client.1.vm05.stdout:4/526: symlink d1/d31/dc/lab 0 2026-03-10T10:19:55.272 INFO:tasks.workunit.client.1.vm05.stdout:6/651: link dd/d36/d3f/lcc dd/d36/d3f/d12/d44/d2a/d3d/d48/ld4 0 2026-03-10T10:19:55.273 INFO:tasks.workunit.client.1.vm05.stdout:4/527: chown d1/d31/dc/d40/d45/f50 90 1 2026-03-10T10:19:55.273 INFO:tasks.workunit.client.1.vm05.stdout:6/652: write dd/d36/d3f/dbd/fd0 [1077,97258] 0 2026-03-10T10:19:55.275 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:55 vm02.local ceph-mon[50200]: from='client.14702 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:55.275 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:55 vm02.local ceph-mon[50200]: from='client.14712 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:55.275 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:55 vm02.local ceph-mon[50200]: pgmap v9: 65 pgs: 65 active+clean; 2.7 GiB data, 9.4 GiB used, 111 GiB / 120 GiB avail; 35 MiB/s rd, 86 MiB/s wr, 217 op/s 2026-03-10T10:19:55.275 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:55 vm02.local ceph-mon[50200]: from='client.? 
192.168.123.102:0/2494167367' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:19:55.275 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:55 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:55.275 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:55 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:55.275 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:55 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:19:55.275 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:55 vm02.local ceph-mon[50200]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:19:55.275 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:55 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:55.275 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:55 vm02.local ceph-mon[50200]: Upgrade: Need to upgrade myself (mgr.vm05.coparq) 2026-03-10T10:19:55.275 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:55 vm02.local ceph-mon[50200]: Upgrade: Need to upgrade myself (mgr.vm05.coparq) 2026-03-10T10:19:55.276 INFO:tasks.workunit.client.1.vm05.stdout:6/653: dread - dd/d36/d3f/d12/d44/d30/f9f zero size 2026-03-10T10:19:55.277 INFO:tasks.workunit.client.1.vm05.stdout:6/654: dread - dd/d36/d3f/d12/d44/d63/fc5 zero size 2026-03-10T10:19:55.287 INFO:tasks.workunit.client.1.vm05.stdout:0/647: creat d1/d2/d9/d31/d13/da2/dab/fe1 x:0 0 0 2026-03-10T10:19:55.288 INFO:tasks.workunit.client.1.vm05.stdout:0/648: chown d1/d2/d9/d31/d13/f9c 16 1 2026-03-10T10:19:55.299 
INFO:tasks.workunit.client.1.vm05.stdout:5/667: mkdir da/db/d28/d8a/de3 0 2026-03-10T10:19:55.300 INFO:tasks.workunit.client.1.vm05.stdout:4/528: mkdir d1/d31/d76/dac 0 2026-03-10T10:19:55.302 INFO:tasks.workunit.client.1.vm05.stdout:9/587: mknod d0/d1/d16/d6e/daf/db7/cc7 0 2026-03-10T10:19:55.302 INFO:tasks.workunit.client.1.vm05.stdout:9/588: dread - d0/df/f97 zero size 2026-03-10T10:19:55.302 INFO:tasks.workunit.client.1.vm05.stdout:0/649: write d1/d2/d9/d31/f8c [4938724,22568] 0 2026-03-10T10:19:55.305 INFO:tasks.workunit.client.1.vm05.stdout:6/655: sync 2026-03-10T10:19:55.306 INFO:tasks.workunit.client.1.vm05.stdout:0/650: dread - d1/d2/d9/d31/d13/da2/dab/fe1 zero size 2026-03-10T10:19:55.307 INFO:tasks.workunit.client.1.vm05.stdout:6/656: chown dd/d36/d3f/d12/d44/d30/d4a/c60 3891470 1 2026-03-10T10:19:55.316 INFO:tasks.workunit.client.1.vm05.stdout:1/729: getdents d4/df/d76 0 2026-03-10T10:19:55.316 INFO:tasks.workunit.client.1.vm05.stdout:4/529: fsync d1/d64/f8f 0 2026-03-10T10:19:55.317 INFO:tasks.workunit.client.1.vm05.stdout:5/668: chown da/l5f 10723806 1 2026-03-10T10:19:55.318 INFO:tasks.workunit.client.1.vm05.stdout:5/669: read da/d9a/fae [70834,52455] 0 2026-03-10T10:19:55.361 INFO:tasks.workunit.client.1.vm05.stdout:5/670: creat da/db/d26/d35/d38/fe4 x:0 0 0 2026-03-10T10:19:55.367 INFO:tasks.workunit.client.1.vm05.stdout:1/730: getdents d4 0 2026-03-10T10:19:55.368 INFO:tasks.workunit.client.0.vm02.stdout:7/656: write d1/dc/d16/d28/f4e [822266,77702] 0 2026-03-10T10:19:55.369 INFO:tasks.workunit.client.0.vm02.stdout:1/692: write d4/da/f28 [480738,66163] 0 2026-03-10T10:19:55.372 INFO:tasks.workunit.client.1.vm05.stdout:1/731: creat d4/d39/d3e/db1/db8/fd6 x:0 0 0 2026-03-10T10:19:55.374 INFO:tasks.workunit.client.0.vm02.stdout:7/657: creat d1/d1b/d8f/dad/d7e/dba/fcc x:0 0 0 2026-03-10T10:19:55.376 INFO:tasks.workunit.client.1.vm05.stdout:1/732: creat d4/d39/d3e/fd7 x:0 0 0 2026-03-10T10:19:55.377 INFO:tasks.workunit.client.0.vm02.stdout:7/658: fsync 
d1/dc/d60/f53 0 2026-03-10T10:19:55.378 INFO:tasks.workunit.client.1.vm05.stdout:1/733: mknod d4/d37/d4e/d82/cd8 0 2026-03-10T10:19:55.380 INFO:tasks.workunit.client.1.vm05.stdout:1/734: dread - d4/d79/d83/dc5/dcb/fd0 zero size 2026-03-10T10:19:55.390 INFO:tasks.workunit.client.0.vm02.stdout:0/698: creat d9/d18/d1a/d22/d24/d80/fe0 x:0 0 0 2026-03-10T10:19:55.391 INFO:tasks.workunit.client.0.vm02.stdout:0/699: chown d9/d34/d3d/d7b/fdb 40556 1 2026-03-10T10:19:55.401 INFO:tasks.workunit.client.0.vm02.stdout:0/700: rmdir d9/d34/d3d/d65/d89 39 2026-03-10T10:19:55.413 INFO:tasks.workunit.client.0.vm02.stdout:6/633: dwrite d0/f20 [0,4194304] 0 2026-03-10T10:19:55.420 INFO:tasks.workunit.client.0.vm02.stdout:6/634: creat d0/d8/d9/fcb x:0 0 0 2026-03-10T10:19:55.425 INFO:tasks.workunit.client.0.vm02.stdout:6/635: fdatasync d0/d8/d29/d2f/d4b/da5/d6f/fa2 0 2026-03-10T10:19:55.439 INFO:tasks.workunit.client.0.vm02.stdout:9/628: write da/d3c/d4c/f49 [3833215,80885] 0 2026-03-10T10:19:55.440 INFO:tasks.workunit.client.0.vm02.stdout:9/629: chown da/d3c/d4c/d56/fac 1955879 1 2026-03-10T10:19:55.453 INFO:tasks.workunit.client.0.vm02.stdout:8/635: unlink d1/d1c/d43/fa4 0 2026-03-10T10:19:55.470 INFO:tasks.workunit.client.0.vm02.stdout:8/636: fdatasync d1/f6d 0 2026-03-10T10:19:55.470 INFO:tasks.workunit.client.0.vm02.stdout:8/637: link d1/f91 d1/d1c/d43/d6a/da8/d56/db5/fc1 0 2026-03-10T10:19:55.474 INFO:tasks.workunit.client.0.vm02.stdout:7/659: sync 2026-03-10T10:19:55.474 INFO:tasks.workunit.client.0.vm02.stdout:9/630: sync 2026-03-10T10:19:55.480 INFO:tasks.workunit.client.0.vm02.stdout:7/660: mknod d1/dc/d55/ccd 0 2026-03-10T10:19:55.488 INFO:tasks.workunit.client.1.vm05.stdout:7/666: link d5/dd/f12 d5/d1d/fd5 0 2026-03-10T10:19:55.492 INFO:tasks.workunit.client.1.vm05.stdout:7/667: mkdir d5/d1d/d20/d2d/d80/dd6 0 2026-03-10T10:19:55.494 INFO:tasks.workunit.client.1.vm05.stdout:7/668: read d5/d1d/d29/d3e/d8c/d7f/f93 [442724,53456] 0 2026-03-10T10:19:55.495 
INFO:tasks.workunit.client.1.vm05.stdout:7/669: dread - d5/d1d/d20/d2d/d68/fc4 zero size 2026-03-10T10:19:55.496 INFO:tasks.workunit.client.0.vm02.stdout:2/664: rename d0/f4d to d0/fe2 0 2026-03-10T10:19:55.512 INFO:tasks.workunit.client.0.vm02.stdout:4/774: unlink d1/d41/d5e/d78/d7f/ldb 0 2026-03-10T10:19:55.519 INFO:tasks.workunit.client.1.vm05.stdout:7/670: getdents d5/d1d/d20/d35 0 2026-03-10T10:19:55.521 INFO:tasks.workunit.client.0.vm02.stdout:4/775: creat d1/d41/d5e/d78/d44/dd0/ffe x:0 0 0 2026-03-10T10:19:55.521 INFO:tasks.workunit.client.0.vm02.stdout:4/776: chown d1/d41/d5e/d78/d7f/fb9 2460151 1 2026-03-10T10:19:55.535 INFO:tasks.workunit.client.0.vm02.stdout:4/777: dread d1/d52/fbd [0,4194304] 0 2026-03-10T10:19:55.535 INFO:tasks.workunit.client.1.vm05.stdout:2/588: write db/f23 [1244182,87738] 0 2026-03-10T10:19:55.535 INFO:tasks.workunit.client.0.vm02.stdout:5/791: write d1/db/d11/d7b/ff4 [4494494,50602] 0 2026-03-10T10:19:55.536 INFO:tasks.workunit.client.1.vm05.stdout:2/589: stat db/d61/f99 0 2026-03-10T10:19:55.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:55 vm05.local ceph-mon[59051]: from='client.14702 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:55.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:55 vm05.local ceph-mon[59051]: from='client.14712 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:19:55.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:55 vm05.local ceph-mon[59051]: pgmap v9: 65 pgs: 65 active+clean; 2.7 GiB data, 9.4 GiB used, 111 GiB / 120 GiB avail; 35 MiB/s rd, 86 MiB/s wr, 217 op/s 2026-03-10T10:19:55.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:55 vm05.local ceph-mon[59051]: from='client.? 
192.168.123.102:0/2494167367' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:19:55.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:55 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:55.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:55 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:55.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:55 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:19:55.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:55 vm05.local ceph-mon[59051]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:19:55.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:55 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:55.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:55 vm05.local ceph-mon[59051]: Upgrade: Need to upgrade myself (mgr.vm05.coparq) 2026-03-10T10:19:55.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:55 vm05.local ceph-mon[59051]: Upgrade: Need to upgrade myself (mgr.vm05.coparq) 2026-03-10T10:19:55.541 INFO:tasks.workunit.client.0.vm02.stdout:4/778: mkdir d1/d52/dff 0 2026-03-10T10:19:55.542 INFO:tasks.workunit.client.0.vm02.stdout:5/792: truncate d1/db/d11/d13/d28/f91 3383881 0 2026-03-10T10:19:55.547 INFO:tasks.workunit.client.0.vm02.stdout:4/779: truncate d1/d10/db/f15 3155901 0 2026-03-10T10:19:55.548 INFO:tasks.workunit.client.0.vm02.stdout:4/780: stat d1/d41/d5e/d78/d37/c8b 0 2026-03-10T10:19:55.552 INFO:tasks.workunit.client.1.vm05.stdout:4/530: unlink d1/d31/f1b 0 
2026-03-10T10:19:55.552 INFO:tasks.workunit.client.0.vm02.stdout:5/793: mknod d1/db/d11/d84/d40/d4f/c10f 0
2026-03-10T10:19:55.557 INFO:tasks.workunit.client.1.vm05.stdout:4/531: creat d1/d70/fad x:0 0 0
2026-03-10T10:19:55.559 INFO:tasks.workunit.client.1.vm05.stdout:2/590: getdents db/d1c/d40/d62 0
2026-03-10T10:19:55.559 INFO:tasks.workunit.client.0.vm02.stdout:5/794: creat d1/db/d11/d13/dc9/f110 x:0 0 0
2026-03-10T10:19:55.560 INFO:tasks.workunit.client.0.vm02.stdout:5/795: symlink d1/db/d11/d13/d28/d37/d3d/l111 0
2026-03-10T10:19:55.567 INFO:tasks.workunit.client.1.vm05.stdout:4/532: dwrite d1/d31/d4b/f9e [0,4194304] 0
2026-03-10T10:19:55.575 INFO:tasks.workunit.client.1.vm05.stdout:2/591: symlink db/d28/d4f/d59/da4/d6c/lb9 0
2026-03-10T10:19:55.576 INFO:tasks.workunit.client.1.vm05.stdout:4/533: mkdir d1/d64/da9/dae 0
2026-03-10T10:19:55.580 INFO:tasks.workunit.client.1.vm05.stdout:2/592: mknod db/d61/cba 0
2026-03-10T10:19:55.581 INFO:tasks.workunit.client.0.vm02.stdout:5/796: rename d1/db/d11/d62/fe8 to d1/db/d11/d16/d48/dcf/f112 0
2026-03-10T10:19:55.585 INFO:tasks.workunit.client.1.vm05.stdout:2/593: chown db/d12/d74/ca0 18834494 1
2026-03-10T10:19:55.587 INFO:tasks.workunit.client.1.vm05.stdout:9/589: write d0/d1/d13/d26/f4f [236637,30317] 0
2026-03-10T10:19:55.587 INFO:tasks.workunit.client.1.vm05.stdout:2/594: fdatasync db/d12/f31 0
2026-03-10T10:19:55.588 INFO:tasks.workunit.client.0.vm02.stdout:5/797: truncate d1/db/d11/d13/d28/d37/ff6 2322989 0
2026-03-10T10:19:55.589 INFO:tasks.workunit.client.1.vm05.stdout:6/657: write dd/d36/d3f/d12/d44/d30/f9f [262852,41948] 0
2026-03-10T10:19:55.590 INFO:tasks.workunit.client.0.vm02.stdout:3/666: truncate d1/d8/d21/f5e 4257059 0
2026-03-10T10:19:55.590 INFO:tasks.workunit.client.1.vm05.stdout:2/595: dread - db/d2d/d5e/fae zero size
2026-03-10T10:19:55.591 INFO:tasks.workunit.client.1.vm05.stdout:0/651: dwrite d1/d2/d9/d31/d12/d20/f81 [0,4194304] 0
2026-03-10T10:19:55.591 INFO:tasks.workunit.client.0.vm02.stdout:5/798: fsync d1/f7f 0
2026-03-10T10:19:55.597 INFO:tasks.workunit.client.1.vm05.stdout:2/596: dwrite db/d28/d4f/d59/da4/faf [0,4194304] 0
2026-03-10T10:19:55.612 INFO:tasks.workunit.client.0.vm02.stdout:3/667: sync
2026-03-10T10:19:55.613 INFO:tasks.workunit.client.0.vm02.stdout:3/668: dread - d1/d20/d52/fd7 zero size
2026-03-10T10:19:55.624 INFO:tasks.workunit.client.1.vm05.stdout:9/590: creat d0/df/fc8 x:0 0 0
2026-03-10T10:19:55.628 INFO:tasks.workunit.client.1.vm05.stdout:3/659: rename dd/c34 to dd/d20/d56/d5e/dab/ce9 0
2026-03-10T10:19:55.633 INFO:tasks.workunit.client.0.vm02.stdout:4/781: rmdir d1/d41 39
2026-03-10T10:19:55.633 INFO:tasks.workunit.client.1.vm05.stdout:5/671: dwrite da/db/d26/d5c/f2c [0,4194304] 0
2026-03-10T10:19:55.637 INFO:tasks.workunit.client.0.vm02.stdout:1/693: dwrite d4/f26 [0,4194304] 0
2026-03-10T10:19:55.638 INFO:tasks.workunit.client.0.vm02.stdout:3/669: creat d1/d8/d21/d7d/fdf x:0 0 0
2026-03-10T10:19:55.641 INFO:tasks.workunit.client.1.vm05.stdout:3/660: sync
2026-03-10T10:19:55.652 INFO:tasks.workunit.client.1.vm05.stdout:6/658: truncate dd/d36/d3f/d12/d44/d2a/d3d/d3e/f73 843773 0
2026-03-10T10:19:55.654 INFO:tasks.workunit.client.1.vm05.stdout:6/659: write dd/d36/d3f/d12/d44/d30/d4a/fc9 [24044,45597] 0
2026-03-10T10:19:55.654 INFO:tasks.workunit.client.0.vm02.stdout:5/799: rmdir d1/db/d11/d16/d48/dcf/df5 0
2026-03-10T10:19:55.655 INFO:tasks.workunit.client.1.vm05.stdout:6/660: fdatasync dd/d36/d3f/d12/d44/d2a/d3d/d48/fb2 0
2026-03-10T10:19:55.658 INFO:tasks.workunit.client.0.vm02.stdout:1/694: dread - d4/da/d27/d38/fad zero size
2026-03-10T10:19:55.659 INFO:tasks.workunit.client.1.vm05.stdout:1/735: truncate d4/df/f73 481696 0
2026-03-10T10:19:55.659 INFO:tasks.workunit.client.0.vm02.stdout:0/701: write d9/d34/d3d/fae [1386734,110582] 0
2026-03-10T10:19:55.660 INFO:tasks.workunit.client.0.vm02.stdout:0/702: chown d9/d34/d3d/d67/fc3 3 1
2026-03-10T10:19:55.664 INFO:tasks.workunit.client.1.vm05.stdout:2/597: link db/f19 db/d28/d4f/d59/da4/d81/fbb 0
2026-03-10T10:19:55.669 INFO:tasks.workunit.client.0.vm02.stdout:6/636: dwrite d0/f43 [4194304,4194304] 0
2026-03-10T10:19:55.676 INFO:tasks.workunit.client.1.vm05.stdout:3/661: chown dd/d15/d4c/c8d 1 1
2026-03-10T10:19:55.681 INFO:tasks.workunit.client.0.vm02.stdout:8/638: write d1/d2/f36 [3875206,48532] 0
2026-03-10T10:19:55.681 INFO:tasks.workunit.client.0.vm02.stdout:9/631: write da/d3c/d4c/d2c/f93 [601579,102915] 0
2026-03-10T10:19:55.681 INFO:tasks.workunit.client.0.vm02.stdout:2/665: chown d0/fe2 50293041 1
2026-03-10T10:19:55.681 INFO:tasks.workunit.client.0.vm02.stdout:7/661: write d1/dc/f69 [5003125,28355] 0
2026-03-10T10:19:55.683 INFO:tasks.workunit.client.0.vm02.stdout:5/800: mkdir d1/db/d11/d13/d28/d37/d3d/da3/d113 0
2026-03-10T10:19:55.685 INFO:tasks.workunit.client.1.vm05.stdout:9/591: creat d0/d1/d13/d55/fc9 x:0 0 0
2026-03-10T10:19:55.692 INFO:tasks.workunit.client.1.vm05.stdout:7/671: dwrite d5/d17/f19 [0,4194304] 0
2026-03-10T10:19:55.702 INFO:tasks.workunit.client.1.vm05.stdout:6/661: dread f2 [0,4194304] 0
2026-03-10T10:19:55.719 INFO:tasks.workunit.client.0.vm02.stdout:6/637: mknod d0/d8/d29/d52/ccc 0
2026-03-10T10:19:55.719 INFO:tasks.workunit.client.0.vm02.stdout:6/638: dread - d0/db9/fc5 zero size
2026-03-10T10:19:55.738 INFO:tasks.workunit.client.0.vm02.stdout:0/703: dread d9/d18/d1a/d22/d24/d80/d49/f8b [0,4194304] 0
2026-03-10T10:19:55.740 INFO:tasks.workunit.client.0.vm02.stdout:9/632: creat da/d3c/d4c/d38/d82/d8c/fca x:0 0 0
2026-03-10T10:19:55.745 INFO:tasks.workunit.client.0.vm02.stdout:2/666: mknod d0/d10/da6/ce3 0
2026-03-10T10:19:55.757 INFO:tasks.workunit.client.0.vm02.stdout:2/667: sync
2026-03-10T10:19:55.758 INFO:tasks.workunit.client.0.vm02.stdout:2/668: read - d0/d1a/fb4 zero size
2026-03-10T10:19:55.759 INFO:tasks.workunit.client.0.vm02.stdout:8/639: rmdir d1/d1c/d43/d5b/d88/dac 39
2026-03-10T10:19:55.765 INFO:tasks.workunit.client.1.vm05.stdout:9/592: creat d0/d1/d16/fca x:0 0 0
2026-03-10T10:19:55.769 INFO:tasks.workunit.client.0.vm02.stdout:7/662: unlink d1/d1b/d8f/dad/d7e/dba/c63 0
2026-03-10T10:19:55.771 INFO:tasks.workunit.client.1.vm05.stdout:6/662: rmdir dd/d36/d3f/d12/d96 39
2026-03-10T10:19:55.772 INFO:tasks.workunit.client.1.vm05.stdout:4/534: dwrite d1/d31/f13 [0,4194304] 0
2026-03-10T10:19:55.774 INFO:tasks.workunit.client.1.vm05.stdout:6/663: write dd/d36/d3f/d12/d44/d30/d4a/fc9 [768080,105721] 0
2026-03-10T10:19:55.790 INFO:tasks.workunit.client.1.vm05.stdout:0/652: truncate d1/d2/d9/d31/d12/d20/f81 2981496 0
2026-03-10T10:19:55.801 INFO:tasks.workunit.client.0.vm02.stdout:4/782: dwrite d1/d10/db/f15 [0,4194304] 0
2026-03-10T10:19:55.807 INFO:tasks.workunit.client.0.vm02.stdout:5/801: mkdir d1/db/d11/d84/d40/d4f/d5f/d6d/d71/d114 0
2026-03-10T10:19:55.812 INFO:tasks.workunit.client.0.vm02.stdout:3/670: write d1/d8/d44/f56 [25141,10878] 0
2026-03-10T10:19:55.812 INFO:tasks.workunit.client.1.vm05.stdout:8/604: rename d7/d14/d24/d3f/d6a/f6c to d7/d14/d24/fc1 0
2026-03-10T10:19:55.813 INFO:tasks.workunit.client.1.vm05.stdout:8/605: fsync d7/d14/d24/d3f/d6a/d8a/d96/fa2 0
2026-03-10T10:19:55.820 INFO:tasks.workunit.client.1.vm05.stdout:1/736: write d4/d39/d3e/da0/fa1 [446594,75786] 0
2026-03-10T10:19:55.822 INFO:tasks.workunit.client.1.vm05.stdout:2/598: mkdir db/d28/dbc 0
2026-03-10T10:19:55.836 INFO:tasks.workunit.client.1.vm05.stdout:9/593: creat d0/df/d11/fcb x:0 0 0
2026-03-10T10:19:55.836 INFO:tasks.workunit.client.1.vm05.stdout:9/594: chown d0/d1/d16/c9a 133699602 1
2026-03-10T10:19:55.842 INFO:tasks.workunit.client.1.vm05.stdout:3/662: dwrite dd/d15/f6a [0,4194304] 0
2026-03-10T10:19:55.849 INFO:tasks.workunit.client.1.vm05.stdout:4/535: fdatasync d1/d31/dc/f1f 0
2026-03-10T10:19:55.856 INFO:tasks.workunit.client.1.vm05.stdout:6/664: mkdir dd/d36/d3f/dbd/dd5 0
2026-03-10T10:19:55.857 INFO:tasks.workunit.client.1.vm05.stdout:6/665: stat dd/d36/d3f/d12/d44/d2a/f84 0
2026-03-10T10:19:55.868 INFO:tasks.workunit.client.1.vm05.stdout:0/653: symlink d1/d2/d9/le2 0
2026-03-10T10:19:55.872 INFO:tasks.workunit.client.1.vm05.stdout:5/672: rename da/db/d26/f4c to da/db/d26/d35/db3/fe5 0
2026-03-10T10:19:55.872 INFO:tasks.workunit.client.1.vm05.stdout:8/606: mknod d7/d14/cc2 0
2026-03-10T10:19:55.872 INFO:tasks.workunit.client.1.vm05.stdout:8/607: chown d7/d14/d3a/d49/d65/db8 79 1
2026-03-10T10:19:55.873 INFO:tasks.workunit.client.1.vm05.stdout:1/737: stat d4/d39/d3e/f3f 0
2026-03-10T10:19:55.873 INFO:tasks.workunit.client.1.vm05.stdout:6/666: sync
2026-03-10T10:19:55.873 INFO:tasks.workunit.client.1.vm05.stdout:8/608: chown d7/d14/d24/f9c 2057393142 1
2026-03-10T10:19:55.874 INFO:tasks.workunit.client.1.vm05.stdout:2/599: creat db/d28/d4f/fbd x:0 0 0
2026-03-10T10:19:55.877 INFO:tasks.workunit.client.1.vm05.stdout:4/536: creat d1/d31/d76/faf x:0 0 0
2026-03-10T10:19:55.880 INFO:tasks.workunit.client.1.vm05.stdout:0/654: fdatasync d1/d2/d9/d31/d13/f9c 0
2026-03-10T10:19:55.897 INFO:tasks.workunit.client.1.vm05.stdout:7/672: rename d5/l1b to d5/d17/d85/ld7 0
2026-03-10T10:19:55.898 INFO:tasks.workunit.client.1.vm05.stdout:6/667: rename dd/d36/d3f/d12/d44/d30/d4a to dd/d36/d3f/d12/d44/d30/d4a/d6e/dc3/dd6 22
2026-03-10T10:19:55.898 INFO:tasks.workunit.client.1.vm05.stdout:5/673: fdatasync da/d9a/dc7/db4/fc0 0
2026-03-10T10:19:55.898 INFO:tasks.workunit.client.1.vm05.stdout:7/673: chown d5/d1d/d29/d3e/d8c/d82/cc7 23082875 1
2026-03-10T10:19:55.901 INFO:tasks.workunit.client.1.vm05.stdout:8/609: creat d7/d14/d24/d3f/d6a/d8a/d96/fc3 x:0 0 0
2026-03-10T10:19:55.902 INFO:tasks.workunit.client.1.vm05.stdout:9/595: mkdir d0/d1/dcc 0
2026-03-10T10:19:55.906 INFO:tasks.workunit.client.1.vm05.stdout:2/600: creat db/d28/d4f/d59/fbe x:0 0 0
2026-03-10T10:19:55.913 INFO:tasks.workunit.client.1.vm05.stdout:3/663: link dd/d39/d66/fad dd/d20/d9e/dc0/ddd/fea 0
2026-03-10T10:19:55.915 INFO:tasks.workunit.client.1.vm05.stdout:2/601: dread db/d28/d4f/d59/da4/d81/fbb [0,4194304] 0
2026-03-10T10:19:55.917 INFO:tasks.workunit.client.1.vm05.stdout:1/738: dwrite d4/df/d1c/d53/daa/fa9 [0,4194304] 0
2026-03-10T10:19:55.921 INFO:tasks.workunit.client.1.vm05.stdout:4/537: mkdir d1/d3/d65/db0 0
2026-03-10T10:19:55.921 INFO:tasks.workunit.client.1.vm05.stdout:0/655: rmdir d1/d2/d9/d31/d13/da2/dab/dce 39
2026-03-10T10:19:55.921 INFO:tasks.workunit.client.0.vm02.stdout:6/639: creat d0/d8/d29/d2f/fcd x:0 0 0
2026-03-10T10:19:55.922 INFO:tasks.workunit.client.0.vm02.stdout:0/704: chown d9/d34/d3d/d65/d89/dd3/da7/db7 1 1
2026-03-10T10:19:55.925 INFO:tasks.workunit.client.0.vm02.stdout:9/633: fsync da/d3c/d4c/d2c/d34/d35/fc1 0
2026-03-10T10:19:55.927 INFO:tasks.workunit.client.1.vm05.stdout:6/668: symlink dd/d36/d3f/d12/d44/d2a/d3d/d3e/ld7 0
2026-03-10T10:19:55.937 INFO:tasks.workunit.client.1.vm05.stdout:3/664: fsync dd/d15/d24/d2c/d3b/f40 0
2026-03-10T10:19:55.980 INFO:tasks.workunit.client.0.vm02.stdout:0/705: mkdir d9/d34/d3d/d65/d89/dd3/da7/db7/de1 0
2026-03-10T10:19:55.981 INFO:tasks.workunit.client.0.vm02.stdout:8/640: dwrite d1/d1c/d23/d25/f8c [0,4194304] 0
2026-03-10T10:19:55.987 INFO:tasks.workunit.client.0.vm02.stdout:9/634: creat da/d3c/d4c/d38/d82/fcb x:0 0 0
2026-03-10T10:19:55.988 INFO:tasks.workunit.client.1.vm05.stdout:8/610: write d7/d14/d62/f9d [684947,58507] 0
2026-03-10T10:19:55.993 INFO:tasks.workunit.client.1.vm05.stdout:4/538: mknod d1/d31/d4b/cb1 0
2026-03-10T10:19:55.996 INFO:tasks.workunit.client.0.vm02.stdout:2/669: rename d0/d10/c2b to d0/d1a/d49/d5e/d8a/ce4 0
2026-03-10T10:19:55.996 INFO:tasks.workunit.client.0.vm02.stdout:2/670: readlink d0/d1a/d24/dc6/l9c 0
2026-03-10T10:19:55.998 INFO:tasks.workunit.client.0.vm02.stdout:2/671: chown d0/d1a/d49/d5e/c7b 3 1
2026-03-10T10:19:55.999 INFO:tasks.workunit.client.0.vm02.stdout:5/802: dwrite d1/db/d11/d62/d67/ff7 [0,4194304] 0
2026-03-10T10:19:56.002 INFO:tasks.workunit.client.0.vm02.stdout:5/803: chown d1/db/d11/d84/d95/cea 950 1
2026-03-10T10:19:56.007 INFO:tasks.workunit.client.0.vm02.stdout:3/671: dwrite d1/d20/d52/f92 [0,4194304] 0
2026-03-10T10:19:56.017 INFO:tasks.workunit.client.0.vm02.stdout:6/640: mkdir d0/d8/d29/dce 0
2026-03-10T10:19:56.017 INFO:tasks.workunit.client.0.vm02.stdout:1/695: creat d4/da/d1a/d47/d88/fdd x:0 0 0
2026-03-10T10:19:56.018 INFO:tasks.workunit.client.0.vm02.stdout:7/663: dread d1/d1b/d49/f4b [4194304,4194304] 0
2026-03-10T10:19:56.025 INFO:tasks.workunit.client.1.vm05.stdout:2/602: mknod db/d28/dbc/cbf 0
2026-03-10T10:19:56.026 INFO:tasks.workunit.client.0.vm02.stdout:9/635: rmdir da/d3c/d4c/db1 39
2026-03-10T10:19:56.026 INFO:tasks.workunit.client.1.vm05.stdout:2/603: chown db/d28/d4f/d59/d94/d95/fa5 0 1
2026-03-10T10:19:56.029 INFO:tasks.workunit.client.1.vm05.stdout:4/539: fdatasync d1/d31/dc/d40/d45/f57 0
2026-03-10T10:19:56.031 INFO:tasks.workunit.client.0.vm02.stdout:5/804: unlink d1/db/fd3 0
2026-03-10T10:19:56.032 INFO:tasks.workunit.client.0.vm02.stdout:5/805: stat d1/db/d11/cad 0
2026-03-10T10:19:56.033 INFO:tasks.workunit.client.0.vm02.stdout:5/806: readlink d1/db/d11/d84/d40/d4f/d5f/d6d/l86 0
2026-03-10T10:19:56.034 INFO:tasks.workunit.client.1.vm05.stdout:0/656: truncate d1/d2/d9/f40 3357508 0
2026-03-10T10:19:56.035 INFO:tasks.workunit.client.1.vm05.stdout:0/657: chown d1/d2/d9/d31/d13/f73 399328853 1
2026-03-10T10:19:56.035 INFO:tasks.workunit.client.1.vm05.stdout:0/658: write d1/d2/d39/fd1 [81078,60514] 0
2026-03-10T10:19:56.037 INFO:tasks.workunit.client.0.vm02.stdout:8/641: dread d1/d1c/f33 [0,4194304] 0
2026-03-10T10:19:56.041 INFO:tasks.workunit.client.1.vm05.stdout:1/739: write d4/f46 [596402,99899] 0
2026-03-10T10:19:56.042 INFO:tasks.workunit.client.1.vm05.stdout:9/596: link d0/d1/fa7 d0/d1/d16/d6e/daf/fcd 0
2026-03-10T10:19:56.046 INFO:tasks.workunit.client.1.vm05.stdout:9/597: sync
2026-03-10T10:19:56.050 INFO:tasks.workunit.client.0.vm02.stdout:6/641: creat d0/d8/d29/d2f/d4b/da5/fcf x:0 0 0
2026-03-10T10:19:56.051 INFO:tasks.workunit.client.0.vm02.stdout:7/664: dread d1/dc/d55/f8b [0,4194304] 0
2026-03-10T10:19:56.051 INFO:tasks.workunit.client.0.vm02.stdout:6/642: write d0/d8/d29/d52/fbc [2102358,79244] 0
2026-03-10T10:19:56.057 INFO:tasks.workunit.client.1.vm05.stdout:3/665: rename dd/d15/d24/d2c/d3b/f40 to dd/d20/d56/d5e/feb 0
2026-03-10T10:19:56.064 INFO:tasks.workunit.client.0.vm02.stdout:2/672: truncate d0/d1a/d49/d5e/f60 4420293 0
2026-03-10T10:19:56.064 INFO:tasks.workunit.client.1.vm05.stdout:5/674: dwrite da/f2e [0,4194304] 0
2026-03-10T10:19:56.065 INFO:tasks.workunit.client.0.vm02.stdout:2/673: chown d0/d8c/fb1 162922 1
2026-03-10T10:19:56.067 INFO:tasks.workunit.client.1.vm05.stdout:5/675: read da/db/f29 [1525232,15358] 0
2026-03-10T10:19:56.070 INFO:tasks.workunit.client.0.vm02.stdout:5/807: rmdir d1/db/d11/d84 39
2026-03-10T10:19:56.071 INFO:tasks.workunit.client.0.vm02.stdout:3/672: getdents d1/d8/d86/db1/dbc 0
2026-03-10T10:19:56.072 INFO:tasks.workunit.client.1.vm05.stdout:2/604: symlink db/d61/d67/lc0 0
2026-03-10T10:19:56.072 INFO:tasks.workunit.client.1.vm05.stdout:4/540: mknod d1/d31/dc/d40/d45/cb2 0
2026-03-10T10:19:56.073 INFO:tasks.workunit.client.1.vm05.stdout:0/659: chown d1/d2/d9/d31/d54/l5 51 1
2026-03-10T10:19:56.079 INFO:tasks.workunit.client.1.vm05.stdout:1/740: dread d4/d39/d3e/f3f [0,4194304] 0
2026-03-10T10:19:56.086 INFO:tasks.workunit.client.0.vm02.stdout:4/783: dwrite d1/d41/d5e/d78/d1a/f8c [0,4194304] 0
2026-03-10T10:19:56.103 INFO:tasks.workunit.client.0.vm02.stdout:6/643: rmdir d0/d8/d29/d2f/d50/d98 39
2026-03-10T10:19:56.103 INFO:tasks.workunit.client.0.vm02.stdout:9/636: mkdir da/d3c/d4c/d2c/d34/dc2/dcc 0
2026-03-10T10:19:56.103 INFO:tasks.workunit.client.0.vm02.stdout:9/637: chown da/f5c 0 1
2026-03-10T10:19:56.104 INFO:tasks.workunit.client.0.vm02.stdout:6/644: dwrite d0/d8/d29/d2f/d4b/da5/fcf [0,4194304] 0
2026-03-10T10:19:56.104 INFO:tasks.workunit.client.0.vm02.stdout:1/696: creat d4/da/fde x:0 0 0
2026-03-10T10:19:56.104 INFO:tasks.workunit.client.0.vm02.stdout:4/784: mkdir d1/d41/d5e/d78/d7f/d100 0
2026-03-10T10:19:56.105 INFO:tasks.workunit.client.0.vm02.stdout:2/674: rename d0/c41 to d0/d1a/d49/d5e/d8a/ce5 0
2026-03-10T10:19:56.105 INFO:tasks.workunit.client.0.vm02.stdout:4/785: chown d1/d52/d53/c54 3925885 1
2026-03-10T10:19:56.108 INFO:tasks.workunit.client.0.vm02.stdout:8/642: creat d1/d1c/d23/d25/fc2 x:0 0 0
2026-03-10T10:19:56.108 INFO:tasks.workunit.client.0.vm02.stdout:8/643: write d1/d1c/d23/f75 [1787247,48025] 0
2026-03-10T10:19:56.109 INFO:tasks.workunit.client.0.vm02.stdout:7/665: creat d1/dc/d16/d28/d2d/dae/fce x:0 0 0
2026-03-10T10:19:56.112 INFO:tasks.workunit.client.0.vm02.stdout:2/675: mkdir d0/d1a/d49/d5e/d8a/de6 0
2026-03-10T10:19:56.114 INFO:tasks.workunit.client.0.vm02.stdout:2/676: readlink d0/d8c/lbc 0
2026-03-10T10:19:56.114 INFO:tasks.workunit.client.0.vm02.stdout:2/677: chown d0/d10/f6b 1744170986 1
2026-03-10T10:19:56.118 INFO:tasks.workunit.client.0.vm02.stdout:8/644: chown d1/d1c/c1a 93612 1
2026-03-10T10:19:56.121 INFO:tasks.workunit.client.0.vm02.stdout:5/808: getdents d1/d6a 0
2026-03-10T10:19:56.125 INFO:tasks.workunit.client.1.vm05.stdout:3/666: symlink dd/d15/d24/d2c/d6d/da7/dbb/lec 0
2026-03-10T10:19:56.133 INFO:tasks.workunit.client.0.vm02.stdout:8/645: dwrite d1/f40 [0,4194304] 0
2026-03-10T10:19:56.134 INFO:tasks.workunit.client.0.vm02.stdout:8/646: chown d1/d1c/d43/f52 1 1
2026-03-10T10:19:56.134 INFO:tasks.workunit.client.0.vm02.stdout:8/647: chown d1/d1c/f66 13999 1
2026-03-10T10:19:56.134 INFO:tasks.workunit.client.1.vm05.stdout:2/605: chown db/d12/l46 316483 1
2026-03-10T10:19:56.134 INFO:tasks.workunit.client.1.vm05.stdout:3/667: write dd/d15/d1f/dae/fcd [5190774,56187] 0
2026-03-10T10:19:56.134 INFO:tasks.workunit.client.1.vm05.stdout:3/668: write dd/d15/d24/d2c/d6d/da7/dbb/fe5 [293946,33521] 0
2026-03-10T10:19:56.134 INFO:tasks.workunit.client.1.vm05.stdout:6/669: getdents dd/d36/d3f/d12 0
2026-03-10T10:19:56.140 INFO:tasks.workunit.client.0.vm02.stdout:3/673: sync
2026-03-10T10:19:56.141 INFO:tasks.workunit.client.0.vm02.stdout:2/678: sync
2026-03-10T10:19:56.148 INFO:tasks.workunit.client.0.vm02.stdout:4/786: link d1/d10/db/f91 d1/d52/d53/f101 0
2026-03-10T10:19:56.157 INFO:tasks.workunit.client.0.vm02.stdout:6/645: getdents d0/d8/d29/d2f/d50 0
2026-03-10T10:19:56.157 INFO:tasks.workunit.client.1.vm05.stdout:0/660: truncate d1/d2/d9/f1d 2276469 0
2026-03-10T10:19:56.158 INFO:tasks.workunit.client.1.vm05.stdout:7/674: truncate d5/d1d/d29/d3e/d8c/f81 543520 0
2026-03-10T10:19:56.158 INFO:tasks.workunit.client.1.vm05.stdout:7/675: read - d5/d1d/d20/d2d/d68/fc4 zero size
2026-03-10T10:19:56.158 INFO:tasks.workunit.client.0.vm02.stdout:8/648: creat d1/d1c/d24/dad/fc3 x:0 0 0
2026-03-10T10:19:56.158 INFO:tasks.workunit.client.0.vm02.stdout:8/649: read d1/d1c/d23/d25/f8c [1831830,18137] 0
2026-03-10T10:19:56.158 INFO:tasks.workunit.client.0.vm02.stdout:2/679: dread d0/d10/f46 [0,4194304] 0
2026-03-10T10:19:56.158 INFO:tasks.workunit.client.0.vm02.stdout:8/650: read d1/d1c/d43/f7e [777914,97795] 0
2026-03-10T10:19:56.163 INFO:tasks.workunit.client.1.vm05.stdout:2/606: creat db/d28/d4f/d8b/d9a/d9d/fc1 x:0 0 0
2026-03-10T10:19:56.166 INFO:tasks.workunit.client.1.vm05.stdout:3/669: chown f9 147911990 1
2026-03-10T10:19:56.170 INFO:tasks.workunit.client.1.vm05.stdout:3/670: chown dd/d20/d94/fa9 0 1
2026-03-10T10:19:56.170 INFO:tasks.workunit.client.1.vm05.stdout:3/671: chown dd/d20/d56/db3 4808769 1
2026-03-10T10:19:56.170 INFO:tasks.workunit.client.1.vm05.stdout:6/670: dread dd/d36/d3f/d12/d44/daa/fae [0,4194304] 0
2026-03-10T10:19:56.171 INFO:tasks.workunit.client.1.vm05.stdout:3/672: dread dd/d15/d24/d2c/f3c [0,4194304] 0
2026-03-10T10:19:56.177 INFO:tasks.workunit.client.0.vm02.stdout:7/666: creat d1/d1b/fcf x:0 0 0
2026-03-10T10:19:56.177 INFO:tasks.workunit.client.0.vm02.stdout:5/809: creat d1/db/d11/d13/d28/d37/d3d/da3/d113/f115 x:0 0 0
2026-03-10T10:19:56.177 INFO:tasks.workunit.client.0.vm02.stdout:9/638: getdents da/d3c/d4c/db1 0
2026-03-10T10:19:56.178 INFO:tasks.workunit.client.0.vm02.stdout:4/787: mkdir d1/d75/ddd/d102 0
2026-03-10T10:19:56.183 INFO:tasks.workunit.client.1.vm05.stdout:3/673: read dd/d15/d1f/dae/fcd [4887470,61349] 0
2026-03-10T10:19:56.200 INFO:tasks.workunit.client.0.vm02.stdout:6/646: rename d0/d8/d29/d6d/l76 to d0/d8/d29/dce/ld0 0
2026-03-10T10:19:56.201 INFO:tasks.workunit.client.0.vm02.stdout:6/647: readlink d0/d8/d29/d2f/d50/d7e/db2/dbb/lc9 0
2026-03-10T10:19:56.204 INFO:tasks.workunit.client.1.vm05.stdout:0/661: symlink d1/d2/d9/d31/d12/d20/le3 0
2026-03-10T10:19:56.207 INFO:tasks.workunit.client.0.vm02.stdout:0/706: dwrite d9/d34/d3d/d7b/f8a [0,4194304] 0
2026-03-10T10:19:56.214 INFO:tasks.workunit.client.0.vm02.stdout:0/707: stat d9/d18/d1a/d22/d24/d80/d49/f53 0
2026-03-10T10:19:56.237 INFO:tasks.workunit.client.1.vm05.stdout:9/598: rename d0/d1/d13/de/d93/la2 to d0/d1/lce 0
2026-03-10T10:19:56.237 INFO:tasks.workunit.client.1.vm05.stdout:7/676: rmdir d5/d17 39
2026-03-10T10:19:56.237 INFO:tasks.workunit.client.1.vm05.stdout:2/607: mknod db/d28/d4f/d59/da4/d81/cc2 0
2026-03-10T10:19:56.245 INFO:tasks.workunit.client.1.vm05.stdout:2/608: dwrite db/d12/f31 [0,4194304] 0
2026-03-10T10:19:56.246 INFO:tasks.workunit.client.1.vm05.stdout:6/671: symlink dd/d36/d3f/ld8 0
2026-03-10T10:19:56.248 INFO:tasks.workunit.client.1.vm05.stdout:6/672: truncate dd/d36/d3f/d12/d44/d30/f9f 1088263 0
2026-03-10T10:19:56.248 INFO:tasks.workunit.client.1.vm05.stdout:2/609: chown db/d1c/d40/d80/f9e 1696 1
2026-03-10T10:19:56.249 INFO:tasks.workunit.client.1.vm05.stdout:2/610: chown db/d28/d4f/d59/d94/d95/fb4 39 1
2026-03-10T10:19:56.254 INFO:tasks.workunit.client.0.vm02.stdout:3/674: mknod d1/d8/d21/d73/d78/d79/ce0 0
2026-03-10T10:19:56.255 INFO:tasks.workunit.client.0.vm02.stdout:2/680: dread - d0/d1a/d49/d5e/d8a/f98 zero size
2026-03-10T10:19:56.256 INFO:tasks.workunit.client.0.vm02.stdout:7/667: rmdir d1/dc/d10/d38 39
2026-03-10T10:19:56.261 INFO:tasks.workunit.client.1.vm05.stdout:2/611: truncate db/d1c/f9b 1466262 0
2026-03-10T10:19:56.262 INFO:tasks.workunit.client.1.vm05.stdout:0/662: dread d1/d2/d9/d31/d13/d17/f57 [0,4194304] 0
2026-03-10T10:19:56.266 INFO:tasks.workunit.client.1.vm05.stdout:2/612: write db/d28/d4f/fb0 [500650,100329] 0
2026-03-10T10:19:56.270 INFO:tasks.workunit.client.1.vm05.stdout:2/613: dwrite db/f36 [8388608,4194304] 0
2026-03-10T10:19:56.291 INFO:tasks.workunit.client.0.vm02.stdout:8/651: mknod d1/d1c/d43/d5b/d88/dac/d83/d9f/cc4 0
2026-03-10T10:19:56.299 INFO:tasks.workunit.client.0.vm02.stdout:3/675: mknod d1/d8/d86/da2/ce1 0
2026-03-10T10:19:56.311 INFO:tasks.workunit.client.0.vm02.stdout:4/788: fdatasync d1/d75/ddd/fea 0
2026-03-10T10:19:56.345 INFO:tasks.workunit.client.1.vm05.stdout:8/611: truncate d7/f11 5485664 0
2026-03-10T10:19:56.345 INFO:tasks.workunit.client.1.vm05.stdout:8/612: readlink d7/d2f/d57/l60 0
2026-03-10T10:19:56.348 INFO:tasks.workunit.client.1.vm05.stdout:4/541: getdents d1/d31/dc/d40/d45 0
2026-03-10T10:19:56.348 INFO:tasks.workunit.client.1.vm05.stdout:4/542: stat d1/f39 0
2026-03-10T10:19:56.349 INFO:tasks.workunit.client.1.vm05.stdout:4/543: chown d1/d31/f36 19285 1
2026-03-10T10:19:56.366 INFO:tasks.workunit.client.1.vm05.stdout:6/673: getdents dd/d36/d3f/d12/d44/d2a/d3d/d3e 0
2026-03-10T10:19:56.369 INFO:tasks.workunit.client.1.vm05.stdout:2/614: symlink db/d28/d4f/da3/lc3 0
2026-03-10T10:19:56.370 INFO:tasks.workunit.client.0.vm02.stdout:8/652: mknod d1/d1c/d43/d5b/dab/cc5 0
2026-03-10T10:19:56.370 INFO:tasks.workunit.client.0.vm02.stdout:8/653: readlink d1/d1c/d43/d6a/da8/l94 0
2026-03-10T10:19:56.371 INFO:tasks.workunit.client.0.vm02.stdout:8/654: stat d1/d1c/d43/d6a/d7c/la7 0
2026-03-10T10:19:56.374 INFO:tasks.workunit.client.0.vm02.stdout:8/655: dwrite d1/d1c/d23/f75 [0,4194304] 0
2026-03-10T10:19:56.384 INFO:tasks.workunit.client.0.vm02.stdout:1/697: dwrite d4/da/d1a/d22/f23 [0,4194304] 0
2026-03-10T10:19:56.385 INFO:tasks.workunit.client.1.vm05.stdout:6/674: fsync dd/d36/d3f/d12/d59/fb1 0
2026-03-10T10:19:56.392 INFO:tasks.workunit.client.0.vm02.stdout:4/789: rmdir d1/d75/ddd 39
2026-03-10T10:19:56.402 INFO:tasks.workunit.client.1.vm05.stdout:2/615: mknod db/d28/d4f/d8b/d9a/d9d/cc4 0
2026-03-10T10:19:56.406 INFO:tasks.workunit.client.1.vm05.stdout:4/544: symlink d1/d31/d76/dac/lb3 0
2026-03-10T10:19:56.409 INFO:tasks.workunit.client.0.vm02.stdout:8/656: readlink d1/d1c/d43/d6a/d7c/laf 0
2026-03-10T10:19:56.410 INFO:tasks.workunit.client.0.vm02.stdout:8/657: dread - d1/d1c/d43/d6a/da8/fbf zero size
2026-03-10T10:19:56.418 INFO:tasks.workunit.client.0.vm02.stdout:1/698: read d4/d2c/f43 [203508,92629] 0
2026-03-10T10:19:56.422 INFO:tasks.workunit.client.1.vm05.stdout:4/545: symlink d1/d3/lb4 0
2026-03-10T10:19:56.423 INFO:tasks.workunit.client.1.vm05.stdout:4/546: chown d1/d31/d4b/f51 3600106 1
2026-03-10T10:19:56.427 INFO:tasks.workunit.client.1.vm05.stdout:6/675: creat dd/d36/d3f/d12/d44/d2a/fd9 x:0 0 0
2026-03-10T10:19:56.430 INFO:tasks.workunit.client.0.vm02.stdout:4/790: mknod d1/c103 0
2026-03-10T10:19:56.440 INFO:tasks.workunit.client.1.vm05.stdout:6/676: fdatasync dd/d36/d3f/d12/f20 0
2026-03-10T10:19:56.441 INFO:tasks.workunit.client.1.vm05.stdout:6/677: read - dd/d36/d3f/d12/d44/d2a/d7f/fd3 zero size
2026-03-10T10:19:56.442 INFO:tasks.workunit.client.1.vm05.stdout:6/678: dread - dd/d36/d3f/d12/d96/f9a zero size
2026-03-10T10:19:56.442 INFO:tasks.workunit.client.0.vm02.stdout:1/699: mknod d4/da/d1a/d47/d78/cdf 0
2026-03-10T10:19:56.449 INFO:tasks.workunit.client.1.vm05.stdout:6/679: dread dd/d36/d3f/d12/f35 [0,4194304] 0
2026-03-10T10:19:56.450 INFO:tasks.workunit.client.0.vm02.stdout:5/810: getdents d1/db/d11/d84/d40/d4f/d5f 0
2026-03-10T10:19:56.451 INFO:tasks.workunit.client.1.vm05.stdout:6/680: dread dd/d36/d3f/d12/f35 [4194304,4194304] 0
2026-03-10T10:19:56.457 INFO:tasks.workunit.client.1.vm05.stdout:1/741: write d4/d20/f2c [3409027,50759] 0
2026-03-10T10:19:56.464 INFO:tasks.workunit.client.1.vm05.stdout:6/681: chown dd/d36/d3f/d12/d59/l5b 509 1
2026-03-10T10:19:56.473 INFO:tasks.workunit.client.0.vm02.stdout:4/791: rmdir d1/d41/d5e/d78/d1a 39
2026-03-10T10:19:56.476 INFO:tasks.workunit.client.1.vm05.stdout:1/742: creat d4/d39/d3e/fd9 x:0 0 0
2026-03-10T10:19:56.477 INFO:tasks.workunit.client.0.vm02.stdout:1/700: truncate d4/da/d1a/fc8 1008412 0
2026-03-10T10:19:56.477 INFO:tasks.workunit.client.1.vm05.stdout:6/682: rmdir dd/d36/d3f/d12/d44 39
2026-03-10T10:19:56.490 INFO:tasks.workunit.client.1.vm05.stdout:5/676: truncate da/f2e 723363 0
2026-03-10T10:19:56.490 INFO:tasks.workunit.client.1.vm05.stdout:5/677: chown da/l5f 0 1
2026-03-10T10:19:56.490 INFO:tasks.workunit.client.0.vm02.stdout:4/792: dwrite d1/d41/d5e/d78/d1a/f8c [0,4194304] 0
2026-03-10T10:19:56.492 INFO:tasks.workunit.client.0.vm02.stdout:4/793: stat d1/d52/d53/c54 0
2026-03-10T10:19:56.492 INFO:tasks.workunit.client.0.vm02.stdout:4/794: readlink d1/d41/d5e/d78/l29 0
2026-03-10T10:19:56.495 INFO:tasks.workunit.client.0.vm02.stdout:4/795: fdatasync d1/d41/d5e/d78/d44/dd0/ffe 0
2026-03-10T10:19:56.498 INFO:tasks.workunit.client.1.vm05.stdout:5/678: mknod da/d9a/ce6 0
2026-03-10T10:19:56.502 INFO:tasks.workunit.client.1.vm05.stdout:5/679: symlink da/d9a/dc7/db4/dbd/le7 0
2026-03-10T10:19:56.503 INFO:tasks.workunit.client.0.vm02.stdout:4/796: stat d1/d52/d53/f101 0
2026-03-10T10:19:56.504 INFO:tasks.workunit.client.0.vm02.stdout:4/797: chown d1/d10/db/f15 12918420 1
2026-03-10T10:19:56.506 INFO:tasks.workunit.client.0.vm02.stdout:4/798: read d1/d41/d5e/d78/d7f/fb9 [1113046,61011] 0
2026-03-10T10:19:56.506 INFO:tasks.workunit.client.0.vm02.stdout:4/799: readlink d1/d41/d5e/d78/d1a/d49/la2 0
2026-03-10T10:19:56.509 INFO:tasks.workunit.client.1.vm05.stdout:5/680: truncate da/db/d26/d35/d38/fa6 1007707 0
2026-03-10T10:19:56.511 INFO:tasks.workunit.client.1.vm05.stdout:1/743: dread d4/d3d/f77 [0,4194304] 0
2026-03-10T10:19:56.514 INFO:tasks.workunit.client.0.vm02.stdout:4/800: truncate d1/d32/da3/fd7 454170 0
2026-03-10T10:19:56.522 INFO:tasks.workunit.client.1.vm05.stdout:5/681: dread - da/db/d28/d97/fb2 zero size
2026-03-10T10:19:56.531 INFO:tasks.workunit.client.0.vm02.stdout:4/801: rename d1/d75/ddd/la7 to d1/d10/db/l104 0
2026-03-10T10:19:56.544 INFO:tasks.workunit.client.1.vm05.stdout:5/682: read da/db/d26/d35/d38/f6c [2185067,109134] 0
2026-03-10T10:19:56.546 INFO:tasks.workunit.client.1.vm05.stdout:1/744: dread d4/d3d/d6e/faf [0,4194304] 0
2026-03-10T10:19:56.547 INFO:tasks.workunit.client.1.vm05.stdout:1/745: dread - d4/d39/d3e/fd7 zero size
2026-03-10T10:19:56.554 INFO:tasks.workunit.client.1.vm05.stdout:1/746: creat d4/fda x:0 0 0
2026-03-10T10:19:56.566 INFO:tasks.workunit.client.0.vm02.stdout:4/802: rmdir d1/d41/d5e/d78/d7f/d100 0
2026-03-10T10:19:56.570 INFO:tasks.workunit.client.0.vm02.stdout:6/648: rmdir d0/d8/d29/d6d/d96 39
2026-03-10T10:19:56.573 INFO:tasks.workunit.client.0.vm02.stdout:4/803: symlink d1/d41/d5e/d78/d37/l105 0
2026-03-10T10:19:56.579 INFO:tasks.workunit.client.0.vm02.stdout:6/649: creat d0/d8/d29/d2f/d50/d7e/db2/dbb/fd1 x:0 0 0
2026-03-10T10:19:56.579 INFO:tasks.workunit.client.0.vm02.stdout:6/650: chown d0/d8/d9/fa0 804626059 1
2026-03-10T10:19:56.580 INFO:tasks.workunit.client.0.vm02.stdout:6/651: readlink d0/d8/d29/d2f/d4b/l24 0
2026-03-10T10:19:56.581 INFO:tasks.workunit.client.1.vm05.stdout:3/674: write dd/d15/d24/d8e/dac/fd7 [905744,7696] 0
2026-03-10T10:19:56.586 INFO:tasks.workunit.client.0.vm02.stdout:9/639: dwrite da/d3c/d4c/f2b [0,4194304] 0
2026-03-10T10:19:56.588 INFO:tasks.workunit.client.0.vm02.stdout:9/640: dread da/d3c/d4c/f2b [0,4194304] 0
2026-03-10T10:19:56.590 INFO:tasks.workunit.client.0.vm02.stdout:9/641: dread - da/fae zero size
2026-03-10T10:19:56.595 INFO:tasks.workunit.client.0.vm02.stdout:6/652: symlink d0/d8/d29/d52/ld2 0
2026-03-10T10:19:56.598 INFO:tasks.workunit.client.1.vm05.stdout:3/675: mkdir dd/d20/d56/d5e/ded 0
2026-03-10T10:19:56.599 INFO:tasks.workunit.client.1.vm05.stdout:3/676: chown dd/d20/d56/d5e/c9a 898547777 1
2026-03-10T10:19:56.606 INFO:tasks.workunit.client.1.vm05.stdout:3/677: creat dd/d15/d24/fee x:0 0 0
2026-03-10T10:19:56.608 INFO:tasks.workunit.client.0.vm02.stdout:0/708: dwrite d9/d18/d1a/d22/d24/d80/d49/f5e [0,4194304] 0
2026-03-10T10:19:56.619 INFO:tasks.workunit.client.1.vm05.stdout:3/678: rename dd/d15/d24/d8e/fc1 to dd/d20/d56/d5e/dab/d9c/fef 0
2026-03-10T10:19:56.619 INFO:tasks.workunit.client.1.vm05.stdout:7/677: write d5/d17/d66/f94 [203995,4726] 0
2026-03-10T10:19:56.620 INFO:tasks.workunit.client.1.vm05.stdout:7/678: dread - d5/d1d/d20/d2d/d68/fc4 zero size
2026-03-10T10:19:56.625 INFO:tasks.workunit.client.1.vm05.stdout:9/599: write d0/d1/d13/de/f5b [106392,18038] 0
2026-03-10T10:19:56.631 INFO:tasks.workunit.client.1.vm05.stdout:3/679: symlink dd/d20/d56/d5e/dab/d9c/lf0 0
2026-03-10T10:19:56.632 INFO:tasks.workunit.client.0.vm02.stdout:2/681: write d0/fcd [590979,83619] 0
2026-03-10T10:19:56.637 INFO:tasks.workunit.client.1.vm05.stdout:0/663: write d1/d2/d39/d3d/f44 [2041847,41043] 0
2026-03-10T10:19:56.657 INFO:tasks.workunit.client.0.vm02.stdout:9/642: link da/d3c/d4c/d38/d82/d8c/fca da/d3c/d4c/d2c/d34/d35/fcd 0
2026-03-10T10:19:56.665 INFO:tasks.workunit.client.0.vm02.stdout:7/668: dwrite d1/d1b/d8f/f66 [0,4194304] 0
2026-03-10T10:19:56.679 INFO:tasks.workunit.client.0.vm02.stdout:4/804: link d1/c103 d1/d41/d5e/d78/c106 0
2026-03-10T10:19:56.684 INFO:tasks.workunit.client.0.vm02.stdout:0/709: creat d9/d18/d1a/d22/d24/d8e/d9b/daa/fe2 x:0 0 0
2026-03-10T10:19:56.690 INFO:tasks.workunit.client.0.vm02.stdout:9/643: mknod da/d3c/d4c/d38/d4a/d70/cce 0
2026-03-10T10:19:56.690 INFO:tasks.workunit.client.1.vm05.stdout:8/613: write d7/d14/d24/fc1 [910407,104392] 0
2026-03-10T10:19:56.690 INFO:tasks.workunit.client.1.vm05.stdout:8/614: readlink d7/d14/d3a/d49/l85 0
2026-03-10T10:19:56.690 INFO:tasks.workunit.client.0.vm02.stdout:9/644: read da/d3c/f8b [1772270,20142] 0
2026-03-10T10:19:56.701 INFO:tasks.workunit.client.0.vm02.stdout:7/669: creat d1/dc/d60/fd0 x:0 0 0
2026-03-10T10:19:56.704 INFO:tasks.workunit.client.0.vm02.stdout:3/676: dwrite d1/d8/d21/f88 [0,4194304] 0
2026-03-10T10:19:56.704 INFO:tasks.workunit.client.1.vm05.stdout:2/616: dwrite db/d1c/f3d [0,4194304] 0
2026-03-10T10:19:56.706 INFO:tasks.workunit.client.1.vm05.stdout:4/547: write d1/d31/dc/d40/d45/f66 [567258,95356] 0
2026-03-10T10:19:56.710 INFO:tasks.workunit.client.0.vm02.stdout:8/658: dwrite d1/d1c/f72 [0,4194304] 0
2026-03-10T10:19:56.719 INFO:tasks.workunit.client.1.vm05.stdout:7/679: getdents d5/d26 0
2026-03-10T10:19:56.720 INFO:tasks.workunit.client.1.vm05.stdout:8/615: mkdir d7/d14/d24/d3f/dc4 0
2026-03-10T10:19:56.725 INFO:tasks.workunit.client.1.vm05.stdout:8/616: dwrite d7/d14/d24/d3f/d6a/d8a/d96/fc3 [0,4194304] 0
2026-03-10T10:19:56.729 INFO:tasks.workunit.client.0.vm02.stdout:2/682: creat d0/d1a/d49/d5e/d8a/de6/fe7 x:0 0 0
2026-03-10T10:19:56.731 INFO:tasks.workunit.client.0.vm02.stdout:5/811: write d1/db/d11/d16/d48/dcf/f100 [1035565,8577] 0
2026-03-10T10:19:56.734 INFO:tasks.workunit.client.0.vm02.stdout:1/701: dwrite d4/d2c/d53/fb3 [0,4194304] 0
2026-03-10T10:19:56.742 INFO:tasks.workunit.client.1.vm05.stdout:2/617: rmdir db/d28/d4f/d59/da4 39
2026-03-10T10:19:56.745 INFO:tasks.workunit.client.1.vm05.stdout:6/683: write dd/d36/d3f/d12/d44/d2a/f84 [1307459,44184] 0
2026-03-10T10:19:56.746 INFO:tasks.workunit.client.1.vm05.stdout:4/548: mknod d1/d31/d72/cb5 0
2026-03-10T10:19:56.746 INFO:tasks.workunit.client.1.vm05.stdout:5/683: rmdir da/d9a/dc7/db4/dbd 39
2026-03-10T10:19:56.750 INFO:tasks.workunit.client.0.vm02.stdout:9/645: mknod da/d9d/ccf 0
2026-03-10T10:19:56.758 INFO:tasks.workunit.client.1.vm05.stdout:7/680: creat d5/d26/db2/fd8 x:0 0 0
2026-03-10T10:19:56.763 INFO:tasks.workunit.client.1.vm05.stdout:7/681: write d5/d1d/d20/d91/fc1 [1458937,84733] 0
2026-03-10T10:19:56.763 INFO:tasks.workunit.client.1.vm05.stdout:7/682: truncate d5/dd/f2f 4763585 0
2026-03-10T10:19:56.763 INFO:tasks.workunit.client.0.vm02.stdout:3/677: dread - d1/d8/d21/d73/f82 zero size
2026-03-10T10:19:56.763 INFO:tasks.workunit.client.0.vm02.stdout:3/678: chown d1/d8 1 1
2026-03-10T10:19:56.763 INFO:tasks.workunit.client.0.vm02.stdout:3/679: dread - d1/d8/d21/d73/f82 zero size
2026-03-10T10:19:56.768 INFO:tasks.workunit.client.1.vm05.stdout:8/617: creat d7/d14/d15/d3b/fc5 x:0 0 0
2026-03-10T10:19:56.774 INFO:tasks.workunit.client.1.vm05.stdout:9/600: dwrite d0/d1/d13/d62/fa8 [0,4194304] 0
2026-03-10T10:19:56.782 INFO:tasks.workunit.client.0.vm02.stdout:6/653: write d0/d8/d29/d2f/d4b/f53 [1994197,24348] 0
2026-03-10T10:19:56.782 INFO:tasks.workunit.client.1.vm05.stdout:0/664: write d1/d2/d9/d31/daa/fca [560062,15415] 0
2026-03-10T10:19:56.782 INFO:tasks.workunit.client.1.vm05.stdout:3/680: dwrite dd/d15/d24/f63 [0,4194304] 0
2026-03-10T10:19:56.782 INFO:tasks.workunit.client.1.vm05.stdout:0/665: truncate d1/d2/d9/d31/d13/d2f/f88 3029640 0
2026-03-10T10:19:56.782 INFO:tasks.workunit.client.1.vm05.stdout:3/681: stat dd/d20/d56/d5e/dab/d9c/fd4 0
2026-03-10T10:19:56.782 INFO:tasks.workunit.client.1.vm05.stdout:1/747: stat d4/d3d/d6e/faf 0
2026-03-10T10:19:56.794 INFO:tasks.workunit.client.1.vm05.stdout:4/549: dread d1/d3/f4a [0,4194304] 0
2026-03-10T10:19:56.813 INFO:tasks.workunit.client.1.vm05.stdout:6/684: write dd/d36/d3f/d12/d44/daa/fae [1590037,120108] 0
2026-03-10T10:19:56.823 INFO:tasks.workunit.client.1.vm05.stdout:9/601: mknod d0/df/d74/d8c/d8f/ccf 0
2026-03-10T10:19:56.833 INFO:tasks.workunit.client.1.vm05.stdout:5/684: mknod da/d9a/dc7/db4/dbd/ce8 0
2026-03-10T10:19:56.840 INFO:tasks.workunit.client.1.vm05.stdout:1/748: fdatasync d4/d20/dbe/fc8 0
2026-03-10T10:19:56.853 INFO:tasks.workunit.client.1.vm05.stdout:2/618: write db/f4a [4414618,113975] 0
2026-03-10T10:19:56.854 INFO:tasks.workunit.client.1.vm05.stdout:2/619: truncate db/d61/fb8 110535 0
2026-03-10T10:19:56.861 INFO:tasks.workunit.client.1.vm05.stdout:3/682: write dd/d20/d56/d5e/dab/fc4 [2332836,50290] 0
2026-03-10T10:19:56.862 INFO:tasks.workunit.client.1.vm05.stdout:4/550: write d1/d31/f36 [1652065,17989] 0
2026-03-10T10:19:56.863 INFO:tasks.workunit.client.1.vm05.stdout:0/666: dwrite d1/d2/d9/d31/d13/d17/f56 [0,4194304] 0
2026-03-10T10:19:56.876 INFO:tasks.workunit.client.1.vm05.stdout:8/618: getdents d7/d14/d24/d3f/d6a/db0 0
2026-03-10T10:19:56.876 INFO:tasks.workunit.client.1.vm05.stdout:8/619: readlink d7/d14/d3a/l63 0
2026-03-10T10:19:56.877 INFO:tasks.workunit.client.1.vm05.stdout:7/683: fsync d5/d1d/d29/d3e/d8c/f81 0
2026-03-10T10:19:56.878 INFO:tasks.workunit.client.1.vm05.stdout:8/620: truncate d7/d14/d24/d3f/fb3 54274 0
2026-03-10T10:19:56.879 INFO:tasks.workunit.client.1.vm05.stdout:8/621: truncate d7/d14/d15/da7/fbe 304644 0
2026-03-10T10:19:56.888 INFO:tasks.workunit.client.0.vm02.stdout:8/659: mknod d1/d1c/d43/d5b/cc6 0
2026-03-10T10:19:56.891 INFO:tasks.workunit.client.0.vm02.stdout:4/805: creat d1/d52/dff/f107 x:0 0 0
2026-03-10T10:19:56.895 INFO:tasks.workunit.client.0.vm02.stdout:2/683: mkdir d0/d1a/d24/dd3/de8 0
2026-03-10T10:19:56.905 INFO:tasks.workunit.client.0.vm02.stdout:5/812: unlink d1/db/d11/d16/d79/d85/d93/fb4 0
2026-03-10T10:19:56.905 INFO:tasks.workunit.client.1.vm05.stdout:1/749: unlink d4/d20/f2d 0
2026-03-10T10:19:56.906 INFO:tasks.workunit.client.1.vm05.stdout:1/750: chown d4/d3d 415069929 1
2026-03-10T10:19:56.906 INFO:tasks.workunit.client.1.vm05.stdout:2/620: stat db/d28/d4f/d59/da4/d81/da7 0
2026-03-10T10:19:56.907 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:56 vm02.local ceph-mon[50200]: Upgrade: Updating mgr.vm02.zmavgl
2026-03-10T10:19:56.907 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:56 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq'
2026-03-10T10:19:56.907 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:56 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm02.zmavgl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T10:19:56.907 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:56 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T10:19:56.907 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:56 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:19:56.907 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:56 vm02.local ceph-mon[50200]: Deploying daemon mgr.vm02.zmavgl on vm02
2026-03-10T10:19:56.907 INFO:tasks.workunit.client.1.vm05.stdout:7/684: sync
2026-03-10T10:19:56.911 INFO:tasks.workunit.client.0.vm02.stdout:0/710: symlink d9/d18/le3 0
2026-03-10T10:19:56.912 INFO:tasks.workunit.client.0.vm02.stdout:0/711: chown d9/d34/c3b 721083562 1
2026-03-10T10:19:56.916 INFO:tasks.workunit.client.0.vm02.stdout:1/702: write d4/da/f12 [201631,125761] 0
2026-03-10T10:19:56.918 INFO:tasks.workunit.client.1.vm05.stdout:3/683: read - dd/d20/d56/fb7 zero size
2026-03-10T10:19:56.923 INFO:tasks.workunit.client.1.vm05.stdout:0/667: dread d1/d2/d39/d3d/d9f/fb1 [0,4194304] 0
2026-03-10T10:19:56.924 INFO:tasks.workunit.client.0.vm02.stdout:7/670: mknod d1/cd1 0
2026-03-10T10:19:56.926 INFO:tasks.workunit.client.1.vm05.stdout:6/685: creat dd/d36/d3f/d12/d44/d30/d4a/d6e/dc3/fda x:0 0 0
2026-03-10T10:19:56.927 INFO:tasks.workunit.client.0.vm02.stdout:3/680: write d1/d6/f36 [917620,86254] 0
2026-03-10T10:19:56.933 INFO:tasks.workunit.client.0.vm02.stdout:3/681: dread d1/fe [0,4194304] 0
2026-03-10T10:19:56.938 INFO:tasks.workunit.client.0.vm02.stdout:8/660: mkdir d1/dc7 0
2026-03-10T10:19:56.938 INFO:tasks.workunit.client.0.vm02.stdout:6/654: chown d0/c25 85 1
2026-03-10T10:19:56.938 INFO:tasks.workunit.client.0.vm02.stdout:4/806: truncate d1/d41/d5e/d78/d44/de7/fed 182028 0
2026-03-10T10:19:56.938 INFO:tasks.workunit.client.0.vm02.stdout:2/684: symlink d0/d1a/d49/d5e/d65/dc4/le9 0
2026-03-10T10:19:56.948 INFO:tasks.workunit.client.1.vm05.stdout:8/622: symlink d7/d14/d15/d3b/da0/lc6 0
2026-03-10T10:19:56.956 INFO:tasks.workunit.client.1.vm05.stdout:6/686: symlink dd/d1b/ldb 0
2026-03-10T10:19:56.956 INFO:tasks.workunit.client.0.vm02.stdout:7/671: mkdir d1/d1b/d8f/dad/d7e/dd2 0
2026-03-10T10:19:56.957 INFO:tasks.workunit.client.1.vm05.stdout:6/687: write dd/d36/d7d/f97 [549715,23081] 0
2026-03-10T10:19:56.960 INFO:tasks.workunit.client.1.vm05.stdout:9/602: dwrite d0/d1/d13/de/d21/f76 [0,4194304] 0
2026-03-10T10:19:56.963 INFO:tasks.workunit.client.0.vm02.stdout:5/813: dwrite d1/db/d11/d13/d28/f31 [0,4194304] 0
2026-03-10T10:19:56.971 INFO:tasks.workunit.client.1.vm05.stdout:5/685: rename da/db/d26/d35/db3 to da/db/de9 0
2026-03-10T10:19:56.972 INFO:tasks.workunit.client.1.vm05.stdout:5/686: dread - da/db/d26/f7e zero size
2026-03-10T10:19:56.976 INFO:tasks.workunit.client.0.vm02.stdout:1/703: write d4/da/d1a/d47/d65/f6e [215774,102929] 0
2026-03-10T10:19:56.976 INFO:tasks.workunit.client.0.vm02.stdout:1/704: fsync d4/da/d1a/d22/f23 0
2026-03-10T10:19:56.983 INFO:tasks.workunit.client.1.vm05.stdout:4/551: creat d1/fb6 x:0 0 0
2026-03-10T10:19:56.984 INFO:tasks.workunit.client.0.vm02.stdout:2/685: symlink d0/dd4/lea 0
2026-03-10T10:19:56.985 INFO:tasks.workunit.client.0.vm02.stdout:2/686: truncate d0/d1a/d24/dd3/fde 547411 0
2026-03-10T10:19:56.986 INFO:tasks.workunit.client.1.vm05.stdout:1/751: creat d4/df/d1c/fdb x:0 0 0
2026-03-10T10:19:56.996 INFO:tasks.workunit.client.1.vm05.stdout:6/688: mkdir dd/d36/d3f/d12/d44/daa/ddc 0
2026-03-10T10:19:56.999 INFO:tasks.workunit.client.1.vm05.stdout:9/603: rmdir d0/df/d74/d8c/d8f 39 2026-03-10T10:19:57.003 INFO:tasks.workunit.client.1.vm05.stdout:9/604: dwrite d0/df/d74/fc3 [0,4194304] 0 2026-03-10T10:19:57.005 INFO:tasks.workunit.client.0.vm02.stdout:7/672: rmdir d1/d1b/d8f/dad/d7e 39 2026-03-10T10:19:57.007 INFO:tasks.workunit.client.1.vm05.stdout:5/687: truncate da/d9a/dc7/f4e 863354 0 2026-03-10T10:19:57.027 INFO:tasks.workunit.client.0.vm02.stdout:4/807: dwrite d1/d52/d53/fbb [0,4194304] 0 2026-03-10T10:19:57.027 INFO:tasks.workunit.client.0.vm02.stdout:0/712: write d9/d18/d1a/d22/d24/d79/d7d/fbe [3699835,101774] 0 2026-03-10T10:19:57.028 INFO:tasks.workunit.client.0.vm02.stdout:4/808: stat d1/d41/d5e/d78/d7f/d82/l8f 0 2026-03-10T10:19:57.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:56 vm05.local ceph-mon[59051]: Upgrade: Updating mgr.vm02.zmavgl 2026-03-10T10:19:57.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:56 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:57.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:56 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm02.zmavgl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:19:57.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:56 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:19:57.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:56 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:57.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:56 vm05.local ceph-mon[59051]: Deploying daemon 
mgr.vm02.zmavgl on vm02 2026-03-10T10:19:57.042 INFO:tasks.workunit.client.1.vm05.stdout:2/621: rename db/d28/f3f to db/d28/fc5 0 2026-03-10T10:19:57.072 INFO:tasks.workunit.client.1.vm05.stdout:0/668: rmdir d1/d2/d9/d31/d12/d41/db7 0 2026-03-10T10:19:57.073 INFO:tasks.workunit.client.0.vm02.stdout:3/682: write d1/f81 [2887124,13627] 0 2026-03-10T10:19:57.074 INFO:tasks.workunit.client.1.vm05.stdout:0/669: dread d1/d2/d9/d31/d12/d20/f81 [0,4194304] 0 2026-03-10T10:19:57.078 INFO:tasks.workunit.client.1.vm05.stdout:4/552: write d1/d64/f99 [792760,64388] 0 2026-03-10T10:19:57.080 INFO:tasks.workunit.client.1.vm05.stdout:7/685: rename d5/c50 to d5/d26/cd9 0 2026-03-10T10:19:57.083 INFO:tasks.workunit.client.0.vm02.stdout:2/687: rename d0/d1a/d49/d5e/d8a to d0/d1a/d49/deb 0 2026-03-10T10:19:57.083 INFO:tasks.workunit.client.0.vm02.stdout:5/814: write d1/d9c/fa8 [855097,118937] 0 2026-03-10T10:19:57.083 INFO:tasks.workunit.client.0.vm02.stdout:6/655: symlink d0/d8/d29/d2f/d4b/da5/d6f/ld3 0 2026-03-10T10:19:57.087 INFO:tasks.workunit.client.0.vm02.stdout:8/661: symlink d1/d1c/d43/d6a/da8/d56/db5/lc8 0 2026-03-10T10:19:57.088 INFO:tasks.workunit.client.1.vm05.stdout:9/605: chown d0/d1/lce 0 1 2026-03-10T10:19:57.092 INFO:tasks.workunit.client.0.vm02.stdout:9/646: getdents da/d3c/d4c/d38/d82/d89 0 2026-03-10T10:19:57.113 INFO:tasks.workunit.client.1.vm05.stdout:2/622: rmdir db/d12 39 2026-03-10T10:19:57.114 INFO:tasks.workunit.client.1.vm05.stdout:2/623: chown db/d28/d4f/d59/da4/d6c 520794 1 2026-03-10T10:19:57.120 INFO:tasks.workunit.client.0.vm02.stdout:1/705: dwrite d4/d2c/f43 [0,4194304] 0 2026-03-10T10:19:57.127 INFO:tasks.workunit.client.1.vm05.stdout:1/752: mkdir d4/d3d/ddc 0 2026-03-10T10:19:57.137 INFO:tasks.workunit.client.0.vm02.stdout:1/706: dread d4/da/d1a/d5b/f79 [4194304,4194304] 0 2026-03-10T10:19:57.145 INFO:tasks.workunit.client.0.vm02.stdout:4/809: unlink d1/d41/d5e/d78/d7f/ffd 0 2026-03-10T10:19:57.149 INFO:tasks.workunit.client.0.vm02.stdout:3/683: 
mknod d1/d20/d52/dd3/ce2 0 2026-03-10T10:19:57.149 INFO:tasks.workunit.client.0.vm02.stdout:6/656: fdatasync d0/f5d 0 2026-03-10T10:19:57.149 INFO:tasks.workunit.client.0.vm02.stdout:5/815: creat d1/db/d11/d1a/f116 x:0 0 0 2026-03-10T10:19:57.161 INFO:tasks.workunit.client.1.vm05.stdout:6/689: dread dd/d36/d3f/d12/f4f [0,4194304] 0 2026-03-10T10:19:57.162 INFO:tasks.workunit.client.0.vm02.stdout:9/647: truncate da/d3c/d4c/d38/d82/d89/fb0 256450 0 2026-03-10T10:19:57.163 INFO:tasks.workunit.client.0.vm02.stdout:8/662: chown d1/d1c/d43/d5b/d88/dac/c62 55011 1 2026-03-10T10:19:57.167 INFO:tasks.workunit.client.0.vm02.stdout:7/673: creat d1/dc/d10/d38/fd3 x:0 0 0 2026-03-10T10:19:57.177 INFO:tasks.workunit.client.0.vm02.stdout:8/663: dwrite d1/d1c/d24/d71/fb4 [0,4194304] 0 2026-03-10T10:19:57.210 INFO:tasks.workunit.client.0.vm02.stdout:1/707: mknod d4/d4a/ce0 0 2026-03-10T10:19:57.211 INFO:tasks.workunit.client.0.vm02.stdout:1/708: chown d4/da/f73 338184 1 2026-03-10T10:19:57.217 INFO:tasks.workunit.client.0.vm02.stdout:1/709: dwrite d4/d2c/f43 [0,4194304] 0 2026-03-10T10:19:57.258 INFO:tasks.workunit.client.0.vm02.stdout:3/684: mkdir d1/d6/d8b/de3 0 2026-03-10T10:19:57.266 INFO:tasks.workunit.client.0.vm02.stdout:5/816: mknod d1/d6a/c117 0 2026-03-10T10:19:57.272 INFO:tasks.workunit.client.0.vm02.stdout:6/657: mknod d0/d8/d29/d2f/d50/cd4 0 2026-03-10T10:19:57.279 INFO:tasks.workunit.client.0.vm02.stdout:9/648: rename da/l24 to da/d3c/d4c/d2c/d34/d35/ld0 0 2026-03-10T10:19:57.299 INFO:tasks.workunit.client.0.vm02.stdout:7/674: write d1/dc/d16/d28/d2d/fb0 [442245,120477] 0 2026-03-10T10:19:57.304 INFO:tasks.workunit.client.1.vm05.stdout:0/670: symlink d1/d2/d39/d6e/dc0/le4 0 2026-03-10T10:19:57.307 INFO:tasks.workunit.client.0.vm02.stdout:0/713: creat d9/d18/d1a/d22/d24/d80/fe4 x:0 0 0 2026-03-10T10:19:57.311 INFO:tasks.workunit.client.1.vm05.stdout:9/606: readlink d0/d1/d13/l6 0 2026-03-10T10:19:57.314 INFO:tasks.workunit.client.1.vm05.stdout:7/686: dread 
d5/d1d/d20/d91/da7/dab/fb4 [0,4194304] 0 2026-03-10T10:19:57.315 INFO:tasks.workunit.client.1.vm05.stdout:7/687: write d5/d1d/d20/d91/fc9 [1704013,12921] 0 2026-03-10T10:19:57.319 INFO:tasks.workunit.client.0.vm02.stdout:1/710: dread d4/d2c/d53/f6c [0,4194304] 0 2026-03-10T10:19:57.319 INFO:tasks.workunit.client.0.vm02.stdout:1/711: chown d4 14975 1 2026-03-10T10:19:57.320 INFO:tasks.workunit.client.1.vm05.stdout:7/688: dread d5/d26/f92 [0,4194304] 0 2026-03-10T10:19:57.324 INFO:tasks.workunit.client.1.vm05.stdout:5/688: truncate da/db/d26/d35/f1c 223481 0 2026-03-10T10:19:57.324 INFO:tasks.workunit.client.1.vm05.stdout:5/689: stat da/db/f6d 0 2026-03-10T10:19:57.333 INFO:tasks.workunit.client.0.vm02.stdout:4/810: truncate d1/d41/d5e/d78/d1a/f93 3170792 0 2026-03-10T10:19:57.334 INFO:tasks.workunit.client.0.vm02.stdout:2/688: link d0/d1a/d24/d80/lc1 d0/d1a/d24/dbf/lec 0 2026-03-10T10:19:57.338 INFO:tasks.workunit.client.0.vm02.stdout:3/685: chown d1/d8/d21/d73/d78/d79/cd1 2641421 1 2026-03-10T10:19:57.340 INFO:tasks.workunit.client.0.vm02.stdout:3/686: dwrite d1/d8/d86/fdb [0,4194304] 0 2026-03-10T10:19:57.352 INFO:tasks.workunit.client.1.vm05.stdout:6/690: truncate dd/d36/d3f/d12/d44/d2a/d77/fb4 659457 0 2026-03-10T10:19:57.353 INFO:tasks.workunit.client.0.vm02.stdout:5/817: mknod d1/db/d11/d84/d95/c118 0 2026-03-10T10:19:57.361 INFO:tasks.workunit.client.1.vm05.stdout:4/553: symlink d1/lb7 0 2026-03-10T10:19:57.368 INFO:tasks.workunit.client.0.vm02.stdout:5/818: dread d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fc0 [0,4194304] 0 2026-03-10T10:19:57.368 INFO:tasks.workunit.client.0.vm02.stdout:5/819: readlink d1/db/d11/d62/d67/lae 0 2026-03-10T10:19:57.383 INFO:tasks.workunit.client.0.vm02.stdout:0/714: mknod d9/d18/d1a/d46/ce5 0 2026-03-10T10:19:57.395 INFO:tasks.workunit.client.0.vm02.stdout:7/675: write d1/dc/d55/f8b [6731243,71870] 0 2026-03-10T10:19:57.395 INFO:tasks.workunit.client.0.vm02.stdout:8/664: write d1/d1c/d24/d71/fa2 [992173,28670] 0 2026-03-10T10:19:57.406 
INFO:tasks.workunit.client.0.vm02.stdout:4/811: mknod d1/d52/dff/c108 0 2026-03-10T10:19:57.424 INFO:tasks.workunit.client.0.vm02.stdout:2/689: rename d0/d1a/d49/d5e/d65/dc4/fd6 to d0/d1a/d24/dbf/fed 0 2026-03-10T10:19:57.424 INFO:tasks.workunit.client.0.vm02.stdout:3/687: rmdir d1/d6 39 2026-03-10T10:19:57.424 INFO:tasks.workunit.client.1.vm05.stdout:5/690: creat da/d96/fea x:0 0 0 2026-03-10T10:19:57.424 INFO:tasks.workunit.client.1.vm05.stdout:2/624: mkdir db/d2d/dc6 0 2026-03-10T10:19:57.425 INFO:tasks.workunit.client.0.vm02.stdout:5/820: fdatasync d1/db/d11/d13/ff0 0 2026-03-10T10:19:57.425 INFO:tasks.workunit.client.1.vm05.stdout:4/554: mkdir d1/d31/d76/dac/db8 0 2026-03-10T10:19:57.426 INFO:tasks.workunit.client.1.vm05.stdout:3/684: rename dd/d20/d56/c8b to dd/d20/cf1 0 2026-03-10T10:19:57.426 INFO:tasks.workunit.client.1.vm05.stdout:9/607: mkdir d0/d1/dcc/dd0 0 2026-03-10T10:19:57.427 INFO:tasks.workunit.client.1.vm05.stdout:9/608: stat d0/d1/fb0 0 2026-03-10T10:19:57.428 INFO:tasks.workunit.client.0.vm02.stdout:9/649: truncate da/d3c/d4c/d38/d82/fcb 80629 0 2026-03-10T10:19:57.428 INFO:tasks.workunit.client.0.vm02.stdout:8/665: truncate d1/d1c/d43/d6a/da8/fbf 39643 0 2026-03-10T10:19:57.429 INFO:tasks.workunit.client.0.vm02.stdout:9/650: stat da/d3c/d4c/f29 0 2026-03-10T10:19:57.429 INFO:tasks.workunit.client.0.vm02.stdout:5/821: dwrite d1/db/f96 [0,4194304] 0 2026-03-10T10:19:57.433 INFO:tasks.workunit.client.0.vm02.stdout:9/651: chown da/d3c/d4c/d2c/d34/d35/l55 154 1 2026-03-10T10:19:57.433 INFO:tasks.workunit.client.0.vm02.stdout:1/712: creat d4/dc3/dd4/fe1 x:0 0 0 2026-03-10T10:19:57.434 INFO:tasks.workunit.client.0.vm02.stdout:1/713: readlink d4/da/d27/d38/d3c/l84 0 2026-03-10T10:19:57.440 INFO:tasks.workunit.client.0.vm02.stdout:4/812: dread - d1/d75/ddd/fe4 zero size 2026-03-10T10:19:57.442 INFO:tasks.workunit.client.0.vm02.stdout:3/688: unlink d1/d8/d44/l74 0 2026-03-10T10:19:57.448 INFO:tasks.workunit.client.1.vm05.stdout:7/689: write 
d5/d1d/d20/d2d/f30 [66728,83689] 0 2026-03-10T10:19:57.457 INFO:tasks.workunit.client.1.vm05.stdout:7/690: dread d5/d1d/fd5 [4194304,4194304] 0 2026-03-10T10:19:57.462 INFO:tasks.workunit.client.0.vm02.stdout:0/715: creat d9/d34/d3d/d65/d89/dd3/da7/db7/de1/fe6 x:0 0 0 2026-03-10T10:19:57.464 INFO:tasks.workunit.client.1.vm05.stdout:8/623: rename d7/d14/d15/l82 to d7/d14/lc7 0 2026-03-10T10:19:57.464 INFO:tasks.workunit.client.0.vm02.stdout:6/658: write d0/d87/f90 [1044047,11515] 0 2026-03-10T10:19:57.465 INFO:tasks.workunit.client.1.vm05.stdout:8/624: write d7/d14/d15/da7/fbe [841999,787] 0 2026-03-10T10:19:57.465 INFO:tasks.workunit.client.0.vm02.stdout:6/659: chown d0/d8/d29/d2f/d4b/da5 1278 1 2026-03-10T10:19:57.467 INFO:tasks.workunit.client.1.vm05.stdout:3/685: creat dd/d15/d24/d2c/d6d/da7/dbb/ff2 x:0 0 0 2026-03-10T10:19:57.467 INFO:tasks.workunit.client.0.vm02.stdout:7/676: truncate d1/dc/d10/d38/f96 1235936 0 2026-03-10T10:19:57.472 INFO:tasks.workunit.client.0.vm02.stdout:8/666: mknod d1/d1c/d24/dad/cc9 0 2026-03-10T10:19:57.472 INFO:tasks.workunit.client.0.vm02.stdout:5/822: dread - d1/db/d11/d13/d28/da7/dd9/fe2 zero size 2026-03-10T10:19:57.481 INFO:tasks.workunit.client.0.vm02.stdout:1/714: mknod d4/da/d27/d38/d3c/ce2 0 2026-03-10T10:19:57.482 INFO:tasks.workunit.client.0.vm02.stdout:1/715: chown d4/d2c/d53/da6/l9b 11723667 1 2026-03-10T10:19:57.484 INFO:tasks.workunit.client.0.vm02.stdout:9/652: truncate da/d3c/d4c/d38/d4a/f54 308626 0 2026-03-10T10:19:57.487 INFO:tasks.workunit.client.1.vm05.stdout:4/555: write d1/f39 [821179,44693] 0 2026-03-10T10:19:57.489 INFO:tasks.workunit.client.1.vm05.stdout:5/691: write da/db/de9/fe5 [170707,42820] 0 2026-03-10T10:19:57.491 INFO:tasks.workunit.client.1.vm05.stdout:2/625: dwrite db/d1c/f1f [0,4194304] 0 2026-03-10T10:19:57.491 INFO:tasks.workunit.client.1.vm05.stdout:6/691: creat dd/fdd x:0 0 0 2026-03-10T10:19:57.492 INFO:tasks.workunit.client.1.vm05.stdout:2/626: fsync db/d28/d4f/fbd 0 2026-03-10T10:19:57.495 
INFO:tasks.workunit.client.0.vm02.stdout:2/690: mkdir d0/d10/dee 0 2026-03-10T10:19:57.498 INFO:tasks.workunit.client.1.vm05.stdout:7/691: creat d5/d1d/d29/d3e/d8c/d82/fda x:0 0 0 2026-03-10T10:19:57.499 INFO:tasks.workunit.client.1.vm05.stdout:8/625: fdatasync d7/d2f/f4b 0 2026-03-10T10:19:57.506 INFO:tasks.workunit.client.0.vm02.stdout:9/653: readlink da/d3c/d4c/d2c/d34/l4e 0 2026-03-10T10:19:57.506 INFO:tasks.workunit.client.0.vm02.stdout:4/813: mkdir d1/de8/d109 0 2026-03-10T10:19:57.506 INFO:tasks.workunit.client.1.vm05.stdout:9/609: mknod d0/d1/d13/de/cd1 0 2026-03-10T10:19:57.506 INFO:tasks.workunit.client.1.vm05.stdout:1/753: getdents d4/d79/d83/dc5/dcb 0 2026-03-10T10:19:57.508 INFO:tasks.workunit.client.1.vm05.stdout:4/556: creat d1/d64/da9/fb9 x:0 0 0 2026-03-10T10:19:57.512 INFO:tasks.workunit.client.1.vm05.stdout:0/671: getdents d1/d2/d9/d31/d13/da2/dab/dce 0 2026-03-10T10:19:57.516 INFO:tasks.workunit.client.0.vm02.stdout:3/689: dread d1/d20/f38 [0,4194304] 0 2026-03-10T10:19:57.516 INFO:tasks.workunit.client.1.vm05.stdout:6/692: dread dd/d36/d3f/d12/d44/d2a/d3d/d3e/f7c [0,4194304] 0 2026-03-10T10:19:57.516 INFO:tasks.workunit.client.0.vm02.stdout:1/716: symlink d4/da/d27/d38/le3 0 2026-03-10T10:19:57.520 INFO:tasks.workunit.client.1.vm05.stdout:7/692: creat d5/d1d/d20/d2d/fdb x:0 0 0 2026-03-10T10:19:57.521 INFO:tasks.workunit.client.1.vm05.stdout:7/693: dread - d5/d1d/d20/d35/f78 zero size 2026-03-10T10:19:57.527 INFO:tasks.workunit.client.0.vm02.stdout:4/814: dread d1/d52/d53/f101 [4194304,4194304] 0 2026-03-10T10:19:57.531 INFO:tasks.workunit.client.1.vm05.stdout:8/626: truncate d7/d14/d15/d3b/f7c 854622 0 2026-03-10T10:19:57.564 INFO:tasks.workunit.client.1.vm05.stdout:3/686: creat dd/d15/d4c/db5/ff3 x:0 0 0 2026-03-10T10:19:57.564 INFO:tasks.workunit.client.1.vm05.stdout:4/557: symlink d1/d31/dc/d40/d45/lba 0 2026-03-10T10:19:57.564 INFO:tasks.workunit.client.1.vm05.stdout:6/693: symlink dd/d36/d7d/lde 0 2026-03-10T10:19:57.564 
INFO:tasks.workunit.client.1.vm05.stdout:7/694: symlink d5/d1d/d20/ldc 0 2026-03-10T10:19:57.565 INFO:tasks.workunit.client.1.vm05.stdout:8/627: creat d7/d14/d15/d3b/da0/fc8 x:0 0 0 2026-03-10T10:19:57.565 INFO:tasks.workunit.client.0.vm02.stdout:6/660: creat d0/d8/d29/d2f/d4b/fd5 x:0 0 0 2026-03-10T10:19:57.565 INFO:tasks.workunit.client.0.vm02.stdout:5/823: rename d1/db/d11/d84/d40/fe3 to d1/f119 0 2026-03-10T10:19:57.565 INFO:tasks.workunit.client.0.vm02.stdout:8/667: link d1/d1c/d23/d25/f8c d1/d1c/d43/d6a/fca 0 2026-03-10T10:19:57.565 INFO:tasks.workunit.client.0.vm02.stdout:3/690: symlink d1/d58/le4 0 2026-03-10T10:19:57.565 INFO:tasks.workunit.client.0.vm02.stdout:1/717: symlink d4/da/d1a/d47/d65/le4 0 2026-03-10T10:19:57.565 INFO:tasks.workunit.client.0.vm02.stdout:1/718: write d4/da/f28 [471040,74809] 0 2026-03-10T10:19:57.565 INFO:tasks.workunit.client.0.vm02.stdout:4/815: truncate d1/d52/d53/f101 4418366 0 2026-03-10T10:19:57.565 INFO:tasks.workunit.client.0.vm02.stdout:8/668: fsync d1/d1c/d43/d6a/da8/f97 0 2026-03-10T10:19:57.567 INFO:tasks.workunit.client.1.vm05.stdout:7/695: dread d5/d17/f3c [0,4194304] 0 2026-03-10T10:19:57.568 INFO:tasks.workunit.client.0.vm02.stdout:3/691: creat d1/d8/d21/d73/d78/d84/fe5 x:0 0 0 2026-03-10T10:19:57.570 INFO:tasks.workunit.client.1.vm05.stdout:5/692: sync 2026-03-10T10:19:57.571 INFO:tasks.workunit.client.1.vm05.stdout:7/696: dwrite d5/d1d/d29/d3e/d8c/d82/d90/d9a/fcd [0,4194304] 0 2026-03-10T10:19:57.577 INFO:tasks.workunit.client.0.vm02.stdout:4/816: creat d1/d41/d5e/d78/d7f/d82/f10a x:0 0 0 2026-03-10T10:19:57.578 INFO:tasks.workunit.client.1.vm05.stdout:7/697: chown d5/d1d/d20/d2d/d5d 36 1 2026-03-10T10:19:57.578 INFO:tasks.workunit.client.1.vm05.stdout:0/672: sync 2026-03-10T10:19:57.580 INFO:tasks.workunit.client.1.vm05.stdout:7/698: fdatasync d5/d1d/d20/d2d/fdb 0 2026-03-10T10:19:57.580 INFO:tasks.workunit.client.0.vm02.stdout:2/691: link d0/d1a/d49/deb/ce4 d0/d1a/d24/cef 0 2026-03-10T10:19:57.584 
INFO:tasks.workunit.client.0.vm02.stdout:5/824: mkdir d1/db/d11/d13/d28/d11a 0 2026-03-10T10:19:57.588 INFO:tasks.workunit.client.1.vm05.stdout:3/687: creat dd/d20/d56/db3/ff4 x:0 0 0 2026-03-10T10:19:57.588 INFO:tasks.workunit.client.0.vm02.stdout:8/669: mknod d1/d1c/d43/d6a/da8/d56/db5/ccb 0 2026-03-10T10:19:57.590 INFO:tasks.workunit.client.0.vm02.stdout:3/692: sync 2026-03-10T10:19:57.590 INFO:tasks.workunit.client.0.vm02.stdout:7/677: sync 2026-03-10T10:19:57.594 INFO:tasks.workunit.client.0.vm02.stdout:0/716: write d9/d18/d1a/d22/d24/f4f [263527,87874] 0 2026-03-10T10:19:57.595 INFO:tasks.workunit.client.0.vm02.stdout:0/717: truncate d9/d18/d1a/d22/d24/d80/fe0 288192 0 2026-03-10T10:19:57.601 INFO:tasks.workunit.client.1.vm05.stdout:2/627: dwrite db/d1c/d40/f4d [0,4194304] 0 2026-03-10T10:19:57.606 INFO:tasks.workunit.client.0.vm02.stdout:5/825: symlink d1/db/d11/d13/d28/d37/dce/l11b 0 2026-03-10T10:19:57.611 INFO:tasks.workunit.client.1.vm05.stdout:4/558: mknod d1/d31/dc/d40/d45/daa/cbb 0 2026-03-10T10:19:57.611 INFO:tasks.workunit.client.0.vm02.stdout:9/654: write da/d3c/d4c/d56/f61 [4278101,57525] 0 2026-03-10T10:19:57.612 INFO:tasks.workunit.client.0.vm02.stdout:8/670: truncate d1/d1c/f14 565445 0 2026-03-10T10:19:57.612 INFO:tasks.workunit.client.0.vm02.stdout:8/671: chown d1/d1c/f42 0 1 2026-03-10T10:19:57.613 INFO:tasks.workunit.client.0.vm02.stdout:8/672: chown d1/d1c/d43/d6a/d7c 0 1 2026-03-10T10:19:57.617 INFO:tasks.workunit.client.1.vm05.stdout:6/694: mknod dd/d36/d3f/d12/d44/d2a/d3d/d48/dc6/cdf 0 2026-03-10T10:19:57.625 INFO:tasks.workunit.client.1.vm05.stdout:7/699: symlink d5/d17/d85/ldd 0 2026-03-10T10:19:57.628 INFO:tasks.workunit.client.1.vm05.stdout:3/688: mknod dd/d15/d4c/cf5 0 2026-03-10T10:19:57.630 INFO:tasks.workunit.client.0.vm02.stdout:0/718: symlink d9/d18/d1a/d3c/le7 0 2026-03-10T10:19:57.638 INFO:tasks.workunit.client.1.vm05.stdout:5/693: dread da/db/d28/f56 [0,4194304] 0 2026-03-10T10:19:57.639 
INFO:tasks.workunit.client.1.vm05.stdout:1/754: link d4/dd/f64 d4/df/d1c/d53/daa/fdd 0 2026-03-10T10:19:57.646 INFO:tasks.workunit.client.1.vm05.stdout:9/610: dwrite d0/df/d11/f64 [0,4194304] 0 2026-03-10T10:19:57.666 INFO:tasks.workunit.client.0.vm02.stdout:6/661: write d0/d8/d9/f6a [72959,67614] 0 2026-03-10T10:19:57.674 INFO:tasks.workunit.client.0.vm02.stdout:5/826: read d1/db/d11/d84/d40/fb3 [803684,6934] 0 2026-03-10T10:19:57.678 INFO:tasks.workunit.client.1.vm05.stdout:0/673: truncate d1/d2/d9/d50/f93 1312487 0 2026-03-10T10:19:57.678 INFO:tasks.workunit.client.0.vm02.stdout:9/655: symlink da/d3c/d4c/d2c/d34/d35/ld1 0 2026-03-10T10:19:57.679 INFO:tasks.workunit.client.1.vm05.stdout:4/559: creat d1/d31/d4b/d6d/fbc x:0 0 0 2026-03-10T10:19:57.683 INFO:tasks.workunit.client.0.vm02.stdout:1/719: write d4/da/d1a/d47/fa0 [1913820,6173] 0 2026-03-10T10:19:57.684 INFO:tasks.workunit.client.0.vm02.stdout:1/720: write d4/dc3/dd4/fe1 [611424,127273] 0 2026-03-10T10:19:57.690 INFO:tasks.workunit.client.1.vm05.stdout:8/628: rename d7/d14/d3a/d49/d65/f83 to d7/d14/d24/d3f/fc9 0 2026-03-10T10:19:57.699 INFO:tasks.workunit.client.1.vm05.stdout:7/700: fsync d5/d1d/f53 0 2026-03-10T10:19:57.700 INFO:tasks.workunit.client.1.vm05.stdout:7/701: write d5/fe [4314775,20576] 0 2026-03-10T10:19:57.706 INFO:tasks.workunit.client.1.vm05.stdout:5/694: dread - da/db/de9/fb7 zero size 2026-03-10T10:19:57.706 INFO:tasks.workunit.client.1.vm05.stdout:5/695: chown da/db/d26/d35/d38/c93 1017004250 1 2026-03-10T10:19:57.713 INFO:tasks.workunit.client.1.vm05.stdout:1/755: write d4/f46 [4625545,126783] 0 2026-03-10T10:19:57.721 INFO:tasks.workunit.client.0.vm02.stdout:6/662: dread d0/d8/d29/d2f/f33 [4194304,4194304] 0 2026-03-10T10:19:57.730 INFO:tasks.workunit.client.0.vm02.stdout:4/817: write d1/d41/d5e/d78/d1a/f93 [1495264,102920] 0 2026-03-10T10:19:57.731 INFO:tasks.workunit.client.0.vm02.stdout:4/818: chown d1/d41/fd6 996 1 2026-03-10T10:19:57.732 
INFO:tasks.workunit.client.0.vm02.stdout:7/678: dwrite d1/dc/d10/f27 [0,4194304] 0 2026-03-10T10:19:57.740 INFO:tasks.workunit.client.1.vm05.stdout:9/611: mknod d0/d1/d13/d62/cd2 0 2026-03-10T10:19:57.741 INFO:tasks.workunit.client.1.vm05.stdout:2/628: mkdir db/d2d/dc6/dc7 0 2026-03-10T10:19:57.745 INFO:tasks.workunit.client.1.vm05.stdout:0/674: unlink d1/d2/c42 0 2026-03-10T10:19:57.754 INFO:tasks.workunit.client.0.vm02.stdout:1/721: mknod d4/d1b/ce5 0 2026-03-10T10:19:57.756 INFO:tasks.workunit.client.1.vm05.stdout:6/695: symlink dd/d36/d3f/dbd/dd5/le0 0 2026-03-10T10:19:57.791 INFO:tasks.workunit.client.1.vm05.stdout:8/629: rmdir d7/d14/d3a 39 2026-03-10T10:19:57.791 INFO:tasks.workunit.client.0.vm02.stdout:2/692: getdents d0/d1a/d49/d5e/d65/dc4 0 2026-03-10T10:19:57.791 INFO:tasks.workunit.client.1.vm05.stdout:8/630: write d7/f59 [813326,119177] 0 2026-03-10T10:19:57.791 INFO:tasks.workunit.client.0.vm02.stdout:2/693: chown d0/d1a/f66 2 1 2026-03-10T10:19:57.792 INFO:tasks.workunit.client.0.vm02.stdout:2/694: chown d0/d1a/d49/d5e/f63 1080882 1 2026-03-10T10:19:57.800 INFO:tasks.workunit.client.0.vm02.stdout:8/673: write d1/d1c/f33 [5301090,59320] 0 2026-03-10T10:19:57.802 INFO:tasks.workunit.client.0.vm02.stdout:8/674: truncate d1/d1c/d43/d5b/fbc 198362 0 2026-03-10T10:19:57.802 INFO:tasks.workunit.client.0.vm02.stdout:8/675: dread - d1/d1c/d24/dad/fc3 zero size 2026-03-10T10:19:57.803 INFO:tasks.workunit.client.0.vm02.stdout:8/676: write d1/d1c/f33 [6198662,50818] 0 2026-03-10T10:19:57.808 INFO:tasks.workunit.client.1.vm05.stdout:7/702: truncate d5/f13 1812679 0 2026-03-10T10:19:57.809 INFO:tasks.workunit.client.0.vm02.stdout:0/719: write d9/d18/f6a [3105563,57211] 0 2026-03-10T10:19:57.818 INFO:tasks.workunit.client.1.vm05.stdout:3/689: dwrite dd/d15/d24/f79 [0,4194304] 0 2026-03-10T10:19:57.829 INFO:tasks.workunit.client.1.vm05.stdout:5/696: creat da/db/de9/feb x:0 0 0 2026-03-10T10:19:57.832 INFO:tasks.workunit.client.1.vm05.stdout:9/612: dread - d0/d1/f75 
zero size 2026-03-10T10:19:57.839 INFO:tasks.workunit.client.1.vm05.stdout:8/631: symlink d7/d14/d24/d3f/d6a/d8a/lca 0 2026-03-10T10:19:57.839 INFO:tasks.workunit.client.1.vm05.stdout:7/703: fdatasync d5/d26/f39 0 2026-03-10T10:19:57.842 INFO:tasks.workunit.client.1.vm05.stdout:1/756: chown d4/d37/d4e/d82/lb4 358 1 2026-03-10T10:19:57.843 INFO:tasks.workunit.client.1.vm05.stdout:2/629: mknod db/d28/d4f/d59/cc8 0 2026-03-10T10:19:57.844 INFO:tasks.workunit.client.1.vm05.stdout:2/630: chown db/d28/d4f/d8b/d9a/d9d 1411 1 2026-03-10T10:19:57.844 INFO:tasks.workunit.client.1.vm05.stdout:0/675: mknod d1/d2/d9/d31/d54/ce5 0 2026-03-10T10:19:57.847 INFO:tasks.workunit.client.1.vm05.stdout:5/697: sync 2026-03-10T10:19:57.853 INFO:tasks.workunit.client.1.vm05.stdout:1/757: rename d4/d37/f89 to d4/d79/d83/dc5/dcb/fde 0 2026-03-10T10:19:57.870 INFO:tasks.workunit.client.0.vm02.stdout:9/656: dwrite da/d3c/d4c/d75/fbb [0,4194304] 0 2026-03-10T10:19:57.882 INFO:tasks.workunit.client.1.vm05.stdout:9/613: symlink d0/df/d74/d8c/ld3 0 2026-03-10T10:19:57.883 INFO:tasks.workunit.client.0.vm02.stdout:9/657: dwrite da/d3c/d4c/d75/fbb [0,4194304] 0 2026-03-10T10:19:57.894 INFO:tasks.workunit.client.1.vm05.stdout:4/560: write d1/d3/f6c [975968,100503] 0 2026-03-10T10:19:57.895 INFO:tasks.workunit.client.0.vm02.stdout:6/663: dread - d0/d8/d29/d94/fa9 zero size 2026-03-10T10:19:57.896 INFO:tasks.workunit.client.0.vm02.stdout:6/664: chown d0/d8/d8c/lae 0 1 2026-03-10T10:19:57.897 INFO:tasks.workunit.client.1.vm05.stdout:2/631: dwrite db/d12/fb5 [0,4194304] 0 2026-03-10T10:19:57.899 INFO:tasks.workunit.client.1.vm05.stdout:2/632: readlink db/d28/d4f/d8b/d9a/lb7 0 2026-03-10T10:19:57.902 INFO:tasks.workunit.client.1.vm05.stdout:0/676: mknod d1/d2/d39/d6e/ce6 0 2026-03-10T10:19:57.904 INFO:tasks.workunit.client.1.vm05.stdout:0/677: dread - d1/d2/d9/d31/d54/f86 zero size 2026-03-10T10:19:57.918 INFO:tasks.workunit.client.1.vm05.stdout:8/632: write d7/d14/d3a/f68 [9048681,121691] 0 
2026-03-10T10:19:57.918 INFO:tasks.workunit.client.1.vm05.stdout:8/633: readlink d7/d14/d24/l31 0 2026-03-10T10:19:57.921 INFO:tasks.workunit.client.0.vm02.stdout:7/679: symlink d1/dc/d99/ld4 0 2026-03-10T10:19:57.928 INFO:tasks.workunit.client.1.vm05.stdout:5/698: read da/db/d26/d35/d38/fab [493987,3061] 0 2026-03-10T10:19:57.935 INFO:tasks.workunit.client.0.vm02.stdout:3/693: link d1/d6/c71 d1/d8/d21/d7d/ce6 0 2026-03-10T10:19:57.939 INFO:tasks.workunit.client.1.vm05.stdout:3/690: dwrite dd/d20/d56/d5e/feb [0,4194304] 0 2026-03-10T10:19:57.952 INFO:tasks.workunit.client.0.vm02.stdout:1/722: symlink d4/da/d27/d38/d3c/le6 0 2026-03-10T10:19:57.964 INFO:tasks.workunit.client.1.vm05.stdout:6/696: getdents dd/d36/d3f/d12/d44/d2a/d7f 0 2026-03-10T10:19:57.969 INFO:tasks.workunit.client.1.vm05.stdout:7/704: dwrite d5/d1d/d29/fb7 [0,4194304] 0 2026-03-10T10:19:57.980 INFO:tasks.workunit.client.1.vm05.stdout:8/634: mknod d7/ccb 0 2026-03-10T10:19:57.985 INFO:tasks.workunit.client.1.vm05.stdout:8/635: dwrite d7/d2f/fb4 [0,4194304] 0 2026-03-10T10:19:57.999 INFO:tasks.workunit.client.0.vm02.stdout:0/720: dread d9/f28 [0,4194304] 0 2026-03-10T10:19:57.999 INFO:tasks.workunit.client.0.vm02.stdout:0/721: stat d9/d18/d1a/d3c/c56 0 2026-03-10T10:19:58.008 INFO:tasks.workunit.client.0.vm02.stdout:9/658: creat da/d3c/d4c/d2c/d34/d35/fd2 x:0 0 0 2026-03-10T10:19:58.015 INFO:tasks.workunit.client.0.vm02.stdout:6/665: creat d0/d8/d9/fd6 x:0 0 0 2026-03-10T10:19:58.016 INFO:tasks.workunit.client.0.vm02.stdout:6/666: stat d0/d8/d9/d7a/dc0 0 2026-03-10T10:19:58.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:57 vm02.local ceph-mon[50200]: pgmap v10: 65 pgs: 65 active+clean; 2.7 GiB data, 9.4 GiB used, 111 GiB / 120 GiB avail; 31 MiB/s rd, 78 MiB/s wr, 196 op/s 2026-03-10T10:19:58.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:57 vm05.local ceph-mon[59051]: pgmap v10: 65 pgs: 65 active+clean; 2.7 GiB data, 9.4 GiB used, 111 GiB / 120 GiB avail; 31 MiB/s rd, 78 
MiB/s wr, 196 op/s 2026-03-10T10:19:58.038 INFO:tasks.workunit.client.1.vm05.stdout:3/691: creat dd/d15/d24/d2c/d6d/da7/dbb/dbd/ff6 x:0 0 0 2026-03-10T10:19:58.040 INFO:tasks.workunit.client.0.vm02.stdout:4/819: write d1/d75/ddd/f42 [1979892,46710] 0 2026-03-10T10:19:58.056 INFO:tasks.workunit.client.0.vm02.stdout:5/827: write d1/db/f88 [1113842,89155] 0 2026-03-10T10:19:58.061 INFO:tasks.workunit.client.1.vm05.stdout:4/561: dwrite d1/d64/fa1 [0,4194304] 0 2026-03-10T10:19:58.062 INFO:tasks.workunit.client.1.vm05.stdout:4/562: stat d1/d31/f13 0 2026-03-10T10:19:58.065 INFO:tasks.workunit.client.1.vm05.stdout:6/697: chown dd/d36/d3f/c31 12404 1 2026-03-10T10:19:58.089 INFO:tasks.workunit.client.1.vm05.stdout:7/705: mknod d5/d1d/d20/d91/da7/cde 0 2026-03-10T10:19:58.090 INFO:tasks.workunit.client.0.vm02.stdout:0/722: mknod d9/d18/dc7/ce8 0 2026-03-10T10:19:58.090 INFO:tasks.workunit.client.1.vm05.stdout:0/678: mkdir d1/d2/dc6/de7 0 2026-03-10T10:19:58.093 INFO:tasks.workunit.client.1.vm05.stdout:7/706: dwrite d5/d17/f19 [0,4194304] 0 2026-03-10T10:19:58.095 INFO:tasks.workunit.client.1.vm05.stdout:8/636: mkdir d7/d2f/d57/dcc 0 2026-03-10T10:19:58.108 INFO:tasks.workunit.client.0.vm02.stdout:3/694: unlink d1/d6/l57 0 2026-03-10T10:19:58.121 INFO:tasks.workunit.client.0.vm02.stdout:5/828: symlink d1/db/d11/d84/d40/d4f/d5f/d6d/d71/l11c 0 2026-03-10T10:19:58.122 INFO:tasks.workunit.client.0.vm02.stdout:5/829: stat d1/db/d11/d84/d40/d4f/c10f 0 2026-03-10T10:19:58.124 INFO:tasks.workunit.client.1.vm05.stdout:1/758: creat d4/fdf x:0 0 0 2026-03-10T10:19:58.129 INFO:tasks.workunit.client.0.vm02.stdout:8/677: link d1/f1b d1/d1c/d24/dad/dbe/fcc 0 2026-03-10T10:19:58.129 INFO:tasks.workunit.client.0.vm02.stdout:9/659: fsync da/d3c/d53/f73 0 2026-03-10T10:19:58.129 INFO:tasks.workunit.client.0.vm02.stdout:2/695: creat d0/d1a/d49/d5e/ff0 x:0 0 0 2026-03-10T10:19:58.147 INFO:tasks.workunit.client.0.vm02.stdout:0/723: dread d9/d18/d1a/f7e [0,4194304] 0 2026-03-10T10:19:58.156 
INFO:tasks.workunit.client.0.vm02.stdout:5/830: truncate d1/db/f2f 3934847 0 2026-03-10T10:19:58.158 INFO:tasks.workunit.client.0.vm02.stdout:5/831: fdatasync d1/d9c/fa8 0 2026-03-10T10:19:58.158 INFO:tasks.workunit.client.0.vm02.stdout:7/680: dwrite d1/f17 [0,4194304] 0 2026-03-10T10:19:58.171 INFO:tasks.workunit.client.0.vm02.stdout:2/696: chown d0/c40 2 1 2026-03-10T10:19:58.174 INFO:tasks.workunit.client.0.vm02.stdout:1/723: dwrite d4/d2c/fc7 [0,4194304] 0 2026-03-10T10:19:58.188 INFO:tasks.workunit.client.0.vm02.stdout:2/697: dread d0/f9 [0,4194304] 0 2026-03-10T10:19:58.194 INFO:tasks.workunit.client.0.vm02.stdout:0/724: mknod d9/d18/d1a/d22/d24/d80/d74/ce9 0 2026-03-10T10:19:58.205 INFO:tasks.workunit.client.1.vm05.stdout:2/633: dwrite db/d1c/d40/d62/f9f [0,4194304] 0 2026-03-10T10:19:58.210 INFO:tasks.workunit.client.0.vm02.stdout:5/832: mknod d1/db/d11/d84/d95/c11d 0 2026-03-10T10:19:58.217 INFO:tasks.workunit.client.0.vm02.stdout:6/667: dwrite d0/d8/d29/d2f/f33 [4194304,4194304] 0 2026-03-10T10:19:58.220 INFO:tasks.workunit.client.1.vm05.stdout:4/563: fdatasync d1/d31/dc/f3a 0 2026-03-10T10:19:58.232 INFO:tasks.workunit.client.0.vm02.stdout:1/724: readlink d4/lbf 0 2026-03-10T10:19:58.239 INFO:tasks.workunit.client.0.vm02.stdout:9/660: rename da/f1b to da/d3c/d4c/d56/fd3 0 2026-03-10T10:19:58.242 INFO:tasks.workunit.client.1.vm05.stdout:0/679: write d1/d2/d9/d31/d13/da2/fd6 [935310,101738] 0 2026-03-10T10:19:58.263 INFO:tasks.workunit.client.0.vm02.stdout:9/661: dread da/d3c/d4c/d38/fb2 [0,4194304] 0 2026-03-10T10:19:58.271 INFO:tasks.workunit.client.0.vm02.stdout:2/698: mkdir d0/d1a/d24/df1 0 2026-03-10T10:19:58.275 INFO:tasks.workunit.client.1.vm05.stdout:7/707: truncate d5/d1d/f53 274153 0 2026-03-10T10:19:58.278 INFO:tasks.workunit.client.0.vm02.stdout:9/662: dwrite da/d3c/d53/f6a [4194304,4194304] 0 2026-03-10T10:19:58.279 INFO:tasks.workunit.client.0.vm02.stdout:4/820: dwrite d1/d32/f46 [4194304,4194304] 0 2026-03-10T10:19:58.287 
INFO:tasks.workunit.client.1.vm05.stdout:8/637: rmdir d7/d14/d24 39 2026-03-10T10:19:58.289 INFO:tasks.workunit.client.1.vm05.stdout:5/699: creat da/db/d28/fec x:0 0 0 2026-03-10T10:19:58.290 INFO:tasks.workunit.client.1.vm05.stdout:3/692: dwrite dd/d15/f1c [0,4194304] 0 2026-03-10T10:19:58.302 INFO:tasks.workunit.client.0.vm02.stdout:0/725: creat d9/d34/d3d/d65/d89/fea x:0 0 0 2026-03-10T10:19:58.304 INFO:tasks.workunit.client.1.vm05.stdout:9/614: getdents d0/d1/d13 0 2026-03-10T10:19:58.307 INFO:tasks.workunit.client.0.vm02.stdout:5/833: truncate d1/db/d11/d13/d28/da7/dd9/fe2 353151 0 2026-03-10T10:19:58.318 INFO:tasks.workunit.client.1.vm05.stdout:2/634: dwrite db/d1c/d40/f70 [0,4194304] 0 2026-03-10T10:19:58.318 INFO:tasks.workunit.client.1.vm05.stdout:2/635: chown db/d2d/l2f 3 1 2026-03-10T10:19:58.318 INFO:tasks.workunit.client.0.vm02.stdout:2/699: unlink d0/d1a/f52 0 2026-03-10T10:19:58.318 INFO:tasks.workunit.client.0.vm02.stdout:5/834: dwrite d1/db/d11/d13/d28/d37/dce/f10b [0,4194304] 0 2026-03-10T10:19:58.344 INFO:tasks.workunit.client.1.vm05.stdout:0/680: truncate d1/d2/d9/d31/d12/d41/f6d 713982 0 2026-03-10T10:19:58.350 INFO:tasks.workunit.client.0.vm02.stdout:4/821: read d1/d41/d5e/d78/d44/f59 [955434,129968] 0 2026-03-10T10:19:58.352 INFO:tasks.workunit.client.1.vm05.stdout:7/708: symlink d5/dd/ldf 0 2026-03-10T10:19:58.356 INFO:tasks.workunit.client.1.vm05.stdout:3/693: mkdir dd/d39/d5f/df7 0 2026-03-10T10:19:58.356 INFO:tasks.workunit.client.0.vm02.stdout:3/695: getdents d1/d8/d44 0 2026-03-10T10:19:58.356 INFO:tasks.workunit.client.1.vm05.stdout:3/694: chown dd/d39 219 1 2026-03-10T10:19:58.360 INFO:tasks.workunit.client.1.vm05.stdout:3/695: dwrite dd/d20/d56/db3/ff4 [0,4194304] 0 2026-03-10T10:19:58.378 INFO:tasks.workunit.client.1.vm05.stdout:2/636: rename db/d61/c76 to db/d2d/dc6/cc9 0 2026-03-10T10:19:58.381 INFO:tasks.workunit.client.1.vm05.stdout:2/637: dwrite db/d2d/f5d [4194304,4194304] 0 2026-03-10T10:19:58.383 
INFO:tasks.workunit.client.0.vm02.stdout:2/700: creat d0/dd4/ff2 x:0 0 0 2026-03-10T10:19:58.395 INFO:tasks.workunit.client.1.vm05.stdout:7/709: dread d5/d17/f4f [0,4194304] 0 2026-03-10T10:19:58.401 INFO:tasks.workunit.client.0.vm02.stdout:4/822: creat d1/d10/d88/db2/f10b x:0 0 0 2026-03-10T10:19:58.401 INFO:tasks.workunit.client.1.vm05.stdout:8/638: dread d7/d14/f5b [8388608,4194304] 0 2026-03-10T10:19:58.401 INFO:tasks.workunit.client.1.vm05.stdout:9/615: dread d0/dc4/f7e [0,4194304] 0 2026-03-10T10:19:58.401 INFO:tasks.workunit.client.1.vm05.stdout:0/681: dread d1/f11 [0,4194304] 0 2026-03-10T10:19:58.403 INFO:tasks.workunit.client.0.vm02.stdout:7/681: getdents d1/dc/d60 0 2026-03-10T10:19:58.405 INFO:tasks.workunit.client.1.vm05.stdout:3/696: read dd/d39/f96 [81646,44549] 0 2026-03-10T10:19:58.405 INFO:tasks.workunit.client.0.vm02.stdout:6/668: mknod d0/d8/d29/d6d/cd7 0 2026-03-10T10:19:58.407 INFO:tasks.workunit.client.1.vm05.stdout:3/697: dread dd/d20/d56/d5e/feb [0,4194304] 0 2026-03-10T10:19:58.409 INFO:tasks.workunit.client.1.vm05.stdout:2/638: creat db/d28/d4f/d59/da4/fca x:0 0 0 2026-03-10T10:19:58.410 INFO:tasks.workunit.client.1.vm05.stdout:7/710: fdatasync d5/d1d/d20/fa2 0 2026-03-10T10:19:58.411 INFO:tasks.workunit.client.1.vm05.stdout:7/711: read d5/d1d/d20/d35/f37 [1566428,118160] 0 2026-03-10T10:19:58.412 INFO:tasks.workunit.client.1.vm05.stdout:7/712: stat d5/d26/f39 0 2026-03-10T10:19:58.413 INFO:tasks.workunit.client.1.vm05.stdout:9/616: rmdir d0/df 39 2026-03-10T10:19:58.418 INFO:tasks.workunit.client.1.vm05.stdout:2/639: dread db/f36 [0,4194304] 0 2026-03-10T10:19:58.418 INFO:tasks.workunit.client.1.vm05.stdout:0/682: rename d1/d2/d9/d31/d12/f5b to d1/dd7/fe8 0 2026-03-10T10:19:58.419 INFO:tasks.workunit.client.0.vm02.stdout:9/663: getdents da/d3c/d4c/d38 0 2026-03-10T10:19:58.420 INFO:tasks.workunit.client.0.vm02.stdout:4/823: getdents d1/d52/d53/dda/de0 0 2026-03-10T10:19:58.420 INFO:tasks.workunit.client.1.vm05.stdout:3/698: symlink 
dd/d20/d9e/lf8 0 2026-03-10T10:19:58.427 INFO:tasks.workunit.client.0.vm02.stdout:8/678: dwrite d1/d1c/d43/d5b/f63 [0,4194304] 0 2026-03-10T10:19:58.434 INFO:tasks.workunit.client.0.vm02.stdout:3/696: sync 2026-03-10T10:19:58.465 INFO:tasks.workunit.client.1.vm05.stdout:2/640: unlink db/d12/l96 0 2026-03-10T10:19:58.466 INFO:tasks.workunit.client.0.vm02.stdout:9/664: symlink da/d3c/d4c/ld4 0 2026-03-10T10:19:58.480 INFO:tasks.workunit.client.1.vm05.stdout:3/699: readlink dd/d15/d24/d2c/d3b/lc2 0 2026-03-10T10:19:58.483 INFO:tasks.workunit.client.0.vm02.stdout:4/824: symlink d1/de8/d109/l10c 0 2026-03-10T10:19:58.484 INFO:tasks.workunit.client.0.vm02.stdout:7/682: link d1/dc/d16/d28/l82 d1/d1b/d8f/dad/d7e/ld5 0 2026-03-10T10:19:58.493 INFO:tasks.workunit.client.0.vm02.stdout:9/665: rename da/d3c/d4c/f60 to da/d3c/d4c/d38/fd5 0 2026-03-10T10:19:58.497 INFO:tasks.workunit.client.1.vm05.stdout:8/639: rename d7/d14/c75 to d7/d14/d24/d3f/d4f/ccd 0 2026-03-10T10:19:58.498 INFO:tasks.workunit.client.1.vm05.stdout:8/640: dread - d7/fb5 zero size 2026-03-10T10:19:58.502 INFO:tasks.workunit.client.1.vm05.stdout:0/683: mknod d1/d2/d39/ce9 0 2026-03-10T10:19:58.505 INFO:tasks.workunit.client.1.vm05.stdout:0/684: dread d1/d2/d9/d31/d13/d17/f57 [0,4194304] 0 2026-03-10T10:19:58.509 INFO:tasks.workunit.client.0.vm02.stdout:4/825: symlink d1/d41/d5e/d78/d44/de7/l10d 0 2026-03-10T10:19:58.528 INFO:tasks.workunit.client.1.vm05.stdout:7/713: creat d5/d17/fe0 x:0 0 0 2026-03-10T10:19:58.531 INFO:tasks.workunit.client.1.vm05.stdout:9/617: creat d0/d1/d13/de/fd4 x:0 0 0 2026-03-10T10:19:58.539 INFO:tasks.workunit.client.1.vm05.stdout:0/685: creat d1/d2/d39/d6e/dc0/fea x:0 0 0 2026-03-10T10:19:58.540 INFO:tasks.workunit.client.1.vm05.stdout:6/698: dwrite dd/d36/d3f/f61 [4194304,4194304] 0 2026-03-10T10:19:58.541 INFO:tasks.workunit.client.1.vm05.stdout:7/714: readlink d5/d17/l5f 0 2026-03-10T10:19:58.541 INFO:tasks.workunit.client.1.vm05.stdout:9/618: symlink d0/d1/d13/d62/ld5 0 
2026-03-10T10:19:58.541 INFO:tasks.workunit.client.1.vm05.stdout:2/641: creat db/d2d/fcb x:0 0 0 2026-03-10T10:19:58.545 INFO:tasks.workunit.client.1.vm05.stdout:2/642: dread db/d1c/d40/f4d [0,4194304] 0 2026-03-10T10:19:58.553 INFO:tasks.workunit.client.1.vm05.stdout:2/643: readlink db/d2d/l3e 0 2026-03-10T10:19:58.564 INFO:tasks.workunit.client.1.vm05.stdout:0/686: mknod d1/d2/d9/d31/ceb 0 2026-03-10T10:19:58.564 INFO:tasks.workunit.client.1.vm05.stdout:2/644: mkdir db/d61/dcc 0 2026-03-10T10:19:58.564 INFO:tasks.workunit.client.1.vm05.stdout:2/645: creat db/d28/d4f/d8b/d9a/fcd x:0 0 0 2026-03-10T10:19:58.564 INFO:tasks.workunit.client.1.vm05.stdout:0/687: truncate d1/d2/d9/d50/f93 953166 0 2026-03-10T10:19:58.564 INFO:tasks.workunit.client.1.vm05.stdout:0/688: read d1/d2/d9/d31/f8c [4533746,48860] 0 2026-03-10T10:19:58.564 INFO:tasks.workunit.client.1.vm05.stdout:0/689: dread d1/d2/d9/d31/d12/d20/f37 [4194304,4194304] 0 2026-03-10T10:19:58.565 INFO:tasks.workunit.client.1.vm05.stdout:0/690: fdatasync d1/d2/d9/f32 0 2026-03-10T10:19:58.566 INFO:tasks.workunit.client.1.vm05.stdout:0/691: readlink d1/d2/d9/d31/d13/d2f/l65 0 2026-03-10T10:19:58.569 INFO:tasks.workunit.client.1.vm05.stdout:9/619: sync 2026-03-10T10:19:58.570 INFO:tasks.workunit.client.1.vm05.stdout:9/620: readlink d0/d70/lb4 0 2026-03-10T10:19:58.576 INFO:tasks.workunit.client.1.vm05.stdout:9/621: fdatasync d0/df/d11/f8d 0 2026-03-10T10:19:58.576 INFO:tasks.workunit.client.0.vm02.stdout:1/725: write d4/d1b/f44 [1239548,74712] 0 2026-03-10T10:19:58.577 INFO:tasks.workunit.client.1.vm05.stdout:9/622: stat d0/d1/f6d 0 2026-03-10T10:19:58.580 INFO:tasks.workunit.client.1.vm05.stdout:0/692: dread d1/d2/d9/d31/d13/f7a [0,4194304] 0 2026-03-10T10:19:58.587 INFO:tasks.workunit.client.1.vm05.stdout:9/623: unlink d0/d1/fb 0 2026-03-10T10:19:58.588 INFO:tasks.workunit.client.0.vm02.stdout:1/726: unlink d4/da/d1a/l1e 0 2026-03-10T10:19:58.588 INFO:tasks.workunit.client.1.vm05.stdout:9/624: write d0/d1/d13/de/fd4 
[599465,63815] 0 2026-03-10T10:19:58.589 INFO:tasks.workunit.client.0.vm02.stdout:1/727: truncate d4/dc3/fd8 787797 0 2026-03-10T10:19:58.613 INFO:tasks.workunit.client.1.vm05.stdout:4/564: write d1/d70/f78 [4685110,31385] 0 2026-03-10T10:19:58.616 INFO:tasks.workunit.client.1.vm05.stdout:4/565: symlink d1/d64/da9/dae/lbd 0 2026-03-10T10:19:58.617 INFO:tasks.workunit.client.1.vm05.stdout:4/566: read - d1/d64/da9/fb9 zero size 2026-03-10T10:19:58.619 INFO:tasks.workunit.client.1.vm05.stdout:4/567: rename d1/d64/da9/dae/lbd to d1/d31/d72/lbe 0 2026-03-10T10:19:58.659 INFO:tasks.workunit.client.0.vm02.stdout:5/835: dwrite d1/db/d11/d16/d48/fb5 [0,4194304] 0 2026-03-10T10:19:58.664 INFO:tasks.workunit.client.0.vm02.stdout:5/836: dread - d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fd8 zero size 2026-03-10T10:19:58.668 INFO:tasks.workunit.client.0.vm02.stdout:5/837: dwrite d1/db/d11/d62/d67/ff7 [0,4194304] 0 2026-03-10T10:19:58.668 INFO:tasks.workunit.client.0.vm02.stdout:5/838: readlink d1/l8 0 2026-03-10T10:19:58.669 INFO:tasks.workunit.client.0.vm02.stdout:5/839: chown d1/db/d11/d84/d95/cea 22132262 1 2026-03-10T10:19:58.671 INFO:tasks.workunit.client.0.vm02.stdout:5/840: symlink d1/db/d11/d13/d28/da7/l11e 0 2026-03-10T10:19:58.676 INFO:tasks.workunit.client.0.vm02.stdout:5/841: creat d1/db/d11/d13/d28/f11f x:0 0 0 2026-03-10T10:19:58.702 INFO:tasks.workunit.client.1.vm05.stdout:5/700: dwrite da/d9a/dc7/f83 [0,4194304] 0 2026-03-10T10:19:58.704 INFO:tasks.workunit.client.1.vm05.stdout:5/701: mkdir da/d9a/daf/ded 0 2026-03-10T10:19:58.706 INFO:tasks.workunit.client.1.vm05.stdout:5/702: rename da/db/d26/d35 to da/db/dee 0 2026-03-10T10:19:58.707 INFO:tasks.workunit.client.1.vm05.stdout:5/703: mkdir da/db/d28/d32/def 0 2026-03-10T10:19:58.709 INFO:tasks.workunit.client.1.vm05.stdout:5/704: rename da/db/fa1 to da/db/d26/d70/ff0 0 2026-03-10T10:19:58.722 INFO:tasks.workunit.client.1.vm05.stdout:5/705: getdents da/d9a/dc7/db4/dbd 0 2026-03-10T10:19:58.738 
INFO:tasks.workunit.client.1.vm05.stdout:1/759: write d4/df/d1c/d53/daa/fdd [3537023,76020] 0 2026-03-10T10:19:58.765 INFO:tasks.workunit.client.1.vm05.stdout:1/760: getdents d4/d39/d3e 0 2026-03-10T10:19:58.767 INFO:tasks.workunit.client.1.vm05.stdout:1/761: rename d4/d37/d4e to d4/df/de0 0 2026-03-10T10:19:58.776 INFO:tasks.workunit.client.1.vm05.stdout:1/762: dread d4/d39/d3e/f96 [0,4194304] 0 2026-03-10T10:19:58.777 INFO:tasks.workunit.client.1.vm05.stdout:1/763: creat d4/d39/d3e/db1/db8/fe1 x:0 0 0 2026-03-10T10:19:58.779 INFO:tasks.workunit.client.1.vm05.stdout:1/764: getdents d4/dd 0 2026-03-10T10:19:58.783 INFO:tasks.workunit.client.1.vm05.stdout:1/765: symlink d4/d39/le2 0 2026-03-10T10:19:58.784 INFO:tasks.workunit.client.1.vm05.stdout:1/766: symlink d4/le3 0 2026-03-10T10:19:58.785 INFO:tasks.workunit.client.1.vm05.stdout:1/767: rmdir d4/df/d1c/d92 39 2026-03-10T10:19:58.786 INFO:tasks.workunit.client.1.vm05.stdout:1/768: chown d4/d79/d83/dc5/dcb 3612 1 2026-03-10T10:19:58.788 INFO:tasks.workunit.client.1.vm05.stdout:1/769: mknod d4/d39/d88/ce4 0 2026-03-10T10:19:58.789 INFO:tasks.workunit.client.0.vm02.stdout:0/726: truncate d9/d18/d1a/d22/d24/d80/fe0 122380 0 2026-03-10T10:19:58.789 INFO:tasks.workunit.client.0.vm02.stdout:0/727: readlink d9/l1c 0 2026-03-10T10:19:58.797 INFO:tasks.workunit.client.1.vm05.stdout:1/770: dread d4/df/de0/f62 [0,4194304] 0 2026-03-10T10:19:58.808 INFO:tasks.workunit.client.0.vm02.stdout:0/728: creat d9/d18/d1a/d22/d24/d80/d49/feb x:0 0 0 2026-03-10T10:19:58.809 INFO:tasks.workunit.client.1.vm05.stdout:1/771: rmdir d4/df/d1c 39 2026-03-10T10:19:58.811 INFO:tasks.workunit.client.1.vm05.stdout:1/772: symlink d4/d39/d3e/db1/le5 0 2026-03-10T10:19:58.813 INFO:tasks.workunit.client.0.vm02.stdout:0/729: unlink d9/d34/d3d/d65/d89/dd3/da7/fc6 0 2026-03-10T10:19:58.819 INFO:tasks.workunit.client.0.vm02.stdout:0/730: dread d9/d18/dc7/dca/f95 [0,4194304] 0 2026-03-10T10:19:58.819 INFO:tasks.workunit.client.0.vm02.stdout:0/731: chown 
d9/d34/d3d/d65/d89/dd3/da8/fd7 259152190 1 2026-03-10T10:19:58.820 INFO:tasks.workunit.client.0.vm02.stdout:0/732: write d9/d18/d1a/d22/d24/d80/d49/f5e [5179580,40415] 0 2026-03-10T10:19:58.828 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:58 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:58.829 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:58 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:58.829 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:58 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:58.829 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:58 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:58.839 INFO:tasks.workunit.client.1.vm05.stdout:1/773: dread d4/df/d1c/f63 [4194304,4194304] 0 2026-03-10T10:19:58.839 INFO:tasks.workunit.client.1.vm05.stdout:1/774: dread - d4/fdf zero size 2026-03-10T10:19:58.840 INFO:tasks.workunit.client.1.vm05.stdout:1/775: chown d4/d3d 54747187 1 2026-03-10T10:19:58.841 INFO:tasks.workunit.client.1.vm05.stdout:1/776: rmdir d4/d79/d83/dc5/dcb 39 2026-03-10T10:19:58.842 INFO:tasks.workunit.client.1.vm05.stdout:1/777: write d4/fda [1007603,30510] 0 2026-03-10T10:19:58.844 INFO:tasks.workunit.client.1.vm05.stdout:1/778: write d4/df/d1c/d92/f97 [440643,126090] 0 2026-03-10T10:19:58.847 INFO:tasks.workunit.client.0.vm02.stdout:2/701: write d0/d10/f4b [712341,96862] 0 2026-03-10T10:19:58.852 INFO:tasks.workunit.client.1.vm05.stdout:1/779: truncate d4/d3d/d6e/fc3 897092 0 2026-03-10T10:19:58.878 INFO:tasks.workunit.client.0.vm02.stdout:2/702: fsync d0/d1a/f25 0 2026-03-10T10:19:58.880 
INFO:tasks.workunit.client.0.vm02.stdout:2/703: stat d0/d1a/d49/d5e/f68 0 2026-03-10T10:19:58.885 INFO:tasks.workunit.client.0.vm02.stdout:6/669: dwrite d0/d8/d29/fc4 [0,4194304] 0 2026-03-10T10:19:58.890 INFO:tasks.workunit.client.1.vm05.stdout:1/780: mkdir d4/d79/de6 0 2026-03-10T10:19:58.900 INFO:tasks.workunit.client.0.vm02.stdout:6/670: dread d0/d87/f90 [0,4194304] 0 2026-03-10T10:19:58.906 INFO:tasks.workunit.client.0.vm02.stdout:6/671: dwrite d0/d8/d9/fd6 [0,4194304] 0 2026-03-10T10:19:58.914 INFO:tasks.workunit.client.0.vm02.stdout:2/704: dread d0/d1a/d49/d5e/f60 [4194304,4194304] 0 2026-03-10T10:19:58.924 INFO:tasks.workunit.client.1.vm05.stdout:1/781: fsync d4/df/d1c/f38 0 2026-03-10T10:19:58.942 INFO:tasks.workunit.client.0.vm02.stdout:3/697: dwrite d1/d6/f3a [0,4194304] 0 2026-03-10T10:19:58.945 INFO:tasks.workunit.client.0.vm02.stdout:3/698: chown d1/d6/c71 57393236 1 2026-03-10T10:19:58.964 INFO:tasks.workunit.client.0.vm02.stdout:8/679: write d1/d1c/d43/d6a/da8/d56/db5/fc1 [3756879,14761] 0 2026-03-10T10:19:58.968 INFO:tasks.workunit.client.1.vm05.stdout:1/782: symlink d4/d79/d83/dc5/dcb/le7 0 2026-03-10T10:19:58.976 INFO:tasks.workunit.client.0.vm02.stdout:8/680: dread d1/d2/f29 [0,4194304] 0 2026-03-10T10:19:58.978 INFO:tasks.workunit.client.0.vm02.stdout:3/699: creat d1/d20/d52/dd3/fe7 x:0 0 0 2026-03-10T10:19:58.989 INFO:tasks.workunit.client.0.vm02.stdout:7/683: dwrite d1/dc/d16/d28/f73 [0,4194304] 0 2026-03-10T10:19:58.992 INFO:tasks.workunit.client.0.vm02.stdout:8/681: chown d1/d1c/f1e 7110 1 2026-03-10T10:19:58.997 INFO:tasks.workunit.client.0.vm02.stdout:8/682: dwrite d1/d1c/f33 [4194304,4194304] 0 2026-03-10T10:19:59.009 INFO:tasks.workunit.client.1.vm05.stdout:3/700: write dd/f52 [470156,23794] 0 2026-03-10T10:19:59.009 INFO:tasks.workunit.client.0.vm02.stdout:4/826: rename d1/d41 to d1/d75/ddd/d10e 0 2026-03-10T10:19:59.009 INFO:tasks.workunit.client.0.vm02.stdout:9/666: write da/d3c/d4c/d2c/d96/f9b [181333,6722] 0 2026-03-10T10:19:59.010 
INFO:tasks.workunit.client.1.vm05.stdout:3/701: write dd/d15/d4c/db5/ff3 [571177,72105] 0 2026-03-10T10:19:59.013 INFO:tasks.workunit.client.1.vm05.stdout:3/702: dwrite dd/d20/d56/d5e/dab/fc4 [0,4194304] 0 2026-03-10T10:19:59.019 INFO:tasks.workunit.client.0.vm02.stdout:7/684: symlink d1/dc/d10/d38/ld6 0 2026-03-10T10:19:59.022 INFO:tasks.workunit.client.1.vm05.stdout:3/703: rename f1 to dd/d20/d9e/ff9 0 2026-03-10T10:19:59.028 INFO:tasks.workunit.client.0.vm02.stdout:8/683: rmdir d1/d1c/d43/d6a/da8/d8e 39 2026-03-10T10:19:59.028 INFO:tasks.workunit.client.0.vm02.stdout:8/684: write d1/d1c/d24/d71/fa2 [625221,38068] 0 2026-03-10T10:19:59.029 INFO:tasks.workunit.client.0.vm02.stdout:8/685: readlink d1/d2/l47 0 2026-03-10T10:19:59.037 INFO:tasks.workunit.client.1.vm05.stdout:3/704: fdatasync dd/d15/d24/d2c/d3b/f48 0 2026-03-10T10:19:59.038 INFO:tasks.workunit.client.1.vm05.stdout:8/641: dwrite d7/d14/d15/f3c [4194304,4194304] 0 2026-03-10T10:19:59.039 INFO:tasks.workunit.client.1.vm05.stdout:8/642: fdatasync d7/d14/d15/da7/faf 0 2026-03-10T10:19:59.040 INFO:tasks.workunit.client.1.vm05.stdout:8/643: write d7/d14/d15/da7/faf [4957381,1429] 0 2026-03-10T10:19:59.043 INFO:tasks.workunit.client.1.vm05.stdout:8/644: write d7/d14/d15/f3c [4188991,81222] 0 2026-03-10T10:19:59.059 INFO:tasks.workunit.client.1.vm05.stdout:3/705: truncate dd/d39/d66/f6e 1315193 0 2026-03-10T10:19:59.059 INFO:tasks.workunit.client.1.vm05.stdout:6/699: write f2 [4987413,70547] 0 2026-03-10T10:19:59.063 INFO:tasks.workunit.client.0.vm02.stdout:7/685: unlink d1/dc/c84 0 2026-03-10T10:19:59.084 INFO:tasks.workunit.client.1.vm05.stdout:8/645: rename d7/d2f/da3 to d7/d14/d15/d3b/da0/dce 0 2026-03-10T10:19:59.085 INFO:tasks.workunit.client.1.vm05.stdout:7/715: write d5/dd/fa9 [1110051,76733] 0 2026-03-10T10:19:59.092 INFO:tasks.workunit.client.1.vm05.stdout:2/646: truncate db/d1c/f1f 3247232 0 2026-03-10T10:19:59.109 INFO:tasks.workunit.client.0.vm02.stdout:3/700: getdents d1/d6/d8e 0 
2026-03-10T10:19:59.113 INFO:tasks.workunit.client.0.vm02.stdout:8/686: unlink d1/d1c/d43/d6a/da8/c58 0 2026-03-10T10:19:59.130 INFO:tasks.workunit.client.0.vm02.stdout:9/667: creat da/d3c/d4c/d38/d82/d89/fd6 x:0 0 0 2026-03-10T10:19:59.136 INFO:tasks.workunit.client.1.vm05.stdout:0/693: dwrite d1/d2/d5d/f5f [0,4194304] 0 2026-03-10T10:19:59.138 INFO:tasks.workunit.client.0.vm02.stdout:7/686: symlink d1/dc/ld7 0 2026-03-10T10:19:59.143 INFO:tasks.workunit.client.0.vm02.stdout:1/728: write d4/da/d27/d38/fad [341254,50577] 0 2026-03-10T10:19:59.148 INFO:tasks.workunit.client.1.vm05.stdout:9/625: write d0/d1/fb0 [801905,32923] 0 2026-03-10T10:19:59.158 INFO:tasks.workunit.client.1.vm05.stdout:9/626: sync 2026-03-10T10:19:59.167 INFO:tasks.workunit.client.1.vm05.stdout:9/627: dread d0/df/d11/f8d [0,4194304] 0 2026-03-10T10:19:59.188 INFO:tasks.workunit.client.1.vm05.stdout:4/568: write d1/d31/dc/f21 [2003660,23091] 0 2026-03-10T10:19:59.189 INFO:tasks.workunit.client.1.vm05.stdout:4/569: write d1/d31/dc/d40/d45/f66 [59299,4066] 0 2026-03-10T10:19:59.235 INFO:tasks.workunit.client.0.vm02.stdout:5/842: dwrite d1/db/d11/d1a/fc6 [0,4194304] 0 2026-03-10T10:19:59.262 INFO:tasks.workunit.client.0.vm02.stdout:9/668: symlink da/d3c/d4c/d38/d4a/d70/ld7 0 2026-03-10T10:19:59.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:58 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:59.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:58 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:19:59.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:58 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:59.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:58 vm05.local ceph-mon[59051]: from='mgr.14674 
192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:59.289 INFO:tasks.workunit.client.0.vm02.stdout:5/843: creat d1/db/d11/d16/d48/f120 x:0 0 0 2026-03-10T10:19:59.290 INFO:tasks.workunit.client.0.vm02.stdout:5/844: chown d1/db/d11/d84/l78 557961209 1 2026-03-10T10:19:59.293 INFO:tasks.workunit.client.0.vm02.stdout:5/845: readlink d1/db/d11/d84/l78 0 2026-03-10T10:19:59.295 INFO:tasks.workunit.client.0.vm02.stdout:5/846: chown d1/db/d11/d13 30016488 1 2026-03-10T10:19:59.310 INFO:tasks.workunit.client.1.vm05.stdout:5/706: write f5 [810597,90787] 0 2026-03-10T10:19:59.311 INFO:tasks.workunit.client.0.vm02.stdout:7/687: creat d1/d1b/d8f/dad/fd8 x:0 0 0 2026-03-10T10:19:59.320 INFO:tasks.workunit.client.0.vm02.stdout:1/729: creat d4/da/d27/d38/d80/fe7 x:0 0 0 2026-03-10T10:19:59.337 INFO:tasks.workunit.client.0.vm02.stdout:7/688: mkdir d1/dc/d55/d9a/dd9 0 2026-03-10T10:19:59.379 INFO:tasks.workunit.client.1.vm05.stdout:3/706: truncate dd/d39/f96 281154 0 2026-03-10T10:19:59.382 INFO:tasks.workunit.client.0.vm02.stdout:9/669: getdents da/d3c/d4c/d56 0 2026-03-10T10:19:59.389 INFO:tasks.workunit.client.1.vm05.stdout:6/700: unlink dd/d36/d3f/c31 0 2026-03-10T10:19:59.402 INFO:tasks.workunit.client.0.vm02.stdout:7/689: fsync d1/dc/d10/d38/f96 0 2026-03-10T10:19:59.402 INFO:tasks.workunit.client.0.vm02.stdout:0/733: dwrite d9/d18/d1a/f88 [0,4194304] 0 2026-03-10T10:19:59.405 INFO:tasks.workunit.client.0.vm02.stdout:0/734: dread - d9/d34/d3d/d65/d89/dd3/da7/db7/de1/fe6 zero size 2026-03-10T10:19:59.406 INFO:tasks.workunit.client.1.vm05.stdout:7/716: mkdir d5/d1d/d29/d60/de1 0 2026-03-10T10:19:59.420 INFO:tasks.workunit.client.1.vm05.stdout:0/694: unlink d1/d2/d9/d31/d13/d15/f52 0 2026-03-10T10:19:59.421 INFO:tasks.workunit.client.1.vm05.stdout:9/628: dread - d0/d1/d16/f92 zero size 2026-03-10T10:19:59.421 INFO:tasks.workunit.client.0.vm02.stdout:6/672: write d0/d8/d9/f82 [2099654,81876] 0 
2026-03-10T10:19:59.427 INFO:tasks.workunit.client.1.vm05.stdout:1/783: truncate d4/df/d1c/d53/daa/fa9 2306883 0 2026-03-10T10:19:59.453 INFO:tasks.workunit.client.0.vm02.stdout:7/690: creat d1/dc/d16/fda x:0 0 0 2026-03-10T10:19:59.453 INFO:tasks.workunit.client.0.vm02.stdout:6/673: symlink d0/d8/d29/d2f/d4b/da5/ld8 0 2026-03-10T10:19:59.453 INFO:tasks.workunit.client.0.vm02.stdout:2/705: dwrite d0/d10/f93 [0,4194304] 0 2026-03-10T10:19:59.453 INFO:tasks.workunit.client.0.vm02.stdout:4/827: dwrite d1/d75/ddd/d10e/fc3 [0,4194304] 0 2026-03-10T10:19:59.453 INFO:tasks.workunit.client.1.vm05.stdout:4/570: rename d1/d70 to d1/d31/d76/dac/db8/dbf 0 2026-03-10T10:19:59.454 INFO:tasks.workunit.client.1.vm05.stdout:4/571: dwrite d1/fb6 [0,4194304] 0 2026-03-10T10:19:59.454 INFO:tasks.workunit.client.1.vm05.stdout:5/707: read - da/db/dee/fd3 zero size 2026-03-10T10:19:59.463 INFO:tasks.workunit.client.1.vm05.stdout:2/647: mkdir db/d28/d4f/d59/dce 0 2026-03-10T10:19:59.468 INFO:tasks.workunit.client.1.vm05.stdout:3/707: dread dd/d15/d1f/f2b [0,4194304] 0 2026-03-10T10:19:59.469 INFO:tasks.workunit.client.0.vm02.stdout:2/706: unlink d0/d1a/d49/deb/f98 0 2026-03-10T10:19:59.473 INFO:tasks.workunit.client.1.vm05.stdout:0/695: dread d1/d2/d9/d31/d13/d17/f1b [0,4194304] 0 2026-03-10T10:19:59.479 INFO:tasks.workunit.client.0.vm02.stdout:9/670: sync 2026-03-10T10:19:59.484 INFO:tasks.workunit.client.0.vm02.stdout:0/735: creat d9/d18/d1a/d22/d24/d8e/fec x:0 0 0 2026-03-10T10:19:59.487 INFO:tasks.workunit.client.1.vm05.stdout:1/784: truncate d4/d39/d3e/da0/fc9 418037 0 2026-03-10T10:19:59.487 INFO:tasks.workunit.client.0.vm02.stdout:8/687: dwrite d1/d1c/d43/d6a/da8/f4f [4194304,4194304] 0 2026-03-10T10:19:59.487 INFO:tasks.workunit.client.0.vm02.stdout:3/701: dwrite d1/d20/fa4 [0,4194304] 0 2026-03-10T10:19:59.487 INFO:tasks.workunit.client.0.vm02.stdout:8/688: stat d1/d1c/d24/dad/dbe 0 2026-03-10T10:19:59.613 INFO:tasks.workunit.client.1.vm05.stdout:4/572: creat d1/d64/da9/fc0 x:0 0 
0 2026-03-10T10:19:59.614 INFO:tasks.workunit.client.0.vm02.stdout:1/730: write d4/d1b/fc6 [238883,115213] 0 2026-03-10T10:19:59.614 INFO:tasks.workunit.client.0.vm02.stdout:5/847: write d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fb6 [2041105,48572] 0 2026-03-10T10:19:59.619 INFO:tasks.workunit.client.1.vm05.stdout:5/708: fsync da/db/d26/d5c/fc5 0 2026-03-10T10:19:59.623 INFO:tasks.workunit.client.0.vm02.stdout:9/671: fdatasync da/d3c/d4c/d2c/f32 0 2026-03-10T10:19:59.624 INFO:tasks.workunit.client.0.vm02.stdout:9/672: stat da/d3c/d4c/d38/d82/d89/fb5 0 2026-03-10T10:19:59.628 INFO:tasks.workunit.client.0.vm02.stdout:8/689: rename d1/d1c/d24/f8a to d1/dc7/fcd 0 2026-03-10T10:19:59.642 INFO:tasks.workunit.client.0.vm02.stdout:6/674: read d0/d8/d29/d2f/d50/d98/f9f [281695,69554] 0 2026-03-10T10:19:59.685 INFO:tasks.workunit.client.1.vm05.stdout:9/629: creat d0/d1/dcc/dd0/fd6 x:0 0 0 2026-03-10T10:19:59.705 INFO:tasks.workunit.client.0.vm02.stdout:7/691: dwrite d1/dc/f2e [0,4194304] 0 2026-03-10T10:19:59.723 INFO:tasks.workunit.client.0.vm02.stdout:4/828: link d1/d52/cb7 d1/d75/ddd/d10e/d5e/d78/d1a/c10f 0 2026-03-10T10:19:59.755 INFO:tasks.workunit.client.0.vm02.stdout:9/673: fsync da/d3c/d4c/f41 0 2026-03-10T10:19:59.757 INFO:tasks.workunit.client.0.vm02.stdout:9/674: chown da/d3c/d4c/d2c/d34/d35/l46 1443 1 2026-03-10T10:19:59.768 INFO:tasks.workunit.client.1.vm05.stdout:8/646: link d7/d2f/d57/c5a d7/d14/d62/d90/dac/ccf 0 2026-03-10T10:19:59.769 INFO:tasks.workunit.client.1.vm05.stdout:3/708: getdents dd/d39/d5f/df7 0 2026-03-10T10:19:59.812 INFO:tasks.workunit.client.0.vm02.stdout:7/692: rename d1/f80 to d1/dc/d99/fdb 0 2026-03-10T10:19:59.828 INFO:tasks.workunit.client.1.vm05.stdout:6/701: rename dd/d36/d3f/d12/d44/d2a/d3d/f76 to dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/fe1 0 2026-03-10T10:19:59.859 INFO:tasks.workunit.client.1.vm05.stdout:7/717: getdents d5/d1d/d20/d91/da7 0 2026-03-10T10:19:59.871 INFO:tasks.workunit.client.1.vm05.stdout:0/696: link d1/d2/d9/d31/d13/d17/fd3 
d1/d2/d9/d50/fec 0 2026-03-10T10:19:59.907 INFO:tasks.workunit.client.1.vm05.stdout:3/709: creat dd/d15/d24/d2c/dd0/dd9/ffa x:0 0 0 2026-03-10T10:19:59.931 INFO:tasks.workunit.client.1.vm05.stdout:7/718: creat d5/d1d/fe2 x:0 0 0 2026-03-10T10:19:59.931 INFO:tasks.workunit.client.1.vm05.stdout:4/573: getdents d1/d31/dc/d40/d45/daa 0 2026-03-10T10:19:59.932 INFO:tasks.workunit.client.1.vm05.stdout:4/574: chown d1/d31/d4b/d6d/f85 8865 1 2026-03-10T10:19:59.935 INFO:tasks.workunit.client.1.vm05.stdout:4/575: dread d1/f39 [0,4194304] 0 2026-03-10T10:19:59.940 INFO:tasks.workunit.client.0.vm02.stdout:1/731: mkdir d4/da/d1a/d5b/d93/de8 0 2026-03-10T10:19:59.951 INFO:tasks.workunit.client.0.vm02.stdout:3/702: creat d1/fe8 x:0 0 0 2026-03-10T10:19:59.962 INFO:tasks.workunit.client.0.vm02.stdout:9/675: chown da/f6f 24445254 1 2026-03-10T10:19:59.973 INFO:tasks.workunit.client.0.vm02.stdout:8/690: symlink d1/d1c/d43/d5b/d88/lce 0 2026-03-10T10:19:59.974 INFO:tasks.workunit.client.1.vm05.stdout:7/719: truncate d5/d1d/d20/d2d/d68/f98 163813 0 2026-03-10T10:19:59.974 INFO:tasks.workunit.client.1.vm05.stdout:4/576: mknod d1/d31/d76/cc1 0 2026-03-10T10:19:59.974 INFO:tasks.workunit.client.0.vm02.stdout:8/691: stat d1/d1c/d43/c96 0 2026-03-10T10:19:59.974 INFO:tasks.workunit.client.1.vm05.stdout:3/710: truncate dd/d39/f51 60796 0 2026-03-10T10:19:59.974 INFO:tasks.workunit.client.1.vm05.stdout:3/711: readlink dd/d15/d1f/l46 0 2026-03-10T10:19:59.978 INFO:tasks.workunit.client.1.vm05.stdout:8/647: dread d7/d14/d3a/d49/f72 [0,4194304] 0 2026-03-10T10:20:00.003 INFO:tasks.workunit.client.0.vm02.stdout:4/829: mknod d1/d52/d53/dda/de0/c110 0 2026-03-10T10:20:00.013 INFO:tasks.workunit.client.0.vm02.stdout:1/732: read - d4/da/d1a/d47/d78/fdc zero size 2026-03-10T10:20:00.018 INFO:tasks.workunit.client.0.vm02.stdout:1/733: dwrite d4/d1b/fc6 [0,4194304] 0 2026-03-10T10:20:00.031 INFO:tasks.workunit.client.1.vm05.stdout:7/720: mkdir d5/d1d/de3 0 2026-03-10T10:20:00.031 
INFO:tasks.workunit.client.0.vm02.stdout:9/676: rmdir da/d3c/d4c/d38/d4a/d70 39 2026-03-10T10:20:00.052 INFO:tasks.workunit.client.1.vm05.stdout:4/577: dread d1/d64/f84 [0,4194304] 0 2026-03-10T10:20:00.063 INFO:tasks.workunit.client.1.vm05.stdout:0/697: getdents d1/d2/d9/d31/d54 0 2026-03-10T10:20:00.068 INFO:tasks.workunit.client.0.vm02.stdout:2/707: dwrite d0/d1a/f66 [0,4194304] 0 2026-03-10T10:20:00.086 INFO:tasks.workunit.client.1.vm05.stdout:7/721: unlink d5/d17/d85/ld0 0 2026-03-10T10:20:00.107 INFO:tasks.workunit.client.0.vm02.stdout:9/677: write da/d3c/d4c/f23 [5291317,71321] 0 2026-03-10T10:20:00.107 INFO:tasks.workunit.client.0.vm02.stdout:9/678: read - da/fae zero size 2026-03-10T10:20:00.112 INFO:tasks.workunit.client.0.vm02.stdout:8/692: mkdir d1/d1c/d24/dcf 0 2026-03-10T10:20:00.125 INFO:tasks.workunit.client.1.vm05.stdout:2/648: write db/f2e [2852730,88506] 0 2026-03-10T10:20:00.126 INFO:tasks.workunit.client.0.vm02.stdout:2/708: readlink d0/l16 0 2026-03-10T10:20:00.139 INFO:tasks.workunit.client.0.vm02.stdout:8/693: rmdir d1/d1c/d43/d5b/dab 39 2026-03-10T10:20:00.151 INFO:tasks.workunit.client.0.vm02.stdout:2/709: truncate d0/d1a/d49/f78 491899 0 2026-03-10T10:20:00.164 INFO:tasks.workunit.client.0.vm02.stdout:8/694: fsync d1/f65 0 2026-03-10T10:20:00.177 INFO:tasks.workunit.client.0.vm02.stdout:4/830: link d1/d75/ddd/d10e/d5e/d78/d44/f90 d1/f111 0 2026-03-10T10:20:00.178 INFO:tasks.workunit.client.0.vm02.stdout:4/831: chown d1/d75/ddd/d10e/d5e/d78/d37/f48 786814 1 2026-03-10T10:20:00.197 INFO:tasks.workunit.client.1.vm05.stdout:8/648: link d7/d14/d24/c3e d7/d14/d24/d3f/d4f/cd0 0 2026-03-10T10:20:00.210 INFO:tasks.workunit.client.0.vm02.stdout:2/710: creat d0/d1a/d49/dcc/ff3 x:0 0 0 2026-03-10T10:20:00.210 INFO:tasks.workunit.client.0.vm02.stdout:9/679: link da/d9d/ca9 da/d3c/d4c/d56/dbc/cd8 0 2026-03-10T10:20:00.210 INFO:tasks.workunit.client.0.vm02.stdout:9/680: truncate da/d3c/d4c/d2c/d34/d35/fcd 614786 0 2026-03-10T10:20:00.211 
INFO:tasks.workunit.client.1.vm05.stdout:7/722: mknod d5/d17/ce4 0 2026-03-10T10:20:00.213 INFO:tasks.workunit.client.0.vm02.stdout:8/695: sync 2026-03-10T10:20:00.216 INFO:tasks.workunit.client.1.vm05.stdout:2/649: dread db/f19 [0,4194304] 0 2026-03-10T10:20:00.217 INFO:tasks.workunit.client.1.vm05.stdout:2/650: chown db/d1c/d40/f4d 111137 1 2026-03-10T10:20:00.220 INFO:tasks.workunit.client.1.vm05.stdout:7/723: dread d5/d26/f39 [0,4194304] 0 2026-03-10T10:20:00.230 INFO:tasks.workunit.client.0.vm02.stdout:2/711: rmdir d0/d1a/d49/d5e/d65 39 2026-03-10T10:20:00.230 INFO:tasks.workunit.client.0.vm02.stdout:2/712: chown d0/d1a/d24/df1 683809 1 2026-03-10T10:20:00.235 INFO:tasks.workunit.client.0.vm02.stdout:2/713: sync 2026-03-10T10:20:00.243 INFO:tasks.workunit.client.0.vm02.stdout:5/848: write d1/db/f2f [4690837,82490] 0 2026-03-10T10:20:00.244 INFO:tasks.workunit.client.0.vm02.stdout:5/849: chown d1/d6a 89 1 2026-03-10T10:20:00.252 INFO:tasks.workunit.client.1.vm05.stdout:8/649: truncate d7/d14/d15/d3b/f7c 431514 0 2026-03-10T10:20:00.253 INFO:tasks.workunit.client.0.vm02.stdout:5/850: rmdir d1/db/d11/d62/d67 39 2026-03-10T10:20:00.253 INFO:tasks.workunit.client.0.vm02.stdout:2/714: dread - d0/d10/f8b zero size 2026-03-10T10:20:00.254 INFO:tasks.workunit.client.0.vm02.stdout:8/696: read d1/d1c/d43/d5b/d88/fb9 [628133,41125] 0 2026-03-10T10:20:00.264 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:19:59 vm02.local ceph-mon[50200]: pgmap v11: 65 pgs: 65 active+clean; 2.7 GiB data, 9.4 GiB used, 111 GiB / 120 GiB avail; 31 MiB/s rd, 78 MiB/s wr, 196 op/s 2026-03-10T10:20:00.280 INFO:tasks.workunit.client.1.vm05.stdout:9/630: write d0/f7 [318710,121551] 0 2026-03-10T10:20:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:19:59 vm05.local ceph-mon[59051]: pgmap v11: 65 pgs: 65 active+clean; 2.7 GiB data, 9.4 GiB used, 111 GiB / 120 GiB avail; 31 MiB/s rd, 78 MiB/s wr, 196 op/s 2026-03-10T10:20:00.293 INFO:tasks.workunit.client.1.vm05.stdout:1/785: unlink 
d4/df/d1c/f63 0 2026-03-10T10:20:00.308 INFO:tasks.workunit.client.0.vm02.stdout:4/832: getdents d1/d75/ddd/d10e/d7e 0 2026-03-10T10:20:00.330 INFO:tasks.workunit.client.0.vm02.stdout:6/675: rmdir d0/d8/d29/d2f/d50/d98 39 2026-03-10T10:20:00.341 INFO:tasks.workunit.client.0.vm02.stdout:5/851: creat d1/db/d11/d1a/f121 x:0 0 0 2026-03-10T10:20:00.344 INFO:tasks.workunit.client.1.vm05.stdout:9/631: creat d0/d1/dcc/dd0/fd7 x:0 0 0 2026-03-10T10:20:00.345 INFO:tasks.workunit.client.0.vm02.stdout:8/697: mknod d1/d2/cd0 0 2026-03-10T10:20:00.360 INFO:tasks.workunit.client.0.vm02.stdout:4/833: dread d1/d75/ddd/d10e/d5e/d78/d7f/f74 [0,4194304] 0 2026-03-10T10:20:00.374 INFO:tasks.workunit.client.1.vm05.stdout:6/702: mknod dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/ce2 0 2026-03-10T10:20:00.375 INFO:tasks.workunit.client.0.vm02.stdout:2/715: truncate d0/d1a/d49/d5e/d65/f9e 369465 0 2026-03-10T10:20:00.379 INFO:tasks.workunit.client.1.vm05.stdout:8/650: creat d7/d14/d24/fd1 x:0 0 0 2026-03-10T10:20:00.383 INFO:tasks.workunit.client.0.vm02.stdout:7/693: write d1/dc/d16/f1e [1074507,55883] 0 2026-03-10T10:20:00.387 INFO:tasks.workunit.client.0.vm02.stdout:0/736: rmdir d9/d34/d3d/d65/da3 0 2026-03-10T10:20:00.389 INFO:tasks.workunit.client.1.vm05.stdout:6/703: sync 2026-03-10T10:20:00.389 INFO:tasks.workunit.client.1.vm05.stdout:8/651: sync 2026-03-10T10:20:00.395 INFO:tasks.workunit.client.1.vm05.stdout:6/704: stat dd/d1b/l51 0 2026-03-10T10:20:00.398 INFO:tasks.workunit.client.1.vm05.stdout:6/705: dwrite dd/d36/d3f/d12/d44/d2a/d7f/fd3 [0,4194304] 0 2026-03-10T10:20:00.403 INFO:tasks.workunit.client.0.vm02.stdout:3/703: dwrite d1/d8/d21/d73/f82 [0,4194304] 0 2026-03-10T10:20:00.458 INFO:tasks.workunit.client.1.vm05.stdout:3/712: write f6 [7854079,85252] 0 2026-03-10T10:20:00.463 INFO:tasks.workunit.client.0.vm02.stdout:1/734: dwrite d4/da/d27/d38/d3c/f8f [0,4194304] 0 2026-03-10T10:20:00.465 INFO:tasks.workunit.client.0.vm02.stdout:1/735: readlink d4/da/d27/l9c 0 
2026-03-10T10:20:00.470 INFO:tasks.workunit.client.0.vm02.stdout:1/736: dwrite d4/da/d27/d38/f3b [4194304,4194304] 0 2026-03-10T10:20:00.493 INFO:tasks.workunit.client.1.vm05.stdout:8/652: mknod d7/d14/d24/cd2 0 2026-03-10T10:20:00.495 INFO:tasks.workunit.client.0.vm02.stdout:7/694: truncate d1/dc/d60/f79 2847687 0 2026-03-10T10:20:00.497 INFO:tasks.workunit.client.1.vm05.stdout:6/706: mknod dd/ce3 0 2026-03-10T10:20:00.498 INFO:tasks.workunit.client.1.vm05.stdout:6/707: stat dd/fdd 0 2026-03-10T10:20:00.499 INFO:tasks.workunit.client.1.vm05.stdout:3/713: truncate dd/d39/d66/f7e 966605 0 2026-03-10T10:20:00.506 INFO:tasks.workunit.client.0.vm02.stdout:0/737: mknod d9/d34/d3d/d65/d89/dd3/da7/db9/ced 0 2026-03-10T10:20:00.506 INFO:tasks.workunit.client.1.vm05.stdout:0/698: dwrite d1/d2/d9/d31/f84 [0,4194304] 0 2026-03-10T10:20:00.506 INFO:tasks.workunit.client.1.vm05.stdout:6/708: mkdir dd/d36/d3f/d12/d44/daa/de4 0 2026-03-10T10:20:00.507 INFO:tasks.workunit.client.1.vm05.stdout:8/653: dread d7/d14/d24/d3f/d6a/d8a/d96/fc3 [0,4194304] 0 2026-03-10T10:20:00.508 INFO:tasks.workunit.client.1.vm05.stdout:8/654: chown d7/d2f 929162 1 2026-03-10T10:20:00.508 INFO:tasks.workunit.client.1.vm05.stdout:0/699: sync 2026-03-10T10:20:00.509 INFO:tasks.workunit.client.1.vm05.stdout:0/700: stat d1/d2/d9/d31/d13/d2f/c34 0 2026-03-10T10:20:00.509 INFO:tasks.workunit.client.1.vm05.stdout:0/701: chown d1/d2/d9/d31/d13/f4c 13571757 1 2026-03-10T10:20:00.510 INFO:tasks.workunit.client.0.vm02.stdout:4/834: symlink d1/l112 0 2026-03-10T10:20:00.514 INFO:tasks.workunit.client.1.vm05.stdout:3/714: readlink dd/d20/d56/d5e/dab/lb1 0 2026-03-10T10:20:00.520 INFO:tasks.workunit.client.0.vm02.stdout:6/676: mkdir d0/d8/d29/d6d/d96/dd9 0 2026-03-10T10:20:00.523 INFO:tasks.workunit.client.1.vm05.stdout:5/709: rename da/f2e to da/db/d26/ff1 0 2026-03-10T10:20:00.524 INFO:tasks.workunit.client.0.vm02.stdout:9/681: write da/f5c [3975457,35949] 0 2026-03-10T10:20:00.541 
INFO:tasks.workunit.client.0.vm02.stdout:3/704: rename d1/d20/f38 to d1/d8/d86/db1/fe9 0 2026-03-10T10:20:00.551 INFO:tasks.workunit.client.0.vm02.stdout:5/852: creat d1/db/d11/d84/d40/f122 x:0 0 0 2026-03-10T10:20:00.568 INFO:tasks.workunit.client.1.vm05.stdout:8/655: mkdir d7/d14/d62/d90/dd3 0 2026-03-10T10:20:00.585 INFO:tasks.workunit.client.1.vm05.stdout:2/651: creat db/d1c/fcf x:0 0 0 2026-03-10T10:20:00.585 INFO:tasks.workunit.client.1.vm05.stdout:0/702: rmdir d1/d2/d39/d3d 39 2026-03-10T10:20:00.586 INFO:tasks.workunit.client.0.vm02.stdout:1/737: symlink d4/da/d1a/d47/d78/le9 0 2026-03-10T10:20:00.595 INFO:tasks.workunit.client.1.vm05.stdout:1/786: write d4/df/d1c/d53/d66/fb3 [1651717,64825] 0 2026-03-10T10:20:00.600 INFO:tasks.workunit.client.1.vm05.stdout:1/787: dwrite d4/fdf [0,4194304] 0 2026-03-10T10:20:00.601 INFO:tasks.workunit.client.1.vm05.stdout:4/578: rename d1/d31/d72/la6 to d1/d31/d76/dac/lc2 0 2026-03-10T10:20:00.602 INFO:tasks.workunit.client.1.vm05.stdout:4/579: chown d1/lb7 1 1 2026-03-10T10:20:00.614 INFO:tasks.workunit.client.1.vm05.stdout:5/710: mkdir da/d63/df2 0 2026-03-10T10:20:00.618 INFO:tasks.workunit.client.0.vm02.stdout:2/716: dwrite d0/d1a/d49/d5e/fa0 [0,4194304] 0 2026-03-10T10:20:00.644 INFO:tasks.workunit.client.1.vm05.stdout:6/709: mknod dd/d36/d3f/d12/d59/ce5 0 2026-03-10T10:20:00.673 INFO:tasks.workunit.client.0.vm02.stdout:6/677: truncate d0/d8/d29/d2f/f55 1001596 0 2026-03-10T10:20:00.674 INFO:tasks.workunit.client.0.vm02.stdout:6/678: chown d0/c25 129472 1 2026-03-10T10:20:00.682 INFO:tasks.workunit.client.1.vm05.stdout:2/652: creat db/d28/d4f/d59/da4/d6c/fd0 x:0 0 0 2026-03-10T10:20:00.705 INFO:tasks.workunit.client.0.vm02.stdout:9/682: truncate da/d3c/d4c/f27 192443 0 2026-03-10T10:20:00.709 INFO:tasks.workunit.client.1.vm05.stdout:0/703: fdatasync d1/d2/d9/d31/d12/d20/dbe/fc5 0 2026-03-10T10:20:00.736 INFO:tasks.workunit.client.0.vm02.stdout:8/698: link d1/d1c/d43/d5b/d88/fb9 d1/d1c/d43/d5b/d88/fd1 0 
2026-03-10T10:20:00.741 INFO:tasks.workunit.client.1.vm05.stdout:7/724: rename d5/dd/f1a to d5/d1d/d20/d3b/fe5 0 2026-03-10T10:20:00.744 INFO:tasks.workunit.client.0.vm02.stdout:7/695: write d1/dc/d60/f53 [4232224,15084] 0 2026-03-10T10:20:00.744 INFO:tasks.workunit.client.1.vm05.stdout:4/580: truncate d1/d31/dc/f69 1902774 0 2026-03-10T10:20:00.745 INFO:tasks.workunit.client.1.vm05.stdout:4/581: write d1/d31/f13 [1633232,113140] 0 2026-03-10T10:20:00.745 INFO:tasks.workunit.client.1.vm05.stdout:4/582: readlink d1/d31/dc/l8e 0 2026-03-10T10:20:00.747 INFO:tasks.workunit.client.0.vm02.stdout:0/738: truncate d9/d18/d1a/d22/d24/d80/fe0 618780 0 2026-03-10T10:20:00.753 INFO:tasks.workunit.client.1.vm05.stdout:8/656: write d7/d14/f23 [4678027,50290] 0 2026-03-10T10:20:00.753 INFO:tasks.workunit.client.0.vm02.stdout:4/835: write d1/d75/ddd/d10e/d5e/d78/d1a/d49/f7a [4214160,86795] 0 2026-03-10T10:20:00.758 INFO:tasks.workunit.client.1.vm05.stdout:6/710: creat dd/d1b/fe6 x:0 0 0 2026-03-10T10:20:00.776 INFO:tasks.workunit.client.0.vm02.stdout:9/683: rename da/d3c/d4c/d56/dbc to da/d3c/d4c/d38/d82/dd9 0 2026-03-10T10:20:00.779 INFO:tasks.workunit.client.0.vm02.stdout:9/684: chown da/d3c/d4c/cc6 17 1 2026-03-10T10:20:00.791 INFO:tasks.workunit.client.1.vm05.stdout:3/715: truncate dd/d15/d24/f63 196016 0 2026-03-10T10:20:00.793 INFO:tasks.workunit.client.0.vm02.stdout:3/705: dwrite d1/d6/f49 [0,4194304] 0 2026-03-10T10:20:00.795 INFO:tasks.workunit.client.0.vm02.stdout:3/706: chown d1/d6/f36 46611043 1 2026-03-10T10:20:00.798 INFO:tasks.workunit.client.0.vm02.stdout:1/738: write d4/d2c/d53/f99 [6376228,32586] 0 2026-03-10T10:20:00.800 INFO:tasks.workunit.client.1.vm05.stdout:1/788: mkdir d4/d20/dbe/de8 0 2026-03-10T10:20:00.811 INFO:tasks.workunit.client.1.vm05.stdout:9/632: rename d0/dc4/d63/f78 to d0/d1/d13/de/d93/fd8 0 2026-03-10T10:20:00.823 INFO:tasks.workunit.client.1.vm05.stdout:4/583: fsync d1/d31/dc/d40/f7d 0 2026-03-10T10:20:00.838 
INFO:tasks.workunit.client.1.vm05.stdout:8/657: symlink d7/ld4 0 2026-03-10T10:20:00.838 INFO:tasks.workunit.client.1.vm05.stdout:7/725: write d5/d1d/d29/d3e/d8c/d96/fa6 [9220474,7565] 0 2026-03-10T10:20:00.858 INFO:tasks.workunit.client.1.vm05.stdout:6/711: unlink dd/d36/d3f/d12/d59/ce5 0 2026-03-10T10:20:00.860 INFO:tasks.workunit.client.1.vm05.stdout:0/704: mkdir d1/d2/d9/d31/d13/da2/dab/ded 0 2026-03-10T10:20:00.865 INFO:tasks.workunit.client.1.vm05.stdout:3/716: creat dd/d15/d24/d74/ffb x:0 0 0 2026-03-10T10:20:00.866 INFO:tasks.workunit.client.1.vm05.stdout:3/717: write dd/f52 [70404,67097] 0 2026-03-10T10:20:00.869 INFO:tasks.workunit.client.1.vm05.stdout:3/718: dread dd/d20/d56/db3/ff4 [0,4194304] 0 2026-03-10T10:20:00.874 INFO:tasks.workunit.client.1.vm05.stdout:2/653: rename db/f23 to db/d1c/d40/d62/d85/fd1 0 2026-03-10T10:20:00.893 INFO:tasks.workunit.client.1.vm05.stdout:5/711: creat da/ff3 x:0 0 0 2026-03-10T10:20:00.895 INFO:tasks.workunit.client.1.vm05.stdout:8/658: mkdir d7/dd5 0 2026-03-10T10:20:00.899 INFO:tasks.workunit.client.1.vm05.stdout:7/726: symlink d5/d1d/d29/d3e/d8c/le6 0 2026-03-10T10:20:00.900 INFO:tasks.workunit.client.1.vm05.stdout:9/633: dwrite d0/d1/d16/f92 [0,4194304] 0 2026-03-10T10:20:00.905 INFO:tasks.workunit.client.1.vm05.stdout:7/727: dread d5/d1d/d20/d35/f36 [0,4194304] 0 2026-03-10T10:20:00.918 INFO:tasks.workunit.client.1.vm05.stdout:0/705: mkdir d1/d2/d9/d31/d13/d17/da1/dee 0 2026-03-10T10:20:00.926 INFO:tasks.workunit.client.1.vm05.stdout:3/719: dread dd/d39/f51 [0,4194304] 0 2026-03-10T10:20:00.928 INFO:tasks.workunit.client.1.vm05.stdout:2/654: creat db/d28/d4f/d8b/d9a/fd2 x:0 0 0 2026-03-10T10:20:00.932 INFO:tasks.workunit.client.0.vm02.stdout:2/717: getdents d0/d1a/d24/df1 0 2026-03-10T10:20:00.932 INFO:tasks.workunit.client.0.vm02.stdout:0/739: rmdir d9/d18/d1a/d3c 39 2026-03-10T10:20:00.932 INFO:tasks.workunit.client.0.vm02.stdout:7/696: fsync d1/dc/f69 0 2026-03-10T10:20:00.932 
INFO:tasks.workunit.client.0.vm02.stdout:0/740: readlink d9/d18/d1a/l1d 0 2026-03-10T10:20:00.933 INFO:tasks.workunit.client.0.vm02.stdout:7/697: chown d1/d1b/d49/d98/l9e 23 1 2026-03-10T10:20:00.933 INFO:tasks.workunit.client.0.vm02.stdout:7/698: readlink d1/dc/ld7 0 2026-03-10T10:20:00.937 INFO:tasks.workunit.client.0.vm02.stdout:7/699: dwrite d1/dc/d16/d28/d2d/fb0 [0,4194304] 0 2026-03-10T10:20:00.960 INFO:tasks.workunit.client.0.vm02.stdout:4/836: creat d1/d75/ddd/d10e/f113 x:0 0 0 2026-03-10T10:20:00.973 INFO:tasks.workunit.client.1.vm05.stdout:5/712: unlink da/db/d28/d32/c49 0 2026-03-10T10:20:00.980 INFO:tasks.workunit.client.0.vm02.stdout:6/679: unlink d0/d8/d29/d2f/d4b/da5/c6e 0 2026-03-10T10:20:00.981 INFO:tasks.workunit.client.0.vm02.stdout:6/680: write d0/d8/d29/d2f/f33 [8488506,120033] 0 2026-03-10T10:20:00.981 INFO:tasks.workunit.client.0.vm02.stdout:6/681: chown d0/d8/d29/d52 62810557 1 2026-03-10T10:20:00.997 INFO:tasks.workunit.client.0.vm02.stdout:3/707: unlink d1/d6/f63 0 2026-03-10T10:20:00.999 INFO:tasks.workunit.client.1.vm05.stdout:0/706: truncate d1/d2/d39/d6e/fdd 714263 0 2026-03-10T10:20:01.008 INFO:tasks.workunit.client.0.vm02.stdout:5/853: link d1/db/d11/d16/d79/d85/f94 d1/db/d11/d13/d28/d37/dce/f123 0 2026-03-10T10:20:01.013 INFO:tasks.workunit.client.1.vm05.stdout:2/655: dread db/d12/f9c [0,4194304] 0 2026-03-10T10:20:01.013 INFO:tasks.workunit.client.0.vm02.stdout:1/739: dread d4/da/f13 [8388608,4194304] 0 2026-03-10T10:20:01.018 INFO:tasks.workunit.client.0.vm02.stdout:0/741: read d9/d34/d3d/fae [44630,104794] 0 2026-03-10T10:20:01.028 INFO:tasks.workunit.client.1.vm05.stdout:1/789: rename d4/d20/dbe/fc8 to d4/fe9 0 2026-03-10T10:20:01.030 INFO:tasks.workunit.client.1.vm05.stdout:7/728: dread d5/f4e [0,4194304] 0 2026-03-10T10:20:01.031 INFO:tasks.workunit.client.1.vm05.stdout:7/729: chown d5/fe 31853 1 2026-03-10T10:20:01.031 INFO:tasks.workunit.client.1.vm05.stdout:7/730: dread - d5/d1d/d20/d2d/d68/fc4 zero size 
2026-03-10T10:20:01.041 INFO:tasks.workunit.client.1.vm05.stdout:6/712: write dd/d36/d3f/d12/d96/f9a [276614,97484] 0 2026-03-10T10:20:01.047 INFO:tasks.workunit.client.1.vm05.stdout:4/584: getdents d1/d64/da9/dae 0 2026-03-10T10:20:01.048 INFO:tasks.workunit.client.1.vm05.stdout:4/585: chown d1/d64/da9/fb9 522472827 1 2026-03-10T10:20:01.049 INFO:tasks.workunit.client.1.vm05.stdout:3/720: dwrite dd/d20/d56/f7d [0,4194304] 0 2026-03-10T10:20:01.054 INFO:tasks.workunit.client.0.vm02.stdout:3/708: creat d1/d8/d21/d73/d78/d79/fea x:0 0 0 2026-03-10T10:20:01.070 INFO:tasks.workunit.client.0.vm02.stdout:5/854: creat d1/db/d11/d84/d40/d4f/d5f/d6d/d71/f124 x:0 0 0 2026-03-10T10:20:01.087 INFO:tasks.workunit.client.0.vm02.stdout:8/699: dwrite d1/dc7/fcd [0,4194304] 0 2026-03-10T10:20:01.088 INFO:tasks.workunit.client.1.vm05.stdout:0/707: creat d1/d2/d9/d31/d13/d17/da1/dee/fef x:0 0 0 2026-03-10T10:20:01.097 INFO:tasks.workunit.client.0.vm02.stdout:1/740: write d4/da/d1a/d47/d88/fdd [808296,118302] 0 2026-03-10T10:20:01.104 INFO:tasks.workunit.client.0.vm02.stdout:1/741: dwrite d4/da/d1a/d22/f23 [0,4194304] 0 2026-03-10T10:20:01.107 INFO:tasks.workunit.client.1.vm05.stdout:1/790: dread d4/f36 [0,4194304] 0 2026-03-10T10:20:01.107 INFO:tasks.workunit.client.1.vm05.stdout:1/791: chown d4 16 1 2026-03-10T10:20:01.111 INFO:tasks.workunit.client.1.vm05.stdout:6/713: readlink dd/d36/d3f/l25 0 2026-03-10T10:20:01.114 INFO:tasks.workunit.client.0.vm02.stdout:1/742: dwrite d4/da/d1a/d47/d65/f6e [0,4194304] 0 2026-03-10T10:20:01.118 INFO:tasks.workunit.client.0.vm02.stdout:3/709: mkdir d1/d8/d44/deb 0 2026-03-10T10:20:01.119 INFO:tasks.workunit.client.0.vm02.stdout:3/710: dread - d1/f14 zero size 2026-03-10T10:20:01.139 INFO:tasks.workunit.client.1.vm05.stdout:2/656: write db/d1c/f56 [253943,103053] 0 2026-03-10T10:20:01.139 INFO:tasks.workunit.client.1.vm05.stdout:5/713: truncate da/d9a/dc7/f4e 1364162 0 2026-03-10T10:20:01.139 INFO:tasks.workunit.client.0.vm02.stdout:7/700: write 
d1/dc/d55/f8d [176470,127403] 0 2026-03-10T10:20:01.139 INFO:tasks.workunit.client.0.vm02.stdout:4/837: write d1/d75/ddd/d10e/d5e/d78/d1a/d49/f5c [2374212,45855] 0 2026-03-10T10:20:01.141 INFO:tasks.workunit.client.1.vm05.stdout:9/634: dwrite d0/d1/d13/de/f46 [0,4194304] 0 2026-03-10T10:20:01.143 INFO:tasks.workunit.client.0.vm02.stdout:4/838: write d1/d10/db/f15 [1939653,44790] 0 2026-03-10T10:20:01.146 INFO:tasks.workunit.client.0.vm02.stdout:6/682: dwrite d0/d8/f9b [4194304,4194304] 0 2026-03-10T10:20:01.157 INFO:tasks.workunit.client.0.vm02.stdout:8/700: sync 2026-03-10T10:20:01.158 INFO:tasks.workunit.client.0.vm02.stdout:4/839: dwrite d1/d10/db/f15 [0,4194304] 0 2026-03-10T10:20:01.171 INFO:tasks.workunit.client.0.vm02.stdout:8/701: dwrite d1/d1c/d43/d5b/fbc [0,4194304] 0 2026-03-10T10:20:01.204 INFO:tasks.workunit.client.0.vm02.stdout:9/685: rename da/d3c/d4c/d38/d4a/d70/cad to da/d3c/d4c/cda 0 2026-03-10T10:20:01.226 INFO:tasks.workunit.client.0.vm02.stdout:2/718: dwrite d0/d1a/d49/f78 [0,4194304] 0 2026-03-10T10:20:01.234 INFO:tasks.workunit.client.0.vm02.stdout:3/711: symlink d1/d20/db2/dcb/lec 0 2026-03-10T10:20:01.243 INFO:tasks.workunit.client.0.vm02.stdout:0/742: link d9/fdd d9/d34/d3d/d65/d89/dd3/da7/db9/dc9/fee 0 2026-03-10T10:20:01.249 INFO:tasks.workunit.client.0.vm02.stdout:2/719: sync 2026-03-10T10:20:01.261 INFO:tasks.workunit.client.0.vm02.stdout:7/701: symlink d1/d1b/d49/d98/ldc 0 2026-03-10T10:20:01.272 INFO:tasks.workunit.client.0.vm02.stdout:6/683: symlink d0/db9/lda 0 2026-03-10T10:20:01.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:00 vm02.local ceph-mon[50200]: overall HEALTH_OK 2026-03-10T10:20:01.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:00 vm05.local ceph-mon[59051]: overall HEALTH_OK 2026-03-10T10:20:01.288 INFO:tasks.workunit.client.0.vm02.stdout:4/840: creat d1/d75/ddd/d10e/d5e/d78/d44/de7/f114 x:0 0 0 2026-03-10T10:20:01.302 INFO:tasks.workunit.client.0.vm02.stdout:4/841: dwrite d1/d32/f46 
[4194304,4194304] 0 2026-03-10T10:20:01.329 INFO:tasks.workunit.client.0.vm02.stdout:1/743: mkdir d4/d2c/d53/da6/db8/dd9/dea 0 2026-03-10T10:20:01.345 INFO:tasks.workunit.client.0.vm02.stdout:0/743: creat d9/d18/d1a/d46/fef x:0 0 0 2026-03-10T10:20:01.350 INFO:tasks.workunit.client.0.vm02.stdout:2/720: creat d0/d1a/d49/dcc/ff4 x:0 0 0 2026-03-10T10:20:01.350 INFO:tasks.workunit.client.0.vm02.stdout:7/702: fsync d1/d1b/d8f/d67/fc2 0 2026-03-10T10:20:01.351 INFO:tasks.workunit.client.0.vm02.stdout:3/712: dwrite d1/d20/d52/f6c [0,4194304] 0 2026-03-10T10:20:01.354 INFO:tasks.workunit.client.0.vm02.stdout:7/703: stat d1/d1b/d8f/dad/c5a 0 2026-03-10T10:20:01.354 INFO:tasks.workunit.client.0.vm02.stdout:3/713: write d1/f81 [4407858,81157] 0 2026-03-10T10:20:01.362 INFO:tasks.workunit.client.0.vm02.stdout:3/714: sync 2026-03-10T10:20:01.372 INFO:tasks.workunit.client.0.vm02.stdout:9/686: mknod da/d3c/d4c/d38/d4a/cdb 0 2026-03-10T10:20:01.384 INFO:tasks.workunit.client.0.vm02.stdout:5/855: link d1/db/d11/d62/d67/lc8 d1/db/d11/d13/d28/d37/dce/l125 0 2026-03-10T10:20:01.391 INFO:tasks.workunit.client.1.vm05.stdout:7/731: mkdir d5/d26/d9c/de7 0 2026-03-10T10:20:01.398 INFO:tasks.workunit.client.0.vm02.stdout:6/684: dread - d0/d8/d29/d2f/d50/d7e/fb3 zero size 2026-03-10T10:20:01.399 INFO:tasks.workunit.client.1.vm05.stdout:1/792: mkdir d4/df/de0/d82/dea 0 2026-03-10T10:20:01.399 INFO:tasks.workunit.client.1.vm05.stdout:1/793: dread - d4/d39/d3e/fd7 zero size 2026-03-10T10:20:01.412 INFO:tasks.workunit.client.0.vm02.stdout:4/842: symlink d1/d75/ddd/d10e/d5e/d78/d1a/d49/d81/dc6/df2/l115 0 2026-03-10T10:20:01.417 INFO:tasks.workunit.client.1.vm05.stdout:5/714: dread da/db/d26/d70/f82 [0,4194304] 0 2026-03-10T10:20:01.418 INFO:tasks.workunit.client.1.vm05.stdout:5/715: dread da/db/d26/d70/f82 [0,4194304] 0 2026-03-10T10:20:01.420 INFO:tasks.workunit.client.0.vm02.stdout:8/702: write d1/d1c/d24/dad/dbe/fcc [565422,8100] 0 2026-03-10T10:20:01.425 
INFO:tasks.workunit.client.0.vm02.stdout:0/744: write d9/d18/d1a/d22/d24/f40 [3682410,41875] 0 2026-03-10T10:20:01.427 INFO:tasks.workunit.client.1.vm05.stdout:0/708: write d1/d2/d9/d31/d54/fd9 [1596460,74198] 0 2026-03-10T10:20:01.428 INFO:tasks.workunit.client.1.vm05.stdout:2/657: creat db/d1c/d40/d80/fd3 x:0 0 0 2026-03-10T10:20:01.429 INFO:tasks.workunit.client.0.vm02.stdout:5/856: dread - d1/db/d11/ffd zero size 2026-03-10T10:20:01.430 INFO:tasks.workunit.client.1.vm05.stdout:9/635: creat d0/d1/dcc/dd0/fd9 x:0 0 0 2026-03-10T10:20:01.431 INFO:tasks.workunit.client.1.vm05.stdout:3/721: link dd/d15/d24/d2c/dd0/dd9/ffa dd/d20/d56/d5e/ffc 0 2026-03-10T10:20:01.436 INFO:tasks.workunit.client.1.vm05.stdout:8/659: rename f6 to d7/d2f/fd6 0 2026-03-10T10:20:01.436 INFO:tasks.workunit.client.1.vm05.stdout:5/716: sync 2026-03-10T10:20:01.437 INFO:tasks.workunit.client.0.vm02.stdout:7/704: mknod d1/dc/d55/d9a/dd9/cdd 0 2026-03-10T10:20:01.440 INFO:tasks.workunit.client.1.vm05.stdout:1/794: mknod d4/d20/dbe/ceb 0 2026-03-10T10:20:01.444 INFO:tasks.workunit.client.1.vm05.stdout:4/586: creat d1/d31/dc/d40/fc3 x:0 0 0 2026-03-10T10:20:01.446 INFO:tasks.workunit.client.0.vm02.stdout:4/843: readlink d1/d32/da3/lc5 0 2026-03-10T10:20:01.460 INFO:tasks.workunit.client.0.vm02.stdout:8/703: dread d1/d2/f36 [0,4194304] 0 2026-03-10T10:20:01.467 INFO:tasks.workunit.client.0.vm02.stdout:1/744: write d4/da/d27/d38/d80/fb7 [44397,4416] 0 2026-03-10T10:20:01.473 INFO:tasks.workunit.client.0.vm02.stdout:9/687: dwrite da/d3c/d4c/d2c/f93 [0,4194304] 0 2026-03-10T10:20:01.473 INFO:tasks.workunit.client.0.vm02.stdout:8/704: sync 2026-03-10T10:20:01.473 INFO:tasks.workunit.client.0.vm02.stdout:9/688: write da/d3c/d4c/d38/d82/fcb [1059809,10553] 0 2026-03-10T10:20:01.473 INFO:tasks.workunit.client.0.vm02.stdout:9/689: readlink da/d3c/l43 0 2026-03-10T10:20:01.482 INFO:tasks.workunit.client.1.vm05.stdout:2/658: unlink db/d2d/c89 0 2026-03-10T10:20:01.484 
INFO:tasks.workunit.client.0.vm02.stdout:5/857: symlink d1/db/d11/d7b/l126 0 2026-03-10T10:20:01.484 INFO:tasks.workunit.client.0.vm02.stdout:5/858: readlink d1/db/d11/d62/d67/lf9 0 2026-03-10T10:20:01.485 INFO:tasks.workunit.client.0.vm02.stdout:5/859: dread - d1/db/d11/d84/d40/d4f/f6e zero size 2026-03-10T10:20:01.491 INFO:tasks.workunit.client.1.vm05.stdout:9/636: dread d0/d1/d13/de/d93/fbd [0,4194304] 0 2026-03-10T10:20:01.493 INFO:tasks.workunit.client.1.vm05.stdout:6/714: write dd/d36/d3f/d12/f35 [2282532,44294] 0 2026-03-10T10:20:01.494 INFO:tasks.workunit.client.1.vm05.stdout:0/709: write d1/d2/d39/fd1 [540071,9929] 0 2026-03-10T10:20:01.494 INFO:tasks.workunit.client.0.vm02.stdout:6/685: dwrite d0/d8/d9/f84 [4194304,4194304] 0 2026-03-10T10:20:01.494 INFO:tasks.workunit.client.0.vm02.stdout:0/745: write d9/d34/d3d/d65/d89/dd3/da7/db9/fba [220078,61933] 0 2026-03-10T10:20:01.507 INFO:tasks.workunit.client.0.vm02.stdout:2/721: link d0/d1a/d49/d5e/d65/f75 d0/d1a/d24/dd3/de8/ff5 0 2026-03-10T10:20:01.509 INFO:tasks.workunit.client.0.vm02.stdout:3/715: link d1/d8/d21/d73/c67 d1/d8/d21/d73/d78/d79/ced 0 2026-03-10T10:20:01.511 INFO:tasks.workunit.client.0.vm02.stdout:7/705: dread - d1/d1b/d8f/dad/d7e/dba/fcc zero size 2026-03-10T10:20:01.520 INFO:tasks.workunit.client.0.vm02.stdout:4/844: rename d1/d10/db/f20 to d1/d10/db/f116 0 2026-03-10T10:20:01.525 INFO:tasks.workunit.client.0.vm02.stdout:1/745: truncate d4/da/d1a/d47/d78/fb4 3845877 0 2026-03-10T10:20:01.536 INFO:tasks.workunit.client.1.vm05.stdout:7/732: write d5/d17/fa0 [800986,100921] 0 2026-03-10T10:20:01.542 INFO:tasks.workunit.client.0.vm02.stdout:8/705: dwrite d1/d2/f36 [0,4194304] 0 2026-03-10T10:20:01.553 INFO:tasks.workunit.client.0.vm02.stdout:9/690: creat da/d3c/d53/fdc x:0 0 0 2026-03-10T10:20:01.559 INFO:tasks.workunit.client.0.vm02.stdout:5/860: creat d1/db/d11/d84/d95/f127 x:0 0 0 2026-03-10T10:20:01.564 INFO:tasks.workunit.client.0.vm02.stdout:6/686: readlink d0/d8/d29/d2f/d4b/l66 0 
2026-03-10T10:20:01.564 INFO:tasks.workunit.client.0.vm02.stdout:6/687: chown d0/d7f/fbe 39 1 2026-03-10T10:20:01.565 INFO:tasks.workunit.client.1.vm05.stdout:9/637: dread d0/f2f [0,4194304] 0 2026-03-10T10:20:01.568 INFO:tasks.workunit.client.1.vm05.stdout:3/722: creat dd/d39/d5f/df7/ffd x:0 0 0 2026-03-10T10:20:01.570 INFO:tasks.workunit.client.1.vm05.stdout:5/717: mknod da/db/d28/cf4 0 2026-03-10T10:20:01.573 INFO:tasks.workunit.client.0.vm02.stdout:3/716: creat d1/d58/fee x:0 0 0 2026-03-10T10:20:01.574 INFO:tasks.workunit.client.0.vm02.stdout:3/717: readlink d1/d8/d44/lbf 0 2026-03-10T10:20:01.576 INFO:tasks.workunit.client.1.vm05.stdout:8/660: mknod d7/d14/d24/d3f/d6a/db0/cd7 0 2026-03-10T10:20:01.578 INFO:tasks.workunit.client.0.vm02.stdout:7/706: creat d1/dc/d55/d9a/da5/fde x:0 0 0 2026-03-10T10:20:01.580 INFO:tasks.workunit.client.0.vm02.stdout:2/722: dread d0/d10/f6b [0,4194304] 0 2026-03-10T10:20:01.590 INFO:tasks.workunit.client.1.vm05.stdout:7/733: mkdir d5/d1d/d29/d60/de8 0 2026-03-10T10:20:01.593 INFO:tasks.workunit.client.0.vm02.stdout:4/845: dwrite d1/d10/d88/db2/ff9 [0,4194304] 0 2026-03-10T10:20:01.593 INFO:tasks.workunit.client.1.vm05.stdout:2/659: mkdir db/d28/dd4 0 2026-03-10T10:20:01.600 INFO:tasks.workunit.client.0.vm02.stdout:5/861: dread - d1/db/d11/d13/d28/da7/ffa zero size 2026-03-10T10:20:01.605 INFO:tasks.workunit.client.0.vm02.stdout:6/688: symlink d0/d8/d29/d2f/d50/d7e/ldb 0 2026-03-10T10:20:01.614 INFO:tasks.workunit.client.1.vm05.stdout:0/710: symlink d1/d2/d39/d3d/d9f/lf0 0 2026-03-10T10:20:01.614 INFO:tasks.workunit.client.1.vm05.stdout:5/718: mkdir da/d96/df5 0 2026-03-10T10:20:01.614 INFO:tasks.workunit.client.1.vm05.stdout:5/719: chown da/db/ca3 93 1 2026-03-10T10:20:01.614 INFO:tasks.workunit.client.0.vm02.stdout:0/746: link d9/d18/f6a d9/d18/d1a/d22/db4/ff0 0 2026-03-10T10:20:01.614 INFO:tasks.workunit.client.0.vm02.stdout:3/718: truncate d1/d20/f4b 4312224 0 2026-03-10T10:20:01.614 
INFO:tasks.workunit.client.0.vm02.stdout:3/719: write d1/d6/f36 [1464803,20292] 0 2026-03-10T10:20:01.616 INFO:tasks.workunit.client.0.vm02.stdout:7/707: mkdir d1/d1b/d8f/dad/d7e/dba/ddf 0 2026-03-10T10:20:01.617 INFO:tasks.workunit.client.0.vm02.stdout:7/708: stat d1/d1b/f61 0 2026-03-10T10:20:01.617 INFO:tasks.workunit.client.0.vm02.stdout:7/709: chown d1/dc/d10 6098 1 2026-03-10T10:20:01.621 INFO:tasks.workunit.client.1.vm05.stdout:8/661: dread d7/f44 [0,4194304] 0 2026-03-10T10:20:01.621 INFO:tasks.workunit.client.1.vm05.stdout:1/795: truncate d4/df/d1c/d53/daa/fab 4303196 0 2026-03-10T10:20:01.622 INFO:tasks.workunit.client.1.vm05.stdout:8/662: write d7/d14/d15/f1f [5176346,55542] 0 2026-03-10T10:20:01.630 INFO:tasks.workunit.client.0.vm02.stdout:7/710: dread d1/f5 [0,4194304] 0 2026-03-10T10:20:01.662 INFO:tasks.workunit.client.1.vm05.stdout:7/734: mknod d5/d17/d66/ce9 0 2026-03-10T10:20:01.677 INFO:tasks.workunit.client.1.vm05.stdout:2/660: write db/d12/f3c [777290,130875] 0 2026-03-10T10:20:01.679 INFO:tasks.workunit.client.0.vm02.stdout:8/706: write d1/d1c/d43/d6a/fca [4933757,69357] 0 2026-03-10T10:20:01.692 INFO:tasks.workunit.client.1.vm05.stdout:9/638: write d0/d1/d13/d26/f58 [2735189,34840] 0 2026-03-10T10:20:01.692 INFO:tasks.workunit.client.0.vm02.stdout:4/846: mkdir d1/d75/ddd/d10e/d117 0 2026-03-10T10:20:01.693 INFO:tasks.workunit.client.1.vm05.stdout:0/711: mkdir d1/d2/d9/d31/d12/d20/dbe/df1 0 2026-03-10T10:20:01.694 INFO:tasks.workunit.client.1.vm05.stdout:6/715: dwrite dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/fe1 [0,4194304] 0 2026-03-10T10:20:01.701 INFO:tasks.workunit.client.0.vm02.stdout:5/862: fdatasync d1/db/d11/d84/d40/d4f/d5f/ffc 0 2026-03-10T10:20:01.706 INFO:tasks.workunit.client.1.vm05.stdout:3/723: symlink dd/lfe 0 2026-03-10T10:20:01.707 INFO:tasks.workunit.client.1.vm05.stdout:3/724: dread - dd/d20/d56/d5e/ffc zero size 2026-03-10T10:20:01.708 INFO:tasks.workunit.client.1.vm05.stdout:5/720: dread da/db/d28/fd7 [0,4194304] 0 
2026-03-10T10:20:01.718 INFO:tasks.workunit.client.0.vm02.stdout:3/720: symlink d1/d20/d52/lef 0 2026-03-10T10:20:01.718 INFO:tasks.workunit.client.1.vm05.stdout:4/587: getdents d1 0 2026-03-10T10:20:01.723 INFO:tasks.workunit.client.0.vm02.stdout:1/746: dwrite d4/da/d1a/d47/d78/fb4 [0,4194304] 0 2026-03-10T10:20:01.723 INFO:tasks.workunit.client.0.vm02.stdout:1/747: stat d4/d2c/d53/da6/fab 0 2026-03-10T10:20:01.724 INFO:tasks.workunit.client.1.vm05.stdout:7/735: truncate d5/d1d/f7d 634048 0 2026-03-10T10:20:01.725 INFO:tasks.workunit.client.0.vm02.stdout:8/707: mkdir d1/dc7/dd2 0 2026-03-10T10:20:01.726 INFO:tasks.workunit.client.0.vm02.stdout:9/691: rename da/d3c/d4c/d2c/f32 to da/d3c/d4c/d2c/d34/fdd 0 2026-03-10T10:20:01.751 INFO:tasks.workunit.client.1.vm05.stdout:2/661: unlink db/d1c/d40/d80/f9e 0 2026-03-10T10:20:01.754 INFO:tasks.workunit.client.0.vm02.stdout:0/747: dwrite d9/d18/d1a/f6f [0,4194304] 0 2026-03-10T10:20:01.754 INFO:tasks.workunit.client.1.vm05.stdout:2/662: dread db/d12/fb5 [0,4194304] 0 2026-03-10T10:20:01.755 INFO:tasks.workunit.client.1.vm05.stdout:2/663: chown db/d28/d4f/d59/f7c 749672 1 2026-03-10T10:20:01.755 INFO:tasks.workunit.client.1.vm05.stdout:8/663: write d7/d14/d15/d3b/f43 [3829210,68506] 0 2026-03-10T10:20:01.756 INFO:tasks.workunit.client.1.vm05.stdout:2/664: write db/f26 [7321238,34771] 0 2026-03-10T10:20:01.756 INFO:tasks.workunit.client.1.vm05.stdout:8/664: write d7/d14/d24/f7a [7078546,47090] 0 2026-03-10T10:20:01.758 INFO:tasks.workunit.client.1.vm05.stdout:2/665: write db/d28/d4f/d59/f7e [596641,10822] 0 2026-03-10T10:20:01.784 INFO:tasks.workunit.client.0.vm02.stdout:7/711: dread d1/dc/d60/f79 [0,4194304] 0 2026-03-10T10:20:01.795 INFO:tasks.workunit.client.0.vm02.stdout:4/847: dread d1/d32/fc4 [0,4194304] 0 2026-03-10T10:20:01.812 INFO:tasks.workunit.client.1.vm05.stdout:1/796: symlink d4/df/de0/lec 0 2026-03-10T10:20:01.817 INFO:tasks.workunit.client.1.vm05.stdout:4/588: rename d1/d31/f34 to d1/d31/d76/dac/fc4 0 
2026-03-10T10:20:01.827 INFO:tasks.workunit.client.1.vm05.stdout:3/725: dwrite dd/d15/f84 [0,4194304] 0 2026-03-10T10:20:01.887 INFO:tasks.workunit.client.1.vm05.stdout:0/712: mknod d1/d2/d9/d31/d13/d2f/d49/cf2 0 2026-03-10T10:20:01.893 INFO:tasks.workunit.client.1.vm05.stdout:6/716: symlink dd/d36/d3f/d12/d44/d2a/le7 0 2026-03-10T10:20:01.893 INFO:tasks.workunit.client.1.vm05.stdout:6/717: write dd/d36/d3f/d12/d44/d2a/fd9 [572577,23283] 0 2026-03-10T10:20:01.899 INFO:tasks.workunit.client.1.vm05.stdout:7/736: dwrite d5/d1d/d29/d3e/d8c/d96/f9e [0,4194304] 0 2026-03-10T10:20:01.915 INFO:tasks.workunit.client.1.vm05.stdout:5/721: getdents da/db/d26/d5c 0 2026-03-10T10:20:01.921 INFO:tasks.workunit.client.1.vm05.stdout:1/797: rename d4/d39/le2 to d4/d20/dbe/de8/led 0 2026-03-10T10:20:01.930 INFO:tasks.workunit.client.1.vm05.stdout:8/665: write d7/d14/d24/f61 [124130,20975] 0 2026-03-10T10:20:01.931 INFO:tasks.workunit.client.1.vm05.stdout:8/666: stat d7/l80 0 2026-03-10T10:20:01.934 INFO:tasks.workunit.client.0.vm02.stdout:2/723: creat d0/d10/ff6 x:0 0 0 2026-03-10T10:20:01.935 INFO:tasks.workunit.client.1.vm05.stdout:9/639: dwrite d0/d1/f4a [0,4194304] 0 2026-03-10T10:20:01.944 INFO:tasks.workunit.client.1.vm05.stdout:3/726: read fa [276305,48776] 0 2026-03-10T10:20:01.946 INFO:tasks.workunit.client.1.vm05.stdout:4/589: dread d1/d31/dc/d40/d63/f74 [0,4194304] 0 2026-03-10T10:20:01.955 INFO:tasks.workunit.client.0.vm02.stdout:1/748: mknod d4/d2c/d53/da6/ceb 0 2026-03-10T10:20:01.956 INFO:tasks.workunit.client.0.vm02.stdout:1/749: read d4/d2c/d53/f99 [620573,7862] 0 2026-03-10T10:20:01.970 INFO:tasks.workunit.client.0.vm02.stdout:9/692: fsync da/d3c/d4c/d56/f77 0 2026-03-10T10:20:01.977 INFO:tasks.workunit.client.1.vm05.stdout:0/713: mkdir d1/d2/d9/d31/d13/d2f/d49/df3 0 2026-03-10T10:20:01.986 INFO:tasks.workunit.client.0.vm02.stdout:6/689: creat d0/d8/d29/d6d/fdc x:0 0 0 2026-03-10T10:20:01.989 INFO:tasks.workunit.client.0.vm02.stdout:3/721: truncate d1/d8/d21/f88 
2413993 0 2026-03-10T10:20:01.994 INFO:tasks.workunit.client.1.vm05.stdout:2/666: truncate db/d1c/f3d 1388675 0 2026-03-10T10:20:01.996 INFO:tasks.workunit.client.0.vm02.stdout:4/848: rename d1/d75/ddd/d10e/d5e/d78/d1a/la1 to d1/d52/d53/dda/df7/l118 0 2026-03-10T10:20:01.996 INFO:tasks.workunit.client.0.vm02.stdout:7/712: creat d1/d1b/d8e/fe0 x:0 0 0 2026-03-10T10:20:02.005 INFO:tasks.workunit.client.1.vm05.stdout:5/722: write da/db/f9f [392176,24750] 0 2026-03-10T10:20:02.007 INFO:tasks.workunit.client.1.vm05.stdout:7/737: dwrite d5/d1d/d20/d2d/d68/fc2 [0,4194304] 0 2026-03-10T10:20:02.010 INFO:tasks.workunit.client.1.vm05.stdout:7/738: chown d5/d1d/d20/d2d/d80/lc3 5312562 1 2026-03-10T10:20:02.016 INFO:tasks.workunit.client.1.vm05.stdout:1/798: write d4/d3d/f77 [1993277,47764] 0 2026-03-10T10:20:02.017 INFO:tasks.workunit.client.1.vm05.stdout:1/799: chown d4/d39/d3e/f96 145973 1 2026-03-10T10:20:02.022 INFO:tasks.workunit.client.0.vm02.stdout:2/724: unlink d0/d1a/d24/dd3/de8/ff5 0 2026-03-10T10:20:02.027 INFO:tasks.workunit.client.1.vm05.stdout:8/667: rename d7/d14/d15/d3b/l5c to d7/d14/d62/d90/dac/ld8 0 2026-03-10T10:20:02.029 INFO:tasks.workunit.client.0.vm02.stdout:8/708: mknod d1/d1c/d43/d6a/da8/cd3 0 2026-03-10T10:20:02.030 INFO:tasks.workunit.client.0.vm02.stdout:1/750: truncate d4/da/d1a/f40 2385471 0 2026-03-10T10:20:02.039 INFO:tasks.workunit.client.1.vm05.stdout:2/667: truncate db/d12/f3b 718361 0 2026-03-10T10:20:02.044 INFO:tasks.workunit.client.0.vm02.stdout:5/863: creat d1/db/d11/d16/f128 x:0 0 0 2026-03-10T10:20:02.044 INFO:tasks.workunit.client.0.vm02.stdout:6/690: truncate d0/d8/d29/d2f/f4e 4795059 0 2026-03-10T10:20:02.051 INFO:tasks.workunit.client.0.vm02.stdout:7/713: truncate d1/dc/d16/d28/fca 187362 0 2026-03-10T10:20:02.054 INFO:tasks.workunit.client.1.vm05.stdout:4/590: dread d1/d31/f7a [0,4194304] 0 2026-03-10T10:20:02.056 INFO:tasks.workunit.client.0.vm02.stdout:4/849: creat d1/d75/ddd/d10e/d5e/d78/d44/de7/f119 x:0 0 0 
2026-03-10T10:20:02.060 INFO:tasks.workunit.client.1.vm05.stdout:1/800: rmdir d4/d79/d83 39 2026-03-10T10:20:02.063 INFO:tasks.workunit.client.1.vm05.stdout:6/718: write dd/d36/d3f/f22 [1262837,118652] 0 2026-03-10T10:20:02.079 INFO:tasks.workunit.client.1.vm05.stdout:5/723: write da/d9a/dc7/db4/fe2 [1001892,125216] 0 2026-03-10T10:20:02.080 INFO:tasks.workunit.client.0.vm02.stdout:2/725: mknod d0/d1a/d49/d5e/d65/db0/cf7 0 2026-03-10T10:20:02.089 INFO:tasks.workunit.client.1.vm05.stdout:7/739: dwrite d5/d1d/d20/d35/f36 [0,4194304] 0 2026-03-10T10:20:02.094 INFO:tasks.workunit.client.0.vm02.stdout:8/709: read - d1/d1c/d43/d5b/fb3 zero size 2026-03-10T10:20:02.099 INFO:tasks.workunit.client.0.vm02.stdout:1/751: creat d4/dc3/fec x:0 0 0 2026-03-10T10:20:02.099 INFO:tasks.workunit.client.0.vm02.stdout:1/752: dread - d4/dc3/fec zero size 2026-03-10T10:20:02.108 INFO:tasks.workunit.client.0.vm02.stdout:9/693: mkdir da/d3c/d4c/d38/d7c/dde 0 2026-03-10T10:20:02.125 INFO:tasks.workunit.client.0.vm02.stdout:5/864: rename d1/db/d11/d7b/ff4 to d1/db/d11/d13/d28/da7/dd9/df8/f129 0 2026-03-10T10:20:02.125 INFO:tasks.workunit.client.1.vm05.stdout:8/668: write d7/f1c [3594298,111935] 0 2026-03-10T10:20:02.125 INFO:tasks.workunit.client.0.vm02.stdout:5/865: chown d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fdf 188751 1 2026-03-10T10:20:02.131 INFO:tasks.workunit.client.1.vm05.stdout:3/727: truncate dd/d15/d24/d8e/dac/fd7 346765 0 2026-03-10T10:20:02.132 INFO:tasks.workunit.client.0.vm02.stdout:7/714: symlink d1/dc/d55/d9a/da5/le1 0 2026-03-10T10:20:02.134 INFO:tasks.workunit.client.1.vm05.stdout:1/801: chown d4/df/ca5 3969 1 2026-03-10T10:20:02.138 INFO:tasks.workunit.client.1.vm05.stdout:0/714: write d1/d2/d39/d6e/fdd [713396,79637] 0 2026-03-10T10:20:02.141 INFO:tasks.workunit.client.1.vm05.stdout:2/668: dwrite db/f19 [0,4194304] 0 2026-03-10T10:20:02.142 INFO:tasks.workunit.client.0.vm02.stdout:2/726: dread - d0/dd4/fd7 zero size 2026-03-10T10:20:02.144 
INFO:tasks.workunit.client.0.vm02.stdout:2/727: chown d0/fe2 785 1 2026-03-10T10:20:02.160 INFO:tasks.workunit.client.1.vm05.stdout:4/591: dwrite d1/d31/d4b/d6d/f85 [0,4194304] 0 2026-03-10T10:20:02.164 INFO:tasks.workunit.client.0.vm02.stdout:8/710: rmdir d1/d1c/d23/d25 39 2026-03-10T10:20:02.171 INFO:tasks.workunit.client.0.vm02.stdout:0/748: dwrite d9/d18/d1a/d3c/f92 [0,4194304] 0 2026-03-10T10:20:02.172 INFO:tasks.workunit.client.0.vm02.stdout:9/694: truncate da/d3c/d4c/d38/d82/d89/fb5 371460 0 2026-03-10T10:20:02.172 INFO:tasks.workunit.client.0.vm02.stdout:0/749: fdatasync d9/d18/d1a/d22/d24/d80/d49/feb 0 2026-03-10T10:20:02.175 INFO:tasks.workunit.client.0.vm02.stdout:0/750: write d9/d18/d1a/f6f [2203702,20809] 0 2026-03-10T10:20:02.186 INFO:tasks.workunit.client.0.vm02.stdout:3/722: creat d1/d8/d21/ff0 x:0 0 0 2026-03-10T10:20:02.190 INFO:tasks.workunit.client.0.vm02.stdout:6/691: symlink d0/d8/d29/d6d/ldd 0 2026-03-10T10:20:02.196 INFO:tasks.workunit.client.0.vm02.stdout:5/866: rmdir d1/db/d11/d16/d48 39 2026-03-10T10:20:02.199 INFO:tasks.workunit.client.0.vm02.stdout:7/715: symlink d1/d1b/d49/le2 0 2026-03-10T10:20:02.206 INFO:tasks.workunit.client.1.vm05.stdout:4/592: sync 2026-03-10T10:20:02.209 INFO:tasks.workunit.client.0.vm02.stdout:6/692: dwrite d0/d8/d9/f84 [4194304,4194304] 0 2026-03-10T10:20:02.210 INFO:tasks.workunit.client.0.vm02.stdout:4/850: creat d1/d10/dfc/f11a x:0 0 0 2026-03-10T10:20:02.210 INFO:tasks.workunit.client.0.vm02.stdout:6/693: dread - d0/d8/d29/d2f/d50/d7e/fb3 zero size 2026-03-10T10:20:02.219 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:01 vm02.local ceph-mon[50200]: pgmap v12: 65 pgs: 65 active+clean; 2.9 GiB data, 10 GiB used, 110 GiB / 120 GiB avail; 48 MiB/s rd, 110 MiB/s wr, 289 op/s 2026-03-10T10:20:02.219 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:01 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:20:02.219 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:01 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:20:02.219 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:01 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:20:02.219 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:01 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:20:02.224 INFO:tasks.workunit.client.0.vm02.stdout:3/723: chown d1/d8/d21/d73/f7e 0 1 2026-03-10T10:20:02.225 INFO:tasks.workunit.client.0.vm02.stdout:3/724: dread - d1/d8/d44/faf zero size 2026-03-10T10:20:02.245 INFO:tasks.workunit.client.0.vm02.stdout:4/851: unlink d1/d75/ddd/d10e/d5e/d78/d7f/d82/cb6 0 2026-03-10T10:20:02.246 INFO:tasks.workunit.client.0.vm02.stdout:0/751: dread d9/d34/d3d/d65/f84 [0,4194304] 0 2026-03-10T10:20:02.257 INFO:tasks.workunit.client.0.vm02.stdout:6/694: creat d0/d8/d9/fde x:0 0 0 2026-03-10T10:20:02.257 INFO:tasks.workunit.client.0.vm02.stdout:1/753: rename d4/da/d27/c7c to d4/da/d27/d38/ced 0 2026-03-10T10:20:02.257 INFO:tasks.workunit.client.0.vm02.stdout:0/752: chown d9/d18/d1a/d22/d24/d79/d7d/fa5 433156 1 2026-03-10T10:20:02.265 INFO:tasks.workunit.client.0.vm02.stdout:9/695: write da/fae [339531,63759] 0 2026-03-10T10:20:02.265 INFO:tasks.workunit.client.0.vm02.stdout:9/696: chown da/d9d/la2 4357338 1 2026-03-10T10:20:02.268 INFO:tasks.workunit.client.0.vm02.stdout:7/716: creat d1/d1b/d8f/dad/d7e/dba/ddf/fe3 x:0 0 0 2026-03-10T10:20:02.272 INFO:tasks.workunit.client.0.vm02.stdout:4/852: creat d1/d10/d88/f11b x:0 0 0 2026-03-10T10:20:02.287 INFO:tasks.workunit.client.0.vm02.stdout:2/728: rename d0/d1a/l3e to d0/d1a/d24/dc6/lf8 0 2026-03-10T10:20:02.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:01 vm05.local ceph-mon[59051]: pgmap v12: 65 pgs: 65 active+clean; 2.9 GiB data, 10 GiB used, 110 GiB 
/ 120 GiB avail; 48 MiB/s rd, 110 MiB/s wr, 289 op/s 2026-03-10T10:20:02.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:01 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:20:02.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:01 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:20:02.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:01 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:20:02.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:01 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' 2026-03-10T10:20:02.292 INFO:tasks.workunit.client.1.vm05.stdout:5/724: write f9 [3024317,18237] 0 2026-03-10T10:20:02.295 INFO:tasks.workunit.client.0.vm02.stdout:5/867: dwrite d1/db/d11/d13/f25 [0,4194304] 0 2026-03-10T10:20:02.305 INFO:tasks.workunit.client.0.vm02.stdout:1/754: dread d4/d2c/fa2 [4194304,4194304] 0 2026-03-10T10:20:02.314 INFO:tasks.workunit.client.1.vm05.stdout:7/740: dread d5/d1d/f31 [0,4194304] 0 2026-03-10T10:20:02.319 INFO:tasks.workunit.client.0.vm02.stdout:0/753: creat d9/d18/d1a/d22/d24/d80/dcc/ff1 x:0 0 0 2026-03-10T10:20:02.322 INFO:tasks.workunit.client.1.vm05.stdout:7/741: dread d5/d1d/d29/d3e/d8c/d96/fa6 [0,4194304] 0 2026-03-10T10:20:02.329 INFO:tasks.workunit.client.0.vm02.stdout:0/754: write d9/d18/d1a/d22/d24/d80/dcc/ff1 [423963,51634] 0 2026-03-10T10:20:02.334 INFO:tasks.workunit.client.0.vm02.stdout:9/697: mknod da/d3c/d4c/d56/cdf 0 2026-03-10T10:20:02.356 INFO:tasks.workunit.client.0.vm02.stdout:7/717: fdatasync d1/dc/d16/d28/f4e 0 2026-03-10T10:20:02.361 INFO:tasks.workunit.client.1.vm05.stdout:7/742: dread d5/d1d/d20/fb5 [0,4194304] 0 2026-03-10T10:20:02.386 INFO:tasks.workunit.client.1.vm05.stdout:3/728: rmdir dd/d20/d9e/dc0/ddd 39 2026-03-10T10:20:02.387 
INFO:tasks.workunit.client.1.vm05.stdout:3/729: stat dd/d15/d24/d2c/d3b/lc2 0 2026-03-10T10:20:02.390 INFO:tasks.workunit.client.0.vm02.stdout:8/711: rename d1/d1c/d43/d6a/fca to d1/d1c/d43/d6a/da8/d56/db5/fd4 0 2026-03-10T10:20:02.393 INFO:tasks.workunit.client.1.vm05.stdout:1/802: creat d4/d3d/d6e/fee x:0 0 0 2026-03-10T10:20:02.398 INFO:tasks.workunit.client.1.vm05.stdout:0/715: truncate d1/d2/d9/d31/f8c 1045920 0 2026-03-10T10:20:02.403 INFO:tasks.workunit.client.0.vm02.stdout:1/755: creat d4/da/d27/d38/d3c/fee x:0 0 0 2026-03-10T10:20:02.415 INFO:tasks.workunit.client.0.vm02.stdout:7/718: symlink d1/dc/d60/le4 0 2026-03-10T10:20:02.423 INFO:tasks.workunit.client.1.vm05.stdout:2/669: write db/d28/f30 [4222045,57220] 0 2026-03-10T10:20:02.424 INFO:tasks.workunit.client.1.vm05.stdout:6/719: symlink dd/d36/d3f/d12/d44/le8 0 2026-03-10T10:20:02.425 INFO:tasks.workunit.client.1.vm05.stdout:6/720: stat dd/d36/d3f/d12/d44/d2a/d3d/d3e/f64 0 2026-03-10T10:20:02.427 INFO:tasks.workunit.client.0.vm02.stdout:3/725: link d1/d8/d21/d73/d78/d84/fb4 d1/d6/ff1 0 2026-03-10T10:20:02.438 INFO:tasks.workunit.client.1.vm05.stdout:9/640: rename d0/d1/d13/d26/l51 to d0/dc4/lda 0 2026-03-10T10:20:02.439 INFO:tasks.workunit.client.0.vm02.stdout:2/729: getdents d0/d71/dd8 0 2026-03-10T10:20:02.442 INFO:tasks.workunit.client.1.vm05.stdout:5/725: read da/db/f7b [60269,30424] 0 2026-03-10T10:20:02.451 INFO:tasks.workunit.client.0.vm02.stdout:6/695: rename d0/d8/d29/d6d/fdc to d0/d8/d9/d7a/dc0/fdf 0 2026-03-10T10:20:02.456 INFO:tasks.workunit.client.0.vm02.stdout:5/868: mknod d1/db/d11/d13/d28/d37/c12a 0 2026-03-10T10:20:02.465 INFO:tasks.workunit.client.0.vm02.stdout:1/756: creat d4/da/d1a/d47/dbc/fef x:0 0 0 2026-03-10T10:20:02.469 INFO:tasks.workunit.client.0.vm02.stdout:0/755: fdatasync d9/d18/d1a/d22/d24/d8e/d9b/fc2 0 2026-03-10T10:20:02.477 INFO:tasks.workunit.client.1.vm05.stdout:3/730: mkdir dd/d15/d24/d2c/d6d/da7/dbb/dbd/dff 0 2026-03-10T10:20:02.479 
INFO:tasks.workunit.client.1.vm05.stdout:8/669: truncate d7/d14/d24/f61 183011 0 2026-03-10T10:20:02.488 INFO:tasks.workunit.client.1.vm05.stdout:2/670: truncate db/d2d/f48 1628961 0 2026-03-10T10:20:02.492 INFO:tasks.workunit.client.0.vm02.stdout:9/698: truncate da/d3c/d4c/d38/d82/d89/fb0 453105 0 2026-03-10T10:20:02.492 INFO:tasks.workunit.client.0.vm02.stdout:7/719: dread - d1/dc/d99/fc8 zero size 2026-03-10T10:20:02.493 INFO:tasks.workunit.client.0.vm02.stdout:4/853: link d1/d10/f71 d1/d75/ddd/d10e/d5e/d78/d1a/f11c 0 2026-03-10T10:20:02.498 INFO:tasks.workunit.client.0.vm02.stdout:2/730: read - d0/d1a/d49/d5e/d65/db0/fda zero size 2026-03-10T10:20:02.498 INFO:tasks.workunit.client.0.vm02.stdout:6/696: creat d0/d8/d29/d6d/d96/fe0 x:0 0 0 2026-03-10T10:20:02.510 INFO:tasks.workunit.client.0.vm02.stdout:8/712: dread d1/d1c/d23/d25/f76 [0,4194304] 0 2026-03-10T10:20:02.513 INFO:tasks.workunit.client.0.vm02.stdout:8/713: dwrite d1/d1c/d43/d5b/f63 [0,4194304] 0 2026-03-10T10:20:02.517 INFO:tasks.workunit.client.1.vm05.stdout:1/803: dread d4/d39/d3e/da0/fa1 [0,4194304] 0 2026-03-10T10:20:02.534 INFO:tasks.workunit.client.0.vm02.stdout:3/726: dwrite d1/d6/d8e/f96 [0,4194304] 0 2026-03-10T10:20:02.534 INFO:tasks.workunit.client.1.vm05.stdout:6/721: write dd/d36/d3f/d12/d44/d2a/fb0 [573331,6061] 0 2026-03-10T10:20:02.534 INFO:tasks.workunit.client.1.vm05.stdout:5/726: mkdir da/db/d26/d70/d72/df6 0 2026-03-10T10:20:02.539 INFO:tasks.workunit.client.0.vm02.stdout:0/756: mknod d9/d18/d1a/d22/d24/d8e/d9b/cf2 0 2026-03-10T10:20:02.545 INFO:tasks.workunit.client.1.vm05.stdout:7/743: symlink d5/d26/d9c/de7/lea 0 2026-03-10T10:20:02.549 INFO:tasks.workunit.client.0.vm02.stdout:9/699: fsync da/d3c/d4c/d2c/d34/f57 0 2026-03-10T10:20:02.556 INFO:tasks.workunit.client.0.vm02.stdout:6/697: mknod d0/d8/d9/d7a/dc0/ce1 0 2026-03-10T10:20:02.564 INFO:tasks.workunit.client.0.vm02.stdout:8/714: fdatasync d1/d2/f29 0 2026-03-10T10:20:02.567 INFO:tasks.workunit.client.0.vm02.stdout:4/854: 
sync 2026-03-10T10:20:02.586 INFO:tasks.workunit.client.0.vm02.stdout:3/727: dread d1/f54 [0,4194304] 0 2026-03-10T10:20:02.587 INFO:tasks.workunit.client.1.vm05.stdout:1/804: rename d4/d39/d88/fa4 to d4/d20/dbe/de8/fef 0 2026-03-10T10:20:02.588 INFO:tasks.workunit.client.1.vm05.stdout:6/722: creat dd/d36/d3f/d12/d44/d2a/d3d/d48/fe9 x:0 0 0 2026-03-10T10:20:02.588 INFO:tasks.workunit.client.1.vm05.stdout:6/723: chown dd/d36/d3f/d12/d44/d2a/le7 10180 1 2026-03-10T10:20:02.590 INFO:tasks.workunit.client.0.vm02.stdout:0/757: rename d9/d18/d1a/d22/d24/d79/d7d to d9/d34/d3d/d65/d89/df3 0 2026-03-10T10:20:02.591 INFO:tasks.workunit.client.1.vm05.stdout:6/724: dwrite dd/d36/d3f/d12/d44/d2a/fd9 [0,4194304] 0 2026-03-10T10:20:02.597 INFO:tasks.workunit.client.0.vm02.stdout:9/700: rmdir da/d3c/d4c/d38/d82/dd9 39 2026-03-10T10:20:02.601 INFO:tasks.workunit.client.0.vm02.stdout:7/720: getdents d1/d1b/d8f/d67/da7 0 2026-03-10T10:20:02.601 INFO:tasks.workunit.client.0.vm02.stdout:7/721: stat d1/dc/d10/d38 0 2026-03-10T10:20:02.602 INFO:tasks.workunit.client.1.vm05.stdout:4/593: getdents d1/d31/d4b 0 2026-03-10T10:20:02.603 INFO:tasks.workunit.client.1.vm05.stdout:9/641: link d0/d1/f9 d0/d1/d16/d6e/daf/fdb 0 2026-03-10T10:20:02.607 INFO:tasks.workunit.client.1.vm05.stdout:1/805: fdatasync d4/d39/d3e/f7d 0 2026-03-10T10:20:02.617 INFO:tasks.workunit.client.1.vm05.stdout:7/744: mknod d5/d1d/d20/d2d/d80/dd6/ceb 0 2026-03-10T10:20:02.618 INFO:tasks.workunit.client.1.vm05.stdout:7/745: fsync d5/d1d/d29/d3e/d8c/d96/f9e 0 2026-03-10T10:20:02.618 INFO:tasks.workunit.client.0.vm02.stdout:4/855: mkdir d1/d32/da3/d11d 0 2026-03-10T10:20:02.619 INFO:tasks.workunit.client.1.vm05.stdout:9/642: sync 2026-03-10T10:20:02.625 INFO:tasks.workunit.client.0.vm02.stdout:5/869: write d1/db/d11/d13/d28/d37/d3d/f9b [5198026,74493] 0 2026-03-10T10:20:02.626 INFO:tasks.workunit.client.1.vm05.stdout:6/725: read dd/d36/d3f/d12/d44/d2a/d3d/fa2 [115131,110964] 0 2026-03-10T10:20:02.634 
INFO:tasks.workunit.client.0.vm02.stdout:0/758: truncate d9/d34/d3d/d65/d89/dd3/da8/fd7 765130 0 2026-03-10T10:20:02.634 INFO:tasks.workunit.client.0.vm02.stdout:5/870: sync 2026-03-10T10:20:02.636 INFO:tasks.workunit.client.1.vm05.stdout:0/716: getdents d1/d2/d39/d6e/d8e 0 2026-03-10T10:20:02.636 INFO:tasks.workunit.client.1.vm05.stdout:0/717: chown d1/d2/d5d/f5f 3875 1 2026-03-10T10:20:02.650 INFO:tasks.workunit.client.0.vm02.stdout:9/701: dread da/d3c/d4c/f49 [0,4194304] 0 2026-03-10T10:20:02.652 INFO:tasks.workunit.client.0.vm02.stdout:7/722: creat d1/d1b/d49/fe5 x:0 0 0 2026-03-10T10:20:02.653 INFO:tasks.workunit.client.0.vm02.stdout:7/723: stat d1/dc/d16/d28/d2d/dae 0 2026-03-10T10:20:02.653 INFO:tasks.workunit.client.0.vm02.stdout:7/724: dread - d1/d1b/d49/fe5 zero size 2026-03-10T10:20:02.654 INFO:tasks.workunit.client.1.vm05.stdout:3/731: dwrite dd/d15/d24/d74/fb2 [0,4194304] 0 2026-03-10T10:20:02.656 INFO:tasks.workunit.client.1.vm05.stdout:3/732: chown dd/d39/d5c 2722606 1 2026-03-10T10:20:02.658 INFO:tasks.workunit.client.1.vm05.stdout:1/806: mkdir d4/d39/d3e/db1/df0 0 2026-03-10T10:20:02.664 INFO:tasks.workunit.client.0.vm02.stdout:3/728: chown d1/d58/dc9/lce 380916 1 2026-03-10T10:20:02.669 INFO:tasks.workunit.client.1.vm05.stdout:7/746: creat d5/d26/fec x:0 0 0 2026-03-10T10:20:02.670 INFO:tasks.workunit.client.1.vm05.stdout:9/643: truncate d0/d1/d57/fbf 6710 0 2026-03-10T10:20:02.673 INFO:tasks.workunit.client.1.vm05.stdout:6/726: creat dd/d36/d3f/d12/d44/d2a/d7f/fea x:0 0 0 2026-03-10T10:20:02.674 INFO:tasks.workunit.client.1.vm05.stdout:6/727: read dd/d36/d3f/d12/d44/d2a/fd9 [2326241,90181] 0 2026-03-10T10:20:02.674 INFO:tasks.workunit.client.0.vm02.stdout:7/725: mknod d1/dc/d10/d38/ce6 0 2026-03-10T10:20:02.676 INFO:tasks.workunit.client.1.vm05.stdout:8/670: link d7/d14/fa5 d7/d14/d15/fd9 0 2026-03-10T10:20:02.689 INFO:tasks.workunit.client.1.vm05.stdout:8/671: dread d7/d14/d15/f1f [0,4194304] 0 2026-03-10T10:20:02.698 
INFO:tasks.workunit.client.1.vm05.stdout:9/644: dread d0/dc4/f7e [0,4194304] 0 2026-03-10T10:20:02.699 INFO:tasks.workunit.client.0.vm02.stdout:2/731: dwrite d0/d1a/d49/fc8 [0,4194304] 0 2026-03-10T10:20:02.699 INFO:tasks.workunit.client.1.vm05.stdout:2/671: dwrite db/d61/d67/f6e [0,4194304] 0 2026-03-10T10:20:02.703 INFO:tasks.workunit.client.0.vm02.stdout:1/757: write d4/da/d1a/f40 [1229453,87422] 0 2026-03-10T10:20:02.708 INFO:tasks.workunit.client.0.vm02.stdout:4/856: mknod d1/d32/da3/d11d/c11e 0 2026-03-10T10:20:02.709 INFO:tasks.workunit.client.0.vm02.stdout:3/729: rmdir d1/d6/d8e 39 2026-03-10T10:20:02.732 INFO:tasks.workunit.client.0.vm02.stdout:0/759: truncate d9/fdd 914179 0 2026-03-10T10:20:02.743 INFO:tasks.workunit.client.0.vm02.stdout:8/715: dwrite d1/d1c/d43/d6a/da8/f44 [0,4194304] 0 2026-03-10T10:20:02.750 INFO:tasks.workunit.client.1.vm05.stdout:4/594: write d1/d31/dc/f2a [606098,12589] 0 2026-03-10T10:20:02.753 INFO:tasks.workunit.client.1.vm05.stdout:4/595: chown d1/l88 520 1 2026-03-10T10:20:02.766 INFO:tasks.workunit.client.0.vm02.stdout:8/716: dread d1/d1c/d43/f7a [0,4194304] 0 2026-03-10T10:20:02.777 INFO:tasks.workunit.client.0.vm02.stdout:7/726: unlink d1/dc/d16/d28/f4e 0 2026-03-10T10:20:02.777 INFO:tasks.workunit.client.0.vm02.stdout:6/698: getdents d0/d7f 0 2026-03-10T10:20:02.777 INFO:tasks.workunit.client.1.vm05.stdout:3/733: symlink dd/d15/d24/d8e/l100 0 2026-03-10T10:20:02.777 INFO:tasks.workunit.client.1.vm05.stdout:1/807: rmdir d4/d3d/d6e/dac 39 2026-03-10T10:20:02.777 INFO:tasks.workunit.client.0.vm02.stdout:7/727: readlink d1/dc/d16/d28/d2c/l7f 0 2026-03-10T10:20:02.778 INFO:tasks.workunit.client.0.vm02.stdout:6/699: stat d0/d8/d29/d2f/d50/d7e/db2/dbb/fd1 0 2026-03-10T10:20:02.779 INFO:tasks.workunit.client.1.vm05.stdout:5/727: getdents da/db 0 2026-03-10T10:20:02.818 INFO:tasks.workunit.client.1.vm05.stdout:6/728: chown dd/d36/d3f/d12/lc7 0 1 2026-03-10T10:20:02.826 INFO:tasks.workunit.client.0.vm02.stdout:3/730: fsync 
d1/d20/f51 0 2026-03-10T10:20:02.829 INFO:tasks.workunit.client.1.vm05.stdout:0/718: truncate d1/d2/d9/d31/daa/fca 844938 0 2026-03-10T10:20:02.838 INFO:tasks.workunit.client.1.vm05.stdout:9/645: mkdir d0/d1/d13/de/d21/ddc 0 2026-03-10T10:20:02.838 INFO:tasks.workunit.client.1.vm05.stdout:9/646: stat d0/d1/d13/d26/l8e 0 2026-03-10T10:20:02.838 INFO:tasks.workunit.client.1.vm05.stdout:2/672: symlink db/d28/d4f/d8b/ld5 0 2026-03-10T10:20:02.840 INFO:tasks.workunit.client.1.vm05.stdout:1/808: rmdir d4/d20/dbe 39 2026-03-10T10:20:02.842 INFO:tasks.workunit.client.1.vm05.stdout:3/734: rename dd/d20/d56/d5e/ce2 to dd/d20/d56/d5e/dab/d9c/c101 0 2026-03-10T10:20:02.843 INFO:tasks.workunit.client.1.vm05.stdout:5/728: mknod da/d9a/daf/cf7 0 2026-03-10T10:20:02.844 INFO:tasks.workunit.client.1.vm05.stdout:5/729: chown da/d9a/dc7/db4/dbd/fd0 140 1 2026-03-10T10:20:02.846 INFO:tasks.workunit.client.1.vm05.stdout:6/729: mknod dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/ceb 0 2026-03-10T10:20:02.868 INFO:tasks.workunit.client.1.vm05.stdout:9/647: unlink d0/d1/d13/d62/l69 0 2026-03-10T10:20:02.869 INFO:tasks.workunit.client.1.vm05.stdout:2/673: mknod db/d28/d4f/d59/da4/d6c/cd6 0 2026-03-10T10:20:02.870 INFO:tasks.workunit.client.0.vm02.stdout:1/758: write d4/da/d1a/fa1 [686450,13176] 0 2026-03-10T10:20:02.871 INFO:tasks.workunit.client.0.vm02.stdout:1/759: fsync d4/da/d1a/d47/d65/f6e 0 2026-03-10T10:20:02.880 INFO:tasks.workunit.client.0.vm02.stdout:1/760: dwrite d4/da/d1a/d47/d65/f6e [0,4194304] 0 2026-03-10T10:20:02.901 INFO:tasks.workunit.client.1.vm05.stdout:3/735: unlink dd/d15/d24/d8e/l100 0 2026-03-10T10:20:02.913 INFO:tasks.workunit.client.1.vm05.stdout:4/596: write d1/d64/f84 [3638012,42998] 0 2026-03-10T10:20:02.914 INFO:tasks.workunit.client.1.vm05.stdout:5/730: read - da/d9a/dc7/db4/fc0 zero size 2026-03-10T10:20:02.939 INFO:tasks.workunit.client.1.vm05.stdout:2/674: readlink db/d28/d4f/d59/l8f 0 2026-03-10T10:20:02.945 INFO:tasks.workunit.client.1.vm05.stdout:9/648: dread 
d0/df/d74/d8c/fac [0,4194304] 0 2026-03-10T10:20:02.953 INFO:tasks.workunit.client.1.vm05.stdout:0/719: dwrite d1/d2/d9/d31/d13/d2f/f33 [4194304,4194304] 0 2026-03-10T10:20:02.954 INFO:tasks.workunit.client.0.vm02.stdout:0/760: rmdir d9/d18/d1a/d22/d24/d80 39 2026-03-10T10:20:02.966 INFO:tasks.workunit.client.1.vm05.stdout:1/809: dread - d4/d79/d83/dc5/dcb/fd0 zero size 2026-03-10T10:20:02.967 INFO:tasks.workunit.client.1.vm05.stdout:8/672: write d7/d14/d15/d3b/f7c [1094540,111987] 0 2026-03-10T10:20:02.972 INFO:tasks.workunit.client.0.vm02.stdout:5/871: rename d1/d6a/fd7 to d1/db/d11/d84/d40/f12b 0 2026-03-10T10:20:02.974 INFO:tasks.workunit.client.1.vm05.stdout:7/747: getdents d5/d1d 0 2026-03-10T10:20:02.975 INFO:tasks.workunit.client.1.vm05.stdout:6/730: write dd/d36/d3f/d12/d44/daa/fbc [377481,93415] 0 2026-03-10T10:20:02.976 INFO:tasks.workunit.client.0.vm02.stdout:6/700: rmdir d0/d8 39 2026-03-10T10:20:02.977 INFO:tasks.workunit.client.1.vm05.stdout:4/597: mkdir d1/d31/d76/dac/dc5 0 2026-03-10T10:20:02.978 INFO:tasks.workunit.client.1.vm05.stdout:4/598: write d1/d31/d76/dac/db8/dbf/f78 [5627070,23678] 0 2026-03-10T10:20:02.983 INFO:tasks.workunit.client.1.vm05.stdout:8/673: sync 2026-03-10T10:20:02.988 INFO:tasks.workunit.client.0.vm02.stdout:2/732: mknod d0/d10/cf9 0 2026-03-10T10:20:02.992 INFO:tasks.workunit.client.0.vm02.stdout:4/857: truncate d1/d52/d53/f79 539718 0 2026-03-10T10:20:02.997 INFO:tasks.workunit.client.1.vm05.stdout:9/649: truncate d0/df/d11/f52 1744348 0 2026-03-10T10:20:03.013 INFO:tasks.workunit.client.0.vm02.stdout:7/728: dwrite d1/dc/d55/f8b [4194304,4194304] 0 2026-03-10T10:20:03.039 INFO:tasks.workunit.client.1.vm05.stdout:7/748: mknod d5/d1d/d29/d60/ced 0 2026-03-10T10:20:03.044 INFO:tasks.workunit.client.0.vm02.stdout:0/761: creat d9/d18/d1a/d22/d24/d51/ff4 x:0 0 0 2026-03-10T10:20:03.047 INFO:tasks.workunit.client.0.vm02.stdout:8/717: mkdir d1/d1c/d43/d5b/dab/dd5 0 2026-03-10T10:20:03.048 
INFO:tasks.workunit.client.1.vm05.stdout:3/736: truncate dd/d15/d4c/db5/ff3 62222 0 2026-03-10T10:20:03.050 INFO:tasks.workunit.client.1.vm05.stdout:4/599: rename d1/d31/c3b to d1/cc6 0 2026-03-10T10:20:03.066 INFO:tasks.workunit.client.0.vm02.stdout:5/872: write d1/db/d11/d84/d40/d4f/d5f/fe5 [330158,85690] 0 2026-03-10T10:20:03.066 INFO:tasks.workunit.client.1.vm05.stdout:8/674: dread d7/d14/d15/f1f [0,4194304] 0 2026-03-10T10:20:03.067 INFO:tasks.workunit.client.0.vm02.stdout:5/873: truncate d1/db/d11/d16/f128 287461 0 2026-03-10T10:20:03.071 INFO:tasks.workunit.client.0.vm02.stdout:6/701: chown d0/d8/d29/d94/fa9 0 1 2026-03-10T10:20:03.072 INFO:tasks.workunit.client.0.vm02.stdout:6/702: dread - d0/d7f/fbe zero size 2026-03-10T10:20:03.074 INFO:tasks.workunit.client.1.vm05.stdout:2/675: mknod db/d2d/dc6/dc7/cd7 0 2026-03-10T10:20:03.075 INFO:tasks.workunit.client.1.vm05.stdout:2/676: write db/d1c/fcf [352804,4123] 0 2026-03-10T10:20:03.075 INFO:tasks.workunit.client.1.vm05.stdout:9/650: stat d0/df/d74/d8c/d8f/ccf 0 2026-03-10T10:20:03.076 INFO:tasks.workunit.client.0.vm02.stdout:2/733: read d0/f44 [636523,44359] 0 2026-03-10T10:20:03.076 INFO:tasks.workunit.client.0.vm02.stdout:2/734: stat d0/c40 0 2026-03-10T10:20:03.076 INFO:tasks.workunit.client.0.vm02.stdout:2/735: chown d0/f2c 746 1 2026-03-10T10:20:03.079 INFO:tasks.workunit.client.0.vm02.stdout:4/858: mkdir d1/d75/ddd/d10e/d11f 0 2026-03-10T10:20:03.080 INFO:tasks.workunit.client.0.vm02.stdout:4/859: stat d1/d75/ddd/fea 0 2026-03-10T10:20:03.084 INFO:tasks.workunit.client.1.vm05.stdout:0/720: write d1/fb8 [802406,102144] 0 2026-03-10T10:20:03.088 INFO:tasks.workunit.client.1.vm05.stdout:1/810: mknod d4/df/d1c/d53/cf1 0 2026-03-10T10:20:03.089 INFO:tasks.workunit.client.1.vm05.stdout:7/749: chown d5/d1d/d29/cb3 2103332 1 2026-03-10T10:20:03.090 INFO:tasks.workunit.client.0.vm02.stdout:7/729: mknod d1/dc/d16/d28/d2d/ce7 0 2026-03-10T10:20:03.096 INFO:tasks.workunit.client.0.vm02.stdout:7/730: stat d1/d1b/f61 
0 2026-03-10T10:20:03.102 INFO:tasks.workunit.client.1.vm05.stdout:3/737: creat dd/d15/d24/d2c/dd0/f102 x:0 0 0 2026-03-10T10:20:03.104 INFO:tasks.workunit.client.0.vm02.stdout:1/761: truncate d4/d1b/f44 3449150 0 2026-03-10T10:20:03.106 INFO:tasks.workunit.client.0.vm02.stdout:0/762: creat d9/d18/d1a/d22/d24/d8e/d9b/daa/ff5 x:0 0 0 2026-03-10T10:20:03.108 INFO:tasks.workunit.client.0.vm02.stdout:8/718: symlink d1/d1c/d43/d5b/d88/dac/d83/ld6 0 2026-03-10T10:20:03.116 INFO:tasks.workunit.client.0.vm02.stdout:5/874: rmdir d1/db/d11/d84/d40/d4f/d5f/d6d/d71 39 2026-03-10T10:20:03.126 INFO:tasks.workunit.client.0.vm02.stdout:6/703: read - d0/d8/d29/d2f/d4b/da5/d6f/fc3 zero size 2026-03-10T10:20:03.135 INFO:tasks.workunit.client.0.vm02.stdout:3/731: creat d1/d20/ff2 x:0 0 0 2026-03-10T10:20:03.139 INFO:tasks.workunit.client.0.vm02.stdout:1/762: read d4/da/fb2 [836263,31132] 0 2026-03-10T10:20:03.143 INFO:tasks.workunit.client.0.vm02.stdout:5/875: creat d1/db/d11/d1a/f12c x:0 0 0 2026-03-10T10:20:03.145 INFO:tasks.workunit.client.0.vm02.stdout:4/860: truncate d1/d10/db/f24 2595162 0 2026-03-10T10:20:03.149 INFO:tasks.workunit.client.0.vm02.stdout:1/763: read - d4/da/d27/d38/f5e zero size 2026-03-10T10:20:03.150 INFO:tasks.workunit.client.0.vm02.stdout:0/763: readlink d9/d18/d1a/d22/d24/d80/d49/l73 0 2026-03-10T10:20:03.153 INFO:tasks.workunit.client.0.vm02.stdout:9/702: rename da/d3c/d4c/d38/d82/d89/fb0 to da/d3c/d4c/fe0 0 2026-03-10T10:20:03.153 INFO:tasks.workunit.client.0.vm02.stdout:6/704: sync 2026-03-10T10:20:03.180 INFO:tasks.workunit.client.0.vm02.stdout:4/861: write d1/d75/ddd/d10e/d5e/d78/d44/f59 [548481,112873] 0 2026-03-10T10:20:03.182 INFO:tasks.workunit.client.0.vm02.stdout:5/876: dwrite d1/db/d11/d13/fdb [0,4194304] 0 2026-03-10T10:20:03.195 INFO:tasks.workunit.client.0.vm02.stdout:7/731: creat d1/d1b/fe8 x:0 0 0 2026-03-10T10:20:03.197 INFO:tasks.workunit.client.0.vm02.stdout:1/764: mkdir d4/dc3/df0 0 2026-03-10T10:20:03.197 
INFO:tasks.workunit.client.0.vm02.stdout:1/765: stat d4/da/d1a/d47/ld5 0 2026-03-10T10:20:03.202 INFO:tasks.workunit.client.0.vm02.stdout:8/719: unlink d1/d1c/d43/d6a/da8/d8e/fa9 0 2026-03-10T10:20:03.214 INFO:tasks.workunit.client.0.vm02.stdout:2/736: rename d0/d1a/d24 to d0/d1a/d49/d5e/d65/dc4/dfa 0 2026-03-10T10:20:03.221 INFO:tasks.workunit.client.0.vm02.stdout:0/764: dwrite d9/d18/d1a/d22/fd6 [0,4194304] 0 2026-03-10T10:20:03.226 INFO:tasks.workunit.client.0.vm02.stdout:0/765: dwrite d9/d18/d1a/d22/d24/d80/dcc/ff1 [0,4194304] 0 2026-03-10T10:20:03.239 INFO:tasks.workunit.client.0.vm02.stdout:6/705: fdatasync d0/d8/d29/d2f/f67 0 2026-03-10T10:20:03.244 INFO:tasks.workunit.client.1.vm05.stdout:1/811: symlink d4/df/d1c/d53/daa/lf2 0 2026-03-10T10:20:03.257 INFO:tasks.workunit.client.0.vm02.stdout:4/862: mkdir d1/d75/ddd/d10e/d5e/d78/d7f/d82/d120 0 2026-03-10T10:20:03.264 INFO:tasks.workunit.client.1.vm05.stdout:6/731: creat dd/d36/d3f/d12/d44/d2a/fec x:0 0 0 2026-03-10T10:20:03.271 INFO:tasks.workunit.client.1.vm05.stdout:3/738: mkdir dd/d15/d24/d2c/dd0/dd9/d103 0 2026-03-10T10:20:03.281 INFO:tasks.workunit.client.0.vm02.stdout:5/877: symlink d1/db/d11/d13/d28/d37/d3d/l12d 0 2026-03-10T10:20:03.282 INFO:tasks.workunit.client.1.vm05.stdout:9/651: write d0/d1/d16/f40 [337054,8455] 0 2026-03-10T10:20:03.283 INFO:tasks.workunit.client.0.vm02.stdout:3/732: link d1/d8/d21/d7d/fdd d1/d20/d52/ff3 0 2026-03-10T10:20:03.288 INFO:tasks.workunit.client.1.vm05.stdout:0/721: write d1/d2/d9/fc7 [2255486,59704] 0 2026-03-10T10:20:03.289 INFO:tasks.workunit.client.1.vm05.stdout:0/722: write d1/d2/d9/d31/d13/da2/fd6 [1804654,8452] 0 2026-03-10T10:20:03.290 INFO:tasks.workunit.client.1.vm05.stdout:0/723: write d1/d2/d5d/f5f [818034,8008] 0 2026-03-10T10:20:03.295 INFO:tasks.workunit.client.0.vm02.stdout:7/732: mknod d1/dc/d16/d28/d2d/dae/ce9 0 2026-03-10T10:20:03.295 INFO:tasks.workunit.client.0.vm02.stdout:7/733: stat d1/d1b/d49/l57 0 2026-03-10T10:20:03.298 
INFO:tasks.workunit.client.1.vm05.stdout:5/731: link da/d63/fe1 da/db/d28/ff8 0 2026-03-10T10:20:03.303 INFO:tasks.workunit.client.1.vm05.stdout:2/677: creat db/d28/d4f/d59/dce/fd8 x:0 0 0 2026-03-10T10:20:03.304 INFO:tasks.workunit.client.1.vm05.stdout:2/678: chown db/d1c/d40/d62 106578918 1 2026-03-10T10:20:03.308 INFO:tasks.workunit.client.0.vm02.stdout:8/720: truncate d1/d1c/d23/f9d 773545 0 2026-03-10T10:20:03.311 INFO:tasks.workunit.client.0.vm02.stdout:2/737: unlink d0/f1b 0 2026-03-10T10:20:03.319 INFO:tasks.workunit.client.1.vm05.stdout:1/812: creat d4/df/de0/d82/ff3 x:0 0 0 2026-03-10T10:20:03.324 INFO:tasks.workunit.client.0.vm02.stdout:9/703: mkdir da/d3c/d4c/de1 0 2026-03-10T10:20:03.333 INFO:tasks.workunit.client.1.vm05.stdout:3/739: symlink dd/d39/d5f/l104 0 2026-03-10T10:20:03.335 INFO:tasks.workunit.client.1.vm05.stdout:8/675: dwrite d7/f9 [0,4194304] 0 2026-03-10T10:20:03.337 INFO:tasks.workunit.client.0.vm02.stdout:9/704: sync 2026-03-10T10:20:03.352 INFO:tasks.workunit.client.1.vm05.stdout:9/652: mkdir d0/df/d74/d8c/d8f/ddd 0 2026-03-10T10:20:03.353 INFO:tasks.workunit.client.1.vm05.stdout:9/653: stat d0/df/d11/dc6 0 2026-03-10T10:20:03.359 INFO:tasks.workunit.client.0.vm02.stdout:5/878: mkdir d1/db/d11/d13/d28/d37/dce/d12e 0 2026-03-10T10:20:03.363 INFO:tasks.workunit.client.0.vm02.stdout:3/733: unlink d1/d6/c23 0 2026-03-10T10:20:03.374 INFO:tasks.workunit.client.0.vm02.stdout:7/734: mkdir d1/d1b/d8f/dad/d7e/dba/dea 0 2026-03-10T10:20:03.381 INFO:tasks.workunit.client.0.vm02.stdout:1/766: mkdir d4/df1 0 2026-03-10T10:20:03.382 INFO:tasks.workunit.client.1.vm05.stdout:5/732: rename da/d9a/daf/cf7 to da/db/d28/d6e/cf9 0 2026-03-10T10:20:03.382 INFO:tasks.workunit.client.1.vm05.stdout:7/750: truncate d5/dd/f12 2923663 0 2026-03-10T10:20:03.391 INFO:tasks.workunit.client.1.vm05.stdout:0/724: write d1/d2/d9/d31/d13/d15/d4e/d8a/fd8 [576789,81911] 0 2026-03-10T10:20:03.397 INFO:tasks.workunit.client.1.vm05.stdout:2/679: write db/d2d/f47 
[412590,38180] 0 2026-03-10T10:20:03.398 INFO:tasks.workunit.client.1.vm05.stdout:1/813: stat d4/l9b 0 2026-03-10T10:20:03.401 INFO:tasks.workunit.client.0.vm02.stdout:2/738: mkdir d0/d71/dfb 0 2026-03-10T10:20:03.408 INFO:tasks.workunit.client.1.vm05.stdout:3/740: read - dd/dbe/fe6 zero size 2026-03-10T10:20:03.410 INFO:tasks.workunit.client.1.vm05.stdout:8/676: read - d7/d14/d15/d3b/da0/fc8 zero size 2026-03-10T10:20:03.422 INFO:tasks.workunit.client.1.vm05.stdout:9/654: rmdir d0/d1/d13/de/d93 39 2026-03-10T10:20:03.427 INFO:tasks.workunit.client.0.vm02.stdout:0/766: dwrite d9/d34/d3d/d7b/fc0 [0,4194304] 0 2026-03-10T10:20:03.436 INFO:tasks.workunit.client.0.vm02.stdout:4/863: mknod d1/c121 0 2026-03-10T10:20:03.441 INFO:tasks.workunit.client.0.vm02.stdout:5/879: truncate d1/db/d11/d84/d40/d4f/f6e 216985 0 2026-03-10T10:20:03.453 INFO:tasks.workunit.client.1.vm05.stdout:7/751: truncate d5/d1d/d20/fb5 4792963 0 2026-03-10T10:20:03.453 INFO:tasks.workunit.client.1.vm05.stdout:7/752: readlink d5/d17/l5f 0 2026-03-10T10:20:03.463 INFO:tasks.workunit.client.0.vm02.stdout:1/767: symlink d4/da/d27/lf2 0 2026-03-10T10:20:03.464 INFO:tasks.workunit.client.0.vm02.stdout:1/768: fsync d4/da/d1a/fa1 0 2026-03-10T10:20:03.465 INFO:tasks.workunit.client.1.vm05.stdout:5/733: dwrite da/db/f29 [0,4194304] 0 2026-03-10T10:20:03.474 INFO:tasks.workunit.client.1.vm05.stdout:2/680: rename db/d28/d4f/d59/da4/c91 to db/d28/d4f/d59/d94/d95/cd9 0 2026-03-10T10:20:03.477 INFO:tasks.workunit.client.0.vm02.stdout:8/721: mknod d1/d1c/d43/cd7 0 2026-03-10T10:20:03.495 INFO:tasks.workunit.client.1.vm05.stdout:6/732: creat dd/d36/d3f/d12/d44/d2a/fed x:0 0 0 2026-03-10T10:20:03.495 INFO:tasks.workunit.client.1.vm05.stdout:6/733: rename dd to dd/d36/d3f/d12/d44/daa/dee 22 2026-03-10T10:20:03.496 INFO:tasks.workunit.client.1.vm05.stdout:6/734: fdatasync dd/d36/d3f/d12/d59/fb1 0 2026-03-10T10:20:03.509 INFO:tasks.workunit.client.1.vm05.stdout:3/741: creat dd/d15/d24/d8e/dac/f105 x:0 0 0 
2026-03-10T10:20:03.511 INFO:tasks.workunit.client.1.vm05.stdout:3/742: dread dd/d15/d24/f2f [0,4194304] 0 2026-03-10T10:20:03.512 INFO:tasks.workunit.client.0.vm02.stdout:2/739: dwrite d0/d10/f6b [4194304,4194304] 0 2026-03-10T10:20:03.515 INFO:tasks.workunit.client.1.vm05.stdout:8/677: dread - d7/d14/d15/d3b/fc5 zero size 2026-03-10T10:20:03.519 INFO:tasks.workunit.client.1.vm05.stdout:8/678: dwrite d7/d14/d3a/f50 [0,4194304] 0 2026-03-10T10:20:03.525 INFO:tasks.workunit.client.0.vm02.stdout:6/706: link d0/d7f/fbe d0/d87/fe2 0 2026-03-10T10:20:03.526 INFO:tasks.workunit.client.1.vm05.stdout:9/655: dread - d0/d1/d16/fca zero size 2026-03-10T10:20:03.535 INFO:tasks.workunit.client.1.vm05.stdout:3/743: sync 2026-03-10T10:20:03.535 INFO:tasks.workunit.client.0.vm02.stdout:2/740: sync 2026-03-10T10:20:03.536 INFO:tasks.workunit.client.0.vm02.stdout:2/741: fsync d0/d1a/f66 0 2026-03-10T10:20:03.555 INFO:tasks.workunit.client.0.vm02.stdout:5/880: fdatasync d1/db/d11/d84/d95/fd4 0 2026-03-10T10:20:03.578 INFO:tasks.workunit.client.0.vm02.stdout:3/734: mkdir d1/d8/d21/df4 0 2026-03-10T10:20:03.581 INFO:tasks.workunit.client.0.vm02.stdout:0/767: dwrite d9/d18/d1a/d22/d24/d80/f90 [4194304,4194304] 0 2026-03-10T10:20:03.582 INFO:tasks.workunit.client.1.vm05.stdout:5/734: rmdir da/db/d28 39 2026-03-10T10:20:03.582 INFO:tasks.workunit.client.1.vm05.stdout:7/753: dwrite d5/d1d/d20/d91/fc1 [0,4194304] 0 2026-03-10T10:20:03.607 INFO:tasks.workunit.client.0.vm02.stdout:8/722: mknod d1/d1c/d43/d5b/d88/dac/cd8 0 2026-03-10T10:20:03.607 INFO:tasks.workunit.client.0.vm02.stdout:8/723: write d1/d1c/f33 [2993496,23433] 0 2026-03-10T10:20:03.610 INFO:tasks.workunit.client.1.vm05.stdout:6/735: symlink dd/d36/d3f/d12/d44/d2a/d7f/lef 0 2026-03-10T10:20:03.611 INFO:tasks.workunit.client.1.vm05.stdout:6/736: truncate dd/d36/d3f/d12/d96/f9a 1301847 0 2026-03-10T10:20:03.617 INFO:tasks.workunit.client.0.vm02.stdout:9/705: creat da/fe2 x:0 0 0 2026-03-10T10:20:03.619 
INFO:tasks.workunit.client.1.vm05.stdout:4/600: getdents d1/d31/dc/d40/d45 0
2026-03-10T10:20:03.619 INFO:tasks.workunit.client.1.vm05.stdout:4/601: readlink d1/d31/l49 0
2026-03-10T10:20:03.620 INFO:tasks.workunit.client.1.vm05.stdout:4/602: fsync d1/d31/d76/dac/db8/dbf/f78 0
2026-03-10T10:20:03.622 INFO:tasks.workunit.client.0.vm02.stdout:6/707: dread - d0/d8/d29/d2f/d4b/fd5 zero size
2026-03-10T10:20:03.630 INFO:tasks.workunit.client.0.vm02.stdout:2/742: dread d0/fcd [0,4194304] 0
2026-03-10T10:20:03.634 INFO:tasks.workunit.client.1.vm05.stdout:8/679: dread d7/d14/f33 [0,4194304] 0
2026-03-10T10:20:03.637 INFO:tasks.workunit.client.1.vm05.stdout:0/725: link d1/d2/d9/d31/d12/fc3 d1/d2/d9/d31/d13/da2/dab/ded/ff4 0
2026-03-10T10:20:03.637 INFO:tasks.workunit.client.0.vm02.stdout:4/864: mkdir d1/d75/ddd/d10e/d5e/d122 0
2026-03-10T10:20:03.645 INFO:tasks.workunit.client.0.vm02.stdout:3/735: unlink d1/d8/d21/d7d/lc6 0
2026-03-10T10:20:03.648 INFO:tasks.workunit.client.1.vm05.stdout:2/681: symlink db/d28/lda 0
2026-03-10T10:20:03.649 INFO:tasks.workunit.client.1.vm05.stdout:2/682: dread db/d28/f7f [0,4194304] 0
2026-03-10T10:20:03.649 INFO:tasks.workunit.client.0.vm02.stdout:0/768: rename d9/d34/d3d/d7b/c59 to d9/d34/cf6 0
2026-03-10T10:20:03.678 INFO:tasks.workunit.client.0.vm02.stdout:7/735: link d1/dc/l7b d1/d1b/d8f/d67/da7/leb 0
2026-03-10T10:20:03.686 INFO:tasks.workunit.client.0.vm02.stdout:7/736: sync
2026-03-10T10:20:03.690 INFO:tasks.workunit.client.0.vm02.stdout:1/769: symlink d4/dc3/df0/lf3 0
2026-03-10T10:20:03.712 INFO:tasks.workunit.client.1.vm05.stdout:3/744: write dd/d15/d1f/f53 [1924665,38774] 0
2026-03-10T10:20:03.712 INFO:tasks.workunit.client.1.vm05.stdout:6/737: write dd/d36/d3f/d12/d44/d30/f9e [108169,117481] 0
2026-03-10T10:20:03.715 INFO:tasks.workunit.client.1.vm05.stdout:7/754: dwrite d5/d1d/d20/d91/fbd [0,4194304] 0
2026-03-10T10:20:03.717 INFO:tasks.workunit.client.1.vm05.stdout:9/656: symlink d0/dc4/lde 0
2026-03-10T10:20:03.722 INFO:tasks.workunit.client.0.vm02.stdout:9/706: symlink da/d3c/d4c/d2c/d96/le3 0
2026-03-10T10:20:03.723 INFO:tasks.workunit.client.0.vm02.stdout:6/708: stat d0/d8/d29/d2f/d50/d98/fb1 0
2026-03-10T10:20:03.724 INFO:tasks.workunit.client.1.vm05.stdout:8/680: truncate d7/d14/f40 562853 0
2026-03-10T10:20:03.724 INFO:tasks.workunit.client.1.vm05.stdout:0/726: mkdir d1/d2/d9/d31/d13/d17/da1/df5 0
2026-03-10T10:20:03.725 INFO:tasks.workunit.client.1.vm05.stdout:8/681: chown d7/d14/d15/d3b/da0/dce 4384 1
2026-03-10T10:20:03.726 INFO:tasks.workunit.client.1.vm05.stdout:5/735: write da/db/d28/fec [15339,27697] 0
2026-03-10T10:20:03.729 INFO:tasks.workunit.client.0.vm02.stdout:5/881: mkdir d1/db/d12f 0
2026-03-10T10:20:03.732 INFO:tasks.workunit.client.1.vm05.stdout:1/814: getdents d4/d79 0
2026-03-10T10:20:03.733 INFO:tasks.workunit.client.1.vm05.stdout:1/815: dread - d4/df/de0/d82/ff3 zero size
2026-03-10T10:20:03.733 INFO:tasks.workunit.client.1.vm05.stdout:1/816: stat d4/df/de0/f62 0
2026-03-10T10:20:03.744 INFO:tasks.workunit.client.0.vm02.stdout:1/770: fdatasync d4/da/d1a/d22/fae 0
2026-03-10T10:20:03.759 INFO:tasks.workunit.client.1.vm05.stdout:9/657: dread d0/d1/d13/d26/f58 [0,4194304] 0
2026-03-10T10:20:03.760 INFO:tasks.workunit.client.1.vm05.stdout:3/745: mkdir dd/dbe/d106 0
2026-03-10T10:20:03.762 INFO:tasks.workunit.client.1.vm05.stdout:6/738: symlink dd/d36/d3f/dbd/lf0 0
2026-03-10T10:20:03.762 INFO:tasks.workunit.client.1.vm05.stdout:6/739: chown dd/d36/d3f/dbd/dd5 0 1
2026-03-10T10:20:03.763 INFO:tasks.workunit.client.1.vm05.stdout:6/740: write dd/d36/d3f/d12/d44/d2a/fb0 [564779,36760] 0
2026-03-10T10:20:03.770 INFO:tasks.workunit.client.1.vm05.stdout:7/755: creat d5/d1d/d29/d60/fee x:0 0 0
2026-03-10T10:20:03.772 INFO:tasks.workunit.client.0.vm02.stdout:4/865: write d1/d10/f71 [1592888,93004] 0
2026-03-10T10:20:03.789 INFO:tasks.workunit.client.0.vm02.stdout:3/736: dwrite d1/d6/f1b [0,4194304] 0
2026-03-10T10:20:03.803 INFO:tasks.workunit.client.1.vm05.stdout:0/727: mkdir d1/d2/d9/d31/d13/d15/d4e/df6 0
2026-03-10T10:20:03.803 INFO:tasks.workunit.client.1.vm05.stdout:0/728: readlink d1/d2/d39/d3d/l91 0
2026-03-10T10:20:03.807 INFO:tasks.workunit.client.0.vm02.stdout:7/737: dwrite d1/dc/d16/faa [4194304,4194304] 0
2026-03-10T10:20:03.815 INFO:tasks.workunit.client.0.vm02.stdout:0/769: dwrite d9/d18/d1a/d22/d24/d8e/d9b/fc5 [0,4194304] 0
2026-03-10T10:20:03.818 INFO:tasks.workunit.client.1.vm05.stdout:8/682: truncate d7/d14/d62/d90/dac/fc0 711291 0
2026-03-10T10:20:03.822 INFO:tasks.workunit.client.1.vm05.stdout:5/736: readlink da/d9a/dc7/lac 0
2026-03-10T10:20:03.823 INFO:tasks.workunit.client.1.vm05.stdout:5/737: dread - da/d63/fe1 zero size
2026-03-10T10:20:03.823 INFO:tasks.workunit.client.1.vm05.stdout:2/683: mknod db/d12/cdb 0
2026-03-10T10:20:03.825 INFO:tasks.workunit.client.1.vm05.stdout:4/603: write d1/d64/f8f [2540391,35353] 0
2026-03-10T10:20:03.827 INFO:tasks.workunit.client.1.vm05.stdout:2/684: dread db/d28/d4f/d59/f8d [0,4194304] 0
2026-03-10T10:20:03.828 INFO:tasks.workunit.client.1.vm05.stdout:1/817: fsync d4/d39/f7b 0
2026-03-10T10:20:03.839 INFO:tasks.workunit.client.1.vm05.stdout:6/741: dread - dd/d36/d3f/d12/d44/d2a/d3d/d48/fb2 zero size
2026-03-10T10:20:03.849 INFO:tasks.workunit.client.1.vm05.stdout:7/756: rmdir d5/d1d/d29 39
2026-03-10T10:20:03.850 INFO:tasks.workunit.client.1.vm05.stdout:6/742: dwrite dd/d36/d3f/d12/f35 [0,4194304] 0
2026-03-10T10:20:03.850 INFO:tasks.workunit.client.1.vm05.stdout:6/743: write dd/d36/d3f/d12/d44/d2a/fed [217593,129105] 0
2026-03-10T10:20:03.854 INFO:tasks.workunit.client.1.vm05.stdout:0/729: rename d1/d2/d9/d31/d13/d2f/d49/f5c to d1/d2/d9/d50/d9a/da0/ff7 0
2026-03-10T10:20:03.855 INFO:tasks.workunit.client.1.vm05.stdout:8/683: truncate d7/d2f/f7e 1098330 0
2026-03-10T10:20:03.858 INFO:tasks.workunit.client.1.vm05.stdout:5/738: truncate da/db/d28/f44 1184517 0
2026-03-10T10:20:03.864 INFO:tasks.workunit.client.0.vm02.stdout:9/707: mkdir da/d3c/d4c/db1/de4 0
2026-03-10T10:20:03.866 INFO:tasks.workunit.client.0.vm02.stdout:6/709: fdatasync d0/d8/d29/d2f/d4b/da5/d6f/fa2 0
2026-03-10T10:20:03.872 INFO:tasks.workunit.client.0.vm02.stdout:2/743: mkdir d0/d1a/d49/d5e/d65/dc4/de0/dfc 0
2026-03-10T10:20:03.874 INFO:tasks.workunit.client.1.vm05.stdout:3/746: mkdir dd/d15/d24/d2c/d107 0
2026-03-10T10:20:03.882 INFO:tasks.workunit.client.1.vm05.stdout:2/685: rename db/d1c/d40/l49 to db/d28/d4f/d8b/d9a/ldc 0
2026-03-10T10:20:03.892 INFO:tasks.workunit.client.0.vm02.stdout:1/771: creat d4/dc3/ff4 x:0 0 0
2026-03-10T10:20:03.892 INFO:tasks.workunit.client.1.vm05.stdout:1/818: dread d4/d39/f54 [0,4194304] 0
2026-03-10T10:20:03.892 INFO:tasks.workunit.client.1.vm05.stdout:0/730: unlink d1/d2/d9/d31/d13/d17/l2b 0
2026-03-10T10:20:03.894 INFO:tasks.workunit.client.0.vm02.stdout:4/866: readlink d1/d52/d53/dda/df7/l118 0
2026-03-10T10:20:03.905 INFO:tasks.workunit.client.0.vm02.stdout:0/770: creat d9/d34/d3d/d65/d89/dd3/da7/db7/de1/ff7 x:0 0 0
2026-03-10T10:20:03.909 INFO:tasks.workunit.client.1.vm05.stdout:1/819: chown d4/d3d/d6e/fc3 285 1
2026-03-10T10:20:03.918 INFO:tasks.workunit.client.0.vm02.stdout:5/882: fsync d1/db/d11/d84/d40/fd0 0
2026-03-10T10:20:03.918 INFO:tasks.workunit.client.1.vm05.stdout:5/739: creat da/d9a/daf/ded/ffa x:0 0 0
2026-03-10T10:20:03.920 INFO:tasks.workunit.client.0.vm02.stdout:1/772: unlink d4/f3a 0
2026-03-10T10:20:03.921 INFO:tasks.workunit.client.0.vm02.stdout:8/724: getdents d1/d2 0
2026-03-10T10:20:03.926 INFO:tasks.workunit.client.1.vm05.stdout:0/731: fdatasync d1/d2/d9/d31/d12/d20/f2e 0
2026-03-10T10:20:03.931 INFO:tasks.workunit.client.0.vm02.stdout:0/771: symlink d9/d18/d1a/d3c/lf8 0
2026-03-10T10:20:03.940 INFO:tasks.workunit.client.0.vm02.stdout:6/710: rename d0/d8/d29/d2f/d4b/l91 to d0/d8/d29/dce/le3 0
2026-03-10T10:20:03.941 INFO:tasks.workunit.client.1.vm05.stdout:0/732: mkdir d1/d2/d39/d6e/dc0/df8 0
2026-03-10T10:20:03.943 INFO:tasks.workunit.client.1.vm05.stdout:5/740: truncate da/db/d26/d70/fcc 485715 0
2026-03-10T10:20:03.951 INFO:tasks.workunit.client.1.vm05.stdout:2/686: link db/d12/d74/c8c db/d61/d67/cdd 0
2026-03-10T10:20:03.952 INFO:tasks.workunit.client.0.vm02.stdout:1/773: dread d4/da/d1a/d47/d65/fba [0,4194304] 0
2026-03-10T10:20:03.954 INFO:tasks.workunit.client.1.vm05.stdout:5/741: rename da/db/d28/d32/lbb to da/d9a/daf/lfb 0
2026-03-10T10:20:03.956 INFO:tasks.workunit.client.1.vm05.stdout:2/687: unlink db/d28/d4f/d59/fbe 0
2026-03-10T10:20:03.956 INFO:tasks.workunit.client.0.vm02.stdout:3/737: creat d1/ff5 x:0 0 0
2026-03-10T10:20:03.960 INFO:tasks.workunit.client.1.vm05.stdout:0/733: fdatasync d1/d2/d39/f69 0
2026-03-10T10:20:03.971 INFO:tasks.workunit.client.0.vm02.stdout:0/772: creat d9/d34/ff9 x:0 0 0
2026-03-10T10:20:03.971 INFO:tasks.workunit.client.0.vm02.stdout:6/711: mkdir d0/d8/d29/d6d/d96/de4 0
2026-03-10T10:20:03.971 INFO:tasks.workunit.client.1.vm05.stdout:5/742: read - da/db/fad zero size
2026-03-10T10:20:03.971 INFO:tasks.workunit.client.1.vm05.stdout:5/743: stat da/d9a/dc7/f95 0
2026-03-10T10:20:03.971 INFO:tasks.workunit.client.1.vm05.stdout:2/688: dread db/d12/f3b [0,4194304] 0
2026-03-10T10:20:03.973 INFO:tasks.workunit.client.0.vm02.stdout:3/738: dread d1/d8/d21/d73/d78/d79/fbd [0,4194304] 0
2026-03-10T10:20:03.997 INFO:tasks.workunit.client.1.vm05.stdout:9/658: write d0/d1/d13/d55/fc9 [175832,78677] 0
2026-03-10T10:20:04.012 INFO:tasks.workunit.client.1.vm05.stdout:7/757: dwrite d5/d1d/d20/d2d/d5d/f67 [0,4194304] 0
2026-03-10T10:20:04.013 INFO:tasks.workunit.client.1.vm05.stdout:7/758: stat d5/d1d 0
2026-03-10T10:20:04.013 INFO:tasks.workunit.client.0.vm02.stdout:1/774: creat d4/d2c/d53/da6/ff5 x:0 0 0
2026-03-10T10:20:04.017 INFO:tasks.workunit.client.1.vm05.stdout:8/684: dwrite d7/d14/d24/d3f/d6a/d8a/d96/fa2 [0,4194304] 0
2026-03-10T10:20:04.018 INFO:tasks.workunit.client.0.vm02.stdout:6/712: fsync d0/d8/d29/d94/fbf 0
2026-03-10T10:20:04.018 INFO:tasks.workunit.client.1.vm05.stdout:5/744: symlink da/db/lfc 0
2026-03-10T10:20:04.033 INFO:tasks.workunit.client.0.vm02.stdout:3/739: dread - d1/fa1 zero size
2026-03-10T10:20:04.044 INFO:tasks.workunit.client.0.vm02.stdout:1/775: rename d4/da/d27/d38/d3c/l55 to d4/da/d1a/d5b/d93/de8/lf6 0
2026-03-10T10:20:04.053 INFO:tasks.workunit.client.1.vm05.stdout:8/685: read - d7/d14/d15/faa zero size
2026-03-10T10:20:04.057 INFO:tasks.workunit.client.0.vm02.stdout:6/713: mkdir d0/d8/d29/d2f/d50/d7e/db2/dbb/de5 0
2026-03-10T10:20:04.063 INFO:tasks.workunit.client.0.vm02.stdout:5/883: getdents d1/db 0
2026-03-10T10:20:04.070 INFO:tasks.workunit.client.1.vm05.stdout:9/659: mkdir d0/d1/d13/de/ddf 0
2026-03-10T10:20:04.071 INFO:tasks.workunit.client.1.vm05.stdout:4/604: write d1/d3/f4a [1122189,60519] 0
2026-03-10T10:20:04.075 INFO:tasks.workunit.client.1.vm05.stdout:3/747: write dd/d15/d24/f42 [2656486,55461] 0
2026-03-10T10:20:04.080 INFO:tasks.workunit.client.1.vm05.stdout:8/686: creat d7/d14/d15/d3b/fda x:0 0 0
2026-03-10T10:20:04.085 INFO:tasks.workunit.client.0.vm02.stdout:7/738: dwrite d1/dc/d99/fc8 [0,4194304] 0
2026-03-10T10:20:04.088 INFO:tasks.workunit.client.0.vm02.stdout:6/714: fsync d0/d8/d29/d94/fa9 0
2026-03-10T10:20:04.093 INFO:tasks.workunit.client.1.vm05.stdout:6/744: dwrite dd/f14 [4194304,4194304] 0
2026-03-10T10:20:04.093 INFO:tasks.workunit.client.0.vm02.stdout:5/884: readlink d1/db/d11/d62/d67/lc8 0
2026-03-10T10:20:04.104 INFO:tasks.workunit.client.0.vm02.stdout:3/740: rename d1/d8/d21/f3c to d1/d8/d21/df4/ff6 0
2026-03-10T10:20:04.107 INFO:tasks.workunit.client.0.vm02.stdout:1/776: mknod d4/da/d1a/cf7 0
2026-03-10T10:20:04.110 INFO:tasks.workunit.client.0.vm02.stdout:0/773: link d9/d34/d3d/d65/cda d9/d18/d1a/d22/d24/cfa 0
2026-03-10T10:20:04.115 INFO:tasks.workunit.client.0.vm02.stdout:6/715: creat d0/d8/d8c/fe6 x:0 0 0
2026-03-10T10:20:04.128 INFO:tasks.workunit.client.0.vm02.stdout:7/739: rename d1/dc/d10/f27 to d1/d1b/d8f/d67/da7/fec 0
2026-03-10T10:20:04.128 INFO:tasks.workunit.client.0.vm02.stdout:3/741: stat d1/d8/d21/f4c 0
2026-03-10T10:20:04.131 INFO:tasks.workunit.client.0.vm02.stdout:0/774: dread - d9/d34/d3d/d65/d89/fcd zero size
2026-03-10T10:20:04.138 INFO:tasks.workunit.client.0.vm02.stdout:1/777: dread d4/d2c/fc7 [0,4194304] 0
2026-03-10T10:20:04.138 INFO:tasks.workunit.client.0.vm02.stdout:2/744: dwrite d0/d1a/d49/deb/fd2 [0,4194304] 0
2026-03-10T10:20:04.158 INFO:tasks.workunit.client.0.vm02.stdout:0/775: dread - d9/d18/d1a/d22/d24/d8e/fce zero size
2026-03-10T10:20:04.172 INFO:tasks.workunit.client.0.vm02.stdout:4/867: dwrite d1/d52/d53/fbb [0,4194304] 0
2026-03-10T10:20:04.174 INFO:tasks.workunit.client.0.vm02.stdout:1/778: unlink d4/da/d1a/cf7 0
2026-03-10T10:20:04.180 INFO:tasks.workunit.client.0.vm02.stdout:2/745: symlink d0/d71/lfd 0
2026-03-10T10:20:04.182 INFO:tasks.workunit.client.0.vm02.stdout:9/708: write da/f13 [4404645,38765] 0
2026-03-10T10:20:04.191 INFO:tasks.workunit.client.1.vm05.stdout:4/605: read - d1/d31/dc/f3d zero size
2026-03-10T10:20:04.191 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:03 vm05.local ceph-mon[59051]: pgmap v13: 65 pgs: 65 active+clean; 2.9 GiB data, 10 GiB used, 110 GiB / 120 GiB avail; 32 MiB/s rd, 68 MiB/s wr, 182 op/s
2026-03-10T10:20:04.191 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:03 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq'
2026-03-10T10:20:04.192 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:03 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq'
2026-03-10T10:20:04.192 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:03 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:20:04.192 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:03 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:20:04.192 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:03 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq'
2026-03-10T10:20:04.192 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:03 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:20:04.192 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:03 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mgr fail", "who": "vm05.coparq"}]: dispatch
2026-03-10T10:20:04.194 INFO:tasks.workunit.client.0.vm02.stdout:3/742: rmdir d1/d58 39
2026-03-10T10:20:04.197 INFO:tasks.workunit.client.0.vm02.stdout:3/743: readlink d1/d8/d86/db1/lda 0
2026-03-10T10:20:04.199 INFO:tasks.workunit.client.0.vm02.stdout:1/779: fdatasync d4/d1b/f4c 0
2026-03-10T10:20:04.199 INFO:tasks.workunit.client.0.vm02.stdout:1/780: chown d4/da/d1a/d47/d78/cdf 149 1
2026-03-10T10:20:04.200 INFO:tasks.workunit.client.0.vm02.stdout:1/781: chown d4/da/d27/d38/f3b 28623707 1
2026-03-10T10:20:04.200 INFO:tasks.workunit.client.0.vm02.stdout:1/782: chown d4/da/d27/d38/f5e 64771190 1
2026-03-10T10:20:04.203 INFO:tasks.workunit.client.1.vm05.stdout:6/745: fsync dd/d36/d3f/d12/d44/d30/d4a/fc9 0
2026-03-10T10:20:04.205 INFO:tasks.workunit.client.0.vm02.stdout:0/776: sync
2026-03-10T10:20:04.205 INFO:tasks.workunit.client.0.vm02.stdout:4/868: sync
2026-03-10T10:20:04.217 INFO:tasks.workunit.client.0.vm02.stdout:1/783: dread - d4/da/d1a/d47/d78/fcf zero size
2026-03-10T10:20:04.217 INFO:tasks.workunit.client.0.vm02.stdout:2/746: creat d0/d1a/d49/d5e/d65/dc4/dfa/df1/ffe x:0 0 0
2026-03-10T10:20:04.217 INFO:tasks.workunit.client.0.vm02.stdout:0/777: rmdir d9/d18/d1a/d22/d24/d80 39
2026-03-10T10:20:04.218 INFO:tasks.workunit.client.0.vm02.stdout:1/784: dread - d4/d2c/d53/fc5 zero size
2026-03-10T10:20:04.218 INFO:tasks.workunit.client.0.vm02.stdout:2/747: readlink d0/d1a/d49/d5e/d65/dc4/dfa/d80/l8d 0
2026-03-10T10:20:04.226 INFO:tasks.workunit.client.0.vm02.stdout:9/709: dread da/d3c/d4c/d38/f84 [0,4194304] 0
2026-03-10T10:20:04.229 INFO:tasks.workunit.client.1.vm05.stdout:1/820: truncate d4/d79/d83/dc5/dcb/fde 3714600 0
2026-03-10T10:20:04.237 INFO:tasks.workunit.client.0.vm02.stdout:4/869: rename d1/d75/ddd/d10e/l84 to d1/d10/dfc/l123 0
2026-03-10T10:20:04.243 INFO:tasks.workunit.client.1.vm05.stdout:6/746: rmdir dd/d36/d3f/d12/d44/d2a/d7f 39
2026-03-10T10:20:04.244 INFO:tasks.workunit.client.0.vm02.stdout:4/870: sync
2026-03-10T10:20:04.257 INFO:tasks.workunit.client.0.vm02.stdout:8/725: dwrite d1/d1c/d43/d6a/d7c/da6/fb1 [0,4194304] 0
2026-03-10T10:20:04.268 INFO:tasks.workunit.client.1.vm05.stdout:0/734: write d1/d2/d39/d3d/f64 [1360857,91631] 0
2026-03-10T10:20:04.269 INFO:tasks.workunit.client.1.vm05.stdout:2/689: write db/d28/d4f/f68 [245438,15006] 0
2026-03-10T10:20:04.269 INFO:tasks.workunit.client.1.vm05.stdout:5/745: link da/db/d26/d70/l90 da/db/d26/d70/d72/lfd 0
2026-03-10T10:20:04.274 INFO:tasks.workunit.client.1.vm05.stdout:8/687: rmdir d7/d2f/d57/dcc 0
2026-03-10T10:20:04.275 INFO:tasks.workunit.client.1.vm05.stdout:6/747: truncate dd/d36/d3f/d12/d44/d30/f8d 690929 0
2026-03-10T10:20:04.277 INFO:tasks.workunit.client.1.vm05.stdout:0/735: read - d1/d2/d39/d3d/d9f/fc2 zero size
2026-03-10T10:20:04.279 INFO:tasks.workunit.client.0.vm02.stdout:3/744: creat d1/d20/ff7 x:0 0 0
2026-03-10T10:20:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:03 vm02.local ceph-mon[50200]: pgmap v13: 65 pgs: 65 active+clean; 2.9 GiB data, 10 GiB used, 110 GiB / 120 GiB avail; 32 MiB/s rd, 68 MiB/s wr, 182 op/s
2026-03-10T10:20:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:03 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq'
2026-03-10T10:20:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:03 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq'
2026-03-10T10:20:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:03 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:20:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:03 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:20:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:03 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq'
2026-03-10T10:20:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:03 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:20:04.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:03 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd=[{"prefix": "mgr fail", "who": "vm05.coparq"}]: dispatch
2026-03-10T10:20:04.280 INFO:tasks.workunit.client.1.vm05.stdout:0/736: dwrite d1/d2/d9/fc7 [0,4194304] 0
2026-03-10T10:20:04.283 INFO:tasks.workunit.client.0.vm02.stdout:8/726: creat d1/d1c/d43/d6a/da8/d56/fd9 x:0 0 0
2026-03-10T10:20:04.284 INFO:tasks.workunit.client.1.vm05.stdout:4/606: rename d1/d3/f26 to d1/fc7 0
2026-03-10T10:20:04.285 INFO:tasks.workunit.client.1.vm05.stdout:8/688: mknod d7/d14/d24/d3f/d4f/cdb 0
2026-03-10T10:20:04.285 INFO:tasks.workunit.client.0.vm02.stdout:2/748: creat d0/d10/dee/fff x:0 0 0
2026-03-10T10:20:04.289 INFO:tasks.workunit.client.0.vm02.stdout:9/710: mkdir da/de5 0
2026-03-10T10:20:04.292 INFO:tasks.workunit.client.1.vm05.stdout:0/737: fdatasync d1/d2/d9/d31/d13/d15/d4e/f89 0
2026-03-10T10:20:04.293 INFO:tasks.workunit.client.1.vm05.stdout:5/746: mkdir da/d9a/dc7/db4/dfe 0
2026-03-10T10:20:04.294 INFO:tasks.workunit.client.1.vm05.stdout:6/748: sync
2026-03-10T10:20:04.295 INFO:tasks.workunit.client.1.vm05.stdout:4/607: dread - d1/d31/d76/faf zero size
2026-03-10T10:20:04.295 INFO:tasks.workunit.client.0.vm02.stdout:0/778: creat d9/d18/d1a/d22/d24/ffb x:0 0 0
2026-03-10T10:20:04.297 INFO:tasks.workunit.client.1.vm05.stdout:8/689: creat d7/d14/d3a/d49/d65/fdc x:0 0 0
2026-03-10T10:20:04.297 INFO:tasks.workunit.client.0.vm02.stdout:2/749: mknod d0/d1a/d49/d5e/d65/dc4/dfa/dd3/de8/c100 0
2026-03-10T10:20:04.306 INFO:tasks.workunit.client.0.vm02.stdout:9/711: dread da/d3c/d4c/d38/fd5 [0,4194304] 0
2026-03-10T10:20:04.306 INFO:tasks.workunit.client.0.vm02.stdout:9/712: readlink da/d3c/d4c/d2c/d34/d35/l55 0
2026-03-10T10:20:04.307 INFO:tasks.workunit.client.1.vm05.stdout:0/738: creat d1/d2/d9/d31/d13/da2/dab/dce/ff9 x:0 0 0
2026-03-10T10:20:04.310 INFO:tasks.workunit.client.0.vm02.stdout:8/727: mkdir d1/d1c/d24/dad/dbe/dda 0
2026-03-10T10:20:04.311 INFO:tasks.workunit.client.1.vm05.stdout:7/759: write d5/dd/f73 [845904,58131] 0
2026-03-10T10:20:04.313 INFO:tasks.workunit.client.1.vm05.stdout:7/760: chown d5/d1d/d20/d2d/d80/lc3 679216534 1
2026-03-10T10:20:04.314 INFO:tasks.workunit.client.0.vm02.stdout:2/750: sync
2026-03-10T10:20:04.314 INFO:tasks.workunit.client.0.vm02.stdout:2/751: chown d0/d8c 126 1
2026-03-10T10:20:04.316 INFO:tasks.workunit.client.0.vm02.stdout:0/779: rmdir d9/d34/d3d/d65/da2 39
2026-03-10T10:20:04.317 INFO:tasks.workunit.client.0.vm02.stdout:3/745: link d1/d20/d52/lef d1/d8/d44/deb/lf8 0
2026-03-10T10:20:04.320 INFO:tasks.workunit.client.0.vm02.stdout:3/746: dwrite d1/d20/d52/dd3/fe7 [0,4194304] 0
2026-03-10T10:20:04.327 INFO:tasks.workunit.client.0.vm02.stdout:9/713: rmdir da/d3c/d4c/d2c/d34/dc2 39
2026-03-10T10:20:04.328 INFO:tasks.workunit.client.1.vm05.stdout:4/608: mknod d1/d64/da9/cc8 0
2026-03-10T10:20:04.330 INFO:tasks.workunit.client.1.vm05.stdout:6/749: read f2 [1425362,129910] 0
2026-03-10T10:20:04.336 INFO:tasks.workunit.client.0.vm02.stdout:9/714: sync
2026-03-10T10:20:04.346 INFO:tasks.workunit.client.1.vm05.stdout:5/747: dread - da/db/de9/fd2 zero size
2026-03-10T10:20:04.359 INFO:tasks.workunit.client.0.vm02.stdout:3/747: rename d1/d6/f36 to d1/d8/d44/ff9 0
2026-03-10T10:20:04.375 INFO:tasks.workunit.client.1.vm05.stdout:6/750: creat dd/d36/ff1 x:0 0 0
2026-03-10T10:20:04.376 INFO:tasks.workunit.client.0.vm02.stdout:2/752: mkdir d0/d71/dfb/d101 0
2026-03-10T10:20:04.378 INFO:tasks.workunit.client.1.vm05.stdout:0/739: mknod d1/d2/d9/d31/d12/d20/dbe/df1/cfa 0
2026-03-10T10:20:04.382 INFO:tasks.workunit.client.0.vm02.stdout:6/716: dwrite d0/d8/d29/d94/fa9 [0,4194304] 0
2026-03-10T10:20:04.387 INFO:tasks.workunit.client.1.vm05.stdout:4/609: rename d1/d3/c44 to d1/d31/d76/dac/db8/dbf/cc9 0
2026-03-10T10:20:04.388 INFO:tasks.workunit.client.1.vm05.stdout:4/610: chown d1/d64/c7e 1544 1
2026-03-10T10:20:04.396 INFO:tasks.workunit.client.1.vm05.stdout:8/690: link d7/ld4 d7/d14/d24/d3f/d6a/db0/ldd 0
2026-03-10T10:20:04.397 INFO:tasks.workunit.client.0.vm02.stdout:3/748: mkdir d1/d8/d21/d73/d78/d84/dfa 0
2026-03-10T10:20:04.403 INFO:tasks.workunit.client.1.vm05.stdout:0/740: fdatasync d1/d2/d39/f69 0
2026-03-10T10:20:04.410 INFO:tasks.workunit.client.1.vm05.stdout:5/748: mknod da/d96/df5/cff 0
2026-03-10T10:20:04.421 INFO:tasks.workunit.client.1.vm05.stdout:7/761: rename d5/d1d/d20/d35/f37 to d5/d1d/d29/d3e/d8c/fef 0
2026-03-10T10:20:04.423 INFO:tasks.workunit.client.0.vm02.stdout:6/717: unlink d0/d8/l1d 0
2026-03-10T10:20:04.426 INFO:tasks.workunit.client.1.vm05.stdout:4/611: truncate d1/d31/d76/f95 674 0
2026-03-10T10:20:04.428 INFO:tasks.workunit.client.0.vm02.stdout:5/885: dwrite d1/db/d11/d84/fb2 [0,4194304] 0
2026-03-10T10:20:04.436 INFO:tasks.workunit.client.0.vm02.stdout:7/740: write d1/d1b/d49/fc4 [796578,101542] 0
2026-03-10T10:20:04.441 INFO:tasks.workunit.client.1.vm05.stdout:6/751: creat dd/d36/d3f/d12/d44/d30/d4a/d6e/dc3/ff2 x:0 0 0
2026-03-10T10:20:04.450 INFO:tasks.workunit.client.0.vm02.stdout:5/886: sync
2026-03-10T10:20:04.450 INFO:tasks.workunit.client.1.vm05.stdout:3/748: write dd/d15/fdf [3362750,89304] 0
2026-03-10T10:20:04.452 INFO:tasks.workunit.client.1.vm05.stdout:9/660: dwrite d0/df/d11/f84 [0,4194304] 0
2026-03-10T10:20:04.461 INFO:tasks.workunit.client.0.vm02.stdout:9/715: rename da/d3c/d4c/d38/d4a/cdb to da/d3c/d4c/d2c/d34/ce6 0
2026-03-10T10:20:04.461 INFO:tasks.workunit.client.1.vm05.stdout:4/612: unlink d1/d31/dc/f2a 0
2026-03-10T10:20:04.462 INFO:tasks.workunit.client.1.vm05.stdout:8/691: symlink d7/d14/d62/d90/dd3/lde 0
2026-03-10T10:20:04.467 INFO:tasks.workunit.client.0.vm02.stdout:2/753: creat d0/d1a/d49/d5e/f102 x:0 0 0
2026-03-10T10:20:04.467 INFO:tasks.workunit.client.1.vm05.stdout:5/749: mknod da/d9a/dc7/db4/c100 0
2026-03-10T10:20:04.470 INFO:tasks.workunit.client.0.vm02.stdout:1/785: write d4/da/d27/f6a [2090494,125048] 0
2026-03-10T10:20:04.472 INFO:tasks.workunit.client.1.vm05.stdout:7/762: dread d5/dd/f62 [0,4194304] 0
2026-03-10T10:20:04.473 INFO:tasks.workunit.client.0.vm02.stdout:2/754: read d0/f44 [53667,16727] 0
2026-03-10T10:20:04.483 INFO:tasks.workunit.client.1.vm05.stdout:9/661: rmdir d0/d1 39
2026-03-10T10:20:04.483 INFO:tasks.workunit.client.1.vm05.stdout:0/741: rename d1/d2/d9/d31/d13/da2/dab/ded/ff4 to d1/d2/dc6/de7/ffb 0
2026-03-10T10:20:04.489 INFO:tasks.workunit.client.1.vm05.stdout:4/613: creat d1/d31/d76/dac/db8/dbf/fca x:0 0 0
2026-03-10T10:20:04.489 INFO:tasks.workunit.client.0.vm02.stdout:4/871: write d1/d52/d53/f9b [948883,48210] 0
2026-03-10T10:20:04.489 INFO:tasks.workunit.client.0.vm02.stdout:4/872: write d1/d10/f71 [2097147,119917] 0
2026-03-10T10:20:04.490 INFO:tasks.workunit.client.1.vm05.stdout:8/692: truncate d7/d14/d15/d3b/f73 367521 0
2026-03-10T10:20:04.491 INFO:tasks.workunit.client.0.vm02.stdout:9/716: creat da/de5/fe7 x:0 0 0
2026-03-10T10:20:04.494 INFO:tasks.workunit.client.0.vm02.stdout:2/755: truncate d0/d10/f19 1696055 0
2026-03-10T10:20:04.495 INFO:tasks.workunit.client.0.vm02.stdout:2/756: write d0/d1a/d49/fc8 [2973464,33776] 0
2026-03-10T10:20:04.495 INFO:tasks.workunit.client.0.vm02.stdout:2/757: stat d0/d10/f6b 0
2026-03-10T10:20:04.500 INFO:tasks.workunit.client.1.vm05.stdout:7/763: mknod d5/d1d/d29/d3e/d8c/d7f/cf0 0
2026-03-10T10:20:04.501 INFO:tasks.workunit.client.0.vm02.stdout:4/873: creat d1/d10/d88/db2/f124 x:0 0 0
2026-03-10T10:20:04.501 INFO:tasks.workunit.client.1.vm05.stdout:7/764: write d5/d1d/d29/d3e/d8c/d96/f9e [2492301,82673] 0
2026-03-10T10:20:04.510 INFO:tasks.workunit.client.1.vm05.stdout:0/742: dread d1/d2/d9/d31/d13/d2f/f33 [0,4194304] 0
2026-03-10T10:20:04.510 INFO:tasks.workunit.client.1.vm05.stdout:0/743: stat d1/d2/d9/d31/d13/d17/c3a 0
2026-03-10T10:20:04.515 INFO:tasks.workunit.client.0.vm02.stdout:9/717: truncate da/d3c/d4c/d38/d82/d8c/fa8 787357 0
2026-03-10T10:20:04.605 INFO:tasks.workunit.client.1.vm05.stdout:2/690: write db/d61/f92 [593263,83318] 0
2026-03-10T10:20:04.611 INFO:tasks.workunit.client.1.vm05.stdout:2/691: sync
2026-03-10T10:20:04.611 INFO:tasks.workunit.client.1.vm05.stdout:4/614: truncate d1/d31/f7a 4411884 0
2026-03-10T10:20:04.614 INFO:tasks.workunit.client.0.vm02.stdout:7/741: getdents d1/d1b/d8e 0
2026-03-10T10:20:04.625 INFO:tasks.workunit.client.0.vm02.stdout:4/874: mknod d1/de8/d109/c125 0
2026-03-10T10:20:04.627 INFO:tasks.workunit.client.0.vm02.stdout:9/718: truncate da/d3c/d4c/d2c/d34/f83 228017 0
2026-03-10T10:20:04.630 INFO:tasks.workunit.client.1.vm05.stdout:1/821: dwrite d4/df/d1c/d92/f9e [0,4194304] 0
2026-03-10T10:20:04.640 INFO:tasks.workunit.client.1.vm05.stdout:7/765: chown d5/d1d/d29/ccb 42 1
2026-03-10T10:20:04.644 INFO:tasks.workunit.client.0.vm02.stdout:7/742: dread d1/d1b/f43 [0,4194304] 0
2026-03-10T10:20:04.646 INFO:tasks.workunit.client.1.vm05.stdout:9/662: fdatasync d0/f45 0
2026-03-10T10:20:04.661 INFO:tasks.workunit.client.1.vm05.stdout:2/692: dread - db/d12/fb2 zero size
2026-03-10T10:20:04.664 INFO:tasks.workunit.client.1.vm05.stdout:2/693: truncate db/d61/d67/f77 9039710 0
2026-03-10T10:20:04.664 INFO:tasks.workunit.client.0.vm02.stdout:8/728: write d1/d1c/d43/d6a/da8/d56/f81 [4728252,102262] 0
2026-03-10T10:20:04.664 INFO:tasks.workunit.client.0.vm02.stdout:8/729: readlink d1/d1c/d43/laa 0
2026-03-10T10:20:04.664 INFO:tasks.workunit.client.0.vm02.stdout:8/730: write d1/d1c/d43/f7e [5464998,59620] 0
2026-03-10T10:20:04.665 INFO:tasks.workunit.client.0.vm02.stdout:8/731: dread d1/d1c/d43/d6a/da8/f44 [0,4194304] 0
2026-03-10T10:20:04.668 INFO:tasks.workunit.client.0.vm02.stdout:4/875: dread d1/d75/ddd/d10e/d5e/d78/f3f [0,4194304] 0
2026-03-10T10:20:04.668 INFO:tasks.workunit.client.0.vm02.stdout:4/876: dread - d1/d32/fb3 zero size
2026-03-10T10:20:04.680 INFO:tasks.workunit.client.0.vm02.stdout:7/743: creat d1/d1b/d8f/dad/d7e/dba/dea/fed x:0 0 0
2026-03-10T10:20:04.689 INFO:tasks.workunit.client.0.vm02.stdout:3/749: getdents d1/d8/d44 0
2026-03-10T10:20:04.707 INFO:tasks.workunit.client.1.vm05.stdout:1/822: mknod d4/df/d76/cf4 0
2026-03-10T10:20:04.712 INFO:tasks.workunit.client.1.vm05.stdout:5/750: creat da/db/d26/f101 x:0 0 0
2026-03-10T10:20:04.715 INFO:tasks.workunit.client.0.vm02.stdout:3/750: truncate d1/fb6 815536 0
2026-03-10T10:20:04.721 INFO:tasks.workunit.client.0.vm02.stdout:0/780: dwrite d9/d18/d1a/d22/d24/d80/fe0 [0,4194304] 0
2026-03-10T10:20:04.723 INFO:tasks.workunit.client.0.vm02.stdout:7/744: dread d1/d1b/d8f/f59 [0,4194304] 0
2026-03-10T10:20:04.723 INFO:tasks.workunit.client.0.vm02.stdout:7/745: chown d1/dc/d16/f1f 456 1
2026-03-10T10:20:04.740 INFO:tasks.workunit.client.0.vm02.stdout:3/751: write d1/d8/d44/ff9 [3911678,62227] 0
2026-03-10T10:20:04.770 INFO:tasks.workunit.client.0.vm02.stdout:5/887: dwrite d1/db/d11/f47 [0,4194304] 0
2026-03-10T10:20:04.780 INFO:tasks.workunit.client.0.vm02.stdout:1/786: dwrite d4/d2c/fac [0,4194304] 0
2026-03-10T10:20:04.795 INFO:tasks.workunit.client.1.vm05.stdout:6/752: dwrite dd/d36/d3f/f6f [0,4194304] 0
2026-03-10T10:20:04.811 INFO:tasks.workunit.client.1.vm05.stdout:5/751: symlink da/db/d28/d8a/de3/l102 0
2026-03-10T10:20:04.815 INFO:tasks.workunit.client.1.vm05.stdout:7/766: link d5/d17/l5f d5/d26/d9c/de7/lf1 0
2026-03-10T10:20:04.822 INFO:tasks.workunit.client.1.vm05.stdout:0/744: truncate d1/d2/d9/d31/d12/fc3 377193 0
2026-03-10T10:20:04.823 INFO:tasks.workunit.client.1.vm05.stdout:0/745: dread - d1/d2/d9/d31/d13/da2/dab/dce/ff9 zero size
2026-03-10T10:20:04.838 INFO:tasks.workunit.client.1.vm05.stdout:5/752: creat da/d96/df5/f103 x:0 0 0
2026-03-10T10:20:04.841 INFO:tasks.workunit.client.1.vm05.stdout:5/753: dwrite da/db/d28/fd7 [0,4194304] 0
2026-03-10T10:20:04.843 INFO:tasks.workunit.client.1.vm05.stdout:0/746: mkdir d1/d2/d9/d31/d13/d15/d4e/d8a/dfc 0
2026-03-10T10:20:04.851 INFO:tasks.workunit.client.1.vm05.stdout:5/754: rmdir da/db/d28/d6e 39
2026-03-10T10:20:04.870 INFO:tasks.workunit.client.1.vm05.stdout:7/767: getdents d5/d1d/d20/d91/da7 0
2026-03-10T10:20:04.871 INFO:tasks.workunit.client.0.vm02.stdout:0/781: unlink d9/d34/d3d/d67/l75 0
2026-03-10T10:20:04.872 INFO:tasks.workunit.client.0.vm02.stdout:0/782: read - d9/d34/d3d/d65/d89/fcd zero size
2026-03-10T10:20:04.882 INFO:tasks.workunit.client.0.vm02.stdout:3/752: creat d1/d8/d21/d73/ffb x:0 0 0
2026-03-10T10:20:04.882 INFO:tasks.workunit.client.0.vm02.stdout:3/753: stat d1/d8/d21/d7d/l9e 0
2026-03-10T10:20:04.896 INFO:tasks.workunit.client.0.vm02.stdout:1/787: unlink d4/d2c/fac 0
2026-03-10T10:20:04.897 INFO:tasks.workunit.client.0.vm02.stdout:0/783: fdatasync d9/d34/d3d/d67/fc3 0
2026-03-10T10:20:04.899 INFO:tasks.workunit.client.0.vm02.stdout:3/754: mknod d1/d8/d86/db1/cfc 0
2026-03-10T10:20:04.900 INFO:tasks.workunit.client.0.vm02.stdout:5/888: dread d1/db/f96 [0,4194304] 0
2026-03-10T10:20:04.901 INFO:tasks.workunit.client.0.vm02.stdout:6/718: read - d0/d8/d29/d2f/d50/d7e/db2/dbb/fd1 zero size
2026-03-10T10:20:04.901 INFO:tasks.workunit.client.0.vm02.stdout:5/889: chown d1/db/d11/d1a/fdc 54376 1
2026-03-10T10:20:04.903 INFO:tasks.workunit.client.0.vm02.stdout:1/788: truncate d4/da/d1a/d5b/f9f 4849855 0
2026-03-10T10:20:04.909 INFO:tasks.workunit.client.0.vm02.stdout:5/890: read d1/db/d11/d62/fbf [101380,107687] 0
2026-03-10T10:20:04.911 INFO:tasks.workunit.client.0.vm02.stdout:1/789: unlink d4/da/f71 0
2026-03-10T10:20:04.912 INFO:tasks.workunit.client.0.vm02.stdout:1/790: readlink d4/da/d27/d38/l42 0
2026-03-10T10:20:04.923 INFO:tasks.workunit.client.0.vm02.stdout:5/891: creat d1/db/d11/d13/d28/d37/dce/f130 x:0 0 0
2026-03-10T10:20:04.924 INFO:tasks.workunit.client.0.vm02.stdout:1/791: truncate d4/da/f12 2344585 0
2026-03-10T10:20:04.929 INFO:tasks.workunit.client.0.vm02.stdout:0/784: getdents d9/d18/d1a/d22/d24/d80/d49 0
2026-03-10T10:20:04.932 INFO:tasks.workunit.client.0.vm02.stdout:0/785: dwrite d9/d18/d1a/d3c/f92 [4194304,4194304] 0
2026-03-10T10:20:04.938 INFO:tasks.workunit.client.0.vm02.stdout:0/786: mknod d9/d18/d1a/cfc 0
2026-03-10T10:20:04.943 INFO:tasks.workunit.client.0.vm02.stdout:0/787: creat d9/d34/d3d/d65/d89/dd3/da7/ffd x:0 0 0
2026-03-10T10:20:05.038 INFO:tasks.workunit.client.0.vm02.stdout:2/758: dwrite d0/f36 [4194304,4194304] 0
2026-03-10T10:20:05.045 INFO:tasks.workunit.client.0.vm02.stdout:2/759: truncate d0/d10/d81/f9b 2930281 0
2026-03-10T10:20:05.046 INFO:tasks.workunit.client.0.vm02.stdout:2/760: write d0/d10/ff6 [931201,50662] 0
2026-03-10T10:20:05.052 INFO:tasks.workunit.client.0.vm02.stdout:2/761: truncate d0/d1a/d49/d5e/d65/f9e 734607 0
2026-03-10T10:20:05.071 INFO:tasks.workunit.client.1.vm05.stdout:8/693: dwrite d7/d2f/fb4 [0,4194304] 0
2026-03-10T10:20:05.072 INFO:tasks.workunit.client.1.vm05.stdout:8/694: chown d7/d14/d24/f34 30215615 1
2026-03-10T10:20:05.074 INFO:tasks.workunit.client.1.vm05.stdout:8/695: creat d7/d14/d15/da7/fdf x:0 0 0
2026-03-10T10:20:05.083 INFO:tasks.workunit.client.0.vm02.stdout:4/877: write d1/d32/fb3 [327349,8340] 0
2026-03-10T10:20:05.085 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:04 vm02.local ceph-mon[50200]: osdmap e42: 6 total, 6 up, 6 in
2026-03-10T10:20:05.085 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:04 vm02.local ceph-mon[50200]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd='[{"prefix": "mgr fail", "who": "vm05.coparq"}]': finished
2026-03-10T10:20:05.085 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:04 vm02.local ceph-mon[50200]: mgrmap e26: vm02.zmavgl(active, starting, since 0.118506s)
2026-03-10T10:20:05.086 INFO:tasks.workunit.client.1.vm05.stdout:4/615: creat d1/d3/fcb x:0 0 0
2026-03-10T10:20:05.087 INFO:tasks.workunit.client.0.vm02.stdout:4/878: creat d1/d75/ddd/d10e/d117/f126 x:0 0 0
2026-03-10T10:20:05.088 INFO:tasks.workunit.client.0.vm02.stdout:4/879: fsync d1/d52/d53/fbb 0
2026-03-10T10:20:05.098 INFO:tasks.workunit.client.1.vm05.stdout:4/616: dread d1/fc7 [0,4194304] 0
2026-03-10T10:20:05.112 INFO:tasks.workunit.client.1.vm05.stdout:4/617: getdents d1 0
2026-03-10T10:20:05.112 INFO:tasks.workunit.client.1.vm05.stdout:4/618: readlink d1/d3/lb4 0
2026-03-10T10:20:05.114 INFO:tasks.workunit.client.0.vm02.stdout:8/732: dwrite d1/d1c/d43/d5b/d88/fb9 [0,4194304] 0
2026-03-10T10:20:05.124 INFO:tasks.workunit.client.1.vm05.stdout:4/619: mkdir d1/d64/da9/dae/dcc 0
2026-03-10T10:20:05.127 INFO:tasks.workunit.client.0.vm02.stdout:8/733: truncate d1/d1c/d23/d25/f4c 769137 0
2026-03-10T10:20:05.146 INFO:tasks.workunit.client.0.vm02.stdout:8/734: link d1/d1c/d24/f31 d1/d1c/d43/d5b/d88/dac/d83/d9f/fdb 0
2026-03-10T10:20:05.148 INFO:tasks.workunit.client.1.vm05.stdout:1/823: dwrite d4/d3d/d6e/fd5 [0,4194304] 0
2026-03-10T10:20:05.149 INFO:tasks.workunit.client.1.vm05.stdout:1/824: dread - d4/d79/d83/dc5/dcb/fd0 zero size
2026-03-10T10:20:05.154 INFO:tasks.workunit.client.1.vm05.stdout:9/663: write d0/d1/d13/de/d21/f76 [3492030,22659] 0
2026-03-10T10:20:05.164 INFO:tasks.workunit.client.1.vm05.stdout:9/664: fsync d0/d1/d13/d26/f7c 0
2026-03-10T10:20:05.170 INFO:tasks.workunit.client.1.vm05.stdout:2/694: dwrite db/d2d/f48 [0,4194304] 0
2026-03-10T10:20:05.176 INFO:tasks.workunit.client.1.vm05.stdout:2/695: mknod db/d28/d4f/d59/da4/d81/da7/cde 0
2026-03-10T10:20:05.178 INFO:tasks.workunit.client.1.vm05.stdout:2/696: dread - db/d28/d4f/d59/da4/fca zero size
2026-03-10T10:20:05.183 INFO:tasks.workunit.client.1.vm05.stdout:2/697: fdatasync db/d2d/f65 0
2026-03-10T10:20:05.183 INFO:tasks.workunit.client.1.vm05.stdout:2/698: chown db/d1c/d40/d62 10747231 1
2026-03-10T10:20:05.190 INFO:tasks.workunit.client.1.vm05.stdout:2/699: symlink db/d28/dbc/ldf 0
2026-03-10T10:20:05.192 INFO:tasks.workunit.client.1.vm05.stdout:2/700: rmdir db/d28/d4f/d8b/d9a/d9d 39
2026-03-10T10:20:05.197 INFO:tasks.workunit.client.1.vm05.stdout:7/768: rmdir d5/d1d/d29/d3e/d8c/d96 39
2026-03-10T10:20:05.208 INFO:tasks.workunit.client.1.vm05.stdout:2/701: fsync db/f25 0
2026-03-10T10:20:05.212 INFO:tasks.workunit.client.1.vm05.stdout:7/769: rename l2 to d5/d1d/d20/d2d/d5d/d7a/lf2 0
2026-03-10T10:20:05.216 INFO:tasks.workunit.client.1.vm05.stdout:1/825: dread d4/d39/d3e/f3f [0,4194304] 0
2026-03-10T10:20:05.218 INFO:tasks.workunit.client.1.vm05.stdout:7/770: read d5/d1d/d20/d35/f47 [3000111,104962] 0
2026-03-10T10:20:05.226 INFO:tasks.workunit.client.0.vm02.stdout:8/735: rename d1/f68 to d1/d1c/d23/d25/fdc 0
2026-03-10T10:20:05.226 INFO:tasks.workunit.client.0.vm02.stdout:8/736: stat d1/d1c/d43/f52 0
2026-03-10T10:20:05.227 INFO:tasks.workunit.client.0.vm02.stdout:8/737: fdatasync d1/f1b 0
2026-03-10T10:20:05.231 INFO:tasks.workunit.client.1.vm05.stdout:2/702: dread db/d28/d4f/d59/da4/faf [0,4194304] 0
2026-03-10T10:20:05.232 INFO:tasks.workunit.client.0.vm02.stdout:8/738: unlink d1/d1c/d43/d6a/da8/d56/db5/fc1 0
2026-03-10T10:20:05.239 INFO:tasks.workunit.client.1.vm05.stdout:2/703: rename db/d28/d4f/d59/da4/d6c/cd6 to db/d1c/d40/d62/d85/ce0 0
2026-03-10T10:20:05.240 INFO:tasks.workunit.client.1.vm05.stdout:3/749: creat dd/d15/f108 x:0 0 0
2026-03-10T10:20:05.240 INFO:tasks.workunit.client.1.vm05.stdout:3/750: write dd/d20/d9e/dc0/ddd/fea [1956064,73783] 0
2026-03-10T10:20:05.242 INFO:tasks.workunit.client.0.vm02.stdout:8/739: rmdir d1/d1c/d43 39
2026-03-10T10:20:05.251 INFO:tasks.workunit.client.0.vm02.stdout:8/740: creat d1/d1c/d24/d71/fdd x:0 0 0
2026-03-10T10:20:05.257 INFO:tasks.workunit.client.1.vm05.stdout:3/751: rmdir dd/d15/d24/d2c/d6d 39
2026-03-10T10:20:05.259 INFO:tasks.workunit.client.1.vm05.stdout:2/704: symlink db/d61/dcc/le1 0
2026-03-10T10:20:05.259 INFO:tasks.workunit.client.1.vm05.stdout:2/705: chown db/d2d/fcb 1257 1
2026-03-10T10:20:05.262 INFO:tasks.workunit.client.1.vm05.stdout:2/706: dwrite db/d28/d4f/d59/f7e [0,4194304] 0
2026-03-10T10:20:05.263 INFO:tasks.workunit.client.1.vm05.stdout:3/752: creat dd/dbe/f109 x:0 0 0
2026-03-10T10:20:05.264 INFO:tasks.workunit.client.1.vm05.stdout:3/753: chown dd/d20/d56/fb7 2301085 1
2026-03-10T10:20:05.264 INFO:tasks.workunit.client.1.vm05.stdout:6/753: write dd/d36/f71 [1022375,100050] 0
2026-03-10T10:20:05.269 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:04 vm05.local ceph-mon[59051]: osdmap e42: 6 total, 6 up, 6 in
2026-03-10T10:20:05.269 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:04 vm05.local ceph-mon[59051]: from='mgr.14674 192.168.123.105:0/3840982098' entity='mgr.vm05.coparq' cmd='[{"prefix": "mgr fail", "who": "vm05.coparq"}]': finished
2026-03-10T10:20:05.269 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:04 vm05.local ceph-mon[59051]: mgrmap e26: vm02.zmavgl(active, starting, since 0.118506s)
2026-03-10T10:20:05.284 INFO:tasks.workunit.client.1.vm05.stdout:6/754: fdatasync dd/d36/d3f/d12/f8f 0
2026-03-10T10:20:05.288 INFO:tasks.workunit.client.1.vm05.stdout:2/707: link db/d12/l46 db/d1c/d40/d80/le2 0
2026-03-10T10:20:05.290 INFO:tasks.workunit.client.1.vm05.stdout:2/708: dwrite db/d1c/f9b [0,4194304] 0
2026-03-10T10:20:05.304 INFO:tasks.workunit.client.1.vm05.stdout:0/747: write d1/d2/d9/d31/fa8 [3639906,43129] 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.1.vm05.stdout:0/748: chown d1/d2/d9 1 1
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.1.vm05.stdout:5/755: write da/db/dee/f7d [5142487,60537] 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.1.vm05.stdout:0/749: mknod d1/d2/d9/d31/d13/d15/d4e/d8a/dfc/cfd 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.1.vm05.stdout:5/756: truncate da/db/d26/fdd 122110 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.1.vm05.stdout:0/750: unlink d1/fb8 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.0.vm02.stdout:7/746: dwrite d1/d1b/d8f/f93 [0,4194304] 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.0.vm02.stdout:7/747: stat d1/dc/d16/d28/d2d/dae 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.0.vm02.stdout:7/748: mkdir d1/d1b/d49/d98/dee 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.0.vm02.stdout:1/792: unlink d4/da/d1a/d5b/f9f 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.0.vm02.stdout:7/749: mkdir d1/def 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.0.vm02.stdout:3/755: dwrite d1/f3 [0,4194304] 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.0.vm02.stdout:7/750: creat d1/dc/d60/ff0 x:0 0 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.0.vm02.stdout:3/756: symlink d1/d8/d86/da2/lfd 0
2026-03-10T10:20:05.347 INFO:tasks.workunit.client.0.vm02.stdout:5/892: write d1/db/f96 [4052331,63551] 0
2026-03-10T10:20:05.351 INFO:tasks.workunit.client.0.vm02.stdout:5/893: creat d1/db/d11/d13/d28/d37/d3d/da3/d113/f131 x:0 0 0
2026-03-10T10:20:05.352
INFO:tasks.workunit.client.0.vm02.stdout:0/788: dwrite d9/d34/fd5 [0,4194304] 0 2026-03-10T10:20:05.353 INFO:tasks.workunit.client.0.vm02.stdout:8/741: sync 2026-03-10T10:20:05.369 INFO:tasks.workunit.client.1.vm05.stdout:2/709: mkdir db/d28/d4f/d8b/de3 0 2026-03-10T10:20:05.376 INFO:tasks.workunit.client.0.vm02.stdout:0/789: fsync d9/d34/d3d/d65/d89/dd3/da7/db7/de1/fe6 0 2026-03-10T10:20:05.377 INFO:tasks.workunit.client.0.vm02.stdout:0/790: truncate d9/d18/d1a/d22/d24/ffb 895725 0 2026-03-10T10:20:05.378 INFO:tasks.workunit.client.0.vm02.stdout:0/791: chown d9/d18/d1a/d22/d24/d8e/fce 3140991 1 2026-03-10T10:20:05.381 INFO:tasks.workunit.client.0.vm02.stdout:8/742: symlink d1/d1c/d43/lde 0 2026-03-10T10:20:05.382 INFO:tasks.workunit.client.0.vm02.stdout:0/792: rename d9/d18/d1a/d22/d24/f40 to d9/d18/d1a/d22/d24/d51/ffe 0 2026-03-10T10:20:05.395 INFO:tasks.workunit.client.1.vm05.stdout:2/710: sync 2026-03-10T10:20:05.409 INFO:tasks.workunit.client.0.vm02.stdout:8/743: sync 2026-03-10T10:20:05.409 INFO:tasks.workunit.client.0.vm02.stdout:8/744: stat d1/f7d 0 2026-03-10T10:20:05.411 INFO:tasks.workunit.client.0.vm02.stdout:1/793: dread d4/fe [0,4194304] 0 2026-03-10T10:20:05.420 INFO:tasks.workunit.client.0.vm02.stdout:7/751: dread d1/dc/d55/f8d [0,4194304] 0 2026-03-10T10:20:05.420 INFO:tasks.workunit.client.0.vm02.stdout:7/752: chown d1/dc/d10 0 1 2026-03-10T10:20:05.423 INFO:tasks.workunit.client.0.vm02.stdout:5/894: unlink d1/db/d11/d84/d40/l54 0 2026-03-10T10:20:05.425 INFO:tasks.workunit.client.0.vm02.stdout:8/745: dread d1/d1c/d43/d6a/f9c [0,4194304] 0 2026-03-10T10:20:05.432 INFO:tasks.workunit.client.0.vm02.stdout:7/753: fdatasync d1/dc/f3 0 2026-03-10T10:20:05.437 INFO:tasks.workunit.client.0.vm02.stdout:7/754: symlink d1/dc/d10/d38/lf1 0 2026-03-10T10:20:05.447 INFO:tasks.workunit.client.0.vm02.stdout:5/895: creat d1/db/d11/d84/d40/d4f/f132 x:0 0 0 2026-03-10T10:20:05.449 INFO:tasks.workunit.client.1.vm05.stdout:8/696: write d7/d14/d24/f2c [1617434,5297] 0 
2026-03-10T10:20:05.454 INFO:tasks.workunit.client.0.vm02.stdout:8/746: link d1/d1c/d24/dad/cc9 d1/d1c/d43/d6a/cdf 0 2026-03-10T10:20:05.456 INFO:tasks.workunit.client.1.vm05.stdout:8/697: unlink d7/d14/d24/fd1 0 2026-03-10T10:20:05.462 INFO:tasks.workunit.client.0.vm02.stdout:4/880: write d1/d10/f30 [8822610,85319] 0 2026-03-10T10:20:05.465 INFO:tasks.workunit.client.0.vm02.stdout:0/793: symlink d9/d18/d1a/d22/d24/d80/lff 0 2026-03-10T10:20:05.465 INFO:tasks.workunit.client.0.vm02.stdout:0/794: chown d9/f28 0 1 2026-03-10T10:20:05.467 INFO:tasks.workunit.client.0.vm02.stdout:6/719: symlink d0/le7 0 2026-03-10T10:20:05.480 INFO:tasks.workunit.client.1.vm05.stdout:8/698: truncate d7/d14/d24/d3f/f7d 2712526 0 2026-03-10T10:20:05.491 INFO:tasks.workunit.client.1.vm05.stdout:4/620: write d1/d31/dc/f53 [1044024,49238] 0 2026-03-10T10:20:05.493 INFO:tasks.workunit.client.1.vm05.stdout:4/621: dread d1/fc7 [0,4194304] 0 2026-03-10T10:20:05.499 INFO:tasks.workunit.client.0.vm02.stdout:4/881: rmdir d1/d75/ddd/d10e/d5e/d78/d55 39 2026-03-10T10:20:05.503 INFO:tasks.workunit.client.1.vm05.stdout:4/622: unlink d1/d3/f6c 0 2026-03-10T10:20:05.523 INFO:tasks.workunit.client.0.vm02.stdout:6/720: rename d0/d8/d29/d2f/d50/d7e to d0/d8/d29/d52/de8 0 2026-03-10T10:20:05.539 INFO:tasks.workunit.client.1.vm05.stdout:4/623: truncate d1/d31/d4b/f9e 4631566 0 2026-03-10T10:20:05.539 INFO:tasks.workunit.client.0.vm02.stdout:2/762: mknod d0/d1a/d49/d5e/c103 0 2026-03-10T10:20:05.540 INFO:tasks.workunit.client.0.vm02.stdout:7/755: link d1/d1b/d8f/d67/cab d1/d1b/d8f/dad/d7e/dba/ddf/cf2 0 2026-03-10T10:20:05.555 INFO:tasks.workunit.client.1.vm05.stdout:4/624: creat d1/fcd x:0 0 0 2026-03-10T10:20:05.556 INFO:tasks.workunit.client.1.vm05.stdout:9/665: write d0/f2f [210735,93916] 0 2026-03-10T10:20:05.562 INFO:tasks.workunit.client.0.vm02.stdout:4/882: unlink d1/d75/ddd/d10e/d5e/d78/c6b 0 2026-03-10T10:20:05.577 INFO:tasks.workunit.client.0.vm02.stdout:2/763: dread d0/d71/fb7 [0,4194304] 0 
2026-03-10T10:20:05.588 INFO:tasks.workunit.client.1.vm05.stdout:7/771: chown d5/d1d/d20/d2d/d5d/d7a/lf2 55 1 2026-03-10T10:20:05.589 INFO:tasks.workunit.client.0.vm02.stdout:4/883: rename d1/def/ff8 to d1/d75/ddd/f127 0 2026-03-10T10:20:05.598 INFO:tasks.workunit.client.0.vm02.stdout:9/719: symlink da/d3c/d4c/d38/d82/le8 0 2026-03-10T10:20:05.601 INFO:tasks.workunit.client.1.vm05.stdout:7/772: symlink d5/d1d/d29/d3e/d8c/d82/d90/d9a/lf3 0 2026-03-10T10:20:05.604 INFO:tasks.workunit.client.1.vm05.stdout:1/826: dwrite d4/df/f73 [0,4194304] 0 2026-03-10T10:20:05.605 INFO:tasks.workunit.client.0.vm02.stdout:2/764: mknod d0/d1a/d49/d5e/d65/db0/c104 0 2026-03-10T10:20:05.612 INFO:tasks.workunit.client.1.vm05.stdout:7/773: symlink d5/d17/d66/lf4 0 2026-03-10T10:20:05.616 INFO:tasks.workunit.client.0.vm02.stdout:8/747: link d1/d1c/d43/d5b/d88/dac/d83/d9f/fc0 d1/d1c/fe0 0 2026-03-10T10:20:05.620 INFO:tasks.workunit.client.1.vm05.stdout:1/827: creat d4/d79/d83/dc5/dcb/ff5 x:0 0 0 2026-03-10T10:20:05.624 INFO:tasks.workunit.client.1.vm05.stdout:7/774: creat d5/d1d/d20/d2d/ff5 x:0 0 0 2026-03-10T10:20:05.629 INFO:tasks.workunit.client.0.vm02.stdout:4/884: fdatasync d1/d75/ddd/d10e/d5e/d78/d7f/f8e 0 2026-03-10T10:20:05.636 INFO:tasks.workunit.client.1.vm05.stdout:1/828: creat d4/d39/d3e/db1/db8/ff6 x:0 0 0 2026-03-10T10:20:05.639 INFO:tasks.workunit.client.1.vm05.stdout:7/775: dread - d5/d1d/d20/d2d/d5d/d7a/f7b zero size 2026-03-10T10:20:05.639 INFO:tasks.workunit.client.1.vm05.stdout:7/776: readlink d5/d1d/d20/d35/l4a 0 2026-03-10T10:20:05.644 INFO:tasks.workunit.client.0.vm02.stdout:9/720: creat da/d3c/d4c/d38/d82/d89/fe9 x:0 0 0 2026-03-10T10:20:05.646 INFO:tasks.workunit.client.1.vm05.stdout:7/777: rmdir d5/d1d/d29/d3e 39 2026-03-10T10:20:05.646 INFO:tasks.workunit.client.1.vm05.stdout:7/778: readlink d5/d1d/d29/l2a 0 2026-03-10T10:20:05.649 INFO:tasks.workunit.client.1.vm05.stdout:1/829: dread d4/dd/f64 [0,4194304] 0 2026-03-10T10:20:05.651 
INFO:tasks.workunit.client.1.vm05.stdout:3/754: dwrite dd/d39/d5c/f6b [0,4194304] 0 2026-03-10T10:20:05.652 INFO:tasks.workunit.client.1.vm05.stdout:7/779: rmdir d5/d1d 39 2026-03-10T10:20:05.661 INFO:tasks.workunit.client.1.vm05.stdout:6/755: dwrite dd/d36/d3f/d12/d44/d30/f9f [0,4194304] 0 2026-03-10T10:20:05.663 INFO:tasks.workunit.client.0.vm02.stdout:4/885: creat d1/d75/ddd/d10e/d7e/f128 x:0 0 0 2026-03-10T10:20:05.664 INFO:tasks.workunit.client.0.vm02.stdout:2/765: dread d0/dd4/fdd [0,4194304] 0 2026-03-10T10:20:05.665 INFO:tasks.workunit.client.0.vm02.stdout:2/766: chown d0/d1a/d49/d5e/d65/db0/cf7 773105 1 2026-03-10T10:20:05.674 INFO:tasks.workunit.client.1.vm05.stdout:3/755: mknod dd/d15/d24/d2c/dd0/dd9/c10a 0 2026-03-10T10:20:05.687 INFO:tasks.workunit.client.0.vm02.stdout:2/767: dread - d0/d1a/d49/d5e/fad zero size 2026-03-10T10:20:05.691 INFO:tasks.workunit.client.1.vm05.stdout:5/757: dwrite da/db/d26/d5c/f2c [0,4194304] 0 2026-03-10T10:20:05.692 INFO:tasks.workunit.client.1.vm05.stdout:6/756: chown dd/d36/d3f/d12/d44/d2a/c5c 9 1 2026-03-10T10:20:05.693 INFO:tasks.workunit.client.0.vm02.stdout:9/721: mknod da/d3c/d4c/db1/de4/cea 0 2026-03-10T10:20:05.696 INFO:tasks.workunit.client.1.vm05.stdout:3/756: unlink dd/d15/d24/d2c/d3b/f83 0 2026-03-10T10:20:05.698 INFO:tasks.workunit.client.1.vm05.stdout:3/757: stat dd/d15/d24/la0 0 2026-03-10T10:20:05.701 INFO:tasks.workunit.client.0.vm02.stdout:2/768: mkdir d0/d1a/d49/d5e/d65/dc4/dfa/dd3/de8/d105 0 2026-03-10T10:20:05.715 INFO:tasks.workunit.client.1.vm05.stdout:0/751: dwrite d1/d2/d9/d31/d12/fc3 [0,4194304] 0 2026-03-10T10:20:05.717 INFO:tasks.workunit.client.0.vm02.stdout:2/769: creat d0/d1a/d49/deb/de6/f106 x:0 0 0 2026-03-10T10:20:05.718 INFO:tasks.workunit.client.0.vm02.stdout:2/770: chown d0/d1a/d49/d5e/d65/dc4/dfa/dbf/fd9 54237 1 2026-03-10T10:20:05.719 INFO:tasks.workunit.client.0.vm02.stdout:3/757: write d1/f50 [4538824,16679] 0 2026-03-10T10:20:05.737 INFO:tasks.workunit.client.0.vm02.stdout:2/771: 
chown d0/c1d 1 1 2026-03-10T10:20:05.743 INFO:tasks.workunit.client.1.vm05.stdout:0/752: symlink d1/d2/d9/d50/lfe 0 2026-03-10T10:20:05.746 INFO:tasks.workunit.client.1.vm05.stdout:2/711: write db/d1c/d40/d62/f83 [2727535,58889] 0 2026-03-10T10:20:05.746 INFO:tasks.workunit.client.1.vm05.stdout:2/712: stat db/f4a 0 2026-03-10T10:20:05.746 INFO:tasks.workunit.client.1.vm05.stdout:2/713: chown db 410 1 2026-03-10T10:20:05.752 INFO:tasks.workunit.client.0.vm02.stdout:1/794: write d4/da/d27/d38/fad [488861,16401] 0 2026-03-10T10:20:05.755 INFO:tasks.workunit.client.1.vm05.stdout:2/714: sync 2026-03-10T10:20:05.756 INFO:tasks.workunit.client.1.vm05.stdout:2/715: chown db/ca1 2064715 1 2026-03-10T10:20:05.757 INFO:tasks.workunit.client.1.vm05.stdout:7/780: getdents d5/d26/d9c/de7 0 2026-03-10T10:20:05.758 INFO:tasks.workunit.client.1.vm05.stdout:2/716: dread db/d1c/f56 [0,4194304] 0 2026-03-10T10:20:05.759 INFO:tasks.workunit.client.1.vm05.stdout:2/717: chown db/d28/dd4 17779 1 2026-03-10T10:20:05.759 INFO:tasks.workunit.client.1.vm05.stdout:5/758: creat da/d9a/dc7/db4/f104 x:0 0 0 2026-03-10T10:20:05.772 INFO:tasks.workunit.client.1.vm05.stdout:3/758: creat dd/d15/f10b x:0 0 0 2026-03-10T10:20:05.783 INFO:tasks.workunit.client.1.vm05.stdout:7/781: mkdir d5/d26/d9c/de7/df6 0 2026-03-10T10:20:05.795 INFO:tasks.workunit.client.1.vm05.stdout:5/759: rmdir da/db/d26/d70/d72 39 2026-03-10T10:20:05.796 INFO:tasks.workunit.client.1.vm05.stdout:5/760: chown da/d9a 0 1 2026-03-10T10:20:05.797 INFO:tasks.workunit.client.0.vm02.stdout:0/795: getdents d9/d18/d1a/d22/d24/d80 0 2026-03-10T10:20:05.803 INFO:tasks.workunit.client.1.vm05.stdout:8/699: fsync d7/d14/d24/d3f/f7d 0 2026-03-10T10:20:05.805 INFO:tasks.workunit.client.0.vm02.stdout:3/758: chown d1/d8/d21/f5e 90 1 2026-03-10T10:20:05.806 INFO:tasks.workunit.client.0.vm02.stdout:5/896: write d1/d9c/fa9 [1152688,74014] 0 2026-03-10T10:20:05.826 INFO:tasks.workunit.client.0.vm02.stdout:1/795: rename d4/da/d1a/f40 to d4/d4a/ff8 0 
2026-03-10T10:20:05.832 INFO:tasks.workunit.client.1.vm05.stdout:3/759: creat dd/d20/d56/d5e/ded/f10c x:0 0 0 2026-03-10T10:20:05.833 INFO:tasks.workunit.client.1.vm05.stdout:9/666: dwrite d0/d1/d57/f91 [0,4194304] 0 2026-03-10T10:20:05.833 INFO:tasks.workunit.client.1.vm05.stdout:3/760: write dd/d39/d5f/df7/ffd [343317,104625] 0 2026-03-10T10:20:05.835 INFO:tasks.workunit.client.0.vm02.stdout:7/756: dwrite d1/dc/d16/d28/f73 [0,4194304] 0 2026-03-10T10:20:05.836 INFO:tasks.workunit.client.0.vm02.stdout:0/796: symlink d9/d34/d3d/l100 0 2026-03-10T10:20:05.836 INFO:tasks.workunit.client.0.vm02.stdout:7/757: chown d1/dc/d16/faa 934 1 2026-03-10T10:20:05.842 INFO:tasks.workunit.client.0.vm02.stdout:0/797: dwrite d9/d34/fd5 [0,4194304] 0 2026-03-10T10:20:05.854 INFO:tasks.workunit.client.0.vm02.stdout:3/759: mknod d1/d8/d86/da2/cfe 0 2026-03-10T10:20:05.855 INFO:tasks.workunit.client.0.vm02.stdout:3/760: chown d1/d8/d21/d73/d78/d79/ced 0 1 2026-03-10T10:20:05.855 INFO:tasks.workunit.client.0.vm02.stdout:6/721: write d0/d8/d9/fa0 [189593,9538] 0 2026-03-10T10:20:05.858 INFO:tasks.workunit.client.0.vm02.stdout:5/897: dread - d1/db/d11/d84/d40/d4f/d5f/d6d/d71/f124 zero size 2026-03-10T10:20:05.860 INFO:tasks.workunit.client.0.vm02.stdout:9/722: getdents da/de5 0 2026-03-10T10:20:05.862 INFO:tasks.workunit.client.1.vm05.stdout:0/753: link d1/d2/d9/d31/d13/d15/laf d1/d2/d9/d31/d13/d2f/lff 0 2026-03-10T10:20:05.863 INFO:tasks.workunit.client.1.vm05.stdout:2/718: link db/d12/f9c db/d28/d4f/fe4 0 2026-03-10T10:20:05.867 INFO:tasks.workunit.client.0.vm02.stdout:0/798: creat d9/d34/d3d/d65/d89/dd3/d9c/f101 x:0 0 0 2026-03-10T10:20:05.872 INFO:tasks.workunit.client.1.vm05.stdout:1/830: dwrite d4/df/d76/fb6 [0,4194304] 0 2026-03-10T10:20:05.872 INFO:tasks.workunit.client.1.vm05.stdout:1/831: chown d4/d79/f8d 1883 1 2026-03-10T10:20:05.872 INFO:tasks.workunit.client.1.vm05.stdout:9/667: creat d0/d1/dcc/fe0 x:0 0 0 2026-03-10T10:20:05.872 
INFO:tasks.workunit.client.1.vm05.stdout:9/668: chown d0/df/d11/c81 96705 1 2026-03-10T10:20:05.874 INFO:tasks.workunit.client.0.vm02.stdout:8/748: truncate d1/d1c/d43/d6a/da8/d56/db5/fd4 1662261 0 2026-03-10T10:20:05.880 INFO:tasks.workunit.client.1.vm05.stdout:2/719: dread db/f2e [0,4194304] 0 2026-03-10T10:20:05.893 INFO:tasks.workunit.client.1.vm05.stdout:7/782: rename d5/d1d/d20/d35/c9f to d5/d1d/d29/d3e/cf7 0 2026-03-10T10:20:05.895 INFO:tasks.workunit.client.0.vm02.stdout:7/758: mknod d1/d1b/cf3 0 2026-03-10T10:20:05.906 INFO:tasks.workunit.client.1.vm05.stdout:6/757: dwrite dd/d36/d3f/d12/d44/fa1 [0,4194304] 0 2026-03-10T10:20:05.906 INFO:tasks.workunit.client.1.vm05.stdout:1/832: sync 2026-03-10T10:20:05.906 INFO:tasks.workunit.client.0.vm02.stdout:7/759: chown d1/dc/d16/f6d 16981025 1 2026-03-10T10:20:05.906 INFO:tasks.workunit.client.0.vm02.stdout:7/760: truncate d1/d1b/d8f/dad/d7e/dba/dea/fed 1040179 0 2026-03-10T10:20:05.906 INFO:tasks.workunit.client.0.vm02.stdout:7/761: dread - d1/dc/d16/d28/d2d/f4c zero size 2026-03-10T10:20:05.906 INFO:tasks.workunit.client.0.vm02.stdout:0/799: creat d9/d18/d1a/d22/d24/d8e/d9b/f102 x:0 0 0 2026-03-10T10:20:05.906 INFO:tasks.workunit.client.1.vm05.stdout:4/625: unlink d1/d31/dc/f3a 0 2026-03-10T10:20:05.907 INFO:tasks.workunit.client.0.vm02.stdout:1/796: sync 2026-03-10T10:20:05.907 INFO:tasks.workunit.client.0.vm02.stdout:3/761: sync 2026-03-10T10:20:05.915 INFO:tasks.workunit.client.0.vm02.stdout:8/749: rmdir d1 39 2026-03-10T10:20:05.921 INFO:tasks.workunit.client.0.vm02.stdout:6/722: rename d0/d8/d8c/fe6 to d0/d8/d29/d94/fe9 0 2026-03-10T10:20:05.931 INFO:tasks.workunit.client.1.vm05.stdout:2/720: creat db/d28/d4f/d59/da4/d81/da7/fe5 x:0 0 0 2026-03-10T10:20:05.936 INFO:tasks.workunit.client.0.vm02.stdout:9/723: symlink da/d3c/d4c/d38/d7c/dde/leb 0 2026-03-10T10:20:05.938 INFO:tasks.workunit.client.0.vm02.stdout:7/762: mknod d1/dc/d55/d9a/da5/cf4 0 2026-03-10T10:20:05.939 
INFO:tasks.workunit.client.1.vm05.stdout:8/700: creat d7/d2f/fe0 x:0 0 0 2026-03-10T10:20:05.940 INFO:tasks.workunit.client.0.vm02.stdout:0/800: mknod d9/d34/d3d/d65/d89/df3/c103 0 2026-03-10T10:20:05.942 INFO:tasks.workunit.client.1.vm05.stdout:7/783: mkdir d5/d1d/d20/d35/df8 0 2026-03-10T10:20:05.945 INFO:tasks.workunit.client.0.vm02.stdout:0/801: dwrite d9/d34/d3d/d65/f84 [4194304,4194304] 0 2026-03-10T10:20:05.945 INFO:tasks.workunit.client.1.vm05.stdout:7/784: stat d5/ff 0 2026-03-10T10:20:05.946 INFO:tasks.workunit.client.0.vm02.stdout:1/797: creat d4/dc3/ff9 x:0 0 0 2026-03-10T10:20:05.955 INFO:tasks.workunit.client.0.vm02.stdout:2/772: dwrite d0/d1a/fb4 [0,4194304] 0 2026-03-10T10:20:05.960 INFO:tasks.workunit.client.0.vm02.stdout:8/750: chown d1/d1c/d43/d6a/da8/l94 1512 1 2026-03-10T10:20:05.961 INFO:tasks.workunit.client.1.vm05.stdout:5/761: dwrite da/d63/fe1 [0,4194304] 0 2026-03-10T10:20:05.973 INFO:tasks.workunit.client.0.vm02.stdout:6/723: symlink d0/d8/d29/d2f/lea 0 2026-03-10T10:20:05.973 INFO:tasks.workunit.client.0.vm02.stdout:6/724: readlink d0/d8/d29/d2f/d4b/da5/d6f/l7b 0 2026-03-10T10:20:05.976 INFO:tasks.workunit.client.1.vm05.stdout:1/833: chown d4/d3d/d6e/dac/ld2 1149 1 2026-03-10T10:20:05.980 INFO:tasks.workunit.client.0.vm02.stdout:4/886: dwrite d1/d75/ddd/f127 [0,4194304] 0 2026-03-10T10:20:05.987 INFO:tasks.workunit.client.1.vm05.stdout:3/761: write dd/d15/f23 [1591016,91546] 0 2026-03-10T10:20:05.988 INFO:tasks.workunit.client.1.vm05.stdout:9/669: mknod d0/df/ce1 0 2026-03-10T10:20:06.000 INFO:tasks.workunit.client.0.vm02.stdout:7/763: dread d1/dc/f26 [0,4194304] 0 2026-03-10T10:20:06.001 INFO:tasks.workunit.client.0.vm02.stdout:7/764: write d1/d1b/d8f/f93 [3159307,44794] 0 2026-03-10T10:20:06.005 INFO:tasks.workunit.client.0.vm02.stdout:5/898: dwrite d1/db/d11/d13/d28/f91 [0,4194304] 0 2026-03-10T10:20:06.006 INFO:tasks.workunit.client.1.vm05.stdout:0/754: creat d1/f100 x:0 0 0 2026-03-10T10:20:06.012 
INFO:tasks.workunit.client.0.vm02.stdout:2/773: mkdir d0/d10/da6/d107 0 2026-03-10T10:20:06.014 INFO:tasks.workunit.client.1.vm05.stdout:6/758: creat dd/d36/d3f/d12/d44/daa/de4/ff3 x:0 0 0 2026-03-10T10:20:06.016 INFO:tasks.workunit.client.0.vm02.stdout:8/751: dread d1/d1c/d43/d5b/d88/dac/fa5 [0,4194304] 0 2026-03-10T10:20:06.023 INFO:tasks.workunit.client.1.vm05.stdout:3/762: read dd/d15/d24/d2c/f3e [1401149,30574] 0 2026-03-10T10:20:06.024 INFO:tasks.workunit.client.1.vm05.stdout:3/763: chown dd/d15/f84 273279 1 2026-03-10T10:20:06.035 INFO:tasks.workunit.client.0.vm02.stdout:9/724: creat da/d3c/d4c/d38/da6/fec x:0 0 0 2026-03-10T10:20:06.039 INFO:tasks.workunit.client.0.vm02.stdout:0/802: mknod d9/d34/d3d/d65/d89/dd3/da7/db9/dc9/c104 0 2026-03-10T10:20:06.040 INFO:tasks.workunit.client.1.vm05.stdout:3/764: read f9 [704377,23074] 0 2026-03-10T10:20:06.048 INFO:tasks.workunit.client.0.vm02.stdout:7/765: fdatasync d1/f17 0 2026-03-10T10:20:06.051 INFO:tasks.workunit.client.0.vm02.stdout:1/798: symlink d4/da/d27/d38/d80/lfa 0 2026-03-10T10:20:06.053 INFO:tasks.workunit.client.0.vm02.stdout:5/899: dread - d1/db/d11/d13/d28/f11f zero size 2026-03-10T10:20:06.055 INFO:tasks.workunit.client.1.vm05.stdout:6/759: mkdir dd/d36/d3f/d12/d44/d30/d4a/df4 0 2026-03-10T10:20:06.056 INFO:tasks.workunit.client.0.vm02.stdout:3/762: getdents d1/d6/d8e 0 2026-03-10T10:20:06.069 INFO:tasks.workunit.client.0.vm02.stdout:2/774: rename d0/d1a/d49/d5e to d0/d71/d108 0 2026-03-10T10:20:06.073 INFO:tasks.workunit.client.1.vm05.stdout:8/701: write d7/d14/d3a/d49/f72 [214795,123492] 0 2026-03-10T10:20:06.074 INFO:tasks.workunit.client.1.vm05.stdout:8/702: readlink d7/d14/d3a/l77 0 2026-03-10T10:20:06.076 INFO:tasks.workunit.client.1.vm05.stdout:8/703: dread - d7/d14/d3a/d49/d65/fdc zero size 2026-03-10T10:20:06.076 INFO:tasks.workunit.client.1.vm05.stdout:2/721: dwrite db/d28/d4f/f8a [4194304,4194304] 0 2026-03-10T10:20:06.096 INFO:tasks.workunit.client.0.vm02.stdout:7/766: fdatasync d1/f5 0 
2026-03-10T10:20:06.098 INFO:tasks.workunit.client.0.vm02.stdout:2/775: sync 2026-03-10T10:20:06.101 INFO:tasks.workunit.client.0.vm02.stdout:2/776: dwrite d0/d10/ff6 [0,4194304] 0 2026-03-10T10:20:06.108 INFO:tasks.workunit.client.1.vm05.stdout:6/760: rename dd/d36/d3f/d12/d44/d30/d4a/d6e/dc3 to dd/d36/d3f/d12/d59/df5 0 2026-03-10T10:20:06.114 INFO:tasks.workunit.client.0.vm02.stdout:4/887: write d1/d10/f8 [1093042,86448] 0 2026-03-10T10:20:06.115 INFO:tasks.workunit.client.1.vm05.stdout:4/626: write d1/d64/da9/fb9 [392558,70829] 0 2026-03-10T10:20:06.115 INFO:tasks.workunit.client.1.vm05.stdout:5/762: write da/db/dee/d38/f94 [734228,73989] 0 2026-03-10T10:20:06.115 INFO:tasks.workunit.client.1.vm05.stdout:4/627: chown d1/d31/f36 87779095 1 2026-03-10T10:20:06.140 INFO:tasks.workunit.client.0.vm02.stdout:6/725: creat d0/d8/d29/d2f/d4b/da5/d6f/feb x:0 0 0 2026-03-10T10:20:06.143 INFO:tasks.workunit.client.1.vm05.stdout:0/755: write d1/f38 [3226112,8938] 0 2026-03-10T10:20:06.144 INFO:tasks.workunit.client.1.vm05.stdout:9/670: dwrite d0/df/d74/d8c/fac [0,4194304] 0 2026-03-10T10:20:06.145 INFO:tasks.workunit.client.1.vm05.stdout:9/671: chown d0/d1/d16/d6e/c86 0 1 2026-03-10T10:20:06.157 INFO:tasks.workunit.client.1.vm05.stdout:3/765: dwrite dd/d20/d56/d5e/ffc [0,4194304] 0 2026-03-10T10:20:06.166 INFO:tasks.workunit.client.0.vm02.stdout:2/777: dwrite d0/d71/d108/d65/dc4/dfa/dbf/fed [4194304,4194304] 0 2026-03-10T10:20:06.183 INFO:tasks.workunit.client.0.vm02.stdout:4/888: creat d1/d52/d53/dda/df7/f129 x:0 0 0 2026-03-10T10:20:06.191 INFO:tasks.workunit.client.0.vm02.stdout:6/726: dwrite d0/d87/fa7 [0,4194304] 0 2026-03-10T10:20:06.210 INFO:tasks.workunit.client.0.vm02.stdout:2/778: dread d0/d1a/f33 [0,4194304] 0 2026-03-10T10:20:06.211 INFO:tasks.workunit.client.0.vm02.stdout:2/779: chown d0/d8c/fab 671755630 1 2026-03-10T10:20:06.214 INFO:tasks.workunit.client.0.vm02.stdout:3/763: mknod d1/d58/cff 0 2026-03-10T10:20:06.218 
INFO:tasks.workunit.client.0.vm02.stdout:9/725: rename da/d3c/d4c/d38/f84 to da/d3c/d4c/d2c/d34/fed 0 2026-03-10T10:20:06.225 INFO:tasks.workunit.client.0.vm02.stdout:6/727: rmdir d0/d8/d9 39 2026-03-10T10:20:06.227 INFO:tasks.workunit.client.0.vm02.stdout:8/752: chown d1/d1c/d23/d25/f8c 856 1 2026-03-10T10:20:06.236 INFO:tasks.workunit.client.0.vm02.stdout:1/799: dwrite d4/da/d27/d38/d80/fb5 [0,4194304] 0 2026-03-10T10:20:06.237 INFO:tasks.workunit.client.0.vm02.stdout:8/753: sync 2026-03-10T10:20:06.246 INFO:tasks.workunit.client.1.vm05.stdout:2/722: rmdir db/d61/dcc 39 2026-03-10T10:20:06.247 INFO:tasks.workunit.client.0.vm02.stdout:7/767: dwrite d1/dc/d16/d28/fca [0,4194304] 0 2026-03-10T10:20:06.250 INFO:tasks.workunit.client.1.vm05.stdout:7/785: getdents d5/d26 0 2026-03-10T10:20:06.255 INFO:tasks.workunit.client.0.vm02.stdout:0/803: rename d9/d34/d3d/d7b/fde to d9/d18/d1a/d22/d24/d8e/d9b/daa/f105 0 2026-03-10T10:20:06.259 INFO:tasks.workunit.client.0.vm02.stdout:9/726: creat da/d3c/d4c/d2c/d96/fee x:0 0 0 2026-03-10T10:20:06.269 INFO:tasks.workunit.client.0.vm02.stdout:1/800: mknod d4/dc3/df0/cfb 0 2026-03-10T10:20:06.269 INFO:tasks.workunit.client.1.vm05.stdout:5/763: creat da/db/d28/d32/f105 x:0 0 0 2026-03-10T10:20:06.269 INFO:tasks.workunit.client.1.vm05.stdout:4/628: creat d1/d31/d76/fce x:0 0 0 2026-03-10T10:20:06.269 INFO:tasks.workunit.client.1.vm05.stdout:1/834: getdents d4/df/de0 0 2026-03-10T10:20:06.269 INFO:tasks.workunit.client.1.vm05.stdout:6/761: read dd/d36/d3f/d12/d44/d2a/fa5 [2209095,81282] 0 2026-03-10T10:20:06.271 INFO:tasks.workunit.client.1.vm05.stdout:0/756: creat d1/d2/d9/d31/d12/d41/f101 x:0 0 0 2026-03-10T10:20:06.289 INFO:tasks.workunit.client.0.vm02.stdout:2/780: write d0/d10/f8b [217441,75712] 0 2026-03-10T10:20:06.298 INFO:tasks.workunit.client.0.vm02.stdout:7/768: rmdir d1/d1b/d8f/dad 39 2026-03-10T10:20:06.298 INFO:tasks.workunit.client.0.vm02.stdout:7/769: readlink d1/dc/d55/d9a/la8 0 2026-03-10T10:20:06.299 
INFO:tasks.workunit.client.1.vm05.stdout:2/723: symlink db/d1c/d40/d62/le6 0 2026-03-10T10:20:06.300 INFO:tasks.workunit.client.1.vm05.stdout:2/724: stat db/d2d/l88 0 2026-03-10T10:20:06.300 INFO:tasks.workunit.client.0.vm02.stdout:3/764: symlink d1/d8/d44/deb/l100 0 2026-03-10T10:20:06.305 INFO:tasks.workunit.client.0.vm02.stdout:7/770: sync 2026-03-10T10:20:06.306 INFO:tasks.workunit.client.0.vm02.stdout:7/771: chown d1/dc/d16/d28/d2d/l45 100837 1 2026-03-10T10:20:06.311 INFO:tasks.workunit.client.1.vm05.stdout:7/786: dwrite d5/d1d/f53 [0,4194304] 0 2026-03-10T10:20:06.320 INFO:tasks.workunit.client.1.vm05.stdout:5/764: fsync da/db/d28/d8a/fa0 0 2026-03-10T10:20:06.335 INFO:tasks.workunit.client.1.vm05.stdout:4/629: write d1/d31/dc/f3d [24358,93138] 0 2026-03-10T10:20:06.335 INFO:tasks.workunit.client.1.vm05.stdout:1/835: write d4/d39/f7b [2943102,18718] 0 2026-03-10T10:20:06.336 INFO:tasks.workunit.client.1.vm05.stdout:1/836: truncate d4/d3d/d6e/fee 308957 0 2026-03-10T10:20:06.340 INFO:tasks.workunit.client.0.vm02.stdout:9/727: dread da/d3c/d4c/d38/d82/d8c/f98 [0,4194304] 0 2026-03-10T10:20:06.344 INFO:tasks.workunit.client.0.vm02.stdout:9/728: sync 2026-03-10T10:20:06.344 INFO:tasks.workunit.client.1.vm05.stdout:6/762: mknod dd/d36/d3f/d12/d59/cf6 0 2026-03-10T10:20:06.345 INFO:tasks.workunit.client.0.vm02.stdout:9/729: chown da/d3c/d4c/d2c/d34/l3f 59493430 1 2026-03-10T10:20:06.345 INFO:tasks.workunit.client.0.vm02.stdout:9/730: readlink da/d3c/d4c/d38/d4a/l52 0 2026-03-10T10:20:06.346 INFO:tasks.workunit.client.1.vm05.stdout:6/763: read dd/d36/d3f/d12/d44/d63/fc4 [106122,7078] 0 2026-03-10T10:20:06.347 INFO:tasks.workunit.client.0.vm02.stdout:9/731: sync 2026-03-10T10:20:06.348 INFO:tasks.workunit.client.0.vm02.stdout:9/732: read da/d3c/d53/f73 [1070712,14713] 0 2026-03-10T10:20:06.354 INFO:tasks.workunit.client.0.vm02.stdout:1/801: write d4/d2c/fa2 [12822736,34117] 0 2026-03-10T10:20:06.362 INFO:tasks.workunit.client.1.vm05.stdout:9/672: mknod d0/dc4/ce2 0 
2026-03-10T10:20:06.365 INFO:tasks.workunit.client.1.vm05.stdout:9/673: dwrite d0/d1/d16/f40 [4194304,4194304] 0 2026-03-10T10:20:06.365 INFO:tasks.workunit.client.1.vm05.stdout:9/674: fdatasync d0/d1/d13/de/f46 0 2026-03-10T10:20:06.367 INFO:tasks.workunit.client.1.vm05.stdout:3/766: truncate dd/d15/d24/d2c/d6d/da7/dbb/dbd/ff6 348058 0 2026-03-10T10:20:06.369 INFO:tasks.workunit.client.0.vm02.stdout:4/889: getdents d1/d75/ddd/d10e/d5e/d78/d1a 0 2026-03-10T10:20:06.370 INFO:tasks.workunit.client.1.vm05.stdout:3/767: dwrite dd/d15/f84 [0,4194304] 0 2026-03-10T10:20:06.371 INFO:tasks.workunit.client.0.vm02.stdout:2/781: dwrite d0/d1a/f4c [0,4194304] 0 2026-03-10T10:20:06.375 INFO:tasks.workunit.client.0.vm02.stdout:2/782: write d0/d1a/d49/deb/fd2 [4824221,103396] 0 2026-03-10T10:20:06.378 INFO:tasks.workunit.client.0.vm02.stdout:2/783: sync 2026-03-10T10:20:06.378 INFO:tasks.workunit.client.0.vm02.stdout:4/890: dwrite d1/d75/ddd/d10e/d7e/f128 [0,4194304] 0 2026-03-10T10:20:06.381 INFO:tasks.workunit.client.0.vm02.stdout:3/765: mkdir d1/d8/d21/d73/d101 0 2026-03-10T10:20:06.393 INFO:tasks.workunit.client.0.vm02.stdout:5/900: rename d1/db/d11/d84/f82 to d1/d6a/f133 0 2026-03-10T10:20:06.409 INFO:tasks.workunit.client.0.vm02.stdout:6/728: fdatasync d0/d8/d29/d2f/d4b/f53 0 2026-03-10T10:20:06.418 INFO:tasks.workunit.client.0.vm02.stdout:1/802: truncate d4/da/f73 264611 0 2026-03-10T10:20:06.418 INFO:tasks.workunit.client.0.vm02.stdout:1/803: stat d4/da/d27/d38/d3c 0 2026-03-10T10:20:06.420 INFO:tasks.workunit.client.0.vm02.stdout:8/754: creat d1/d1c/d43/d5b/d88/fe1 x:0 0 0 2026-03-10T10:20:06.440 INFO:tasks.workunit.client.1.vm05.stdout:5/765: dread da/f10 [8388608,4194304] 0 2026-03-10T10:20:06.452 INFO:tasks.workunit.client.0.vm02.stdout:3/766: mkdir d1/d8/d86/db1/d102 0 2026-03-10T10:20:06.456 INFO:tasks.workunit.client.1.vm05.stdout:0/757: creat d1/d2/d9/d31/d13/d15/d4e/df6/f102 x:0 0 0 2026-03-10T10:20:06.457 INFO:tasks.workunit.client.1.vm05.stdout:0/758: chown 
d1/d2/d39/d6e/d95 284059187 1 2026-03-10T10:20:06.458 INFO:tasks.workunit.client.1.vm05.stdout:0/759: read - d1/d2/d39/d3d/d9f/fc2 zero size 2026-03-10T10:20:06.459 INFO:tasks.workunit.client.1.vm05.stdout:2/725: dread db/d28/d4f/d8b/fa8 [0,4194304] 0 2026-03-10T10:20:06.461 INFO:tasks.workunit.client.0.vm02.stdout:4/891: symlink d1/d32/da3/l12a 0 2026-03-10T10:20:06.461 INFO:tasks.workunit.client.0.vm02.stdout:4/892: readlink d1/d75/ddd/le3 0 2026-03-10T10:20:06.463 INFO:tasks.workunit.client.0.vm02.stdout:3/767: dread d1/fe [0,4194304] 0 2026-03-10T10:20:06.470 INFO:tasks.workunit.client.0.vm02.stdout:2/784: write d0/dd4/ff2 [40369,67992] 0 2026-03-10T10:20:06.475 INFO:tasks.workunit.client.1.vm05.stdout:1/837: dwrite d4/d39/d3e/fd9 [0,4194304] 0 2026-03-10T10:20:06.475 INFO:tasks.workunit.client.0.vm02.stdout:2/785: stat d0/d8c/faf 0 2026-03-10T10:20:06.475 INFO:tasks.workunit.client.1.vm05.stdout:1/838: dread d4/df/f73 [0,4194304] 0 2026-03-10T10:20:06.479 INFO:tasks.workunit.client.0.vm02.stdout:7/772: rmdir d1/d1b/d8f/dad/d7e/dba/dea 39 2026-03-10T10:20:06.480 INFO:tasks.workunit.client.0.vm02.stdout:7/773: chown d1/dc/d16/d28/d2c/cb5 13 1 2026-03-10T10:20:06.480 INFO:tasks.workunit.client.0.vm02.stdout:7/774: chown d1/dc/d10/d38/ld6 162 1 2026-03-10T10:20:06.481 INFO:tasks.workunit.client.0.vm02.stdout:7/775: chown d1/dc/d99 241 1 2026-03-10T10:20:06.492 INFO:tasks.workunit.client.0.vm02.stdout:0/804: creat d9/d18/d1a/d22/d24/f106 x:0 0 0 2026-03-10T10:20:06.494 INFO:tasks.workunit.client.1.vm05.stdout:3/768: read - dd/d15/d24/fee zero size 2026-03-10T10:20:06.495 INFO:tasks.workunit.client.0.vm02.stdout:6/729: creat d0/d8/d29/d52/fec x:0 0 0 2026-03-10T10:20:06.497 INFO:tasks.workunit.client.1.vm05.stdout:8/704: getdents d7/d14/d24/d3f/d6a/d8a/d96/db7 0 2026-03-10T10:20:06.509 INFO:tasks.workunit.client.1.vm05.stdout:7/787: truncate d5/d17/f40 3769342 0 2026-03-10T10:20:06.509 INFO:tasks.workunit.client.0.vm02.stdout:9/733: dwrite da/d3c/d4c/d38/d82/d89/fb5 
[0,4194304] 0 2026-03-10T10:20:06.509 INFO:tasks.workunit.client.1.vm05.stdout:7/788: readlink d5/d1d/d20/d2d/d80/la4 0 2026-03-10T10:20:06.509 INFO:tasks.workunit.client.1.vm05.stdout:7/789: stat d5/d1d/d20/d2d/d5d/d7a/lbc 0 2026-03-10T10:20:06.527 INFO:tasks.workunit.client.0.vm02.stdout:4/893: rmdir d1/d75/ddd/d10e/d5e/d78/d1a/d49 39 2026-03-10T10:20:06.532 INFO:tasks.workunit.client.0.vm02.stdout:4/894: dwrite d1/d75/ddd/d10e/d5e/d78/d44/de7/f114 [0,4194304] 0 2026-03-10T10:20:06.532 INFO:tasks.workunit.client.0.vm02.stdout:2/786: fdatasync d0/fe2 0 2026-03-10T10:20:06.535 INFO:tasks.workunit.client.1.vm05.stdout:0/760: symlink d1/d2/d39/d6e/d8e/l103 0 2026-03-10T10:20:06.540 INFO:tasks.workunit.client.0.vm02.stdout:5/901: fsync d1/db/d11/d84/d40/f12b 0 2026-03-10T10:20:06.548 INFO:tasks.workunit.client.1.vm05.stdout:9/675: symlink d0/d1/le3 0 2026-03-10T10:20:06.551 INFO:tasks.workunit.client.1.vm05.stdout:7/790: rmdir d5/d17/d66 39 2026-03-10T10:20:06.562 INFO:tasks.workunit.client.1.vm05.stdout:4/630: symlink d1/d3/d65/db0/lcf 0 2026-03-10T10:20:06.570 INFO:tasks.workunit.client.0.vm02.stdout:9/734: creat da/d3c/d4c/d38/d4a/d99/fef x:0 0 0 2026-03-10T10:20:06.570 INFO:tasks.workunit.client.1.vm05.stdout:0/761: creat d1/d2/dc6/f104 x:0 0 0 2026-03-10T10:20:06.571 INFO:tasks.workunit.client.1.vm05.stdout:1/839: fdatasync d4/df/d1c/d53/daa/fab 0 2026-03-10T10:20:06.574 INFO:tasks.workunit.client.1.vm05.stdout:7/791: unlink d5/d1d/d20/d2d/d5d/f67 0 2026-03-10T10:20:06.574 INFO:tasks.workunit.client.0.vm02.stdout:7/776: mkdir d1/dc/d10/df5 0 2026-03-10T10:20:06.576 INFO:tasks.workunit.client.1.vm05.stdout:4/631: creat d1/d31/dc/d40/d45/daa/fd0 x:0 0 0 2026-03-10T10:20:06.578 INFO:tasks.workunit.client.0.vm02.stdout:5/902: unlink d1/d6a/c117 0 2026-03-10T10:20:06.578 INFO:tasks.workunit.client.0.vm02.stdout:5/903: fdatasync d1/db/d11/d84/fb2 0 2026-03-10T10:20:06.579 INFO:tasks.workunit.client.1.vm05.stdout:0/762: fsync d1/d2/d9/d50/d9a/fbf 0 
2026-03-10T10:20:06.597 INFO:tasks.workunit.client.0.vm02.stdout:2/787: mknod d0/d71/d108/d65/dc4/dfa/dd3/de8/d105/c109 0 2026-03-10T10:20:06.604 INFO:tasks.workunit.client.1.vm05.stdout:4/632: creat d1/d31/d76/dac/db8/fd1 x:0 0 0 2026-03-10T10:20:06.604 INFO:tasks.workunit.client.0.vm02.stdout:7/777: truncate d1/dc/d16/f95 109649 0 2026-03-10T10:20:06.606 INFO:tasks.workunit.client.1.vm05.stdout:2/726: rename db/d12/c17 to db/d28/d4f/d59/ce7 0 2026-03-10T10:20:06.610 INFO:tasks.workunit.client.1.vm05.stdout:2/727: dwrite db/d61/f92 [0,4194304] 0 2026-03-10T10:20:06.618 INFO:tasks.workunit.client.1.vm05.stdout:9/676: sync 2026-03-10T10:20:06.622 INFO:tasks.workunit.client.0.vm02.stdout:6/730: link d0/d8/d29/d2f/f77 d0/d8/d29/d52/de8/db2/dbb/fed 0 2026-03-10T10:20:06.622 INFO:tasks.workunit.client.0.vm02.stdout:6/731: fsync d0/d87/fa7 0 2026-03-10T10:20:06.627 INFO:tasks.workunit.client.1.vm05.stdout:8/705: getdents d7/d14/d24/d3f/d4f 0 2026-03-10T10:20:06.643 INFO:tasks.workunit.client.1.vm05.stdout:4/633: truncate d1/d31/dc/d40/d45/f50 3665561 0 2026-03-10T10:20:06.645 INFO:tasks.workunit.client.0.vm02.stdout:8/755: write d1/d1c/f72 [2022316,57577] 0 2026-03-10T10:20:06.648 INFO:tasks.workunit.client.1.vm05.stdout:5/766: dwrite da/db/d26/d70/fd1 [0,4194304] 0 2026-03-10T10:20:06.648 INFO:tasks.workunit.client.0.vm02.stdout:3/768: dwrite d1/d8/d21/f4c [4194304,4194304] 0 2026-03-10T10:20:06.650 INFO:tasks.workunit.client.1.vm05.stdout:6/764: dwrite dd/d36/d3f/d12/d58/f5a [0,4194304] 0 2026-03-10T10:20:06.668 INFO:tasks.workunit.client.0.vm02.stdout:0/805: getdents d9/d18/d1a/d3c 0 2026-03-10T10:20:06.682 INFO:tasks.workunit.client.1.vm05.stdout:1/840: dwrite d4/df/d1c/d92/f97 [0,4194304] 0 2026-03-10T10:20:06.682 INFO:tasks.workunit.client.0.vm02.stdout:9/735: dwrite da/d3c/d4c/d38/d82/d89/f8a [0,4194304] 0 2026-03-10T10:20:06.697 INFO:tasks.workunit.client.1.vm05.stdout:3/769: rename dd/d15/d24/f2f to dd/dbe/d106/f10d 0 2026-03-10T10:20:06.700 
INFO:tasks.workunit.client.0.vm02.stdout:4/895: dwrite d1/f9d [0,4194304] 0 2026-03-10T10:20:06.706 INFO:tasks.workunit.client.0.vm02.stdout:7/778: creat d1/dc/d99/ff6 x:0 0 0 2026-03-10T10:20:06.722 INFO:tasks.workunit.client.1.vm05.stdout:2/728: creat db/d28/d4f/d59/da4/fe8 x:0 0 0 2026-03-10T10:20:06.723 INFO:tasks.workunit.client.1.vm05.stdout:2/729: chown db/d28/d4f/d59/c79 1372833582 1 2026-03-10T10:20:06.724 INFO:tasks.workunit.client.1.vm05.stdout:7/792: dwrite d5/d1d/d20/d2d/f3d [0,4194304] 0 2026-03-10T10:20:06.732 INFO:tasks.workunit.client.0.vm02.stdout:5/904: dwrite d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fc0 [4194304,4194304] 0 2026-03-10T10:20:06.738 INFO:tasks.workunit.client.0.vm02.stdout:5/905: dwrite d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fc0 [0,4194304] 0 2026-03-10T10:20:06.745 INFO:tasks.workunit.client.1.vm05.stdout:8/706: fdatasync d7/f59 0 2026-03-10T10:20:06.748 INFO:tasks.workunit.client.1.vm05.stdout:0/763: dwrite d1/d2/d9/d31/d54/f16 [0,4194304] 0 2026-03-10T10:20:06.750 INFO:tasks.workunit.client.0.vm02.stdout:2/788: dwrite d0/d1a/d49/deb/de6/fe7 [0,4194304] 0 2026-03-10T10:20:06.752 INFO:tasks.workunit.client.1.vm05.stdout:4/634: read - d1/d31/dc/d40/f9c zero size 2026-03-10T10:20:06.756 INFO:tasks.workunit.client.0.vm02.stdout:3/769: rename d1/cae to d1/d8/d86/db1/dbc/c103 0 2026-03-10T10:20:06.762 INFO:tasks.workunit.client.1.vm05.stdout:5/767: read da/d9a/fa7 [1565006,61639] 0 2026-03-10T10:20:06.762 INFO:tasks.workunit.client.1.vm05.stdout:5/768: fdatasync da/d9a/daf/ded/ffa 0 2026-03-10T10:20:06.763 INFO:tasks.workunit.client.1.vm05.stdout:6/765: rmdir dd/d36/d3f/d12/d44/d2a/d3d/d48/dc6 39 2026-03-10T10:20:06.764 INFO:tasks.workunit.client.1.vm05.stdout:6/766: stat dd/d36/d3f/d12/d44/d2a/laf 0 2026-03-10T10:20:06.764 INFO:tasks.workunit.client.1.vm05.stdout:6/767: chown dd/d36/d3f/d12/d44/d2a/laf 2 1 2026-03-10T10:20:06.766 INFO:tasks.workunit.client.1.vm05.stdout:1/841: fsync d4/d79/d83/dc5/dcb/fd0 0 2026-03-10T10:20:06.772 
INFO:tasks.workunit.client.1.vm05.stdout:2/730: symlink db/d28/dbc/le9 0 2026-03-10T10:20:06.772 INFO:tasks.workunit.client.1.vm05.stdout:7/793: truncate d5/d1d/d20/d35/fbb 843248 0 2026-03-10T10:20:06.774 INFO:tasks.workunit.client.1.vm05.stdout:9/677: mkdir d0/df/d74/d8c/de4 0 2026-03-10T10:20:06.788 INFO:tasks.workunit.client.1.vm05.stdout:0/764: creat d1/d2/d9/d50/d9a/da0/f105 x:0 0 0 2026-03-10T10:20:06.803 INFO:tasks.workunit.client.1.vm05.stdout:5/769: unlink da/db/d28/f56 0 2026-03-10T10:20:06.803 INFO:tasks.workunit.client.1.vm05.stdout:5/770: chown da/db/dee/d38/c93 9 1 2026-03-10T10:20:06.806 INFO:tasks.workunit.client.1.vm05.stdout:5/771: dread da/db/d28/ff8 [0,4194304] 0 2026-03-10T10:20:06.830 INFO:tasks.workunit.client.1.vm05.stdout:7/794: read d5/d1d/d29/f5c [35136,28786] 0 2026-03-10T10:20:06.835 INFO:tasks.workunit.client.1.vm05.stdout:8/707: dread d7/d14/d24/f95 [0,4194304] 0 2026-03-10T10:20:06.852 INFO:tasks.workunit.client.1.vm05.stdout:9/678: creat d0/d1/d16/d6e/daf/db7/fe5 x:0 0 0 2026-03-10T10:20:06.861 INFO:tasks.workunit.client.0.vm02.stdout:1/804: dwrite d4/da/f73 [0,4194304] 0 2026-03-10T10:20:06.862 INFO:tasks.workunit.client.0.vm02.stdout:1/805: fsync d4/da/d27/d38/d80/fb5 0 2026-03-10T10:20:06.865 INFO:tasks.workunit.client.1.vm05.stdout:0/765: mkdir d1/d2/d9/d31/d13/da2/dab/dce/d106 0 2026-03-10T10:20:06.869 INFO:tasks.workunit.client.1.vm05.stdout:0/766: dwrite d1/d2/dc6/f104 [0,4194304] 0 2026-03-10T10:20:06.874 INFO:tasks.workunit.client.0.vm02.stdout:8/756: dwrite d1/d1c/d43/d5b/f60 [0,4194304] 0 2026-03-10T10:20:06.878 INFO:tasks.workunit.client.0.vm02.stdout:1/806: dread d4/d2c/d53/da6/fbe [8388608,4194304] 0 2026-03-10T10:20:06.893 INFO:tasks.workunit.client.0.vm02.stdout:0/806: dwrite d9/d34/d3d/f4e [4194304,4194304] 0 2026-03-10T10:20:06.926 INFO:tasks.workunit.client.1.vm05.stdout:3/770: creat dd/d39/f10e x:0 0 0 2026-03-10T10:20:06.938 INFO:tasks.workunit.client.0.vm02.stdout:4/896: dwrite d1/d75/ddd/d10e/d5e/d78/f3f 
[4194304,4194304] 0 2026-03-10T10:20:06.943 INFO:tasks.workunit.client.0.vm02.stdout:4/897: dwrite d1/d75/ddd/d10e/d117/f126 [0,4194304] 0 2026-03-10T10:20:06.954 INFO:tasks.workunit.client.1.vm05.stdout:1/842: write d4/d39/d3e/f96 [2457502,26637] 0 2026-03-10T10:20:06.961 INFO:tasks.workunit.client.1.vm05.stdout:7/795: read d5/dd/f12 [287363,44555] 0 2026-03-10T10:20:06.973 INFO:tasks.workunit.client.1.vm05.stdout:0/767: fdatasync d1/d2/d9/d31/d13/d2f/f33 0 2026-03-10T10:20:06.986 INFO:tasks.workunit.client.0.vm02.stdout:6/732: getdents d0/d8/d29/d52/de8/db2/dbb/de5 0 2026-03-10T10:20:06.989 INFO:tasks.workunit.client.1.vm05.stdout:6/768: rmdir dd/d36/d3f/d12/d58/dcf 0 2026-03-10T10:20:06.996 INFO:tasks.workunit.client.1.vm05.stdout:0/768: dread d1/d2/d9/d31/d12/d20/f71 [0,4194304] 0 2026-03-10T10:20:06.998 INFO:tasks.workunit.client.1.vm05.stdout:3/771: symlink dd/d15/d24/d2c/dd0/dd9/l10f 0 2026-03-10T10:20:07.006 INFO:tasks.workunit.client.1.vm05.stdout:2/731: dwrite db/d28/d4f/d59/da4/d6c/fd0 [0,4194304] 0 2026-03-10T10:20:07.010 INFO:tasks.workunit.client.1.vm05.stdout:7/796: fdatasync d5/f76 0 2026-03-10T10:20:07.017 INFO:tasks.workunit.client.1.vm05.stdout:9/679: fdatasync d0/d1/d13/de/d93/fbd 0 2026-03-10T10:20:07.021 INFO:tasks.workunit.client.0.vm02.stdout:2/789: chown d0/d71/d108/d65/dc4/dfa/d80/ddb/ldf 7283 1 2026-03-10T10:20:07.034 INFO:tasks.workunit.client.1.vm05.stdout:8/708: write d7/d14/f9b [725427,114338] 0 2026-03-10T10:20:07.035 INFO:tasks.workunit.client.1.vm05.stdout:1/843: dread d4/fdf [0,4194304] 0 2026-03-10T10:20:07.045 INFO:tasks.workunit.client.0.vm02.stdout:5/906: dwrite d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fc3 [0,4194304] 0 2026-03-10T10:20:07.047 INFO:tasks.workunit.client.1.vm05.stdout:4/635: getdents d1/d31/dc 0 2026-03-10T10:20:07.048 INFO:tasks.workunit.client.1.vm05.stdout:5/772: creat da/db/d26/d70/d72/df6/f106 x:0 0 0 2026-03-10T10:20:07.052 INFO:tasks.workunit.client.1.vm05.stdout:6/769: creat dd/d36/d3f/d12/d44/d30/d4a/ff7 x:0 
0 0 2026-03-10T10:20:07.053 INFO:tasks.workunit.client.1.vm05.stdout:0/769: unlink d1/d2/d39/d3d/d9f/fb1 0 2026-03-10T10:20:07.054 INFO:tasks.workunit.client.0.vm02.stdout:9/736: write da/f1f [4843981,130234] 0 2026-03-10T10:20:07.070 INFO:tasks.workunit.client.1.vm05.stdout:3/772: dread dd/d39/d5c/fb9 [0,4194304] 0 2026-03-10T10:20:07.074 INFO:tasks.workunit.client.1.vm05.stdout:7/797: truncate d5/d1d/d20/d2d/fdb 847842 0 2026-03-10T10:20:07.080 INFO:tasks.workunit.client.0.vm02.stdout:8/757: symlink d1/d1c/d43/d6a/d7c/le2 0 2026-03-10T10:20:07.080 INFO:tasks.workunit.client.1.vm05.stdout:7/798: readlink d5/d1d/d29/d3e/d8c/d82/d90/d9a/lf3 0 2026-03-10T10:20:07.080 INFO:tasks.workunit.client.0.vm02.stdout:1/807: read d4/d2c/d53/f99 [1525467,48734] 0 2026-03-10T10:20:07.087 INFO:tasks.workunit.client.0.vm02.stdout:0/807: mkdir d9/d18/d1a/d22/d24/d80/d57/d107 0 2026-03-10T10:20:07.093 INFO:tasks.workunit.client.1.vm05.stdout:2/732: write db/d12/f31 [1364619,64631] 0 2026-03-10T10:20:07.095 INFO:tasks.workunit.client.1.vm05.stdout:8/709: write d7/f59 [1278880,90683] 0 2026-03-10T10:20:07.096 INFO:tasks.workunit.client.1.vm05.stdout:2/733: dread db/d28/d4f/f8a [4194304,4194304] 0 2026-03-10T10:20:07.101 INFO:tasks.workunit.client.1.vm05.stdout:4/636: symlink d1/d3/d65/ld2 0 2026-03-10T10:20:07.102 INFO:tasks.workunit.client.0.vm02.stdout:6/733: creat d0/d8/d29/dce/fee x:0 0 0 2026-03-10T10:20:07.104 INFO:tasks.workunit.client.1.vm05.stdout:5/773: readlink da/d63/ld8 0 2026-03-10T10:20:07.113 INFO:tasks.workunit.client.0.vm02.stdout:2/790: rename d0/d71/d108/d65/dc4/dfa/dc6/l86 to d0/d71/d108/d65/dc4/de0/dfc/l10a 0 2026-03-10T10:20:07.116 INFO:tasks.workunit.client.1.vm05.stdout:6/770: write dd/d36/d3f/d12/d44/d2a/d3d/d3e/f7c [935034,60814] 0 2026-03-10T10:20:07.118 INFO:tasks.workunit.client.1.vm05.stdout:0/770: symlink d1/d2/d9/d31/d12/d41/l107 0 2026-03-10T10:20:07.119 INFO:tasks.workunit.client.1.vm05.stdout:0/771: readlink d1/d2/d9/d31/d12/d41/ld4 0 
2026-03-10T10:20:07.128 INFO:tasks.workunit.client.0.vm02.stdout:9/737: chown da/d3c/d4c/d38/c62 2 1 2026-03-10T10:20:07.142 INFO:tasks.workunit.client.0.vm02.stdout:4/898: mkdir d1/d10/d12b 0 2026-03-10T10:20:07.142 INFO:tasks.workunit.client.0.vm02.stdout:4/899: write d1/d10/f71 [3217495,15645] 0 2026-03-10T10:20:07.145 INFO:tasks.workunit.client.0.vm02.stdout:0/808: dwrite d9/d34/d3d/d65/f6d [0,4194304] 0 2026-03-10T10:20:07.150 INFO:tasks.workunit.client.0.vm02.stdout:0/809: dwrite d9/d18/d1a/d22/d24/d8e/d9b/fc5 [0,4194304] 0 2026-03-10T10:20:07.154 INFO:tasks.workunit.client.0.vm02.stdout:0/810: chown d9/d18/d1a/d22/d24/d80/d49/feb 19 1 2026-03-10T10:20:07.165 INFO:tasks.workunit.client.0.vm02.stdout:1/808: rename d4/da/fb2 to d4/dc3/dd6/ffc 0 2026-03-10T10:20:07.169 INFO:tasks.workunit.client.0.vm02.stdout:2/791: chown d0/d71/d108/d65/dc4/dfa/lac 367 1 2026-03-10T10:20:07.172 INFO:tasks.workunit.client.0.vm02.stdout:3/770: mkdir d1/d58/d104 0 2026-03-10T10:20:07.174 INFO:tasks.workunit.client.0.vm02.stdout:5/907: mkdir d1/db/d11/d16/d48/dcf/d134 0 2026-03-10T10:20:07.178 INFO:tasks.workunit.client.0.vm02.stdout:9/738: symlink da/d3c/d4c/d38/d82/d89/lf0 0 2026-03-10T10:20:07.180 INFO:tasks.workunit.client.0.vm02.stdout:7/779: getdents d1/dc/d99 0 2026-03-10T10:20:07.195 INFO:tasks.workunit.client.0.vm02.stdout:8/758: dwrite d1/d1c/d43/d6a/f87 [0,4194304] 0 2026-03-10T10:20:07.203 INFO:tasks.workunit.client.1.vm05.stdout:3/773: truncate dd/d15/d1f/dae/fc7 806632 0 2026-03-10T10:20:07.203 INFO:tasks.workunit.client.0.vm02.stdout:4/900: chown d1/d75/ddd/d10e/d5e/d78/d1a/d49/d81/dc6/df2 2 1 2026-03-10T10:20:07.204 INFO:tasks.workunit.client.0.vm02.stdout:4/901: stat d1/d75/ddd/d10e/d5e/d78/d37/lc8 0 2026-03-10T10:20:07.215 INFO:tasks.workunit.client.1.vm05.stdout:9/680: fdatasync d0/d1/d57/fbf 0 2026-03-10T10:20:07.220 INFO:tasks.workunit.client.0.vm02.stdout:6/734: rename d0/d8/d29/d2f/d4b/da5 to d0/d8/d29/d6d/d96/de4/def 0 2026-03-10T10:20:07.224 
INFO:tasks.workunit.client.0.vm02.stdout:1/809: mknod d4/d1b/cfd 0 2026-03-10T10:20:07.233 INFO:tasks.workunit.client.0.vm02.stdout:2/792: write d0/d71/d108/fad [258130,16223] 0 2026-03-10T10:20:07.234 INFO:tasks.workunit.client.0.vm02.stdout:3/771: dread d1/d8/d21/f5e [4194304,4194304] 0 2026-03-10T10:20:07.236 INFO:tasks.workunit.client.0.vm02.stdout:2/793: dwrite d0/d71/d108/d65/dc4/fdc [0,4194304] 0 2026-03-10T10:20:07.238 INFO:tasks.workunit.client.0.vm02.stdout:5/908: truncate d1/db/d11/d13/d28/f31 4459680 0 2026-03-10T10:20:07.238 INFO:tasks.workunit.client.0.vm02.stdout:5/909: stat d1/db/d11/d62/d67 0 2026-03-10T10:20:07.247 INFO:tasks.workunit.client.1.vm05.stdout:2/734: creat db/d12/d74/fea x:0 0 0 2026-03-10T10:20:07.249 INFO:tasks.workunit.client.0.vm02.stdout:7/780: mkdir d1/dc/d55/d9a/dd9/df7 0 2026-03-10T10:20:07.257 INFO:tasks.workunit.client.0.vm02.stdout:8/759: unlink d1/d1c/d43/d5b/d88/dac/f50 0 2026-03-10T10:20:07.262 INFO:tasks.workunit.client.0.vm02.stdout:8/760: dread d1/d1c/d43/f7a [0,4194304] 0 2026-03-10T10:20:07.266 INFO:tasks.workunit.client.0.vm02.stdout:4/902: truncate d1/d32/fd3 4539543 0 2026-03-10T10:20:07.269 INFO:tasks.workunit.client.1.vm05.stdout:6/771: write dd/d36/f69 [1820985,128011] 0 2026-03-10T10:20:07.273 INFO:tasks.workunit.client.0.vm02.stdout:0/811: mknod d9/d18/d1a/d22/d24/c108 0 2026-03-10T10:20:07.278 INFO:tasks.workunit.client.1.vm05.stdout:0/772: write d1/d2/d9/d31/d13/d17/f5a [2827164,102401] 0 2026-03-10T10:20:07.284 INFO:tasks.workunit.client.0.vm02.stdout:0/812: dread d9/d18/d1a/d22/d24/d51/ffe [0,4194304] 0 2026-03-10T10:20:07.284 INFO:tasks.workunit.client.0.vm02.stdout:0/813: chown d9/d18/d1a/f6f 444720898 1 2026-03-10T10:20:07.291 INFO:tasks.workunit.client.1.vm05.stdout:3/774: write dd/dbe/fe6 [423601,29350] 0 2026-03-10T10:20:07.298 INFO:tasks.workunit.client.0.vm02.stdout:1/810: write d4/da/d27/f66 [991709,115896] 0 2026-03-10T10:20:07.309 INFO:tasks.workunit.client.0.vm02.stdout:3/772: write d1/d8/f7c 
[5882574,69350] 0 2026-03-10T10:20:07.309 INFO:tasks.workunit.client.0.vm02.stdout:3/773: chown d1/d6/d8b/cdc 0 1 2026-03-10T10:20:07.312 INFO:tasks.workunit.client.0.vm02.stdout:2/794: fdatasync d0/d1a/d49/f54 0 2026-03-10T10:20:07.318 INFO:tasks.workunit.client.1.vm05.stdout:8/710: creat d7/d14/d3a/d49/d65/db8/fe1 x:0 0 0 2026-03-10T10:20:07.320 INFO:tasks.workunit.client.0.vm02.stdout:5/910: fdatasync d1/db/d11/d62/f74 0 2026-03-10T10:20:07.324 INFO:tasks.workunit.client.1.vm05.stdout:2/735: creat db/d28/d4f/d8b/feb x:0 0 0 2026-03-10T10:20:07.325 INFO:tasks.workunit.client.1.vm05.stdout:2/736: write db/d28/d4f/d59/da4/d81/fbb [4283087,56772] 0 2026-03-10T10:20:07.325 INFO:tasks.workunit.client.1.vm05.stdout:2/737: readlink db/d12/lb1 0 2026-03-10T10:20:07.326 INFO:tasks.workunit.client.1.vm05.stdout:2/738: chown db/d28/d4f/d59/da4/d6c/c87 189 1 2026-03-10T10:20:07.328 INFO:tasks.workunit.client.0.vm02.stdout:9/739: mknod da/d3c/d4c/de1/cf1 0 2026-03-10T10:20:07.330 INFO:tasks.workunit.client.1.vm05.stdout:1/844: creat d4/d20/ff7 x:0 0 0 2026-03-10T10:20:07.341 INFO:tasks.workunit.client.0.vm02.stdout:7/781: dwrite d1/dc/fbc [0,4194304] 0 2026-03-10T10:20:07.346 INFO:tasks.workunit.client.0.vm02.stdout:8/761: fdatasync d1/d1c/d43/d5b/fb3 0 2026-03-10T10:20:07.353 INFO:tasks.workunit.client.0.vm02.stdout:4/903: symlink d1/d75/ddd/d10e/d5e/d78/d1a/d49/d81/dc6/l12c 0 2026-03-10T10:20:07.358 INFO:tasks.workunit.client.0.vm02.stdout:0/814: creat d9/d18/d1a/d22/d24/d80/d49/f109 x:0 0 0 2026-03-10T10:20:07.363 INFO:tasks.workunit.client.1.vm05.stdout:0/773: rmdir d1/d2/d9/d50/d9a 39 2026-03-10T10:20:07.368 INFO:tasks.workunit.client.1.vm05.stdout:0/774: chown d1/d2/d9/d31/d54/l96 254849834 1 2026-03-10T10:20:07.369 INFO:tasks.workunit.client.0.vm02.stdout:1/811: creat d4/dc3/ffe x:0 0 0 2026-03-10T10:20:07.369 INFO:tasks.workunit.client.1.vm05.stdout:7/799: rmdir d5/d1d/d20/d35/df8 0 2026-03-10T10:20:07.369 INFO:tasks.workunit.client.1.vm05.stdout:9/681: mkdir 
d0/df/d74/d8c/d8f/ddd/de6 0 2026-03-10T10:20:07.369 INFO:tasks.workunit.client.1.vm05.stdout:9/682: readlink d0/d1/d13/l94 0 2026-03-10T10:20:07.369 INFO:tasks.workunit.client.1.vm05.stdout:8/711: mknod d7/d14/d3a/d49/d65/ce2 0 2026-03-10T10:20:07.370 INFO:tasks.workunit.client.1.vm05.stdout:8/712: dread - d7/d14/d15/faa zero size 2026-03-10T10:20:07.370 INFO:tasks.workunit.client.1.vm05.stdout:8/713: chown d7/d14/d24/d3f/f7d 110803 1 2026-03-10T10:20:07.371 INFO:tasks.workunit.client.0.vm02.stdout:2/795: mknod d0/d10/d81/c10b 0 2026-03-10T10:20:07.379 INFO:tasks.workunit.client.0.vm02.stdout:9/740: rmdir da/d3c/d4c/d38/da6 39 2026-03-10T10:20:07.398 INFO:tasks.workunit.client.1.vm05.stdout:5/774: creat da/db/d28/f107 x:0 0 0 2026-03-10T10:20:07.399 INFO:tasks.workunit.client.0.vm02.stdout:6/735: creat d0/d8/d29/d2f/d50/ff0 x:0 0 0 2026-03-10T10:20:07.401 INFO:tasks.workunit.client.0.vm02.stdout:3/774: symlink d1/d8/d21/d73/d78/l105 0 2026-03-10T10:20:07.404 INFO:tasks.workunit.client.0.vm02.stdout:2/796: rename d0/d71/d108/d65/dc4/dfa/dbf/lec to d0/d71/dfb/l10c 0 2026-03-10T10:20:07.406 INFO:tasks.workunit.client.0.vm02.stdout:7/782: sync 2026-03-10T10:20:07.406 INFO:tasks.workunit.client.1.vm05.stdout:8/714: sync 2026-03-10T10:20:07.412 INFO:tasks.workunit.client.0.vm02.stdout:5/911: mkdir d1/db/d11/d16/d79/d85/d135 0 2026-03-10T10:20:07.413 INFO:tasks.workunit.client.0.vm02.stdout:5/912: chown d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fdf 53 1 2026-03-10T10:20:07.420 INFO:tasks.workunit.client.1.vm05.stdout:4/637: dwrite d1/f19 [0,4194304] 0 2026-03-10T10:20:07.421 INFO:tasks.workunit.client.1.vm05.stdout:4/638: chown d1/d31/d76/dac/db8/dbf/f96 6 1 2026-03-10T10:20:07.421 INFO:tasks.workunit.client.1.vm05.stdout:6/772: write dd/d36/d3f/d12/f4f [285649,82422] 0 2026-03-10T10:20:07.421 INFO:tasks.workunit.client.1.vm05.stdout:9/683: write d0/d1/fb0 [1612551,55397] 0 2026-03-10T10:20:07.423 INFO:tasks.workunit.client.1.vm05.stdout:4/639: sync 2026-03-10T10:20:07.428 
INFO:tasks.workunit.client.1.vm05.stdout:0/775: dwrite d1/d2/d9/d31/d13/fd2 [0,4194304] 0 2026-03-10T10:20:07.431 INFO:tasks.workunit.client.0.vm02.stdout:8/762: mknod d1/d1c/d43/d6a/da8/ce3 0 2026-03-10T10:20:07.431 INFO:tasks.workunit.client.1.vm05.stdout:2/739: rename db/d28/d4f/d59/c79 to db/d28/d4f/d59/cec 0 2026-03-10T10:20:07.437 INFO:tasks.workunit.client.0.vm02.stdout:0/815: link d9/d18/d1a/d22/d24/d8e/d9b/f102 d9/d18/d1a/d22/d24/d80/d49/f10a 0 2026-03-10T10:20:07.441 INFO:tasks.workunit.client.1.vm05.stdout:3/775: rename dd/d15/d24/d2c/c31 to dd/c110 0 2026-03-10T10:20:07.441 INFO:tasks.workunit.client.1.vm05.stdout:3/776: stat dd/d20/f35 0 2026-03-10T10:20:07.444 INFO:tasks.workunit.client.0.vm02.stdout:6/736: truncate d0/f5d 3439607 0 2026-03-10T10:20:07.447 INFO:tasks.workunit.client.1.vm05.stdout:1/845: dwrite d4/df/d1c/fdb [0,4194304] 0 2026-03-10T10:20:07.447 INFO:tasks.workunit.client.1.vm05.stdout:8/715: fsync d7/d14/d24/f9c 0 2026-03-10T10:20:07.448 INFO:tasks.workunit.client.0.vm02.stdout:2/797: mknod d0/d71/d108/d65/dc4/de0/c10d 0 2026-03-10T10:20:07.451 INFO:tasks.workunit.client.0.vm02.stdout:7/783: truncate d1/dc/d60/fd0 342732 0 2026-03-10T10:20:07.455 INFO:tasks.workunit.client.1.vm05.stdout:9/684: truncate d0/df/d11/f64 5157033 0 2026-03-10T10:20:07.457 INFO:tasks.workunit.client.0.vm02.stdout:3/775: dread d1/d6/f42 [0,4194304] 0 2026-03-10T10:20:07.461 INFO:tasks.workunit.client.1.vm05.stdout:2/740: creat db/d1c/d40/d62/fed x:0 0 0 2026-03-10T10:20:07.462 INFO:tasks.workunit.client.0.vm02.stdout:0/816: mknod d9/d34/d3d/c10b 0 2026-03-10T10:20:07.462 INFO:tasks.workunit.client.0.vm02.stdout:0/817: chown d9/d34/d3d/d65/d89/fc1 12 1 2026-03-10T10:20:07.466 INFO:tasks.workunit.client.1.vm05.stdout:5/775: link da/d9a/daf/fdf da/d9a/dbe/f108 0 2026-03-10T10:20:07.466 INFO:tasks.workunit.client.0.vm02.stdout:1/812: link d4/da/d27/d38/d3c/ce2 d4/da/d1a/d47/d88/da8/cff 0 2026-03-10T10:20:07.466 INFO:tasks.workunit.client.0.vm02.stdout:1/813: 
chown d4/da/d1a/d47/caa 32 1 2026-03-10T10:20:07.466 INFO:tasks.workunit.client.0.vm02.stdout:2/798: creat d0/d71/d108/d65/dc4/dfa/dd3/de8/d105/f10e x:0 0 0 2026-03-10T10:20:07.466 INFO:tasks.workunit.client.0.vm02.stdout:8/763: read d1/d1c/d23/f75 [2231822,44926] 0 2026-03-10T10:20:07.475 INFO:tasks.workunit.client.0.vm02.stdout:3/776: fsync d1/f1c 0 2026-03-10T10:20:07.476 INFO:tasks.workunit.client.0.vm02.stdout:4/904: getdents d1/d75/ddd/d10e/d5e/d78/d37 0 2026-03-10T10:20:07.476 INFO:tasks.workunit.client.0.vm02.stdout:4/905: fsync d1/d32/fb3 0 2026-03-10T10:20:07.484 INFO:tasks.workunit.client.0.vm02.stdout:1/814: creat d4/d1b/f100 x:0 0 0 2026-03-10T10:20:07.488 INFO:tasks.workunit.client.0.vm02.stdout:1/815: dwrite d4/da/d1a/d47/d65/f6e [4194304,4194304] 0 2026-03-10T10:20:07.492 INFO:tasks.workunit.client.0.vm02.stdout:4/906: dread d1/d75/ddd/d10e/d5e/d78/d1a/fad [0,4194304] 0 2026-03-10T10:20:07.508 INFO:tasks.workunit.client.0.vm02.stdout:9/741: write da/d3c/d4c/d38/d7c/fc8 [812661,104121] 0 2026-03-10T10:20:07.508 INFO:tasks.workunit.client.0.vm02.stdout:9/742: truncate da/f1f 5108950 0 2026-03-10T10:20:07.510 INFO:tasks.workunit.client.1.vm05.stdout:7/800: dwrite d5/d1d/d20/fb5 [4194304,4194304] 0 2026-03-10T10:20:07.512 INFO:tasks.workunit.client.1.vm05.stdout:4/640: creat d1/d31/dc/d40/fd3 x:0 0 0 2026-03-10T10:20:07.514 INFO:tasks.workunit.client.1.vm05.stdout:4/641: chown d1/d64/c80 15 1 2026-03-10T10:20:07.519 INFO:tasks.workunit.client.1.vm05.stdout:7/801: dwrite d5/d26/fec [0,4194304] 0 2026-03-10T10:20:07.520 INFO:tasks.workunit.client.1.vm05.stdout:9/685: fsync d0/d1/d16/d6e/daf/fdb 0 2026-03-10T10:20:07.521 INFO:tasks.workunit.client.1.vm05.stdout:5/776: mkdir da/db/dee/d109 0 2026-03-10T10:20:07.522 INFO:tasks.workunit.client.0.vm02.stdout:6/737: creat d0/d8/d29/d6d/ff1 x:0 0 0 2026-03-10T10:20:07.522 INFO:tasks.workunit.client.0.vm02.stdout:3/777: read d1/d20/d52/f6c [4149314,44592] 0 2026-03-10T10:20:07.523 
INFO:tasks.workunit.client.1.vm05.stdout:3/777: link dd/d15/d4c/c72 dd/d20/d56/d5e/c111 0 2026-03-10T10:20:07.527 INFO:tasks.workunit.client.0.vm02.stdout:0/818: mknod d9/d18/d1a/d22/d24/c10c 0 2026-03-10T10:20:07.528 INFO:tasks.workunit.client.0.vm02.stdout:0/819: chown d9/d34/d3d/f94 3 1 2026-03-10T10:20:07.528 INFO:tasks.workunit.client.0.vm02.stdout:0/820: stat d9/d34/d3d/c10b 0 2026-03-10T10:20:07.533 INFO:tasks.workunit.client.1.vm05.stdout:3/778: dread dd/d39/d5f/df7/ffd [0,4194304] 0 2026-03-10T10:20:07.533 INFO:tasks.workunit.client.1.vm05.stdout:1/846: link d4/df/ca5 d4/d79/cf8 0 2026-03-10T10:20:07.534 INFO:tasks.workunit.client.1.vm05.stdout:1/847: chown d4/df/d1c/d92/f9e 116046542 1 2026-03-10T10:20:07.534 INFO:tasks.workunit.client.1.vm05.stdout:6/773: getdents dd/d36/d3f/d12/d59 0 2026-03-10T10:20:07.534 INFO:tasks.workunit.client.1.vm05.stdout:1/848: stat d4/df/de0/d82/fba 0 2026-03-10T10:20:07.546 INFO:tasks.workunit.client.0.vm02.stdout:2/799: getdents d0/d10/da6/d107 0 2026-03-10T10:20:07.550 INFO:tasks.workunit.client.0.vm02.stdout:8/764: symlink d1/d1c/d43/d5b/le4 0 2026-03-10T10:20:07.553 INFO:tasks.workunit.client.1.vm05.stdout:7/802: dread d5/d1d/d29/d3e/d8c/d96/fa6 [0,4194304] 0 2026-03-10T10:20:07.553 INFO:tasks.workunit.client.0.vm02.stdout:7/784: link d1/dc/cb1 d1/d1b/d8f/d67/da7/cf8 0 2026-03-10T10:20:07.555 INFO:tasks.workunit.client.1.vm05.stdout:9/686: mkdir d0/d1/d13/d62/de7 0 2026-03-10T10:20:07.566 INFO:tasks.workunit.client.0.vm02.stdout:5/913: write d1/db/d11/d13/d28/d37/dce/f10b [1990396,41543] 0 2026-03-10T10:20:07.578 INFO:tasks.workunit.client.1.vm05.stdout:0/776: write d1/d2/d9/d31/d12/d20/f2e [469228,88719] 0 2026-03-10T10:20:07.578 INFO:tasks.workunit.client.1.vm05.stdout:0/777: chown d1/d2/d5d 2 1 2026-03-10T10:20:07.585 INFO:tasks.workunit.client.0.vm02.stdout:0/821: rmdir d9/d18/d1a/d22/d24/d80 39 2026-03-10T10:20:07.600 INFO:tasks.workunit.client.1.vm05.stdout:8/716: write d7/d14/d24/f42 [130325,102685] 0 
2026-03-10T10:20:07.609 INFO:tasks.workunit.client.0.vm02.stdout:2/800: rename d0/d71/d108/d65/dc4/dfa/dc6 to d0/d71/d108/d65/dc4/dfa/d80/d10f 0 2026-03-10T10:20:07.613 INFO:tasks.workunit.client.0.vm02.stdout:8/765: rmdir d1/d1c/d43/d5b/d88/dac/d83/d9f 39 2026-03-10T10:20:07.614 INFO:tasks.workunit.client.1.vm05.stdout:2/741: dwrite db/d28/d4f/f8a [8388608,4194304] 0 2026-03-10T10:20:07.622 INFO:tasks.workunit.client.1.vm05.stdout:9/687: creat d0/df/d74/fe8 x:0 0 0 2026-03-10T10:20:07.624 INFO:tasks.workunit.client.0.vm02.stdout:7/785: fsync d1/dc/d55/f85 0 2026-03-10T10:20:07.639 INFO:tasks.workunit.client.0.vm02.stdout:6/738: creat d0/d8/d29/d6d/d96/de4/def/d6f/da1/ff2 x:0 0 0 2026-03-10T10:20:07.642 INFO:tasks.workunit.client.1.vm05.stdout:3/779: mknod dd/d20/c112 0 2026-03-10T10:20:07.651 INFO:tasks.workunit.client.0.vm02.stdout:0/822: fdatasync d9/d34/d3d/f94 0 2026-03-10T10:20:07.651 INFO:tasks.workunit.client.0.vm02.stdout:0/823: readlink d9/d18/d1a/d22/d24/l77 0 2026-03-10T10:20:07.652 INFO:tasks.workunit.client.0.vm02.stdout:0/824: stat d9/d18/d1a/d22/d24/d79 0 2026-03-10T10:20:07.652 INFO:tasks.workunit.client.0.vm02.stdout:0/825: fsync d9/d34/d3d/d7b/fc0 0 2026-03-10T10:20:07.658 INFO:tasks.workunit.client.0.vm02.stdout:4/907: write d1/d52/fcd [1354862,47961] 0 2026-03-10T10:20:07.658 INFO:tasks.workunit.client.0.vm02.stdout:4/908: stat d1/d75/ddd/d10e/d5e/d78/d1a/f11c 0 2026-03-10T10:20:07.661 INFO:tasks.workunit.client.1.vm05.stdout:4/642: dwrite d1/d31/d4b/f51 [0,4194304] 0 2026-03-10T10:20:07.662 INFO:tasks.workunit.client.0.vm02.stdout:1/816: link d4/da/d27/f35 d4/d2c/d53/da6/db8/f101 0 2026-03-10T10:20:07.672 INFO:tasks.workunit.client.1.vm05.stdout:1/849: rename d4/df/d1c/f23 to d4/d39/d3e/db1/ff9 0 2026-03-10T10:20:07.673 INFO:tasks.workunit.client.1.vm05.stdout:8/717: readlink d7/d14/d15/d3b/l6e 0 2026-03-10T10:20:07.675 INFO:tasks.workunit.client.0.vm02.stdout:2/801: symlink d0/d71/l110 0 2026-03-10T10:20:07.685 
INFO:tasks.workunit.client.1.vm05.stdout:7/803: getdents d5/d17/dae 0 2026-03-10T10:20:07.685 INFO:tasks.workunit.client.1.vm05.stdout:7/804: readlink d5/d1d/d20/d2d/d5d/d7a/laf 0 2026-03-10T10:20:07.685 INFO:tasks.workunit.client.1.vm05.stdout:9/688: mknod d0/d1/d13/d55/ce9 0 2026-03-10T10:20:07.696 INFO:tasks.workunit.client.1.vm05.stdout:3/780: rmdir dd/d15/d24/d2c 39 2026-03-10T10:20:07.697 INFO:tasks.workunit.client.1.vm05.stdout:3/781: readlink dd/d20/d56/d5e/dab/d9c/lf0 0 2026-03-10T10:20:07.700 INFO:tasks.workunit.client.1.vm05.stdout:5/777: write da/db/d26/d5c/f68 [1148273,49971] 0 2026-03-10T10:20:07.701 INFO:tasks.workunit.client.1.vm05.stdout:5/778: dread - da/d9a/daf/ded/ffa zero size 2026-03-10T10:20:07.703 INFO:tasks.workunit.client.0.vm02.stdout:8/766: dread d1/d1c/f42 [0,4194304] 0 2026-03-10T10:20:07.703 INFO:tasks.workunit.client.1.vm05.stdout:7/805: dread d5/d1d/d20/d91/fc9 [0,4194304] 0 2026-03-10T10:20:07.706 INFO:tasks.workunit.client.1.vm05.stdout:7/806: dread d5/d1d/d20/d35/f47 [0,4194304] 0 2026-03-10T10:20:07.708 INFO:tasks.workunit.client.0.vm02.stdout:9/743: dwrite da/d3c/f8b [0,4194304] 0 2026-03-10T10:20:07.714 INFO:tasks.workunit.client.0.vm02.stdout:3/778: dwrite d1/d8/f46 [0,4194304] 0 2026-03-10T10:20:07.735 INFO:tasks.workunit.client.1.vm05.stdout:8/718: mkdir d7/d2f/d57/de3 0 2026-03-10T10:20:07.743 INFO:tasks.workunit.client.0.vm02.stdout:4/909: chown d1/d32/fd3 27 1 2026-03-10T10:20:07.746 INFO:tasks.workunit.client.0.vm02.stdout:4/910: dwrite d1/d75/ddd/d10e/d117/f126 [4194304,4194304] 0 2026-03-10T10:20:07.777 INFO:tasks.workunit.client.0.vm02.stdout:3/779: rmdir d1/d20/db2 39 2026-03-10T10:20:07.788 INFO:tasks.workunit.client.0.vm02.stdout:6/739: truncate d0/d87/fa7 4131342 0 2026-03-10T10:20:07.798 INFO:tasks.workunit.client.1.vm05.stdout:9/689: symlink d0/df/d74/d8c/d8f/ddd/de6/lea 0 2026-03-10T10:20:07.800 INFO:tasks.workunit.client.1.vm05.stdout:5/779: mknod da/d96/dd9/c10a 0 2026-03-10T10:20:07.814 
INFO:tasks.workunit.client.1.vm05.stdout:6/774: link dd/d36/d3f/d12/fa6 dd/d36/d3f/d12/d44/d30/d4a/d6e/ff8 0 2026-03-10T10:20:07.817 INFO:tasks.workunit.client.1.vm05.stdout:0/778: dwrite d1/d2/d9/d50/d9a/fbf [0,4194304] 0 2026-03-10T10:20:07.820 INFO:tasks.workunit.client.1.vm05.stdout:1/850: creat d4/d37/ffa x:0 0 0 2026-03-10T10:20:07.826 INFO:tasks.workunit.client.1.vm05.stdout:2/742: getdents db/d28/d4f/d8b/d9a 0 2026-03-10T10:20:07.832 INFO:tasks.workunit.client.0.vm02.stdout:7/786: write d1/d1b/d49/fbf [297502,70531] 0 2026-03-10T10:20:07.838 INFO:tasks.workunit.client.1.vm05.stdout:3/782: dwrite dd/d15/d1f/f2b [0,4194304] 0 2026-03-10T10:20:07.838 INFO:tasks.workunit.client.1.vm05.stdout:5/780: mkdir da/db/d26/d70/d72/d10b 0 2026-03-10T10:20:07.838 INFO:tasks.workunit.client.1.vm05.stdout:3/783: write dd/d15/f108 [70111,92445] 0 2026-03-10T10:20:07.838 INFO:tasks.workunit.client.0.vm02.stdout:7/787: chown d1/dc/d10/d38/ce6 4031 1 2026-03-10T10:20:07.838 INFO:tasks.workunit.client.0.vm02.stdout:7/788: stat d1/dc/d16/d28/l5e 0 2026-03-10T10:20:07.838 INFO:tasks.workunit.client.0.vm02.stdout:7/789: write d1/dc/d16/d28/f73 [3505703,57602] 0 2026-03-10T10:20:07.838 INFO:tasks.workunit.client.0.vm02.stdout:7/790: chown d1/d1b/d8f/dad/d7e/lc6 35073 1 2026-03-10T10:20:07.838 INFO:tasks.workunit.client.0.vm02.stdout:7/791: chown d1/dc/d16/d28 123 1 2026-03-10T10:20:07.840 INFO:tasks.workunit.client.1.vm05.stdout:6/775: creat dd/d1b/ff9 x:0 0 0 2026-03-10T10:20:07.848 INFO:tasks.workunit.client.1.vm05.stdout:0/779: creat d1/d2/d9/d31/d12/d20/dbe/df1/f108 x:0 0 0 2026-03-10T10:20:07.862 INFO:tasks.workunit.client.1.vm05.stdout:9/690: creat d0/d1/d13/de/ddf/feb x:0 0 0 2026-03-10T10:20:07.863 INFO:tasks.workunit.client.1.vm05.stdout:5/781: dread - da/ff3 zero size 2026-03-10T10:20:07.872 INFO:tasks.workunit.client.0.vm02.stdout:9/744: write da/d3c/d4c/f29 [2078376,41749] 0 2026-03-10T10:20:07.873 INFO:tasks.workunit.client.0.vm02.stdout:9/745: chown 
da/d3c/d4c/d38/d7c/cc7 95257649 1 2026-03-10T10:20:07.873 INFO:tasks.workunit.client.0.vm02.stdout:9/746: chown da/d3c/d4c/d38/fd5 2059110 1 2026-03-10T10:20:07.874 INFO:tasks.workunit.client.0.vm02.stdout:1/817: symlink d4/da/l102 0 2026-03-10T10:20:07.881 INFO:tasks.workunit.client.0.vm02.stdout:5/914: rename d1/db/d11/d13/d28/d37/l5a to d1/l136 0 2026-03-10T10:20:07.897 INFO:tasks.workunit.client.1.vm05.stdout:3/784: fsync dd/d15/d24/d8e/dac/fd7 0 2026-03-10T10:20:07.903 INFO:tasks.workunit.client.1.vm05.stdout:9/691: dread d0/d1/d16/d6e/daf/fcd [0,4194304] 0 2026-03-10T10:20:07.904 INFO:tasks.workunit.client.0.vm02.stdout:3/780: read d1/d8/f3f [243337,105312] 0 2026-03-10T10:20:07.905 INFO:tasks.workunit.client.0.vm02.stdout:3/781: chown d1/d20/d52/dd3/fd9 4874851 1 2026-03-10T10:20:07.905 INFO:tasks.workunit.client.1.vm05.stdout:4/643: dwrite d1/d31/f7a [0,4194304] 0 2026-03-10T10:20:07.916 INFO:tasks.workunit.client.0.vm02.stdout:6/740: mknod d0/db9/cf3 0 2026-03-10T10:20:07.941 INFO:tasks.workunit.client.1.vm05.stdout:3/785: creat dd/d20/d56/d5e/f113 x:0 0 0 2026-03-10T10:20:07.942 INFO:tasks.workunit.client.1.vm05.stdout:3/786: read dd/d20/d56/db3/ff4 [1024723,95086] 0 2026-03-10T10:20:07.949 INFO:tasks.workunit.client.1.vm05.stdout:9/692: stat d0/d1/d13/de/d93/fa1 0 2026-03-10T10:20:07.949 INFO:tasks.workunit.client.1.vm05.stdout:9/693: readlink d0/d70/lb4 0 2026-03-10T10:20:07.951 INFO:tasks.workunit.client.1.vm05.stdout:8/719: dwrite d7/d14/d24/f34 [0,4194304] 0 2026-03-10T10:20:07.954 INFO:tasks.workunit.client.0.vm02.stdout:8/767: rename d1/dc7/fcd to d1/d1c/d24/dad/dbe/dda/fe5 0 2026-03-10T10:20:07.961 INFO:tasks.workunit.client.1.vm05.stdout:7/807: truncate d5/d1d/f53 967949 0 2026-03-10T10:20:07.973 INFO:tasks.workunit.client.0.vm02.stdout:6/741: mknod d0/d8/d29/d94/cf4 0 2026-03-10T10:20:07.973 INFO:tasks.workunit.client.0.vm02.stdout:6/742: stat d0/d8/d29/d6d/d96/de4/def/fcf 0 2026-03-10T10:20:07.973 INFO:tasks.workunit.client.0.vm02.stdout:6/743: 
dread - d0/d8/d29/d94/fbf zero size 2026-03-10T10:20:07.973 INFO:tasks.workunit.client.1.vm05.stdout:4/644: creat d1/d31/dc/d40/d45/daa/fd4 x:0 0 0 2026-03-10T10:20:07.973 INFO:tasks.workunit.client.1.vm05.stdout:0/780: rename d1/d2/d9/d31/d12/f1e to d1/d2/d9/d31/f109 0 2026-03-10T10:20:07.973 INFO:tasks.workunit.client.1.vm05.stdout:5/782: creat da/db/d26/f10c x:0 0 0 2026-03-10T10:20:07.973 INFO:tasks.workunit.client.1.vm05.stdout:0/781: chown d1/d2/d9/fc7 420809251 1 2026-03-10T10:20:07.976 INFO:tasks.workunit.client.0.vm02.stdout:4/911: link d1/d52/dff/c108 d1/d52/d53/dda/c12d 0 2026-03-10T10:20:07.976 INFO:tasks.workunit.client.1.vm05.stdout:9/694: mkdir d0/d1/d16/d6e/dec 0 2026-03-10T10:20:07.979 INFO:tasks.workunit.client.1.vm05.stdout:8/720: sync 2026-03-10T10:20:07.980 INFO:tasks.workunit.client.1.vm05.stdout:8/721: stat d7/d14/d62/d90/ca1 0 2026-03-10T10:20:07.985 INFO:tasks.workunit.client.1.vm05.stdout:6/776: getdents dd/d36/d3f 0 2026-03-10T10:20:07.999 INFO:tasks.workunit.client.1.vm05.stdout:2/743: dwrite db/d12/f1d [0,4194304] 0 2026-03-10T10:20:08.001 INFO:tasks.workunit.client.0.vm02.stdout:0/826: dwrite d9/d34/d3d/d65/d89/dd3/da8/fd7 [0,4194304] 0 2026-03-10T10:20:08.024 INFO:tasks.workunit.client.1.vm05.stdout:0/782: chown d1/d2/d9/d31/d12/d20/f37 6266 1 2026-03-10T10:20:08.025 INFO:tasks.workunit.client.0.vm02.stdout:1/818: rename d4/da/d1a/d22/fae to d4/d1b/f103 0 2026-03-10T10:20:08.025 INFO:tasks.workunit.client.0.vm02.stdout:1/819: stat d4/da/d1a/d47/ld5 0 2026-03-10T10:20:08.037 INFO:tasks.workunit.client.1.vm05.stdout:3/787: mknod dd/d15/d24/d2c/dd0/c114 0 2026-03-10T10:20:08.044 INFO:tasks.workunit.client.0.vm02.stdout:6/744: creat d0/d8/d29/d6d/d96/ff5 x:0 0 0 2026-03-10T10:20:08.055 INFO:tasks.workunit.client.1.vm05.stdout:1/851: write d4/d39/d3e/da0/fc9 [59550,9930] 0 2026-03-10T10:20:08.061 INFO:tasks.workunit.client.0.vm02.stdout:4/912: dread d1/d75/ddd/d10e/d5e/d78/d1a/d49/f7a [0,4194304] 0 2026-03-10T10:20:08.067 
INFO:tasks.workunit.client.0.vm02.stdout:0/827: chown d9/d18/d1a/d22/d24/d80/f90 31 1 2026-03-10T10:20:08.068 INFO:tasks.workunit.client.0.vm02.stdout:0/828: fdatasync d9/d34/d3d/d65/d89/dd3/da8/fd7 0 2026-03-10T10:20:08.072 INFO:tasks.workunit.client.0.vm02.stdout:2/802: write d0/d10/f19 [866703,112598] 0 2026-03-10T10:20:08.072 INFO:tasks.workunit.client.0.vm02.stdout:7/792: write d1/d1b/f43 [3162785,83255] 0 2026-03-10T10:20:08.072 INFO:tasks.workunit.client.0.vm02.stdout:7/793: chown d1/dc/d16 1145921 1 2026-03-10T10:20:08.081 INFO:tasks.workunit.client.0.vm02.stdout:3/782: write d1/d20/fca [14222,11967] 0 2026-03-10T10:20:08.086 INFO:tasks.workunit.client.0.vm02.stdout:9/747: dwrite da/f15 [0,4194304] 0 2026-03-10T10:20:08.090 INFO:tasks.workunit.client.1.vm05.stdout:6/777: write dd/d36/d3f/fa0 [1637071,36570] 0 2026-03-10T10:20:08.092 INFO:tasks.workunit.client.1.vm05.stdout:7/808: dwrite d5/d1d/d20/d2d/fdb [0,4194304] 0 2026-03-10T10:20:08.100 INFO:tasks.workunit.client.1.vm05.stdout:5/783: mknod da/db/c10d 0 2026-03-10T10:20:08.100 INFO:tasks.workunit.client.1.vm05.stdout:4/645: write d1/fb6 [703343,129950] 0 2026-03-10T10:20:08.115 INFO:tasks.workunit.client.0.vm02.stdout:6/745: fdatasync d0/d8/d29/d2f/d4b/f26 0 2026-03-10T10:20:08.119 INFO:tasks.workunit.client.0.vm02.stdout:1/820: write d4/da/d1a/d47/d78/fc2 [4515421,74567] 0 2026-03-10T10:20:08.130 INFO:tasks.workunit.client.1.vm05.stdout:1/852: readlink d4/lcd 0 2026-03-10T10:20:08.137 INFO:tasks.workunit.client.0.vm02.stdout:2/803: mkdir d0/d71/d108/d65/dc4/dfa/d111 0 2026-03-10T10:20:08.139 INFO:tasks.workunit.client.0.vm02.stdout:7/794: truncate d1/f6b 3564757 0 2026-03-10T10:20:08.156 INFO:tasks.workunit.client.0.vm02.stdout:3/783: write d1/d6/d8e/f8f [1034301,73618] 0 2026-03-10T10:20:08.160 INFO:tasks.workunit.client.0.vm02.stdout:0/829: dwrite d9/d18/d1a/d22/d24/d80/d49/feb [0,4194304] 0 2026-03-10T10:20:08.162 INFO:tasks.workunit.client.0.vm02.stdout:0/830: dread - d9/d18/d1a/d22/d24/d80/fe4 
zero size 2026-03-10T10:20:08.165 INFO:tasks.workunit.client.0.vm02.stdout:9/748: creat da/d3c/d4c/d2c/d96/ff2 x:0 0 0 2026-03-10T10:20:08.170 INFO:tasks.workunit.client.0.vm02.stdout:3/784: dread d1/d8/f3f [0,4194304] 0 2026-03-10T10:20:08.190 INFO:tasks.workunit.client.0.vm02.stdout:2/804: unlink d0/c2a 0 2026-03-10T10:20:08.194 INFO:tasks.workunit.client.0.vm02.stdout:7/795: creat d1/d1b/d49/ff9 x:0 0 0 2026-03-10T10:20:08.194 INFO:tasks.workunit.client.0.vm02.stdout:2/805: write d0/d10/f19 [935235,33414] 0 2026-03-10T10:20:08.194 INFO:tasks.workunit.client.0.vm02.stdout:7/796: chown d1/d1b/d49/d98/dee 1355 1 2026-03-10T10:20:08.194 INFO:tasks.workunit.client.0.vm02.stdout:2/806: fsync d0/dd4/ff2 0 2026-03-10T10:20:08.194 INFO:tasks.workunit.client.0.vm02.stdout:2/807: read - d0/d1a/d49/deb/de6/f106 zero size 2026-03-10T10:20:08.203 INFO:tasks.workunit.client.1.vm05.stdout:6/778: truncate dd/d36/d3f/d12/d44/d2a/d3d/f53 4110772 0 2026-03-10T10:20:08.210 INFO:tasks.workunit.client.0.vm02.stdout:0/831: rmdir d9/d18/d1a/d22/d24/d8e 39 2026-03-10T10:20:08.224 INFO:tasks.workunit.client.1.vm05.stdout:0/783: truncate d1/d2/d9/d31/d13/d17/fd3 522761 0 2026-03-10T10:20:08.224 INFO:tasks.workunit.client.1.vm05.stdout:8/722: link d7/d14/d3a/f50 d7/d14/d24/fe4 0 2026-03-10T10:20:08.224 INFO:tasks.workunit.client.1.vm05.stdout:8/723: stat d7/d14/d24/d3f/d4f/f98 0 2026-03-10T10:20:08.224 INFO:tasks.workunit.client.1.vm05.stdout:8/724: chown d7/d14/d3a/d49/f6b 0 1 2026-03-10T10:20:08.224 INFO:tasks.workunit.client.0.vm02.stdout:0/832: dread - d9/d34/ff9 zero size 2026-03-10T10:20:08.224 INFO:tasks.workunit.client.0.vm02.stdout:3/785: dread d1/d8/d21/d73/d78/d84/fb7 [0,4194304] 0 2026-03-10T10:20:08.224 INFO:tasks.workunit.client.0.vm02.stdout:5/915: rename d1/db/d11/d1a/f121 to d1/db/d11/d84/d40/d4f/f137 0 2026-03-10T10:20:08.224 INFO:tasks.workunit.client.0.vm02.stdout:1/821: symlink d4/da/d1a/d47/dbc/dcb/l104 0 2026-03-10T10:20:08.225 
INFO:tasks.workunit.client.0.vm02.stdout:9/749: sync 2026-03-10T10:20:08.232 INFO:tasks.workunit.client.0.vm02.stdout:8/768: write d1/d1c/d43/d5b/d88/dac/d83/d9f/fc0 [836025,92441] 0 2026-03-10T10:20:08.233 INFO:tasks.workunit.client.0.vm02.stdout:6/746: write d0/d8/d29/d6d/d96/de4/def/d6f/fc6 [983097,103465] 0 2026-03-10T10:20:08.248 INFO:tasks.workunit.client.0.vm02.stdout:4/913: creat d1/d75/ddd/d10e/d5e/f12e x:0 0 0 2026-03-10T10:20:08.250 INFO:tasks.workunit.client.1.vm05.stdout:4/646: truncate d1/d31/dc/d40/f67 3109790 0 2026-03-10T10:20:08.250 INFO:tasks.workunit.client.0.vm02.stdout:7/797: symlink d1/d1b/d8f/d67/lfa 0 2026-03-10T10:20:08.252 INFO:tasks.workunit.client.0.vm02.stdout:2/808: read d0/d1a/d49/f4f [578750,96164] 0 2026-03-10T10:20:08.259 INFO:tasks.workunit.client.1.vm05.stdout:3/788: rename dd/d15/d24/fee to dd/d20/d9e/dc0/f115 0 2026-03-10T10:20:08.259 INFO:tasks.workunit.client.0.vm02.stdout:2/809: dread d0/d1a/d49/f4f [0,4194304] 0 2026-03-10T10:20:08.264 INFO:tasks.workunit.client.0.vm02.stdout:0/833: rmdir d9/d18/d1a/d22/d24 39 2026-03-10T10:20:08.264 INFO:tasks.workunit.client.1.vm05.stdout:9/695: link d0/df/d11/f8d d0/df/d74/d8c/fed 0 2026-03-10T10:20:08.265 INFO:tasks.workunit.client.1.vm05.stdout:9/696: chown d0/d1/d13/de/ddf 42 1 2026-03-10T10:20:08.266 INFO:tasks.workunit.client.1.vm05.stdout:8/725: unlink d7/d14/d62/d90/lb6 0 2026-03-10T10:20:08.275 INFO:tasks.workunit.client.0.vm02.stdout:4/914: readlink d1/d10/dfc/l123 0 2026-03-10T10:20:08.277 INFO:tasks.workunit.client.1.vm05.stdout:4/647: unlink d1/d31/dc/d40/d63/f89 0 2026-03-10T10:20:08.278 INFO:tasks.workunit.client.0.vm02.stdout:2/810: mknod d0/d71/d108/d65/dc4/dfa/d80/c112 0 2026-03-10T10:20:08.279 INFO:tasks.workunit.client.0.vm02.stdout:2/811: write d0/d71/d108/d65/dc4/dfa/dbf/fed [5425021,59394] 0 2026-03-10T10:20:08.279 INFO:tasks.workunit.client.0.vm02.stdout:2/812: stat d0/d1a/d49/lca 0 2026-03-10T10:20:08.282 INFO:tasks.workunit.client.0.vm02.stdout:0/834: symlink 
d9/d18/d1a/d46/l10d 0 2026-03-10T10:20:08.283 INFO:tasks.workunit.client.0.vm02.stdout:8/769: write d1/f40 [4577830,87448] 0 2026-03-10T10:20:08.285 INFO:tasks.workunit.client.1.vm05.stdout:5/784: rename da/d96/df5 to da/db/d26/d70/d72/df6/d10e 0 2026-03-10T10:20:08.285 INFO:tasks.workunit.client.1.vm05.stdout:3/789: mkdir dd/d15/d1f/d116 0 2026-03-10T10:20:08.286 INFO:tasks.workunit.client.1.vm05.stdout:9/697: truncate d0/d1/d16/f5c 373270 0 2026-03-10T10:20:08.290 INFO:tasks.workunit.client.0.vm02.stdout:5/916: mknod d1/db/d11/d62/c138 0 2026-03-10T10:20:08.304 INFO:tasks.workunit.client.1.vm05.stdout:2/744: getdents db/d61/dcc 0 2026-03-10T10:20:08.304 INFO:tasks.workunit.client.1.vm05.stdout:2/745: stat db/d61/f99 0 2026-03-10T10:20:08.305 INFO:tasks.workunit.client.1.vm05.stdout:7/809: getdents d5/d17 0 2026-03-10T10:20:08.306 INFO:tasks.workunit.client.0.vm02.stdout:6/747: truncate d0/d87/fa7 3488291 0 2026-03-10T10:20:08.317 INFO:tasks.workunit.client.0.vm02.stdout:1/822: write d4/d2c/d53/da6/db8/f101 [3948033,101647] 0 2026-03-10T10:20:08.319 INFO:tasks.workunit.client.0.vm02.stdout:4/915: creat d1/d10/d88/f12f x:0 0 0 2026-03-10T10:20:08.320 INFO:tasks.workunit.client.1.vm05.stdout:1/853: write d4/d20/dbe/de8/fef [4364858,38028] 0 2026-03-10T10:20:08.321 INFO:tasks.workunit.client.1.vm05.stdout:6/779: creat dd/d36/d3f/d12/d44/d2a/d3d/ffa x:0 0 0 2026-03-10T10:20:08.327 INFO:tasks.workunit.client.1.vm05.stdout:4/648: unlink d1/d31/d76/fce 0 2026-03-10T10:20:08.329 INFO:tasks.workunit.client.1.vm05.stdout:0/784: creat d1/d2/d9/d31/d13/f10a x:0 0 0 2026-03-10T10:20:08.333 INFO:tasks.workunit.client.0.vm02.stdout:7/798: write d1/f6b [3022154,71874] 0 2026-03-10T10:20:08.344 INFO:tasks.workunit.client.1.vm05.stdout:3/790: dread dd/d20/d56/d5e/dab/fc4 [0,4194304] 0 2026-03-10T10:20:08.344 INFO:tasks.workunit.client.1.vm05.stdout:3/791: readlink dd/d39/d5c/l62 0 2026-03-10T10:20:08.344 INFO:tasks.workunit.client.1.vm05.stdout:3/792: chown dd/d15/d24/d2c/d3b/f67 
3133 1 2026-03-10T10:20:08.362 INFO:tasks.workunit.client.0.vm02.stdout:6/748: mkdir d0/d8/d29/d2f/d50/d98/df6 0 2026-03-10T10:20:08.362 INFO:tasks.workunit.client.1.vm05.stdout:7/810: rmdir d5/d1d/d20/d2d/d5d 39 2026-03-10T10:20:08.362 INFO:tasks.workunit.client.0.vm02.stdout:6/749: fsync d0/d8/d29/d6d/d96/ff5 0 2026-03-10T10:20:08.362 INFO:tasks.workunit.client.1.vm05.stdout:7/811: chown d5/d1d/d20/d3b/lb8 339511 1 2026-03-10T10:20:08.363 INFO:tasks.workunit.client.1.vm05.stdout:9/698: write d0/df/f97 [985815,36706] 0 2026-03-10T10:20:08.368 INFO:tasks.workunit.client.1.vm05.stdout:9/699: dread d0/f45 [0,4194304] 0 2026-03-10T10:20:08.370 INFO:tasks.workunit.client.0.vm02.stdout:1/823: truncate d4/da/d1a/d22/f32 2350541 0 2026-03-10T10:20:08.375 INFO:tasks.workunit.client.1.vm05.stdout:6/780: symlink dd/d36/d3f/d12/d44/daa/de4/lfb 0 2026-03-10T10:20:08.377 INFO:tasks.workunit.client.1.vm05.stdout:0/785: mkdir d1/d2/d9/d31/d12/d41/d10b 0 2026-03-10T10:20:08.377 INFO:tasks.workunit.client.1.vm05.stdout:6/781: write dd/d36/d3f/d12/d44/d2a/fb0 [1523370,21357] 0 2026-03-10T10:20:08.378 INFO:tasks.workunit.client.1.vm05.stdout:4/649: dread - d1/d3/f60 zero size 2026-03-10T10:20:08.380 INFO:tasks.workunit.client.0.vm02.stdout:8/770: mknod d1/d1c/d23/ce6 0 2026-03-10T10:20:08.380 INFO:tasks.workunit.client.1.vm05.stdout:5/785: symlink da/db/d26/d70/l10f 0 2026-03-10T10:20:08.380 INFO:tasks.workunit.client.1.vm05.stdout:0/786: fdatasync d1/d2/d9/d31/d13/d15/d4e/df6/f102 0 2026-03-10T10:20:08.380 INFO:tasks.workunit.client.0.vm02.stdout:8/771: readlink d1/d1c/d43/d6a/d7c/la7 0 2026-03-10T10:20:08.381 INFO:tasks.workunit.client.1.vm05.stdout:5/786: chown da/d9a/daf 0 1 2026-03-10T10:20:08.381 INFO:tasks.workunit.client.0.vm02.stdout:3/786: getdents d1/d8/d21/d7d 0 2026-03-10T10:20:08.386 INFO:tasks.workunit.client.1.vm05.stdout:2/746: dwrite db/d28/d4f/d59/f7e [4194304,4194304] 0 2026-03-10T10:20:08.390 INFO:tasks.workunit.client.1.vm05.stdout:1/854: dwrite d4/d3d/d6e/faf 
[0,4194304] 0 2026-03-10T10:20:08.393 INFO:tasks.workunit.client.1.vm05.stdout:1/855: chown d4/fe9 341209 1 2026-03-10T10:20:08.394 INFO:tasks.workunit.client.1.vm05.stdout:1/856: write d4/df/d1c/fdb [2694212,125943] 0 2026-03-10T10:20:08.396 INFO:tasks.workunit.client.0.vm02.stdout:1/824: dread d4/d2c/fc7 [0,4194304] 0 2026-03-10T10:20:08.407 INFO:tasks.workunit.client.1.vm05.stdout:1/857: dread d4/d39/d3e/f7d [0,4194304] 0 2026-03-10T10:20:08.408 INFO:tasks.workunit.client.1.vm05.stdout:7/812: fsync d5/d17/d66/f94 0 2026-03-10T10:20:08.409 INFO:tasks.workunit.client.0.vm02.stdout:9/750: getdents da/d3c/d53 0 2026-03-10T10:20:08.415 INFO:tasks.workunit.client.0.vm02.stdout:6/750: creat d0/d8/d29/d52/de8/db2/dbb/ff7 x:0 0 0 2026-03-10T10:20:08.423 INFO:tasks.workunit.client.0.vm02.stdout:0/835: link d9/d18/d1a/d3c/le7 d9/d18/d1a/d46/l10e 0 2026-03-10T10:20:08.429 INFO:tasks.workunit.client.1.vm05.stdout:4/650: truncate d1/d64/da9/fc0 187378 0 2026-03-10T10:20:08.429 INFO:tasks.workunit.client.1.vm05.stdout:0/787: readlink d1/d2/d9/d31/l29 0 2026-03-10T10:20:08.429 INFO:tasks.workunit.client.0.vm02.stdout:3/787: rmdir d1/d6/d8b 39 2026-03-10T10:20:08.429 INFO:tasks.workunit.client.0.vm02.stdout:1/825: mkdir d4/da/d1a/d5b/d93/d105 0 2026-03-10T10:20:08.432 INFO:tasks.workunit.client.0.vm02.stdout:9/751: creat da/de5/ff3 x:0 0 0 2026-03-10T10:20:08.439 INFO:tasks.workunit.client.0.vm02.stdout:6/751: rmdir d0/d8/d29/d2f/d4b 39 2026-03-10T10:20:08.440 INFO:tasks.workunit.client.1.vm05.stdout:7/813: mkdir d5/d1d/d20/d2d/d80/dd6/df9 0 2026-03-10T10:20:08.441 INFO:tasks.workunit.client.0.vm02.stdout:2/813: getdents d0 0 2026-03-10T10:20:08.447 INFO:tasks.workunit.client.1.vm05.stdout:8/726: rename d7/d14/d24/d3f/d6a/d8a/l97 to d7/d2f/le5 0 2026-03-10T10:20:08.447 INFO:tasks.workunit.client.1.vm05.stdout:6/782: mknod dd/d36/d3f/d12/d58/db8/cfc 0 2026-03-10T10:20:08.448 INFO:tasks.workunit.client.0.vm02.stdout:8/772: creat d1/dc7/dd2/fe7 x:0 0 0 2026-03-10T10:20:08.458 
INFO:tasks.workunit.client.1.vm05.stdout:2/747: creat db/d28/dd4/fee x:0 0 0 2026-03-10T10:20:08.460 INFO:tasks.workunit.client.0.vm02.stdout:1/826: rename d4/d1b/f5d to d4/da/d1a/d47/d88/da8/f106 0 2026-03-10T10:20:08.471 INFO:tasks.workunit.client.1.vm05.stdout:7/814: truncate d5/d1d/f7d 72593 0 2026-03-10T10:20:08.472 INFO:tasks.workunit.client.1.vm05.stdout:7/815: stat d5/d17/d66/lf4 0 2026-03-10T10:20:08.473 INFO:tasks.workunit.client.1.vm05.stdout:7/816: truncate d5/d1d/d20/d91/fbd 4806423 0 2026-03-10T10:20:08.476 INFO:tasks.workunit.client.0.vm02.stdout:5/917: dwrite d1/f32 [0,4194304] 0 2026-03-10T10:20:08.483 INFO:tasks.workunit.client.1.vm05.stdout:3/793: rename dd/d15/lc6 to dd/d20/d56/d5e/l117 0 2026-03-10T10:20:08.484 INFO:tasks.workunit.client.0.vm02.stdout:2/814: symlink d0/d71/d108/d65/dc4/dfa/l113 0 2026-03-10T10:20:08.488 INFO:tasks.workunit.client.1.vm05.stdout:9/700: dwrite d0/d70/fb6 [0,4194304] 0 2026-03-10T10:20:08.489 INFO:tasks.workunit.client.0.vm02.stdout:2/815: dwrite d0/d71/d108/d65/dc4/dfa/dd3/de8/d105/f10e [0,4194304] 0 2026-03-10T10:20:08.497 INFO:tasks.workunit.client.1.vm05.stdout:3/794: dread fa [0,4194304] 0 2026-03-10T10:20:08.506 INFO:tasks.workunit.client.1.vm05.stdout:8/727: fdatasync d7/d2f/f7f 0 2026-03-10T10:20:08.507 INFO:tasks.workunit.client.0.vm02.stdout:7/799: getdents d1/d1b/d49 0 2026-03-10T10:20:08.508 INFO:tasks.workunit.client.0.vm02.stdout:7/800: write d1/d1b/d8f/dad/f75 [107908,98639] 0 2026-03-10T10:20:08.508 INFO:tasks.workunit.client.0.vm02.stdout:7/801: read d1/dc/fbc [3165428,26364] 0 2026-03-10T10:20:08.520 INFO:tasks.workunit.client.0.vm02.stdout:0/836: getdents d9/d18/d1a/d3c 0 2026-03-10T10:20:08.520 INFO:tasks.workunit.client.0.vm02.stdout:4/916: dwrite d1/d52/d53/f79 [0,4194304] 0 2026-03-10T10:20:08.525 INFO:tasks.workunit.client.1.vm05.stdout:0/788: mknod d1/d2/d9/d31/d13/d17/da1/df5/c10c 0 2026-03-10T10:20:08.525 INFO:tasks.workunit.client.0.vm02.stdout:8/773: truncate d1/f73 421628 0 
2026-03-10T10:20:08.526 INFO:tasks.workunit.client.0.vm02.stdout:8/774: write d1/f40 [4421950,11885] 0 2026-03-10T10:20:08.527 INFO:tasks.workunit.client.0.vm02.stdout:8/775: write d1/d2/f36 [4578025,105026] 0 2026-03-10T10:20:08.535 INFO:tasks.workunit.client.1.vm05.stdout:1/858: write d4/d3d/d6e/fd1 [42341,33323] 0 2026-03-10T10:20:08.557 INFO:tasks.workunit.client.1.vm05.stdout:8/728: truncate d7/d14/d3a/d49/f54 1153496 0 2026-03-10T10:20:08.561 INFO:tasks.workunit.client.0.vm02.stdout:7/802: chown d1/d1b/d8f/d67/cab 850554203 1 2026-03-10T10:20:08.565 INFO:tasks.workunit.client.1.vm05.stdout:2/748: link db/d1c/d40/d62/d85/fd1 db/d12/d74/fef 0 2026-03-10T10:20:08.569 INFO:tasks.workunit.client.0.vm02.stdout:4/917: creat d1/d75/ddd/d10e/d5e/d78/d1a/d49/f130 x:0 0 0 2026-03-10T10:20:08.572 INFO:tasks.workunit.client.1.vm05.stdout:1/859: truncate d4/d39/d3e/f7d 3955826 0 2026-03-10T10:20:08.573 INFO:tasks.workunit.client.1.vm05.stdout:8/729: creat d7/d14/d24/d3f/d6a/fe6 x:0 0 0 2026-03-10T10:20:08.573 INFO:tasks.workunit.client.0.vm02.stdout:3/788: creat d1/d6/f106 x:0 0 0 2026-03-10T10:20:08.578 INFO:tasks.workunit.client.0.vm02.stdout:3/789: dwrite d1/d8/d21/f4c [4194304,4194304] 0 2026-03-10T10:20:08.589 INFO:tasks.workunit.client.0.vm02.stdout:1/827: symlink d4/da/d1a/l107 0 2026-03-10T10:20:08.589 INFO:tasks.workunit.client.1.vm05.stdout:0/789: symlink d1/d2/d9/d31/d13/l10d 0 2026-03-10T10:20:08.589 INFO:tasks.workunit.client.0.vm02.stdout:5/918: link d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fc3 d1/d9c/f139 0 2026-03-10T10:20:08.590 INFO:tasks.workunit.client.1.vm05.stdout:7/817: link d5/d1d/c6e d5/d1d/d29/d3e/d8c/d82/cfa 0 2026-03-10T10:20:08.590 INFO:tasks.workunit.client.1.vm05.stdout:5/787: rename da/db/dee/d38/c60 to da/c110 0 2026-03-10T10:20:08.591 INFO:tasks.workunit.client.1.vm05.stdout:1/860: mknod d4/df/d76/cfb 0 2026-03-10T10:20:08.592 INFO:tasks.workunit.client.1.vm05.stdout:1/861: read d4/df/d76/fc2 [225134,114697] 0 2026-03-10T10:20:08.597 
INFO:tasks.workunit.client.1.vm05.stdout:1/862: dwrite d4/df/d1c/d92/f97 [4194304,4194304] 0 2026-03-10T10:20:08.599 INFO:tasks.workunit.client.0.vm02.stdout:7/803: creat d1/d1b/d8e/ffb x:0 0 0 2026-03-10T10:20:08.607 INFO:tasks.workunit.client.0.vm02.stdout:9/752: write da/d3c/d4c/d38/d82/d8c/f98 [1707021,24459] 0 2026-03-10T10:20:08.614 INFO:tasks.workunit.client.1.vm05.stdout:7/818: creat d5/dd/ffb x:0 0 0 2026-03-10T10:20:08.616 INFO:tasks.workunit.client.1.vm05.stdout:7/819: chown d5/d1d/d29/d3e/d8c/c8a 0 1 2026-03-10T10:20:08.616 INFO:tasks.workunit.client.0.vm02.stdout:3/790: symlink d1/d8/d21/d7d/l107 0 2026-03-10T10:20:08.619 INFO:tasks.workunit.client.0.vm02.stdout:1/828: unlink d4/da/d1a/d47/d78/fb4 0 2026-03-10T10:20:08.624 INFO:tasks.workunit.client.0.vm02.stdout:5/919: dread d1/db/d11/d16/d48/dcf/f112 [0,4194304] 0 2026-03-10T10:20:08.628 INFO:tasks.workunit.client.0.vm02.stdout:7/804: mkdir d1/dc/d16/dfc 0 2026-03-10T10:20:08.629 INFO:tasks.workunit.client.0.vm02.stdout:4/918: sync 2026-03-10T10:20:08.633 INFO:tasks.workunit.client.0.vm02.stdout:9/753: mkdir da/d3c/d4c/d2c/d34/d35/df4 0 2026-03-10T10:20:08.633 INFO:tasks.workunit.client.1.vm05.stdout:1/863: mkdir d4/df/d1c/d53/daa/dfc 0 2026-03-10T10:20:08.634 INFO:tasks.workunit.client.0.vm02.stdout:0/837: mknod d9/d18/d1a/d22/d24/d8e/d91/c10f 0 2026-03-10T10:20:08.635 INFO:tasks.workunit.client.1.vm05.stdout:6/783: dwrite dd/d36/d3f/d12/d44/d2a/d3d/d3e/f73 [0,4194304] 0 2026-03-10T10:20:08.640 INFO:tasks.workunit.client.0.vm02.stdout:7/805: sync 2026-03-10T10:20:08.640 INFO:tasks.workunit.client.0.vm02.stdout:1/829: sync 2026-03-10T10:20:08.640 INFO:tasks.workunit.client.0.vm02.stdout:9/754: sync 2026-03-10T10:20:08.642 INFO:tasks.workunit.client.1.vm05.stdout:6/784: dwrite dd/d36/d3f/d12/d44/d30/d4a/ff7 [0,4194304] 0 2026-03-10T10:20:08.642 INFO:tasks.workunit.client.0.vm02.stdout:7/806: sync 2026-03-10T10:20:08.643 INFO:tasks.workunit.client.0.vm02.stdout:7/807: chown d1/dc/d60/c71 7811 1 
2026-03-10T10:20:08.652 INFO:tasks.workunit.client.1.vm05.stdout:2/749: creat db/d12/ff0 x:0 0 0 2026-03-10T10:20:08.665 INFO:tasks.workunit.client.1.vm05.stdout:7/820: dread d5/d1d/d20/d3b/f8d [0,4194304] 0 2026-03-10T10:20:08.666 INFO:tasks.workunit.client.1.vm05.stdout:7/821: write d5/d1d/d20/d91/fbd [1937095,55873] 0 2026-03-10T10:20:08.677 INFO:tasks.workunit.client.1.vm05.stdout:5/788: creat da/db/d26/d70/d72/d10b/f111 x:0 0 0 2026-03-10T10:20:08.678 INFO:tasks.workunit.client.0.vm02.stdout:1/830: symlink d4/d2c/d53/da6/l108 0 2026-03-10T10:20:08.680 INFO:tasks.workunit.client.0.vm02.stdout:9/755: symlink da/d3c/d4c/db1/de4/lf5 0 2026-03-10T10:20:08.681 INFO:tasks.workunit.client.0.vm02.stdout:9/756: chown da/d3c/d4c/d2c/d34/l4e 34 1 2026-03-10T10:20:08.684 INFO:tasks.workunit.client.1.vm05.stdout:6/785: mknod dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/cfd 0 2026-03-10T10:20:08.684 INFO:tasks.workunit.client.0.vm02.stdout:7/808: stat d1/d1b/d8f/dad/d7e/ld5 0 2026-03-10T10:20:08.685 INFO:tasks.workunit.client.0.vm02.stdout:5/920: creat d1/db/d11/d84/d40/d4f/d5f/d6d/d71/d114/f13a x:0 0 0 2026-03-10T10:20:08.685 INFO:tasks.workunit.client.1.vm05.stdout:6/786: dread - dd/d36/d3f/d12/d44/d2a/d3d/d48/fb2 zero size 2026-03-10T10:20:08.689 INFO:tasks.workunit.client.0.vm02.stdout:6/752: write d0/d8/d29/d6d/f3d [3780996,111249] 0 2026-03-10T10:20:08.698 INFO:tasks.workunit.client.1.vm05.stdout:7/822: readlink d5/d1d/d20/d2d/d5d/d7a/lf2 0 2026-03-10T10:20:08.698 INFO:tasks.workunit.client.0.vm02.stdout:0/838: mknod d9/d34/d3d/d65/da2/c110 0 2026-03-10T10:20:08.703 INFO:tasks.workunit.client.1.vm05.stdout:4/651: rename d1/d31/c2c to d1/d31/dc/d40/cd5 0 2026-03-10T10:20:08.705 INFO:tasks.workunit.client.1.vm05.stdout:5/789: sync 2026-03-10T10:20:08.707 INFO:tasks.workunit.client.0.vm02.stdout:1/831: mknod d4/d1b/c109 0 2026-03-10T10:20:08.712 INFO:tasks.workunit.client.0.vm02.stdout:2/816: dwrite d0/d8c/fab [0,4194304] 0 2026-03-10T10:20:08.713 
INFO:tasks.workunit.client.1.vm05.stdout:1/864: mknod d4/cfd 0 2026-03-10T10:20:08.728 INFO:tasks.workunit.client.1.vm05.stdout:6/787: dread - dd/d1b/fe6 zero size 2026-03-10T10:20:08.732 INFO:tasks.workunit.client.1.vm05.stdout:7/823: truncate d5/d1d/d20/d91/fc9 2884857 0 2026-03-10T10:20:08.732 INFO:tasks.workunit.client.1.vm05.stdout:6/788: dwrite dd/d36/d3f/d12/d44/d30/f9e [0,4194304] 0 2026-03-10T10:20:08.733 INFO:tasks.workunit.client.1.vm05.stdout:7/824: readlink d5/d17/l43 0 2026-03-10T10:20:08.751 INFO:tasks.workunit.client.1.vm05.stdout:3/795: rename dd/d39/d66/fad to dd/d15/d24/d2c/d107/f118 0 2026-03-10T10:20:08.757 INFO:tasks.workunit.client.0.vm02.stdout:5/921: unlink d1/db/d11/l9d 0 2026-03-10T10:20:08.759 INFO:tasks.workunit.client.1.vm05.stdout:9/701: truncate d0/df/d11/f84 974030 0 2026-03-10T10:20:08.759 INFO:tasks.workunit.client.1.vm05.stdout:9/702: readlink d0/df/d11/l88 0 2026-03-10T10:20:08.761 INFO:tasks.workunit.client.0.vm02.stdout:8/776: write d1/d1c/d43/d6a/da8/fbf [856398,125699] 0 2026-03-10T10:20:08.762 INFO:tasks.workunit.client.0.vm02.stdout:8/777: write d1/d1c/d43/d6a/da8/d56/fd9 [757711,76995] 0 2026-03-10T10:20:08.762 INFO:tasks.workunit.client.1.vm05.stdout:4/652: mknod d1/d64/da9/dae/cd6 0 2026-03-10T10:20:08.763 INFO:tasks.workunit.client.0.vm02.stdout:8/778: chown d1/d1c/d23/d25/cbd 1 1 2026-03-10T10:20:08.772 INFO:tasks.workunit.client.1.vm05.stdout:5/790: symlink da/d96/l112 0 2026-03-10T10:20:08.775 INFO:tasks.workunit.client.0.vm02.stdout:1/832: mknod d4/d2c/d53/da6/db8/dd9/c10a 0 2026-03-10T10:20:08.781 INFO:tasks.workunit.client.0.vm02.stdout:9/757: mkdir da/d3c/d4c/df6 0 2026-03-10T10:20:08.785 INFO:tasks.workunit.client.1.vm05.stdout:6/789: truncate dd/d36/d3f/f61 2215951 0 2026-03-10T10:20:08.785 INFO:tasks.workunit.client.1.vm05.stdout:8/730: write d7/d14/d15/d3b/da0/fc8 [858293,98433] 0 2026-03-10T10:20:08.786 INFO:tasks.workunit.client.1.vm05.stdout:6/790: dread - dd/d36/d3f/fbe zero size 2026-03-10T10:20:08.786 
INFO:tasks.workunit.client.1.vm05.stdout:8/731: dread - d7/d14/d24/d3f/d4f/fbf zero size 2026-03-10T10:20:08.787 INFO:tasks.workunit.client.1.vm05.stdout:0/790: write d1/d2/d9/d31/d13/d15/d4e/d8a/fae [1551226,88804] 0 2026-03-10T10:20:08.789 INFO:tasks.workunit.client.1.vm05.stdout:3/796: rmdir dd/d15/d1f/dae 39 2026-03-10T10:20:08.795 INFO:tasks.workunit.client.1.vm05.stdout:4/653: creat d1/d31/d76/fd7 x:0 0 0 2026-03-10T10:20:08.797 INFO:tasks.workunit.client.0.vm02.stdout:4/919: write d1/d75/ddd/d10e/d5e/d78/d1a/d49/f5c [1899871,593] 0 2026-03-10T10:20:08.807 INFO:tasks.workunit.client.1.vm05.stdout:8/732: fdatasync d7/d2f/d57/fae 0 2026-03-10T10:20:08.811 INFO:tasks.workunit.client.1.vm05.stdout:6/791: truncate dd/d36/d3f/d12/d44/d2a/d3d/d48/fe9 329554 0 2026-03-10T10:20:08.814 INFO:tasks.workunit.client.1.vm05.stdout:0/791: rmdir d1/d2/d9/d31/d12/d20/dbe 39 2026-03-10T10:20:08.816 INFO:tasks.workunit.client.1.vm05.stdout:3/797: creat dd/d15/d24/d8e/dac/f119 x:0 0 0 2026-03-10T10:20:08.817 INFO:tasks.workunit.client.1.vm05.stdout:2/750: write db/d1c/f3d [436174,122313] 0 2026-03-10T10:20:08.818 INFO:tasks.workunit.client.1.vm05.stdout:0/792: sync 2026-03-10T10:20:08.818 INFO:tasks.workunit.client.1.vm05.stdout:4/654: symlink d1/d31/d72/ld8 0 2026-03-10T10:20:08.819 INFO:tasks.workunit.client.1.vm05.stdout:4/655: read d1/d31/f7a [2195822,119992] 0 2026-03-10T10:20:08.819 INFO:tasks.workunit.client.1.vm05.stdout:5/791: fsync da/db/f7b 0 2026-03-10T10:20:08.827 INFO:tasks.workunit.client.0.vm02.stdout:3/791: rmdir d1/d6/d8b/de3 0 2026-03-10T10:20:08.834 INFO:tasks.workunit.client.0.vm02.stdout:3/792: dread d1/f50 [0,4194304] 0 2026-03-10T10:20:08.836 INFO:tasks.workunit.client.1.vm05.stdout:8/733: mknod d7/d14/d15/da7/ce7 0 2026-03-10T10:20:08.837 INFO:tasks.workunit.client.0.vm02.stdout:7/809: mkdir d1/dc/d55/d9c/dfd 0 2026-03-10T10:20:08.848 INFO:tasks.workunit.client.0.vm02.stdout:0/839: dwrite d9/d34/d3d/d67/fc3 [0,4194304] 0 2026-03-10T10:20:08.848 
INFO:tasks.workunit.client.0.vm02.stdout:0/840: chown d9/d34/d3d/d65/f84 94426 1 2026-03-10T10:20:08.857 INFO:tasks.workunit.client.1.vm05.stdout:3/798: rename dd/d20/d56/d5e/dab/f9b to dd/d39/d66/f11a 0 2026-03-10T10:20:08.859 INFO:tasks.workunit.client.1.vm05.stdout:7/825: write d5/d1d/d29/d3e/f65 [495897,12850] 0 2026-03-10T10:20:08.865 INFO:tasks.workunit.client.1.vm05.stdout:9/703: write d0/d1/d13/de/d93/fbd [19789,66917] 0 2026-03-10T10:20:08.868 INFO:tasks.workunit.client.1.vm05.stdout:1/865: dwrite d4/df/d1c/d53/daa/fa9 [0,4194304] 0 2026-03-10T10:20:08.871 INFO:tasks.workunit.client.0.vm02.stdout:8/779: read d1/d1c/d43/f46 [929692,33421] 0 2026-03-10T10:20:08.871 INFO:tasks.workunit.client.0.vm02.stdout:8/780: stat d1/d1c/d23/ce6 0 2026-03-10T10:20:08.871 INFO:tasks.workunit.client.0.vm02.stdout:1/833: truncate d4/d2c/d53/da6/fab 3949851 0 2026-03-10T10:20:08.885 INFO:tasks.workunit.client.0.vm02.stdout:5/922: write d1/fdd [2697143,16549] 0 2026-03-10T10:20:08.892 INFO:tasks.workunit.client.0.vm02.stdout:3/793: creat d1/d8/d21/d7d/f108 x:0 0 0 2026-03-10T10:20:08.894 INFO:tasks.workunit.client.1.vm05.stdout:9/704: chown d0/d1/c5e 247805905 1 2026-03-10T10:20:08.897 INFO:tasks.workunit.client.1.vm05.stdout:3/799: dread dd/d20/d94/fa9 [0,4194304] 0 2026-03-10T10:20:08.903 INFO:tasks.workunit.client.1.vm05.stdout:7/826: dread d5/dd/f2f [0,4194304] 0 2026-03-10T10:20:08.904 INFO:tasks.workunit.client.1.vm05.stdout:0/793: getdents d1/d2/d39/d6e/d95 0 2026-03-10T10:20:08.904 INFO:tasks.workunit.client.0.vm02.stdout:0/841: dread - d9/d18/d1a/d22/d24/d8e/fd0 zero size 2026-03-10T10:20:08.905 INFO:tasks.workunit.client.0.vm02.stdout:0/842: chown d9/d18/d1a/d22/d24/d80/d49/feb 837635 1 2026-03-10T10:20:08.908 INFO:tasks.workunit.client.0.vm02.stdout:6/753: creat d0/d8/d29/d6d/ff8 x:0 0 0 2026-03-10T10:20:08.909 INFO:tasks.workunit.client.1.vm05.stdout:1/866: fsync d4/d39/d3e/fd7 0 2026-03-10T10:20:08.923 INFO:tasks.workunit.client.0.vm02.stdout:1/834: rmdir 
d4/da/d1a/d5b/d93 39 2026-03-10T10:20:08.925 INFO:tasks.workunit.client.0.vm02.stdout:9/758: truncate da/d3c/d4c/d38/d4a/f54 743835 0 2026-03-10T10:20:08.928 INFO:tasks.workunit.client.1.vm05.stdout:6/792: dwrite dd/d36/d3f/d12/d44/d2a/d3d/f53 [4194304,4194304] 0 2026-03-10T10:20:08.939 INFO:tasks.workunit.client.0.vm02.stdout:7/810: write d1/d1b/d8f/dad/d7e/fc0 [599248,127027] 0 2026-03-10T10:20:08.947 INFO:tasks.workunit.client.0.vm02.stdout:3/794: fdatasync d1/d8/d21/d73/d78/d84/fe5 0 2026-03-10T10:20:08.948 INFO:tasks.workunit.client.0.vm02.stdout:3/795: stat d1/d20/d52/l8c 0 2026-03-10T10:20:08.949 INFO:tasks.workunit.client.0.vm02.stdout:3/796: truncate d1/d20/ff7 796086 0 2026-03-10T10:20:08.950 INFO:tasks.workunit.client.0.vm02.stdout:8/781: dwrite d1/d1c/d43/f7a [0,4194304] 0 2026-03-10T10:20:08.958 INFO:tasks.workunit.client.0.vm02.stdout:4/920: write d1/d75/ddd/d10e/d5e/d78/d44/de7/fed [551818,80002] 0 2026-03-10T10:20:08.958 INFO:tasks.workunit.client.1.vm05.stdout:2/751: truncate db/d28/d4f/d59/da4/d6c/fd0 3330876 0 2026-03-10T10:20:08.959 INFO:tasks.workunit.client.0.vm02.stdout:5/923: write d1/db/d11/d16/d48/dcf/f112 [1181308,119550] 0 2026-03-10T10:20:08.963 INFO:tasks.workunit.client.1.vm05.stdout:8/734: dwrite d7/d14/d3a/d49/f54 [0,4194304] 0 2026-03-10T10:20:08.969 INFO:tasks.workunit.client.0.vm02.stdout:2/817: getdents d0/d71/d108/d65/dc4/dfa/d80 0 2026-03-10T10:20:08.976 INFO:tasks.workunit.client.1.vm05.stdout:0/794: creat d1/d2/dc6/f10e x:0 0 0 2026-03-10T10:20:08.983 INFO:tasks.workunit.client.0.vm02.stdout:7/811: rename d1/dc/d16/d28/l5e to d1/d1b/d49/d98/lfe 0 2026-03-10T10:20:08.983 INFO:tasks.workunit.client.1.vm05.stdout:1/867: fdatasync d4/d39/d3e/db1/db8/fe1 0 2026-03-10T10:20:08.989 INFO:tasks.workunit.client.1.vm05.stdout:5/792: creat da/d9a/dc7/db4/f113 x:0 0 0 2026-03-10T10:20:08.990 INFO:tasks.workunit.client.1.vm05.stdout:6/793: read dd/d36/d3f/d12/d44/daa/fae [484340,129652] 0 2026-03-10T10:20:08.992 
INFO:tasks.workunit.client.1.vm05.stdout:2/752: creat db/d61/dcc/ff1 x:0 0 0 2026-03-10T10:20:08.992 INFO:tasks.workunit.client.0.vm02.stdout:4/921: mkdir d1/d75/ddd/d10e/d7e/d131 0 2026-03-10T10:20:08.994 INFO:tasks.workunit.client.0.vm02.stdout:5/924: truncate d1/db/d11/d84/d40/d4f/d5f/f73 2135658 0 2026-03-10T10:20:08.996 INFO:tasks.workunit.client.0.vm02.stdout:1/835: mkdir d4/da/d1a/d47/d88/d10b 0 2026-03-10T10:20:09.003 INFO:tasks.workunit.client.1.vm05.stdout:0/795: truncate d1/d2/d39/d6e/dc0/fcd 188054 0 2026-03-10T10:20:09.004 INFO:tasks.workunit.client.1.vm05.stdout:4/656: link d1/d3/c14 d1/d31/d76/dac/db8/cd9 0 2026-03-10T10:20:09.005 INFO:tasks.workunit.client.1.vm05.stdout:4/657: write d1/d31/d4b/f51 [3582807,50924] 0 2026-03-10T10:20:09.006 INFO:tasks.workunit.client.1.vm05.stdout:4/658: readlink d1/d31/l32 0 2026-03-10T10:20:09.006 INFO:tasks.workunit.client.0.vm02.stdout:8/782: mknod d1/d1c/d43/ce8 0 2026-03-10T10:20:09.007 INFO:tasks.workunit.client.1.vm05.stdout:5/793: creat da/d96/dd9/f114 x:0 0 0 2026-03-10T10:20:09.015 INFO:tasks.workunit.client.1.vm05.stdout:6/794: fsync dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/fe1 0 2026-03-10T10:20:09.016 INFO:tasks.workunit.client.1.vm05.stdout:2/753: creat db/d28/d4f/d59/da4/d81/da7/ff2 x:0 0 0 2026-03-10T10:20:09.017 INFO:tasks.workunit.client.1.vm05.stdout:2/754: chown db/d28/d4f/d59/d94/d95/lad 1255638 1 2026-03-10T10:20:09.018 INFO:tasks.workunit.client.1.vm05.stdout:8/735: mknod d7/d14/d3a/d49/ce8 0 2026-03-10T10:20:09.019 INFO:tasks.workunit.client.1.vm05.stdout:8/736: write d7/d14/f9b [1027852,104488] 0 2026-03-10T10:20:09.023 INFO:tasks.workunit.client.1.vm05.stdout:2/755: dwrite db/d28/d4f/f8a [4194304,4194304] 0 2026-03-10T10:20:09.029 INFO:tasks.workunit.client.0.vm02.stdout:6/754: dread d0/d8/d9/f13 [0,4194304] 0 2026-03-10T10:20:09.029 INFO:tasks.workunit.client.1.vm05.stdout:3/800: write dd/d15/f1b [721605,14590] 0 2026-03-10T10:20:09.032 INFO:tasks.workunit.client.1.vm05.stdout:9/705: dwrite 
d0/f1e [4194304,4194304] 0
2026-03-10T10:20:09.038 INFO:tasks.workunit.client.0.vm02.stdout:9/759: write da/d3c/d4c/f17 [5523545,10637] 0
2026-03-10T10:20:09.043 INFO:tasks.workunit.client.0.vm02.stdout:1/836: fdatasync d4/dc3/fd8 0
2026-03-10T10:20:09.059 INFO:tasks.workunit.client.0.vm02.stdout:8/783: creat d1/d1c/d24/dad/dbe/fe9 x:0 0 0
2026-03-10T10:20:09.059 INFO:tasks.workunit.client.0.vm02.stdout:3/797: write d1/d6/f49 [244632,93877] 0
2026-03-10T10:20:09.064 INFO:tasks.workunit.client.0.vm02.stdout:4/922: creat d1/d75/ddd/d10e/d7e/d131/f132 x:0 0 0
2026-03-10T10:20:09.067 INFO:tasks.workunit.client.1.vm05.stdout:6/795: mknod dd/d1b/cfe 0
2026-03-10T10:20:09.068 INFO:tasks.workunit.client.0.vm02.stdout:0/843: getdents d9/d34/d3d/d7b 0
2026-03-10T10:20:09.080 INFO:tasks.workunit.client.1.vm05.stdout:7/827: dwrite d5/d1d/f7d [0,4194304] 0
2026-03-10T10:20:09.082 INFO:tasks.workunit.client.0.vm02.stdout:1/837: chown d4/da/d1a/d5b/d93/de8 61809 1
2026-03-10T10:20:09.082 INFO:tasks.workunit.client.1.vm05.stdout:2/756: rmdir db/d28/d4f 39
2026-03-10T10:20:09.083 INFO:tasks.workunit.client.0.vm02.stdout:9/760: dread da/d3c/fc0 [0,4194304] 0
2026-03-10T10:20:09.084 INFO:tasks.workunit.client.1.vm05.stdout:9/706: unlink d0/df/d11/fcb 0
2026-03-10T10:20:09.084 INFO:tasks.workunit.client.1.vm05.stdout:9/707: dread - d0/d1/d16/fca zero size
2026-03-10T10:20:09.085 INFO:tasks.workunit.client.1.vm05.stdout:1/868: creat d4/d37/ffe x:0 0 0
2026-03-10T10:20:09.089 INFO:tasks.workunit.client.1.vm05.stdout:4/659: mknod d1/d64/da9/dae/dcc/cda 0
2026-03-10T10:20:09.090 INFO:tasks.workunit.client.0.vm02.stdout:4/923: mknod d1/d75/ddd/d10e/d7e/c133 0
2026-03-10T10:20:09.104 INFO:tasks.workunit.client.0.vm02.stdout:7/812: write d1/d1b/d8e/f9d [743434,86108] 0
2026-03-10T10:20:09.113 INFO:tasks.workunit.client.0.vm02.stdout:3/798: dread d1/d8/d86/f87 [0,4194304] 0
2026-03-10T10:20:09.119 INFO:tasks.workunit.client.0.vm02.stdout:5/925: dwrite d1/db/d11/d84/d40/fb3 [0,4194304] 0
2026-03-10T10:20:09.127 INFO:tasks.workunit.client.0.vm02.stdout:9/761: symlink da/d3c/d4c/lf7 0
2026-03-10T10:20:09.133 INFO:tasks.workunit.client.0.vm02.stdout:2/818: dwrite d0/d10/d81/f9b [0,4194304] 0
2026-03-10T10:20:09.138 INFO:tasks.workunit.client.0.vm02.stdout:7/813: fdatasync d1/f5 0
2026-03-10T10:20:09.139 INFO:tasks.workunit.client.0.vm02.stdout:7/814: chown d1/d1b/d49/l6c 2642529 1
2026-03-10T10:20:09.141 INFO:tasks.workunit.client.1.vm05.stdout:5/794: write da/db/d28/f8d [1458553,30650] 0
2026-03-10T10:20:09.146 INFO:tasks.workunit.client.0.vm02.stdout:6/755: dwrite d0/d8/d29/d94/fb4 [0,4194304] 0
2026-03-10T10:20:09.164 INFO:tasks.workunit.client.1.vm05.stdout:0/796: dwrite d1/dd7/fe8 [0,4194304] 0
2026-03-10T10:20:09.198 INFO:tasks.workunit.client.0.vm02.stdout:3/799: creat d1/d6/d8e/f109 x:0 0 0
2026-03-10T10:20:09.198 INFO:tasks.workunit.client.0.vm02.stdout:3/800: dread d1/d8/d21/d73/d78/d84/fb7 [0,4194304] 0
2026-03-10T10:20:09.198 INFO:tasks.workunit.client.0.vm02.stdout:3/801: chown d1/d20/d52/ff3 123 1
2026-03-10T10:20:09.198 INFO:tasks.workunit.client.0.vm02.stdout:3/802: dwrite d1/d8/d21/d73/ffb [0,4194304] 0
2026-03-10T10:20:09.198 INFO:tasks.workunit.client.0.vm02.stdout:1/838: link d4/da/d1a/d47/d78/fcf d4/da/d1a/d47/dbc/f10c 0
2026-03-10T10:20:09.198 INFO:tasks.workunit.client.0.vm02.stdout:4/924: dwrite d1/d75/ddd/f42 [0,4194304] 0
2026-03-10T10:20:09.198 INFO:tasks.workunit.client.0.vm02.stdout:2/819: symlink d0/d71/d108/d65/l114 0
2026-03-10T10:20:09.199 INFO:tasks.workunit.client.0.vm02.stdout:2/820: chown d0/f9 26 1
2026-03-10T10:20:09.199 INFO:tasks.workunit.client.0.vm02.stdout:7/815: creat d1/dc/d16/d28/d2d/dae/fff x:0 0 0
2026-03-10T10:20:09.199 INFO:tasks.workunit.client.0.vm02.stdout:6/756: dread - d0/d8/d29/d52/de8/fb7 zero size
2026-03-10T10:20:09.199 INFO:tasks.workunit.client.0.vm02.stdout:8/784: getdents d1/d1c/d43/d5b/d88/dac/d83/d9f 0
2026-03-10T10:20:09.199 INFO:tasks.workunit.client.0.vm02.stdout:6/757: dwrite d0/d8/d29/d2f/d50/ff0 [0,4194304] 0
2026-03-10T10:20:09.199 INFO:tasks.workunit.client.0.vm02.stdout:0/844: getdents d9/d34/d3d/d65/d89/dd3/da8 0
2026-03-10T10:20:09.206 INFO:tasks.workunit.client.0.vm02.stdout:9/762: sync
2026-03-10T10:20:09.213 INFO:tasks.workunit.client.1.vm05.stdout:8/737: creat d7/d2f/d57/de3/fe9 x:0 0 0
2026-03-10T10:20:09.218 INFO:tasks.workunit.client.1.vm05.stdout:3/801: mkdir dd/d15/d24/d2c/dd0/dd9/d103/d11b 0
2026-03-10T10:20:09.230 INFO:tasks.workunit.client.1.vm05.stdout:3/802: write dd/d15/d24/d8e/dac/f119 [256180,33790] 0
2026-03-10T10:20:09.230 INFO:tasks.workunit.client.1.vm05.stdout:9/708: dread - d0/d1/dcc/dd0/fd9 zero size
2026-03-10T10:20:09.230 INFO:tasks.workunit.client.1.vm05.stdout:9/709: chown d0/df/d74/d8c/d8f/lc1 78520 1
2026-03-10T10:20:09.233 INFO:tasks.workunit.client.1.vm05.stdout:1/869: unlink d4/d37/c8a 0
2026-03-10T10:20:09.233 INFO:tasks.workunit.client.1.vm05.stdout:4/660: mkdir d1/d3/d65/ddb 0
2026-03-10T10:20:09.234 INFO:tasks.workunit.client.1.vm05.stdout:1/870: chown d4/df/de0/f62 115740 1
2026-03-10T10:20:09.234 INFO:tasks.workunit.client.1.vm05.stdout:4/661: chown d1/d3/l4 38041 1
2026-03-10T10:20:09.235 INFO:tasks.workunit.client.0.vm02.stdout:7/816: fsync d1/dc/f26 0
2026-03-10T10:20:09.241 INFO:tasks.workunit.client.1.vm05.stdout:6/796: mkdir dd/d36/d3f/d12/d44/d2a/d7f/dff 0
2026-03-10T10:20:09.246 INFO:tasks.workunit.client.1.vm05.stdout:0/797: mknod d1/d2/d9/d50/d9a/c10f 0
2026-03-10T10:20:09.251 INFO:tasks.workunit.client.1.vm05.stdout:7/828: creat d5/d1d/d20/d35/dd2/ffc x:0 0 0
2026-03-10T10:20:09.251 INFO:tasks.workunit.client.0.vm02.stdout:5/926: creat d1/db/d11/d16/f13b x:0 0 0
2026-03-10T10:20:09.253 INFO:tasks.workunit.client.0.vm02.stdout:7/817: sync
2026-03-10T10:20:09.253 INFO:tasks.workunit.client.1.vm05.stdout:8/738: read d7/f44 [518042,29100] 0
2026-03-10T10:20:09.255 INFO:tasks.workunit.client.0.vm02.stdout:2/821: mknod d0/d71/d108/d65/dc4/dfa/d111/c115 0
2026-03-10T10:20:09.257 INFO:tasks.workunit.client.0.vm02.stdout:8/785: symlink d1/d1c/d24/dcf/lea 0
2026-03-10T10:20:09.261 INFO:tasks.workunit.client.1.vm05.stdout:1/871: fdatasync d4/d39/fb2 0
2026-03-10T10:20:09.264 INFO:tasks.workunit.client.1.vm05.stdout:4/662: symlink d1/d31/dc/d40/d45/ldc 0
2026-03-10T10:20:09.264 INFO:tasks.workunit.client.0.vm02.stdout:6/758: rename d0/f5d to d0/d8/d29/d94/ff9 0
2026-03-10T10:20:09.267 INFO:tasks.workunit.client.1.vm05.stdout:5/795: fdatasync da/db/f6d 0
2026-03-10T10:20:09.267 INFO:tasks.workunit.client.1.vm05.stdout:5/796: chown da/db/d28/d8a 233 1
2026-03-10T10:20:09.271 INFO:tasks.workunit.client.1.vm05.stdout:3/803: mknod dd/d39/c11c 0
2026-03-10T10:20:09.271 INFO:tasks.workunit.client.0.vm02.stdout:5/927: mknod d1/db/d11/d16/d79/d85/d135/c13c 0
2026-03-10T10:20:09.278 INFO:tasks.workunit.client.0.vm02.stdout:0/845: rename d9/d18/d1a/d3c/la6 to d9/d34/d3d/d65/d89/l111 0
2026-03-10T10:20:09.280 INFO:tasks.workunit.client.0.vm02.stdout:6/759: symlink d0/d8/d29/dce/lfa 0
2026-03-10T10:20:09.283 INFO:tasks.workunit.client.1.vm05.stdout:2/757: creat db/d61/ff3 x:0 0 0
2026-03-10T10:20:09.285 INFO:tasks.workunit.client.0.vm02.stdout:5/928: creat d1/db/d11/d16/d79/d85/d93/f13d x:0 0 0
2026-03-10T10:20:09.286 INFO:tasks.workunit.client.0.vm02.stdout:5/929: chown d1/db/d11/f7d 23020157 1
2026-03-10T10:20:09.286 INFO:tasks.workunit.client.1.vm05.stdout:1/872: rmdir d4/df 39
2026-03-10T10:20:09.287 INFO:tasks.workunit.client.0.vm02.stdout:0/846: creat d9/d34/d3d/f112 x:0 0 0
2026-03-10T10:20:09.290 INFO:tasks.workunit.client.0.vm02.stdout:6/760: creat d0/d8/d29/d52/de8/ffb x:0 0 0
2026-03-10T10:20:09.291 INFO:tasks.workunit.client.1.vm05.stdout:4/663: truncate d1/d31/f36 49006 0
2026-03-10T10:20:09.292 INFO:tasks.workunit.client.1.vm05.stdout:4/664: chown d1/d31/d4b/d6d/c86 348 1
2026-03-10T10:20:09.292 INFO:tasks.workunit.client.1.vm05.stdout:4/665: dread - d1/fcd zero size
2026-03-10T10:20:09.294 INFO:tasks.workunit.client.0.vm02.stdout:8/786: creat d1/d1c/d43/d6a/da8/feb x:0 0 0
2026-03-10T10:20:09.306 INFO:tasks.workunit.client.1.vm05.stdout:3/804: truncate dd/d15/f1c 1562645 0
2026-03-10T10:20:09.306 INFO:tasks.workunit.client.1.vm05.stdout:7/829: getdents d5/d1d/d29/dbe 0
2026-03-10T10:20:09.318 INFO:tasks.workunit.client.1.vm05.stdout:2/758: symlink db/d12/d74/lf4 0
2026-03-10T10:20:09.322 INFO:tasks.workunit.client.0.vm02.stdout:1/839: dread d4/d2c/f43 [0,4194304] 0
2026-03-10T10:20:09.326 INFO:tasks.workunit.client.0.vm02.stdout:3/803: write d1/d8/d21/d7d/fdd [1906604,59685] 0
2026-03-10T10:20:09.327 INFO:tasks.workunit.client.0.vm02.stdout:4/925: write d1/d75/ddd/d10e/fd6 [252891,57417] 0
2026-03-10T10:20:09.332 INFO:tasks.workunit.client.0.vm02.stdout:9/763: write da/d3c/d4c/d56/fa1 [77213,24214] 0
2026-03-10T10:20:09.337 INFO:tasks.workunit.client.1.vm05.stdout:6/797: rename l8 to dd/d36/d3f/d12/d44/d2a/d3d/d48/l100 0
2026-03-10T10:20:09.339 INFO:tasks.workunit.client.1.vm05.stdout:9/710: dwrite d0/df/d74/fc3 [4194304,4194304] 0
2026-03-10T10:20:09.348 INFO:tasks.workunit.client.1.vm05.stdout:4/666: creat d1/d31/dc/d40/d45/fdd x:0 0 0
2026-03-10T10:20:09.349 INFO:tasks.workunit.client.0.vm02.stdout:6/761: creat d0/d8/d29/d6d/d96/dd9/ffc x:0 0 0
2026-03-10T10:20:09.353 INFO:tasks.workunit.client.0.vm02.stdout:8/787: symlink d1/d1c/d43/d5b/d88/lec 0
2026-03-10T10:20:09.355 INFO:tasks.workunit.client.0.vm02.stdout:7/818: write d1/dc/d16/f1e [2071942,30858] 0
2026-03-10T10:20:09.357 INFO:tasks.workunit.client.0.vm02.stdout:2/822: dwrite d0/d1a/f33 [0,4194304] 0
2026-03-10T10:20:09.369 INFO:tasks.workunit.client.0.vm02.stdout:3/804: symlink d1/d6/d8e/l10a 0
2026-03-10T10:20:09.375 INFO:tasks.workunit.client.1.vm05.stdout:8/739: dwrite d7/d14/d24/f9c [0,4194304] 0
2026-03-10T10:20:09.378 INFO:tasks.workunit.client.1.vm05.stdout:4/667: sync
2026-03-10T10:20:09.387 INFO:tasks.workunit.client.0.vm02.stdout:5/930: dwrite d1/db/d11/d13/d28/d37/f3c [4194304,4194304] 0
2026-03-10T10:20:09.389 INFO:tasks.workunit.client.1.vm05.stdout:0/798: dwrite d1/d2/d9/d50/d9a/da0/ff7 [0,4194304] 0
2026-03-10T10:20:09.392 INFO:tasks.workunit.client.1.vm05.stdout:6/798: unlink dd/d1b/fe6 0
2026-03-10T10:20:09.401 INFO:tasks.workunit.client.1.vm05.stdout:1/873: rename d4/d3d/d6e/dac/ld2 to d4/d20/dbe/de8/lff 0
2026-03-10T10:20:09.402 INFO:tasks.workunit.client.0.vm02.stdout:0/847: link d9/d34/d3d/d65/d89/dd3/f9a d9/d18/d1a/d3c/f113 0
2026-03-10T10:20:09.402 INFO:tasks.workunit.client.0.vm02.stdout:0/848: chown d9/d34/d3d/d65/d89/df3/c103 1 1
2026-03-10T10:20:09.413 INFO:tasks.workunit.client.0.vm02.stdout:2/823: rename d0/d71/d108/d65/dc4/dfa/d80/ddb to d0/d10/dee/d116 0
2026-03-10T10:20:09.416 INFO:tasks.workunit.client.1.vm05.stdout:0/799: sync
2026-03-10T10:20:09.427 INFO:tasks.workunit.client.1.vm05.stdout:4/668: mknod d1/d31/d76/cde 0
2026-03-10T10:20:09.427 INFO:tasks.workunit.client.1.vm05.stdout:2/759: mknod db/d28/d4f/d59/cf5 0
2026-03-10T10:20:09.427 INFO:tasks.workunit.client.0.vm02.stdout:2/824: truncate d0/f88 3804983 0
2026-03-10T10:20:09.428 INFO:tasks.workunit.client.0.vm02.stdout:1/840: link d4/dc3/fd0 d4/d2c/d53/da6/db8/f10d 0
2026-03-10T10:20:09.428 INFO:tasks.workunit.client.0.vm02.stdout:1/841: write d4/dc3/ff9 [800070,14409] 0
2026-03-10T10:20:09.428 INFO:tasks.workunit.client.0.vm02.stdout:1/842: chown d4/da/d1a/l107 666118 1
2026-03-10T10:20:09.428 INFO:tasks.workunit.client.1.vm05.stdout:2/760: read db/d1c/f9b [642719,65158] 0
2026-03-10T10:20:09.429 INFO:tasks.workunit.client.1.vm05.stdout:8/740: symlink d7/d14/d3a/d49/lea 0
2026-03-10T10:20:09.431 INFO:tasks.workunit.client.1.vm05.stdout:4/669: mknod d1/d31/dc/d40/d45/cdf 0
2026-03-10T10:20:09.431 INFO:tasks.workunit.client.0.vm02.stdout:5/931: mknod d1/db/d11/d16/d79/c13e 0
2026-03-10T10:20:09.432 INFO:tasks.workunit.client.0.vm02.stdout:5/932: chown d1/db/d11/d62/d67/lf9 3970814 1
2026-03-10T10:20:09.433 INFO:tasks.workunit.client.0.vm02.stdout:7/819: link d1/d1b/d8f/dad/d7e/dba/dea/fed d1/d1b/d8f/dad/d7e/dba/dea/f100 0
2026-03-10T10:20:09.434 INFO:tasks.workunit.client.0.vm02.stdout:2/825: mknod d0/d71/d108/d65/db0/c117 0
2026-03-10T10:20:09.436 INFO:tasks.workunit.client.0.vm02.stdout:1/843: rmdir d4/dc3/dd6 39
2026-03-10T10:20:09.437 INFO:tasks.workunit.client.1.vm05.stdout:0/800: creat d1/d2/d9/d31/d12/d20/dbe/df1/f110 x:0 0 0
2026-03-10T10:20:09.440 INFO:tasks.workunit.client.1.vm05.stdout:2/761: creat db/d28/d4f/d8b/d9a/ff6 x:0 0 0
2026-03-10T10:20:09.443 INFO:tasks.workunit.client.0.vm02.stdout:0/849: sync
2026-03-10T10:20:09.446 INFO:tasks.workunit.client.0.vm02.stdout:3/805: dread d1/d8/d21/f4d [0,4194304] 0
2026-03-10T10:20:09.446 INFO:tasks.workunit.client.1.vm05.stdout:2/762: dwrite db/d2d/f48 [0,4194304] 0
2026-03-10T10:20:09.455 INFO:tasks.workunit.client.0.vm02.stdout:5/933: mkdir d1/db/d11/d13/d28/d13f 0
2026-03-10T10:20:09.455 INFO:tasks.workunit.client.1.vm05.stdout:2/763: dwrite db/d28/d4f/d8b/feb [0,4194304] 0
2026-03-10T10:20:09.465 INFO:tasks.workunit.client.0.vm02.stdout:0/850: unlink d9/d34/d3d/d65/d89/dd3/da7/fb2 0
2026-03-10T10:20:09.475 INFO:tasks.workunit.client.1.vm05.stdout:5/797: write da/db/dee/d38/fdb [2600784,51386] 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.0.vm02.stdout:6/762: rename d0/d8/d29/d6d/d96/dd9 to d0/d8/d29/d6d/d96/dfd 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.0.vm02.stdout:6/763: readlink d0/d8/d29/d2f/lea 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.0.vm02.stdout:6/764: chown d0/d8/d29/d52/fbd 822 1
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.0.vm02.stdout:1/844: link d4/da/d27/d38/ccc d4/da/d1a/d47/dbc/c10e 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.0.vm02.stdout:5/934: rmdir d1/db/d11/d13/d28/d13f 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.0.vm02.stdout:6/765: truncate d0/d8/d29/d2f/d4b/f39 1877742 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.1.vm05.stdout:3/805: write dd/d39/d5c/fb9 [3152201,96132] 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.1.vm05.stdout:7/830: write d5/d17/f3c [848027,115916] 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.1.vm05.stdout:1/874: mknod d4/d79/de6/c100 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.1.vm05.stdout:0/801: creat d1/d2/d9/d31/d13/da2/dab/dce/d106/f111 x:0 0 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.1.vm05.stdout:1/875: dread - d4/d39/d3e/db1/db8/fe1 zero size
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.1.vm05.stdout:5/798: creat da/db/d28/d8a/f115 x:0 0 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.1.vm05.stdout:7/831: readlink d5/d17/l5f 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.1.vm05.stdout:8/741: creat d7/d14/d24/d3f/feb x:0 0 0
2026-03-10T10:20:09.491 INFO:tasks.workunit.client.1.vm05.stdout:0/802: mkdir d1/d2/d9/d31/d13/d15/d4e/df6/d112 0
2026-03-10T10:20:09.499 INFO:tasks.workunit.client.0.vm02.stdout:1/845: mknod d4/d2c/d53/da6/c10f 0
2026-03-10T10:20:09.500 INFO:tasks.workunit.client.1.vm05.stdout:0/803: readlink d1/d2/d9/d31/d13/d17/l23 0
2026-03-10T10:20:09.502 INFO:tasks.workunit.client.0.vm02.stdout:1/846: creat d4/dc3/dd4/f110 x:0 0 0
2026-03-10T10:20:09.503 INFO:tasks.workunit.client.1.vm05.stdout:0/804: fdatasync d1/d2/d9/d31/d13/d17/f56 0
2026-03-10T10:20:09.504 INFO:tasks.workunit.client.1.vm05.stdout:3/806: link dd/d15/d24/d2c/d6d/da7/dbb/dbd/ff6 dd/d20/d9e/dc0/f11d 0
2026-03-10T10:20:09.505 INFO:tasks.workunit.client.0.vm02.stdout:5/935: creat d1/db/d11/f140 x:0 0 0
2026-03-10T10:20:09.505 INFO:tasks.workunit.client.0.vm02.stdout:0/851: sync
2026-03-10T10:20:09.510 INFO:tasks.workunit.client.1.vm05.stdout:5/799: sync
2026-03-10T10:20:09.511 INFO:tasks.workunit.client.1.vm05.stdout:0/805: mknod d1/d2/d39/d3d/d9f/c113 0
2026-03-10T10:20:09.511 INFO:tasks.workunit.client.0.vm02.stdout:6/766: creat d0/d8/d29/d2f/ffe x:0 0 0
2026-03-10T10:20:09.513 INFO:tasks.workunit.client.0.vm02.stdout:0/852: dread d9/d34/d3d/fae [0,4194304] 0
2026-03-10T10:20:09.516 INFO:tasks.workunit.client.0.vm02.stdout:0/853: dread d9/d18/d1a/d22/d24/d80/d74/f62 [0,4194304] 0
2026-03-10T10:20:09.520 INFO:tasks.workunit.client.1.vm05.stdout:0/806: dread d1/d2/d9/d31/d54/f27 [0,4194304] 0
2026-03-10T10:20:09.520 INFO:tasks.workunit.client.1.vm05.stdout:0/807: read - d1/f100 zero size
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: Active manager daemon vm02.zmavgl restarted
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: Activating manager daemon vm02.zmavgl
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: osdmap e43: 6 total, 6 up, 6 in
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: mgrmap e27: vm02.zmavgl(active, starting, since 0.0245345s)
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm02.zmavgl/crt"}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm02.zmavgl/key"}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.zymcrs"}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.stcvsz"}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": "vm02.zmavgl", "id": "vm02.zmavgl"}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:20:09.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:20:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:20:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:20:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:20:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:20:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T10:20:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: Standby manager daemon vm05.coparq started
2026-03-10T10:20:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.105:0/2147830114' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/crt"}]: dispatch
2026-03-10T10:20:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.105:0/2147830114' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T10:20:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.105:0/2147830114' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/key"}]: dispatch
2026-03-10T10:20:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.? 192.168.123.105:0/2147830114' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T10:20:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: Manager daemon vm02.zmavgl is now available
2026-03-10T10:20:09.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:09 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:20:09.564 INFO:tasks.workunit.client.0.vm02.stdout:9/764: dwrite da/d3c/f72 [0,4194304] 0
2026-03-10T10:20:09.588 INFO:tasks.workunit.client.1.vm05.stdout:3/807: rename dd/f52 to dd/d15/d24/d2c/d6d/da7/dbb/dbd/dff/f11e 0
2026-03-10T10:20:09.591 INFO:tasks.workunit.client.0.vm02.stdout:8/788: dwrite d1/d2/f29 [0,4194304] 0
2026-03-10T10:20:09.591 INFO:tasks.workunit.client.1.vm05.stdout:9/711: dwrite d0/d1/d13/d26/f4e [4194304,4194304] 0
2026-03-10T10:20:09.592 INFO:tasks.workunit.client.1.vm05.stdout:3/808: sync
2026-03-10T10:20:09.614 INFO:tasks.workunit.client.0.vm02.stdout:4/926: truncate d1/d52/d53/f79 749511 0
2026-03-10T10:20:09.622 INFO:tasks.workunit.client.0.vm02.stdout:5/936: dread d1/db/d11/d1a/fc6 [0,4194304] 0
2026-03-10T10:20:09.623 INFO:tasks.workunit.client.0.vm02.stdout:5/937: chown d1/db/d11/d13/d28/f91 46436588 1
2026-03-10T10:20:09.631 INFO:tasks.workunit.client.1.vm05.stdout:6/799: write dd/d36/d3f/d12/d44/d63/fc5 [12759,92592] 0
2026-03-10T10:20:09.635 INFO:tasks.workunit.client.1.vm05.stdout:0/808: fsync d1/d2/d9/d31/d12/d20/dbe/fc5 0
2026-03-10T10:20:09.641 INFO:tasks.workunit.client.0.vm02.stdout:8/789: unlink d1/d1c/d23/d25/ca1 0
2026-03-10T10:20:09.643 INFO:tasks.workunit.client.0.vm02.stdout:9/765: dread da/d3c/d4c/d2c/d34/f81 [0,4194304] 0
2026-03-10T10:20:09.647 INFO:tasks.workunit.client.1.vm05.stdout:9/712: creat d0/d1/d13/de/d93/fee x:0 0 0
2026-03-10T10:20:09.655 INFO:tasks.workunit.client.0.vm02.stdout:4/927: mkdir d1/d75/ddd/d10e/d5e/d78/d1a/d49/d81/dc6/d134 0
2026-03-10T10:20:09.656 INFO:tasks.workunit.client.0.vm02.stdout:5/938: sync
2026-03-10T10:20:09.659 INFO:tasks.workunit.client.0.vm02.stdout:4/928: dwrite d1/d75/ddd/d10e/d7e/d131/f132 [0,4194304] 0
2026-03-10T10:20:09.666 INFO:tasks.workunit.client.0.vm02.stdout:7/820: write d1/dc/d60/f53 [2203061,68314] 0
2026-03-10T10:20:09.669 INFO:tasks.workunit.client.0.vm02.stdout:8/790: mkdir d1/d1c/d24/dad/dbe/dda/ded 0
2026-03-10T10:20:09.669 INFO:tasks.workunit.client.0.vm02.stdout:2/826: write d0/fe2 [4437298,81993] 0
2026-03-10T10:20:09.669 INFO:tasks.workunit.client.1.vm05.stdout:6/800: dread - dd/d36/d3f/d12/d44/d2a/d7f/fea zero size
2026-03-10T10:20:09.674 INFO:tasks.workunit.client.1.vm05.stdout:0/809: truncate d1/d2/d9/d31/d54/f6b 379254 0
2026-03-10T10:20:09.674 INFO:tasks.workunit.client.0.vm02.stdout:6/767: creat d0/d8/fff x:0 0 0
2026-03-10T10:20:09.674 INFO:tasks.workunit.client.1.vm05.stdout:0/810: fdatasync d1/d2/dc6/de7/ffb 0
2026-03-10T10:20:09.675 INFO:tasks.workunit.client.1.vm05.stdout:9/713: mknod d0/d1/d16/d6e/daf/cef 0
2026-03-10T10:20:09.687 INFO:tasks.workunit.client.1.vm05.stdout:6/801: creat dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/f101 x:0 0 0
2026-03-10T10:20:09.692 INFO:tasks.workunit.client.0.vm02.stdout:4/929: dread d1/d32/f46 [4194304,4194304] 0
2026-03-10T10:20:09.695 INFO:tasks.workunit.client.0.vm02.stdout:3/806: write d1/d8/d21/d73/d78/d79/fea [193362,68349] 0
2026-03-10T10:20:09.698 INFO:tasks.workunit.client.1.vm05.stdout:3/809: dread dd/d15/d24/d8e/dac/fd7 [0,4194304] 0
2026-03-10T10:20:09.699 INFO:tasks.workunit.client.0.vm02.stdout:4/930: sync
2026-03-10T10:20:09.704 INFO:tasks.workunit.client.0.vm02.stdout:7/821: read - d1/d1b/fcf zero size
2026-03-10T10:20:09.706 INFO:tasks.workunit.client.1.vm05.stdout:3/810: dread dd/d39/d5c/f6b [0,4194304] 0
2026-03-10T10:20:09.708 INFO:tasks.workunit.client.1.vm05.stdout:0/811: mkdir d1/d2/d9/d31/d13/d15/d114 0
2026-03-10T10:20:09.709 INFO:tasks.workunit.client.1.vm05.stdout:0/812: fsync d1/d2/d9/d31/d13/d15/d4e/d8a/fae 0
2026-03-10T10:20:09.709 INFO:tasks.workunit.client.0.vm02.stdout:9/766: creat da/d3c/d4c/d2c/d34/d35/df4/ff8 x:0 0 0
2026-03-10T10:20:09.733 INFO:tasks.workunit.client.1.vm05.stdout:2/764: write db/d1c/d40/f50 [4320627,76691] 0
2026-03-10T10:20:09.740 INFO:tasks.workunit.client.0.vm02.stdout:2/827: truncate d0/d1a/d49/f64 425747 0
2026-03-10T10:20:09.764 INFO:tasks.workunit.client.1.vm05.stdout:3/811: stat dd/d20/d9e/dc0/f115 0
2026-03-10T10:20:09.765 INFO:tasks.workunit.client.0.vm02.stdout:4/931: unlink d1/d32/f46 0
2026-03-10T10:20:09.774 INFO:tasks.workunit.client.1.vm05.stdout:8/742: write d7/d14/d3a/f50 [12814263,26231] 0
2026-03-10T10:20:09.776 INFO:tasks.workunit.client.1.vm05.stdout:7/832: dwrite d5/dd/fa9 [0,4194304] 0
2026-03-10T10:20:09.778 INFO:tasks.workunit.client.1.vm05.stdout:1/876: dwrite d4/df/d1c/d53/daa/fab [0,4194304] 0
2026-03-10T10:20:09.787 INFO:tasks.workunit.client.0.vm02.stdout:9/767: dread da/d3c/d4c/d2c/d34/fed [0,4194304] 0
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: Active manager daemon vm02.zmavgl restarted
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: Activating manager daemon vm02.zmavgl
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: osdmap e43: 6 total, 6 up, 6 in
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: mgrmap e27: vm02.zmavgl(active, starting, since 0.0245345s)
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm02.zmavgl/crt"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm02.zmavgl/key"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.zymcrs"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.stcvsz"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": "vm02.zmavgl", "id": "vm02.zmavgl"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:20:09.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T10:20:09.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: Standby manager daemon vm05.coparq started
2026-03-10T10:20:09.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.105:0/2147830114' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/crt"}]: dispatch
2026-03-10T10:20:09.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.105:0/2147830114' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T10:20:09.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.105:0/2147830114' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/key"}]: dispatch
2026-03-10T10:20:09.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.105:0/2147830114' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T10:20:09.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: Manager daemon vm02.zmavgl is now available
2026-03-10T10:20:09.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:09 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:20:09.791 INFO:tasks.workunit.client.1.vm05.stdout:8/743: dread f2 [0,4194304] 0
2026-03-10T10:20:09.793 INFO:tasks.workunit.client.1.vm05.stdout:2/765: creat db/d2d/dc6/ff7 x:0 0 0
2026-03-10T10:20:09.793 INFO:tasks.workunit.client.1.vm05.stdout:2/766: chown db/d28/d4f/d59/f8d 28 1
2026-03-10T10:20:09.801 INFO:tasks.workunit.client.1.vm05.stdout:3/812: chown dd/d20/cf1 9689 1
2026-03-10T10:20:09.824 INFO:tasks.workunit.client.0.vm02.stdout:1/847: dwrite d4/da/d1a/d47/d88/da8/f106 [0,4194304] 0
2026-03-10T10:20:09.835 INFO:tasks.workunit.client.1.vm05.stdout:7/833: truncate d5/d1d/fd5 3729318 0
2026-03-10T10:20:09.835 INFO:tasks.workunit.client.1.vm05.stdout:7/834: write d5/d1d/d20/d2d/f3d [1262347,123643] 0
2026-03-10T10:20:09.840 INFO:tasks.workunit.client.1.vm05.stdout:1/877: mkdir d4/df/d76/d101 0
2026-03-10T10:20:09.845 INFO:tasks.workunit.client.1.vm05.stdout:8/744: dread - d7/d14/fa5 zero size
2026-03-10T10:20:09.852 INFO:tasks.workunit.client.1.vm05.stdout:2/767: dread - db/d1c/d40/d80/faa zero size
2026-03-10T10:20:09.876 INFO:tasks.workunit.client.1.vm05.stdout:7/835: symlink d5/d1d/d20/d35/dd2/lfd 0
2026-03-10T10:20:09.877 INFO:tasks.workunit.client.1.vm05.stdout:7/836: chown d5/d17 3561 1
2026-03-10T10:20:09.881 INFO:tasks.workunit.client.1.vm05.stdout:1/878: chown d4/d79/cf8 10350283 1
2026-03-10T10:20:09.892 INFO:tasks.workunit.client.1.vm05.stdout:2/768: creat db/d28/d4f/d59/da4/d81/da7/ff8 x:0 0 0
2026-03-10T10:20:09.902 INFO:tasks.workunit.client.1.vm05.stdout:0/813: getdents d1/d2/d9/d31/d13 0
2026-03-10T10:20:09.907 INFO:tasks.workunit.client.1.vm05.stdout:7/837: truncate d5/d1d/d20/d2d/d68/fc4 495065 0
2026-03-10T10:20:09.907 INFO:tasks.workunit.client.1.vm05.stdout:7/838: fsync d5/dd/fa9 0
2026-03-10T10:20:09.927 INFO:tasks.workunit.client.1.vm05.stdout:8/745: mknod d7/d14/cec 0
2026-03-10T10:20:09.935 INFO:tasks.workunit.client.1.vm05.stdout:3/813: creat dd/d15/f11f x:0 0 0
2026-03-10T10:20:09.935 INFO:tasks.workunit.client.1.vm05.stdout:3/814: chown dd/d15/d24/d2c/d3b/f67 8 1
2026-03-10T10:20:09.943 INFO:tasks.workunit.client.0.vm02.stdout:4/932: mkdir d1/d52/d53/dda/df7/d135 0
2026-03-10T10:20:09.947 INFO:tasks.workunit.client.0.vm02.stdout:4/933: dwrite d1/d32/fb3 [0,4194304] 0
2026-03-10T10:20:09.972 INFO:tasks.workunit.client.0.vm02.stdout:9/768: stat da/d3c/d4c/d38/d82/dd9 0
2026-03-10T10:20:09.978 INFO:tasks.workunit.client.1.vm05.stdout:7/839: fdatasync d5/d26/f39 0
2026-03-10T10:20:09.984 INFO:tasks.workunit.client.1.vm05.stdout:8/746: creat d7/d2f/d57/fed x:0 0 0
2026-03-10T10:20:09.985 INFO:tasks.workunit.client.0.vm02.stdout:1/848: truncate d4/da/d1a/d47/d88/fdd 1408485 0
2026-03-10T10:20:09.988 INFO:tasks.workunit.client.1.vm05.stdout:3/815: symlink dd/dbe/l120 0
2026-03-10T10:20:09.991 INFO:tasks.workunit.client.0.vm02.stdout:6/768: creat d0/d8/f100 x:0 0 0
2026-03-10T10:20:09.991 INFO:tasks.workunit.client.0.vm02.stdout:6/769: stat d0/d8/d29/d52 0
2026-03-10T10:20:10.019 INFO:tasks.workunit.client.0.vm02.stdout:4/934: mknod d1/d52/dff/c136 0
2026-03-10T10:20:10.020 INFO:tasks.workunit.client.0.vm02.stdout:4/935: stat d1/d75/ddd/d10e/d5e/d78/d1a/fad 0
2026-03-10T10:20:10.025 INFO:tasks.workunit.client.0.vm02.stdout:8/791: getdents d1/d1c/d43/d5b/d88/dac/d83 0
2026-03-10T10:20:10.047 INFO:tasks.workunit.client.1.vm05.stdout:7/840: symlink d5/d1d/de3/lfe 0
2026-03-10T10:20:10.048 INFO:tasks.workunit.client.1.vm05.stdout:8/747: mknod d7/d14/d24/d3f/cee 0
2026-03-10T10:20:10.049 INFO:tasks.workunit.client.1.vm05.stdout:8/748: truncate d7/d14/d15/d3b/da0/fc8 1132472 0
2026-03-10T10:20:10.050 INFO:tasks.workunit.client.1.vm05.stdout:8/749: chown d7/d14/fa5 1 1
2026-03-10T10:20:10.051 INFO:tasks.workunit.client.0.vm02.stdout:6/770: creat d0/d8/d29/d94/d9a/f101 x:0 0 0
2026-03-10T10:20:10.051 INFO:tasks.workunit.client.0.vm02.stdout:1/849: creat d4/da/d1a/d47/d78/f111 x:0 0 0
2026-03-10T10:20:10.052 INFO:tasks.workunit.client.0.vm02.stdout:3/807: link d1/d8/d21/d7d/ce6 d1/d20/db2/c10b 0
2026-03-10T10:20:10.052 INFO:tasks.workunit.client.0.vm02.stdout:4/936: creat d1/d75/ddd/d10e/d5e/d78/d44/dd0/f137 x:0 0 0
2026-03-10T10:20:10.052 INFO:tasks.workunit.client.0.vm02.stdout:3/808: stat d1/d8/d21/ff0 0
2026-03-10T10:20:10.052 INFO:tasks.workunit.client.0.vm02.stdout:4/937: chown d1/d75/ddd/d10e/d5e/d78 6838495 1
2026-03-10T10:20:10.056 INFO:tasks.workunit.client.0.vm02.stdout:4/938: stat d1/d10/dfc/f11a 0
2026-03-10T10:20:10.056 INFO:tasks.workunit.client.0.vm02.stdout:8/792: fdatasync d1/f65 0
2026-03-10T10:20:10.103 INFO:tasks.workunit.client.1.vm05.stdout:5/800: dwrite da/db/d26/f7e [0,4194304] 0
2026-03-10T10:20:10.106 INFO:tasks.workunit.client.1.vm05.stdout:3/816: link dd/d20/d56/d5e/ded/f10c dd/d15/f121 0
2026-03-10T10:20:10.111 INFO:tasks.workunit.client.1.vm05.stdout:7/841: fsync d5/dd/f2f 0
2026-03-10T10:20:10.117 INFO:tasks.workunit.client.0.vm02.stdout:1/850: creat d4/d2c/d53/da6/f112 x:0 0 0
2026-03-10T10:20:10.123 INFO:tasks.workunit.client.0.vm02.stdout:1/851: chown d4/dc3/df0/lf3 4959184 1
2026-03-10T10:20:10.123 INFO:tasks.workunit.client.1.vm05.stdout:0/814: link d1/d2/d9/d31/f36 d1/d2/d9/d31/d12/d20/f115 0
2026-03-10T10:20:10.123 INFO:tasks.workunit.client.1.vm05.stdout:0/815: chown d1/d2/d9/d31/d13/d17/f57 0 1
2026-03-10T10:20:10.138 INFO:tasks.workunit.client.0.vm02.stdout:4/939: fsync d1/d52/dff/f107 0
2026-03-10T10:20:10.139 INFO:tasks.workunit.client.0.vm02.stdout:4/940: fsync d1/f9d 0
2026-03-10T10:20:10.140 INFO:tasks.workunit.client.1.vm05.stdout:4/670: dread d1/d31/f36 [0,4194304] 0 2026-03-10T10:20:10.141 INFO:tasks.workunit.client.1.vm05.stdout:0/816: sync 2026-03-10T10:20:10.145 INFO:tasks.workunit.client.1.vm05.stdout:4/671: dwrite d1/fcd [0,4194304] 0 2026-03-10T10:20:10.151 INFO:tasks.workunit.client.0.vm02.stdout:8/793: dread d1/d1c/d24/dad/dbe/dda/fe5 [0,4194304] 0 2026-03-10T10:20:10.157 INFO:tasks.workunit.client.0.vm02.stdout:4/941: readlink d1/l80 0 2026-03-10T10:20:10.181 INFO:tasks.workunit.client.0.vm02.stdout:8/794: read d1/d1c/d23/d25/f5d [3447838,124667] 0 2026-03-10T10:20:10.182 INFO:tasks.workunit.client.0.vm02.stdout:8/795: write d1/d1c/d43/d6a/da8/d56/fd9 [140758,66614] 0 2026-03-10T10:20:10.229 INFO:tasks.workunit.client.0.vm02.stdout:4/942: getdents d1/d75/ddd/d10e/d11f 0 2026-03-10T10:20:10.231 INFO:tasks.workunit.client.0.vm02.stdout:4/943: readlink d1/d52/d53/la5 0 2026-03-10T10:20:10.234 INFO:tasks.workunit.client.1.vm05.stdout:5/801: rmdir da/d9a/dc7/db4/dbd 39 2026-03-10T10:20:10.243 INFO:tasks.workunit.client.1.vm05.stdout:3/817: symlink dd/d20/d56/d5e/ded/l122 0 2026-03-10T10:20:10.245 INFO:tasks.workunit.client.0.vm02.stdout:8/796: dread d1/d1c/f14 [0,4194304] 0 2026-03-10T10:20:10.245 INFO:tasks.workunit.client.0.vm02.stdout:4/944: creat d1/d52/d53/dda/de0/f138 x:0 0 0 2026-03-10T10:20:10.246 INFO:tasks.workunit.client.0.vm02.stdout:4/945: read - d1/d52/dff/f107 zero size 2026-03-10T10:20:10.261 INFO:tasks.workunit.client.1.vm05.stdout:7/842: truncate d5/d1d/f7c 4625697 0 2026-03-10T10:20:10.261 INFO:tasks.workunit.client.1.vm05.stdout:9/714: write d0/d1/dcc/dd0/fd6 [744468,93569] 0 2026-03-10T10:20:10.262 INFO:tasks.workunit.client.1.vm05.stdout:9/715: readlink d0/df/d74/l8b 0 2026-03-10T10:20:10.267 INFO:tasks.workunit.client.1.vm05.stdout:7/843: dwrite d5/d1d/d29/d3e/f65 [0,4194304] 0 2026-03-10T10:20:10.268 INFO:tasks.workunit.client.1.vm05.stdout:7/844: write d5/d26/fec [2888561,33100] 0 
2026-03-10T10:20:10.279 INFO:tasks.workunit.client.0.vm02.stdout:4/946: chown d1/d75/ddd/d10e/d5e/cf3 3 1 2026-03-10T10:20:10.279 INFO:tasks.workunit.client.1.vm05.stdout:0/817: fdatasync d1/d2/d9/d31/d13/d17/f1b 0 2026-03-10T10:20:10.281 INFO:tasks.workunit.client.1.vm05.stdout:6/802: dwrite dd/d36/d7d/f97 [8388608,4194304] 0 2026-03-10T10:20:10.283 INFO:tasks.workunit.client.1.vm05.stdout:6/803: dread - dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/f101 zero size 2026-03-10T10:20:10.286 INFO:tasks.workunit.client.1.vm05.stdout:4/672: mkdir d1/d3/d65/de0 0 2026-03-10T10:20:10.306 INFO:tasks.workunit.client.1.vm05.stdout:3/818: chown dd/d39/d66/f11a 2387072 1 2026-03-10T10:20:10.306 INFO:tasks.workunit.client.1.vm05.stdout:7/845: creat d5/d1d/d20/d2d/d80/fff x:0 0 0 2026-03-10T10:20:10.319 INFO:tasks.workunit.client.1.vm05.stdout:0/818: dread - d1/d2/d39/d6e/dc0/fea zero size 2026-03-10T10:20:10.331 INFO:tasks.workunit.client.1.vm05.stdout:4/673: creat d1/d31/dc/fe1 x:0 0 0 2026-03-10T10:20:10.341 INFO:tasks.workunit.client.1.vm05.stdout:3/819: dread dd/d15/d24/d2c/d6d/da7/dbb/fe5 [0,4194304] 0 2026-03-10T10:20:10.354 INFO:tasks.workunit.client.1.vm05.stdout:6/804: fdatasync dd/d36/d3f/d12/f20 0 2026-03-10T10:20:10.354 INFO:tasks.workunit.client.1.vm05.stdout:3/820: symlink dd/d15/d24/d2c/d6d/da7/dbb/l123 0 2026-03-10T10:20:10.356 INFO:tasks.workunit.client.1.vm05.stdout:5/802: link da/db/dee/d38/fa6 da/db/dee/d109/f116 0 2026-03-10T10:20:10.356 INFO:tasks.workunit.client.1.vm05.stdout:6/805: write dd/d36/d3f/d12/d44/d2a/d3d/d3e/f73 [3412181,38379] 0 2026-03-10T10:20:10.359 INFO:tasks.workunit.client.1.vm05.stdout:3/821: sync 2026-03-10T10:20:10.361 INFO:tasks.workunit.client.1.vm05.stdout:4/674: dread d1/d3/f12 [4194304,4194304] 0 2026-03-10T10:20:10.362 INFO:tasks.workunit.client.1.vm05.stdout:5/803: mkdir da/d96/d117 0 2026-03-10T10:20:10.362 INFO:tasks.workunit.client.1.vm05.stdout:6/806: mkdir dd/d36/d7d/d102 0 2026-03-10T10:20:10.363 
INFO:tasks.workunit.client.1.vm05.stdout:6/807: readlink dd/d36/d3f/d12/d44/d2a/d7f/lef 0 2026-03-10T10:20:10.364 INFO:tasks.workunit.client.1.vm05.stdout:5/804: fdatasync da/db/dee/f74 0 2026-03-10T10:20:10.364 INFO:tasks.workunit.client.1.vm05.stdout:5/805: chown da/d9a/daf/ded/ffa 3 1 2026-03-10T10:20:10.377 INFO:tasks.workunit.client.1.vm05.stdout:3/822: link dd/d15/d24/la0 dd/d15/d24/d8e/dac/l124 0 2026-03-10T10:20:10.378 INFO:tasks.workunit.client.1.vm05.stdout:6/808: getdents dd/d36/d7d/d102 0 2026-03-10T10:20:10.381 INFO:tasks.workunit.client.1.vm05.stdout:6/809: symlink dd/d36/d3f/d12/d44/d2a/d7f/l103 0 2026-03-10T10:20:10.382 INFO:tasks.workunit.client.1.vm05.stdout:3/823: mkdir dd/d20/d125 0 2026-03-10T10:20:10.385 INFO:tasks.workunit.client.1.vm05.stdout:3/824: creat dd/d39/d66/f126 x:0 0 0 2026-03-10T10:20:10.386 INFO:tasks.workunit.client.1.vm05.stdout:6/810: rename dd/d36/d3f/d12/c26 to dd/d36/d3f/dbd/c104 0 2026-03-10T10:20:10.389 INFO:tasks.workunit.client.1.vm05.stdout:3/825: rename dd/d15/d24/d2c/d3b/f67 to dd/d15/d69/f127 0 2026-03-10T10:20:10.397 INFO:tasks.workunit.client.1.vm05.stdout:3/826: mknod dd/d15/d24/d8e/dac/c128 0 2026-03-10T10:20:10.398 INFO:tasks.workunit.client.1.vm05.stdout:6/811: link dd/ce3 dd/d36/d7d/c105 0 2026-03-10T10:20:10.399 INFO:tasks.workunit.client.1.vm05.stdout:3/827: chown dd/d15/d24/d2c/d6d/da7/dbb/dbd/ff6 32687138 1 2026-03-10T10:20:10.403 INFO:tasks.workunit.client.1.vm05.stdout:6/812: mknod dd/d36/d7d/d102/c106 0 2026-03-10T10:20:10.406 INFO:tasks.workunit.client.1.vm05.stdout:6/813: sync 2026-03-10T10:20:10.407 INFO:tasks.workunit.client.1.vm05.stdout:6/814: read dd/d36/d3f/d12/d44/d30/d4a/fc9 [841062,122906] 0 2026-03-10T10:20:10.469 INFO:tasks.workunit.client.1.vm05.stdout:1/879: rmdir d4/df/d1c 39 2026-03-10T10:20:10.469 INFO:tasks.workunit.client.1.vm05.stdout:1/880: dread - d4/d37/ffa zero size 2026-03-10T10:20:10.470 INFO:tasks.workunit.client.1.vm05.stdout:1/881: creat d4/df/de0/d82/f102 x:0 0 0 
2026-03-10T10:20:10.472 INFO:tasks.workunit.client.1.vm05.stdout:1/882: creat d4/d79/d83/dc5/dcb/f103 x:0 0 0 2026-03-10T10:20:10.497 INFO:tasks.workunit.client.1.vm05.stdout:3/828: link dd/d39/d66/f11a dd/d20/d56/f129 0 2026-03-10T10:20:10.501 INFO:tasks.workunit.client.1.vm05.stdout:3/829: rename dd/d20/d56/d5e/dab/ce9 to dd/d15/d24/d2c/dd0/dd9/d103/c12a 0 2026-03-10T10:20:10.503 INFO:tasks.workunit.client.1.vm05.stdout:3/830: sync 2026-03-10T10:20:10.505 INFO:tasks.workunit.client.1.vm05.stdout:3/831: rename dd/d15/d24/d74/d88/fe1 to dd/d15/d24/d74/d88/f12b 0 2026-03-10T10:20:10.506 INFO:tasks.workunit.client.1.vm05.stdout:3/832: mkdir dd/d20/d12c 0 2026-03-10T10:20:10.506 INFO:tasks.workunit.client.1.vm05.stdout:3/833: chown dd/d15/d24/d2c/d3b/lc2 256137707 1 2026-03-10T10:20:10.507 INFO:tasks.workunit.client.1.vm05.stdout:3/834: rename dd to dd/d15/d24/d2c/d6d/da7/dbb/dbd/dff/d12d 22 2026-03-10T10:20:10.510 INFO:tasks.workunit.client.1.vm05.stdout:3/835: symlink dd/d15/d1f/dae/l12e 0 2026-03-10T10:20:10.516 INFO:tasks.workunit.client.1.vm05.stdout:3/836: dwrite dd/d15/f84 [0,4194304] 0 2026-03-10T10:20:10.525 INFO:tasks.workunit.client.1.vm05.stdout:7/846: symlink d5/d1d/d29/d3e/l100 0 2026-03-10T10:20:10.528 INFO:tasks.workunit.client.1.vm05.stdout:7/847: dwrite d5/d26/fec [0,4194304] 0 2026-03-10T10:20:10.528 INFO:tasks.workunit.client.1.vm05.stdout:7/848: readlink d5/d1d/d29/l48 0 2026-03-10T10:20:10.530 INFO:tasks.workunit.client.1.vm05.stdout:7/849: stat d5/d1d/d29/d3e/d8c/f81 0 2026-03-10T10:20:10.530 INFO:tasks.workunit.client.1.vm05.stdout:7/850: dread d5/d17/f4f [0,4194304] 0 2026-03-10T10:20:10.765 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:10 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:20:10.765 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:10 vm05.local ceph-mon[59051]: from='mgr.14720 
192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:20:10.765 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:10 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:20:10.765 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:10 vm05.local ceph-mon[59051]: mgrmap e28: vm02.zmavgl(active, since 1.02994s), standbys: vm05.coparq 2026-03-10T10:20:10.765 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:10 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": "vm05.coparq", "id": "vm05.coparq"}]: dispatch 2026-03-10T10:20:10.765 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:10 vm05.local ceph-mon[59051]: pgmap v3: 65 pgs: 65 active+clean; 3.4 GiB data, 11 GiB used, 109 GiB / 120 GiB avail 2026-03-10T10:20:10.765 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:10 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/trash_purge_schedule"}]: dispatch 2026-03-10T10:20:10.765 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:10 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/trash_purge_schedule"}]: dispatch 2026-03-10T10:20:10.781 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:10 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:20:10.782 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:10 vm02.local ceph-mon[50200]: 
from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:20:10.782 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:10 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:20:10.782 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:10 vm02.local ceph-mon[50200]: mgrmap e28: vm02.zmavgl(active, since 1.02994s), standbys: vm05.coparq 2026-03-10T10:20:10.782 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:10 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": "vm05.coparq", "id": "vm05.coparq"}]: dispatch 2026-03-10T10:20:10.782 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:10 vm02.local ceph-mon[50200]: pgmap v3: 65 pgs: 65 active+clean; 3.4 GiB data, 11 GiB used, 109 GiB / 120 GiB avail 2026-03-10T10:20:10.782 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:10 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/trash_purge_schedule"}]: dispatch 2026-03-10T10:20:10.782 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:10 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/trash_purge_schedule"}]: dispatch 2026-03-10T10:20:10.954 INFO:tasks.workunit.client.0.vm02.stdout:2/828: write d0/d71/d108/d65/f6c [1120788,63391] 0 2026-03-10T10:20:10.955 INFO:tasks.workunit.client.0.vm02.stdout:2/829: creat d0/d1a/d49/deb/de6/f118 x:0 0 0 2026-03-10T10:20:10.959 INFO:tasks.workunit.client.0.vm02.stdout:2/830: truncate d0/d71/d108/d65/dc4/dfa/d80/d10f/fae 739481 0 
2026-03-10T10:20:10.959 INFO:tasks.workunit.client.0.vm02.stdout:2/831: write d0/d1a/f33 [3233228,1006] 0 2026-03-10T10:20:10.970 INFO:tasks.workunit.client.1.vm05.stdout:8/750: getdents d7/d14 0 2026-03-10T10:20:11.161 INFO:tasks.workunit.client.0.vm02.stdout:9/769: write da/d3c/d4c/d38/d82/d8c/fa8 [21352,4399] 0 2026-03-10T10:20:11.233 INFO:tasks.workunit.client.0.vm02.stdout:6/771: write d0/d8/d29/d2f/f61 [1521207,32930] 0 2026-03-10T10:20:11.309 INFO:tasks.workunit.client.0.vm02.stdout:1/852: write d4/d1b/f103 [159171,86362] 0 2026-03-10T10:20:11.318 INFO:tasks.workunit.client.0.vm02.stdout:1/853: link d4/d2c/d53/da6/ff5 d4/da/d1a/d47/d88/da8/f113 0 2026-03-10T10:20:11.320 INFO:tasks.workunit.client.0.vm02.stdout:1/854: fdatasync d4/da/d1a/fa1 0 2026-03-10T10:20:11.343 INFO:tasks.workunit.client.1.vm05.stdout:0/819: mkdir d1/d2/d9/d116 0 2026-03-10T10:20:11.345 INFO:tasks.workunit.client.1.vm05.stdout:0/820: rmdir d1/d2/dc6/de7 39 2026-03-10T10:20:11.346 INFO:tasks.workunit.client.1.vm05.stdout:0/821: read d1/d2/d9/d31/d54/f16 [6752904,115748] 0 2026-03-10T10:20:11.363 INFO:tasks.workunit.client.1.vm05.stdout:0/822: truncate d1/d2/d9/d31/d13/f4c 2287095 0 2026-03-10T10:20:11.364 INFO:tasks.workunit.client.1.vm05.stdout:0/823: write d1/f100 [659851,22655] 0 2026-03-10T10:20:11.366 INFO:tasks.workunit.client.1.vm05.stdout:0/824: creat d1/d2/d9/d31/d13/d17/da1/df5/f117 x:0 0 0 2026-03-10T10:20:11.367 INFO:tasks.workunit.client.1.vm05.stdout:0/825: dread - d1/d2/d39/d6e/dc0/fea zero size 2026-03-10T10:20:11.368 INFO:tasks.workunit.client.1.vm05.stdout:0/826: mknod d1/d2/d9/d31/d13/d15/d4e/d8a/dfc/c118 0 2026-03-10T10:20:11.408 INFO:tasks.workunit.client.1.vm05.stdout:5/806: dread da/db/f9f [0,4194304] 0 2026-03-10T10:20:11.410 INFO:tasks.workunit.client.1.vm05.stdout:5/807: dread - da/db/de9/fb7 zero size 2026-03-10T10:20:11.412 INFO:tasks.workunit.client.1.vm05.stdout:5/808: dread - da/d96/fea zero size 2026-03-10T10:20:11.415 
INFO:tasks.workunit.client.1.vm05.stdout:5/809: truncate da/db/d26/d5c/f92 477082 0 2026-03-10T10:20:11.417 INFO:tasks.workunit.client.1.vm05.stdout:2/769: creat db/d28/d4f/ff9 x:0 0 0 2026-03-10T10:20:11.417 INFO:tasks.workunit.client.1.vm05.stdout:5/810: truncate da/db/d26/d5c/fc5 201858 0 2026-03-10T10:20:11.420 INFO:tasks.workunit.client.1.vm05.stdout:2/770: creat db/d28/d4f/d59/d94/d95/ffa x:0 0 0 2026-03-10T10:20:11.468 INFO:tasks.workunit.client.0.vm02.stdout:8/797: write d1/d1c/d23/d25/f76 [4255596,93377] 0 2026-03-10T10:20:11.504 INFO:tasks.workunit.client.1.vm05.stdout:0/827: write d1/d2/d9/d31/d13/d17/f1b [2288185,50305] 0 2026-03-10T10:20:11.511 INFO:tasks.workunit.client.1.vm05.stdout:0/828: unlink d1/d2/d9/d31/d13/d15/laf 0 2026-03-10T10:20:11.519 INFO:tasks.workunit.client.1.vm05.stdout:9/716: write d0/d1/d13/de/fd4 [793990,111486] 0 2026-03-10T10:20:11.523 INFO:tasks.workunit.client.1.vm05.stdout:1/883: rmdir d4/df/d1c/d53/daa 39 2026-03-10T10:20:11.527 INFO:tasks.workunit.client.1.vm05.stdout:1/884: readlink d4/df/de0/lb7 0 2026-03-10T10:20:11.536 INFO:tasks.workunit.client.1.vm05.stdout:1/885: mkdir d4/df/d1c/d104 0 2026-03-10T10:20:11.541 INFO:tasks.workunit.client.1.vm05.stdout:1/886: creat d4/d20/dbe/f105 x:0 0 0 2026-03-10T10:20:11.545 INFO:tasks.workunit.client.1.vm05.stdout:1/887: mknod d4/d20/dbe/de8/c106 0 2026-03-10T10:20:11.562 INFO:tasks.workunit.client.1.vm05.stdout:6/815: write dd/d36/d3f/d12/d44/d30/d4a/fc9 [1178915,121133] 0 2026-03-10T10:20:11.566 INFO:tasks.workunit.client.1.vm05.stdout:6/816: dread dd/d36/d3f/d12/d44/d2a/d3d/d3e/f73 [0,4194304] 0 2026-03-10T10:20:11.627 INFO:tasks.workunit.client.1.vm05.stdout:7/851: write d5/d1d/d20/d91/fc1 [3108489,38380] 0 2026-03-10T10:20:11.627 INFO:tasks.workunit.client.1.vm05.stdout:3/837: write dd/d39/d5f/df7/ffd [44973,110533] 0 2026-03-10T10:20:11.628 INFO:tasks.workunit.client.0.vm02.stdout:0/854: rename d9/d34/d3d/d65/f6d to d9/d18/d1a/f114 0 2026-03-10T10:20:11.628 
INFO:tasks.workunit.client.1.vm05.stdout:7/852: chown d5/d17/f3c 49270 1 2026-03-10T10:20:11.629 INFO:tasks.workunit.client.1.vm05.stdout:7/853: dread - d5/d26/f5a zero size 2026-03-10T10:20:11.638 INFO:tasks.workunit.client.0.vm02.stdout:5/939: rename d1/db/d11/d13/d28/d37/ff6 to d1/db/d11/d13/d28/d37/dce/d12e/f141 0 2026-03-10T10:20:11.643 INFO:tasks.workunit.client.1.vm05.stdout:3/838: creat dd/d20/d9e/f12f x:0 0 0 2026-03-10T10:20:11.647 INFO:tasks.workunit.client.1.vm05.stdout:3/839: dwrite dd/d39/d5f/df7/ffd [0,4194304] 0 2026-03-10T10:20:11.696 INFO:tasks.workunit.client.1.vm05.stdout:8/751: dwrite d7/d2f/f7f [0,4194304] 0 2026-03-10T10:20:11.697 INFO:tasks.workunit.client.0.vm02.stdout:2/832: dwrite d0/d71/d108/d65/f9e [0,4194304] 0 2026-03-10T10:20:11.707 INFO:tasks.workunit.client.1.vm05.stdout:3/840: rename dd/d20/d56/d5e/dab/d9c to dd/d20/d130 0 2026-03-10T10:20:11.707 INFO:tasks.workunit.client.1.vm05.stdout:3/841: stat dd/d20/d94 0 2026-03-10T10:20:11.709 INFO:tasks.workunit.client.0.vm02.stdout:2/833: chown d0/d1a/d49/deb/de6/fe7 1963 1 2026-03-10T10:20:11.710 INFO:tasks.workunit.client.1.vm05.stdout:3/842: dread dd/d15/f1b [0,4194304] 0 2026-03-10T10:20:11.725 INFO:tasks.workunit.client.0.vm02.stdout:7/822: rename d1/dc/d55/d9a/dd9/cdd to d1/d1b/d8f/dad/c101 0 2026-03-10T10:20:11.725 INFO:tasks.workunit.client.0.vm02.stdout:2/834: write d0/d71/d108/d65/dc4/dfa/dbf/fed [6652850,2012] 0 2026-03-10T10:20:11.726 INFO:tasks.workunit.client.0.vm02.stdout:7/823: dread - d1/dc/d16/f7a zero size 2026-03-10T10:20:11.726 INFO:tasks.workunit.client.1.vm05.stdout:3/843: sync 2026-03-10T10:20:11.728 INFO:tasks.workunit.client.1.vm05.stdout:3/844: chown dd/l17 26026963 1 2026-03-10T10:20:11.728 INFO:tasks.workunit.client.1.vm05.stdout:8/752: chown d7/d14/d62/d90/dac/fc0 232 1 2026-03-10T10:20:11.737 INFO:tasks.workunit.client.0.vm02.stdout:9/770: write da/d3c/d4c/d38/d4a/f7f [1795560,52118] 0 2026-03-10T10:20:11.751 INFO:tasks.workunit.client.0.vm02.stdout:7/824: 
mknod d1/d1b/d49/c102 0 2026-03-10T10:20:11.772 INFO:tasks.workunit.client.0.vm02.stdout:1/855: dwrite d4/d2c/d53/f58 [0,4194304] 0 2026-03-10T10:20:11.775 INFO:tasks.workunit.client.1.vm05.stdout:3/845: dwrite dd/d15/f1b [0,4194304] 0 2026-03-10T10:20:11.785 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:11 vm02.local ceph-mon[50200]: pgmap v4: 65 pgs: 65 active+clean; 3.4 GiB data, 11 GiB used, 109 GiB / 120 GiB avail 2026-03-10T10:20:11.786 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:11 vm02.local ceph-mon[50200]: mgrmap e29: vm02.zmavgl(active, since 2s), standbys: vm05.coparq 2026-03-10T10:20:11.789 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:11 vm05.local ceph-mon[59051]: pgmap v4: 65 pgs: 65 active+clean; 3.4 GiB data, 11 GiB used, 109 GiB / 120 GiB avail 2026-03-10T10:20:11.789 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:11 vm05.local ceph-mon[59051]: mgrmap e29: vm02.zmavgl(active, since 2s), standbys: vm05.coparq 2026-03-10T10:20:11.806 INFO:tasks.workunit.client.1.vm05.stdout:2/771: write db/d28/d4f/d59/f7c [4607813,94317] 0 2026-03-10T10:20:11.814 INFO:tasks.workunit.client.1.vm05.stdout:5/811: dwrite da/db/d26/f101 [0,4194304] 0 2026-03-10T10:20:11.814 INFO:tasks.workunit.client.1.vm05.stdout:0/829: dwrite d1/d2/d9/f1d [0,4194304] 0 2026-03-10T10:20:11.816 INFO:tasks.workunit.client.0.vm02.stdout:6/772: dwrite d0/d8/d29/d2f/d4b/f39 [0,4194304] 0 2026-03-10T10:20:11.819 INFO:tasks.workunit.client.1.vm05.stdout:9/717: dwrite d0/d1/f6d [0,4194304] 0 2026-03-10T10:20:11.829 INFO:tasks.workunit.client.1.vm05.stdout:5/812: write da/db/d26/d70/fd1 [4222635,65717] 0 2026-03-10T10:20:11.829 INFO:tasks.workunit.client.1.vm05.stdout:5/813: readlink da/d96/l112 0 2026-03-10T10:20:11.831 INFO:tasks.workunit.client.1.vm05.stdout:4/675: write d1/d31/dc/d40/f7d [1012528,12086] 0 2026-03-10T10:20:11.832 INFO:tasks.workunit.client.1.vm05.stdout:2/772: rmdir db/d1c/d40/d80 39 2026-03-10T10:20:11.851 
INFO:tasks.workunit.client.1.vm05.stdout:0/830: fsync d1/d2/d9/d31/f8c 0 2026-03-10T10:20:11.855 INFO:tasks.workunit.client.1.vm05.stdout:9/718: mknod d0/df/d74/d8c/d8f/ddd/de6/cf0 0 2026-03-10T10:20:11.867 INFO:tasks.workunit.client.1.vm05.stdout:5/814: unlink da/db/d26/d70/f82 0 2026-03-10T10:20:11.873 INFO:tasks.workunit.client.1.vm05.stdout:2/773: rmdir db/d1c/d40/d62/d85 39 2026-03-10T10:20:11.874 INFO:tasks.workunit.client.1.vm05.stdout:0/831: rename d1/d2/d9/d31/d13/da2/dab/dce/ldb to d1/d2/d9/d31/d13/d17/da1/df5/l119 0 2026-03-10T10:20:11.875 INFO:tasks.workunit.client.1.vm05.stdout:0/832: chown d1/d2/d9/d31/d12/d20/dbe/df1/cfa 354 1 2026-03-10T10:20:11.899 INFO:tasks.workunit.client.1.vm05.stdout:0/833: dread d1/d2/d9/d50/f94 [0,4194304] 0 2026-03-10T10:20:11.908 INFO:tasks.workunit.client.1.vm05.stdout:4/676: symlink d1/d3/d65/de0/le2 0 2026-03-10T10:20:11.909 INFO:tasks.workunit.client.1.vm05.stdout:4/677: chown d1/d31/dc/d40/d45/daa/fd0 1225 1 2026-03-10T10:20:11.914 INFO:tasks.workunit.client.1.vm05.stdout:2/774: rmdir db/d2d 39 2026-03-10T10:20:11.920 INFO:tasks.workunit.client.1.vm05.stdout:9/719: mkdir d0/df/df1 0 2026-03-10T10:20:11.920 INFO:tasks.workunit.client.1.vm05.stdout:9/720: chown d0/df/d74/d8c/d8f 7 1 2026-03-10T10:20:11.922 INFO:tasks.workunit.client.1.vm05.stdout:0/834: chown d1/d2/d9/d31/d12/d20/f115 158 1 2026-03-10T10:20:11.932 INFO:tasks.workunit.client.1.vm05.stdout:4/678: rmdir d1/d31 39 2026-03-10T10:20:11.932 INFO:tasks.workunit.client.1.vm05.stdout:1/888: write d4/d39/f54 [289541,44598] 0 2026-03-10T10:20:11.945 INFO:tasks.workunit.client.1.vm05.stdout:6/817: write dd/d36/d3f/d12/d44/d2a/d3d/d48/f75 [1525342,6553] 0 2026-03-10T10:20:11.945 INFO:tasks.workunit.client.1.vm05.stdout:2/775: chown db/d2d/fcb 737792903 1 2026-03-10T10:20:11.949 INFO:tasks.workunit.client.1.vm05.stdout:2/776: truncate db/d1c/d40/d62/fed 325901 0 2026-03-10T10:20:11.955 INFO:tasks.workunit.client.1.vm05.stdout:5/815: link da/db/d28/d8a/fa0 
da/db/dee/d38/f118 0 2026-03-10T10:20:11.970 INFO:tasks.workunit.client.0.vm02.stdout:0/855: dwrite d9/d18/d1a/d22/d24/d80/f72 [0,4194304] 0 2026-03-10T10:20:11.974 INFO:tasks.workunit.client.1.vm05.stdout:9/721: chown d0/d1/d16/f5c 39394453 1 2026-03-10T10:20:11.981 INFO:tasks.workunit.client.1.vm05.stdout:7/854: write d5/d1d/f31 [1348078,109471] 0 2026-03-10T10:20:11.985 INFO:tasks.workunit.client.1.vm05.stdout:7/855: readlink d5/dd/l27 0 2026-03-10T10:20:11.990 INFO:tasks.workunit.client.0.vm02.stdout:5/940: write d1/db/d11/d13/d28/d37/dce/f101 [369370,62009] 0 2026-03-10T10:20:12.000 INFO:tasks.workunit.client.1.vm05.stdout:1/889: unlink d4/d20/lc6 0 2026-03-10T10:20:12.001 INFO:tasks.workunit.client.1.vm05.stdout:1/890: dread d4/d39/f7b [4194304,4194304] 0 2026-03-10T10:20:12.009 INFO:tasks.workunit.client.1.vm05.stdout:1/891: dread d4/d39/d3e/fd9 [0,4194304] 0 2026-03-10T10:20:12.019 INFO:tasks.workunit.client.1.vm05.stdout:2/777: read db/d28/f7d [4040321,22471] 0 2026-03-10T10:20:12.043 INFO:tasks.workunit.client.1.vm05.stdout:9/722: fdatasync d0/d1/f7b 0 2026-03-10T10:20:12.043 INFO:tasks.workunit.client.1.vm05.stdout:8/753: write d7/d14/d15/da7/fdf [25132,97238] 0 2026-03-10T10:20:12.049 INFO:tasks.workunit.client.0.vm02.stdout:6/773: read d0/d8/d29/d2f/d4b/f26 [2851077,59457] 0 2026-03-10T10:20:12.065 INFO:tasks.workunit.client.1.vm05.stdout:1/892: unlink d4/df/d1c/d92/cbd 0 2026-03-10T10:20:12.066 INFO:tasks.workunit.client.1.vm05.stdout:3/846: write dd/d20/d56/d5e/ded/f10c [908858,39004] 0 2026-03-10T10:20:12.067 INFO:tasks.workunit.client.1.vm05.stdout:3/847: stat dd/d39/d5f/l87 0 2026-03-10T10:20:12.072 INFO:tasks.workunit.client.0.vm02.stdout:2/835: link d0/d10/f6b d0/d71/d108/d65/dc4/dfa/dd3/de8/d105/f119 0 2026-03-10T10:20:12.082 INFO:tasks.workunit.client.1.vm05.stdout:7/856: dread d5/d1d/d29/fb7 [0,4194304] 0 2026-03-10T10:20:12.110 INFO:tasks.workunit.client.1.vm05.stdout:8/754: mkdir d7/d14/d15/da7/def 0 2026-03-10T10:20:12.113 
INFO:tasks.workunit.client.1.vm05.stdout:2/778: dread db/d1c/d40/f70 [0,4194304] 0 2026-03-10T10:20:12.116 INFO:tasks.workunit.client.0.vm02.stdout:3/809: rename d1/d8/d21/c3e to d1/d58/dc9/c10c 0 2026-03-10T10:20:12.120 INFO:tasks.workunit.client.0.vm02.stdout:5/941: mkdir d1/db/d11/d16/d142 0 2026-03-10T10:20:12.122 INFO:tasks.workunit.client.0.vm02.stdout:9/771: write da/d3c/d4c/d56/fd3 [1641507,62762] 0 2026-03-10T10:20:12.123 INFO:tasks.workunit.client.1.vm05.stdout:4/679: fdatasync d1/d31/d76/dac/db8/dbf/f78 0 2026-03-10T10:20:12.128 INFO:tasks.workunit.client.0.vm02.stdout:7/825: dwrite d1/dc/d16/f48 [0,4194304] 0 2026-03-10T10:20:12.132 INFO:tasks.workunit.client.0.vm02.stdout:4/947: rename d1/d52/d53/c54 to d1/d75/ddd/d10e/d7e/c139 0 2026-03-10T10:20:12.164 INFO:tasks.workunit.client.1.vm05.stdout:3/848: creat dd/d20/d130/f131 x:0 0 0 2026-03-10T10:20:12.164 INFO:tasks.workunit.client.0.vm02.stdout:3/810: chown d1/d20/d52/lef 3211127 1 2026-03-10T10:20:12.164 INFO:tasks.workunit.client.0.vm02.stdout:9/772: mkdir da/d3c/d4c/df6/df9 0 2026-03-10T10:20:12.182 INFO:tasks.workunit.client.0.vm02.stdout:3/811: mknod d1/d8/c10d 0 2026-03-10T10:20:12.182 INFO:tasks.workunit.client.0.vm02.stdout:3/812: dread - d1/d8/d21/d7d/fdf zero size 2026-03-10T10:20:12.192 INFO:tasks.workunit.client.0.vm02.stdout:8/798: rename d1/d1c/d43/d6a/f82 to d1/d1c/d23/fee 0 2026-03-10T10:20:12.206 INFO:tasks.workunit.client.0.vm02.stdout:4/948: creat d1/d75/ddd/d10e/d5e/d78/f13a x:0 0 0 2026-03-10T10:20:12.220 INFO:tasks.workunit.client.0.vm02.stdout:6/774: rename d0/d8/d29/d2f/d4b to d0/d8/d29/d6d/d96/de4/d102 0 2026-03-10T10:20:12.226 INFO:tasks.workunit.client.0.vm02.stdout:2/836: sync 2026-03-10T10:20:12.239 INFO:tasks.workunit.client.0.vm02.stdout:0/856: write d9/d34/d3d/f69 [733224,81724] 0 2026-03-10T10:20:12.239 INFO:tasks.workunit.client.0.vm02.stdout:0/857: stat d9/d34/c86 0 2026-03-10T10:20:12.239 INFO:tasks.workunit.client.1.vm05.stdout:6/818: dwrite 
dd/d36/d3f/d12/d44/d2a/d3d/fa2 [0,4194304] 0
2026-03-10T10:20:12.240 INFO:tasks.workunit.client.0.vm02.stdout:1/856: dwrite d4/da/fde [0,4194304] 0
2026-03-10T10:20:12.258 INFO:tasks.workunit.client.0.vm02.stdout:5/942: dwrite d1/db/d11/d84/d40/d4f/f57 [4194304,4194304] 0
2026-03-10T10:20:12.260 INFO:tasks.workunit.client.0.vm02.stdout:7/826: dwrite d1/dc/d55/f85 [0,4194304] 0
2026-03-10T10:20:12.262 INFO:tasks.workunit.client.0.vm02.stdout:4/949: symlink d1/d52/d53/dda/de0/l13b 0
2026-03-10T10:20:12.264 INFO:tasks.workunit.client.0.vm02.stdout:4/950: dread - d1/d52/dff/f107 zero size
2026-03-10T10:20:12.280 INFO:tasks.workunit.client.0.vm02.stdout:9/773: dwrite da/d3c/d4c/d2c/d96/fee [0,4194304] 0
2026-03-10T10:20:12.302 INFO:tasks.workunit.client.0.vm02.stdout:4/951: sync
2026-03-10T10:20:12.312 INFO:tasks.workunit.client.0.vm02.stdout:3/813: dwrite d1/d20/d52/dd3/fe7 [0,4194304] 0
2026-03-10T10:20:12.322 INFO:tasks.workunit.client.1.vm05.stdout:8/755: unlink f2 0
2026-03-10T10:20:12.322 INFO:tasks.workunit.client.0.vm02.stdout:3/814: chown d1/d20/ff2 1873518 1
2026-03-10T10:20:12.334 INFO:tasks.workunit.client.0.vm02.stdout:7/827: rmdir d1/dc/d55 39
2026-03-10T10:20:12.335 INFO:tasks.workunit.client.1.vm05.stdout:2/779: truncate db/d12/f3b 556982 0
2026-03-10T10:20:12.336 INFO:tasks.workunit.client.1.vm05.stdout:2/780: chown db/d28/d4f/d59/f7e 4862 1
2026-03-10T10:20:12.344 INFO:tasks.workunit.client.1.vm05.stdout:0/835: getdents d1/d2/d9/d31/d13/d15 0
2026-03-10T10:20:12.347 INFO:tasks.workunit.client.0.vm02.stdout:9/774: symlink da/d3c/d4c/d38/d4a/d99/lfa 0
2026-03-10T10:20:12.360 INFO:tasks.workunit.client.1.vm05.stdout:4/680: symlink d1/d31/d76/le3 0
2026-03-10T10:20:12.379 INFO:tasks.workunit.client.0.vm02.stdout:4/952: creat d1/d75/ddd/d10e/d5e/d78/d7f/f13c x:0 0 0
2026-03-10T10:20:12.380 INFO:tasks.workunit.client.1.vm05.stdout:3/849: truncate dd/d20/d56/f68 1195851 0
2026-03-10T10:20:12.382 INFO:tasks.workunit.client.1.vm05.stdout:3/850: write dd/d20/d56/d5e/ded/f10c [549553,26572] 0
2026-03-10T10:20:12.382 INFO:tasks.workunit.client.1.vm05.stdout:3/851: dread - dd/d20/d9e/f12f zero size
2026-03-10T10:20:12.391 INFO:tasks.workunit.client.1.vm05.stdout:5/816: truncate da/db/d28/d8a/fa0 907191 0
2026-03-10T10:20:12.391 INFO:tasks.workunit.client.1.vm05.stdout:5/817: stat da/db/dee/f2a 0
2026-03-10T10:20:12.398 INFO:tasks.workunit.client.0.vm02.stdout:3/815: unlink d1/d6/d8e/f96 0
2026-03-10T10:20:12.399 INFO:tasks.workunit.client.0.vm02.stdout:3/816: stat d1/d20/d52/dd3/fe7 0
2026-03-10T10:20:12.400 INFO:tasks.workunit.client.1.vm05.stdout:6/819: creat dd/d36/d3f/d12/d96/f107 x:0 0 0
2026-03-10T10:20:12.409 INFO:tasks.workunit.client.0.vm02.stdout:7/828: mkdir d1/dc/d16/d28/d2d/d103 0
2026-03-10T10:20:12.409 INFO:tasks.workunit.client.1.vm05.stdout:6/820: sync
2026-03-10T10:20:12.414 INFO:tasks.workunit.client.0.vm02.stdout:3/817: dread d1/d8/d21/d7d/fdd [0,4194304] 0
2026-03-10T10:20:12.416 INFO:tasks.workunit.client.0.vm02.stdout:7/829: sync
2026-03-10T10:20:12.425 INFO:tasks.workunit.client.0.vm02.stdout:8/799: dwrite d1/d1c/d43/d6a/da8/f97 [0,4194304] 0
2026-03-10T10:20:12.441 INFO:tasks.workunit.client.1.vm05.stdout:0/836: rename d1/d2/d39/d6e/fdd to d1/d2/d9/d31/d13/d15/d4e/d8a/dfc/f11a 0
2026-03-10T10:20:12.451 INFO:tasks.workunit.client.1.vm05.stdout:4/681: dread - d1/d31/dc/d40/d45/daa/fd0 zero size
2026-03-10T10:20:12.460 INFO:tasks.workunit.client.0.vm02.stdout:6/775: dwrite d0/d8/d9/f6a [0,4194304] 0
2026-03-10T10:20:12.469 INFO:tasks.workunit.client.0.vm02.stdout:5/943: dwrite d1/db/d11/f7d [0,4194304] 0
2026-03-10T10:20:12.470 INFO:tasks.workunit.client.0.vm02.stdout:7/830: dread d1/d1b/d8f/f8c [0,4194304] 0
2026-03-10T10:20:12.473 INFO:tasks.workunit.client.1.vm05.stdout:3/852: creat dd/d15/d24/d74/d88/f132 x:0 0 0
2026-03-10T10:20:12.479 INFO:tasks.workunit.client.1.vm05.stdout:5/818: creat da/db/d28/d32/f119 x:0 0 0
2026-03-10T10:20:12.486 INFO:tasks.workunit.client.1.vm05.stdout:7/857: rmdir d5/d26/d9c/de7/df6 0
2026-03-10T10:20:12.486 INFO:tasks.workunit.client.0.vm02.stdout:2/837: getdents d0/d71/d108/d65 0
2026-03-10T10:20:12.490 INFO:tasks.workunit.client.0.vm02.stdout:1/857: getdents d4/d2c/d53/da6/db8 0
2026-03-10T10:20:12.490 INFO:tasks.workunit.client.0.vm02.stdout:8/800: creat d1/d2/fef x:0 0 0
2026-03-10T10:20:12.490 INFO:tasks.workunit.client.0.vm02.stdout:2/838: chown d0/d71/d108/d65/dc4/dfa/d80/l8d 1163108237 1
2026-03-10T10:20:12.492 INFO:tasks.workunit.client.0.vm02.stdout:0/858: getdents d9/d18/d1a/d22/d24/d80/d57 0
2026-03-10T10:20:12.494 INFO:tasks.workunit.client.1.vm05.stdout:0/837: fsync d1/f38 0
2026-03-10T10:20:12.496 INFO:tasks.workunit.client.1.vm05.stdout:1/893: getdents d4/d39/d3e/db1 0
2026-03-10T10:20:12.503 INFO:tasks.workunit.client.1.vm05.stdout:9/723: truncate d0/df/f97 712729 0
2026-03-10T10:20:12.504 INFO:tasks.workunit.client.1.vm05.stdout:4/682: read d1/d31/dc/d40/d63/f94 [1969529,35656] 0
2026-03-10T10:20:12.506 INFO:tasks.workunit.client.1.vm05.stdout:3/853: read dd/fe [2031726,128646] 0
2026-03-10T10:20:12.507 INFO:tasks.workunit.client.0.vm02.stdout:6/776: symlink d0/db9/l103 0
2026-03-10T10:20:12.508 INFO:tasks.workunit.client.1.vm05.stdout:9/724: dwrite d0/d1/d13/de/d93/fee [0,4194304] 0
2026-03-10T10:20:12.509 INFO:tasks.workunit.client.0.vm02.stdout:6/777: write d0/d8/d29/d94/d9a/f101 [1033290,101372] 0
2026-03-10T10:20:12.510 INFO:tasks.workunit.client.1.vm05.stdout:3/854: sync
2026-03-10T10:20:12.522 INFO:tasks.workunit.client.1.vm05.stdout:5/819: fdatasync da/db/de9/fb7 0
2026-03-10T10:20:12.526 INFO:tasks.workunit.client.1.vm05.stdout:7/858: rmdir d5/d1d/d29/d3e/d8c 39
2026-03-10T10:20:12.532 INFO:tasks.workunit.client.0.vm02.stdout:5/944: readlink d1/db/d11/d13/d28/da7/dd9/l10d 0
2026-03-10T10:20:12.540 INFO:tasks.workunit.client.0.vm02.stdout:7/831: truncate d1/dc/d60/ff0 46611 0
2026-03-10T10:20:12.541 INFO:tasks.workunit.client.1.vm05.stdout:7/859: sync
2026-03-10T10:20:12.542 INFO:tasks.workunit.client.1.vm05.stdout:7/860: chown d5/d1d/d29/d3e 692 1
2026-03-10T10:20:12.546 INFO:tasks.workunit.client.0.vm02.stdout:8/801: mknod d1/d1c/d43/d6a/d7c/da6/cf0 0
2026-03-10T10:20:12.553 INFO:tasks.workunit.client.1.vm05.stdout:0/838: unlink d1/d2/d9/d31/d12/d20/dbe/df1/f108 0
2026-03-10T10:20:12.555 INFO:tasks.workunit.client.0.vm02.stdout:2/839: creat d0/d10/dee/f11a x:0 0 0
2026-03-10T10:20:12.556 INFO:tasks.workunit.client.0.vm02.stdout:8/802: sync
2026-03-10T10:20:12.559 INFO:tasks.workunit.client.1.vm05.stdout:1/894: creat d4/d20/dbe/f107 x:0 0 0
2026-03-10T10:20:12.582 INFO:tasks.workunit.client.0.vm02.stdout:0/859: rename d9/d18/d1a/d22/d24/d80/d57/d81 to d9/d18/d1a/d22/d24/d8e/d9b/d115 0
2026-03-10T10:20:12.584 INFO:tasks.workunit.client.0.vm02.stdout:0/860: chown d9/d18/d1a/d22/d24/d51/c5f 70960713 1
2026-03-10T10:20:12.584 INFO:tasks.workunit.client.0.vm02.stdout:9/775: dwrite da/d3c/d4c/d2c/d34/f4d [0,4194304] 0
2026-03-10T10:20:12.593 INFO:tasks.workunit.client.0.vm02.stdout:3/818: write d1/d6/d8e/fa0 [1012207,40216] 0
2026-03-10T10:20:12.597 INFO:tasks.workunit.client.1.vm05.stdout:8/756: write d7/d14/d24/f2c [2194519,99636] 0
2026-03-10T10:20:12.603 INFO:tasks.workunit.client.0.vm02.stdout:4/953: creat d1/d75/f13d x:0 0 0
2026-03-10T10:20:12.608 INFO:tasks.workunit.client.1.vm05.stdout:4/683: creat d1/d31/dc/d40/d45/daa/fe4 x:0 0 0
2026-03-10T10:20:12.619 INFO:tasks.workunit.client.0.vm02.stdout:5/945: creat d1/db/d11/d16/d79/d85/d135/f143 x:0 0 0
2026-03-10T10:20:12.621 INFO:tasks.workunit.client.1.vm05.stdout:3/855: creat dd/d15/d4c/db5/f133 x:0 0 0
2026-03-10T10:20:12.622 INFO:tasks.workunit.client.1.vm05.stdout:3/856: chown dd/d15/d1f/d116 1286 1
2026-03-10T10:20:12.622 INFO:tasks.workunit.client.1.vm05.stdout:3/857: chown dd/d20/d56/f7d 32 1
2026-03-10T10:20:12.624 INFO:tasks.workunit.client.0.vm02.stdout:1/858: unlink d4/da/c31 0
2026-03-10T10:20:12.628 INFO:tasks.workunit.client.1.vm05.stdout:9/725: read d0/d1/d13/d26/f58 [1742389,4137] 0
2026-03-10T10:20:12.634 INFO:tasks.workunit.client.0.vm02.stdout:2/840: truncate d0/d71/d108/d65/dc4/fe1 389392 0
2026-03-10T10:20:12.636 INFO:tasks.workunit.client.1.vm05.stdout:5/820: creat da/db/d26/d70/d72/d10b/f11a x:0 0 0
2026-03-10T10:20:12.636 INFO:tasks.workunit.client.1.vm05.stdout:5/821: read - da/db/fad zero size
2026-03-10T10:20:12.637 INFO:tasks.workunit.client.1.vm05.stdout:5/822: readlink da/db/d26/d5c/l98 0
2026-03-10T10:20:12.641 INFO:tasks.workunit.client.0.vm02.stdout:6/778: rename d0/d8/d9/d7a/l9c to d0/d8/d29/d52/l104 0
2026-03-10T10:20:12.641 INFO:tasks.workunit.client.1.vm05.stdout:6/821: creat dd/d36/d3f/d12/d44/d2a/d77/f108 x:0 0 0
2026-03-10T10:20:12.650 INFO:tasks.workunit.client.0.vm02.stdout:9/776: creat da/d3c/d4c/df6/ffb x:0 0 0
2026-03-10T10:20:12.650 INFO:tasks.workunit.client.0.vm02.stdout:9/777: fsync da/d3c/d53/f6a 0
2026-03-10T10:20:12.657 INFO:tasks.workunit.client.0.vm02.stdout:4/954: mknod d1/d75/ddd/d10e/d7e/d131/c13e 0
2026-03-10T10:20:12.669 INFO:tasks.workunit.client.0.vm02.stdout:7/832: dwrite d1/d1b/d49/f4b [4194304,4194304] 0
2026-03-10T10:20:12.718 INFO:tasks.workunit.client.0.vm02.stdout:5/946: fdatasync d1/db/d11/d84/d40/fd0 0
2026-03-10T10:20:12.719 INFO:tasks.workunit.client.0.vm02.stdout:5/947: chown d1/db/d11/d13/d28/d11a 2 1
2026-03-10T10:20:12.719 INFO:tasks.workunit.client.0.vm02.stdout:5/948: write d1/db/d11/f140 [882554,17150] 0
2026-03-10T10:20:12.722 INFO:tasks.workunit.client.1.vm05.stdout:3/858: creat dd/d20/d9e/f134 x:0 0 0
2026-03-10T10:20:12.735 INFO:tasks.workunit.client.0.vm02.stdout:8/803: mkdir d1/d1c/d23/d25/df1 0
2026-03-10T10:20:12.736 INFO:tasks.workunit.client.0.vm02.stdout:8/804: stat d1/d1c/f42 0
2026-03-10T10:20:12.738 INFO:tasks.workunit.client.1.vm05.stdout:8/757: dwrite d7/d14/f4c [0,4194304] 0
2026-03-10T10:20:12.738 INFO:tasks.workunit.client.1.vm05.stdout:1/895: dwrite d4/df/d1c/d92/f9e [0,4194304] 0
2026-03-10T10:20:12.743 INFO:tasks.workunit.client.1.vm05.stdout:1/896: read d4/d39/d3e/db1/ff9 [2131356,28905] 0
2026-03-10T10:20:12.756 INFO:tasks.workunit.client.0.vm02.stdout:2/841: stat d0/d71/d108/d65/dc4/dfa/d80/lc1 0
2026-03-10T10:20:12.763 INFO:tasks.workunit.client.0.vm02.stdout:1/859: dread d4/da/d1a/d22/f23 [0,4194304] 0
2026-03-10T10:20:12.769 INFO:tasks.workunit.client.0.vm02.stdout:6/779: symlink d0/d8/d29/d94/d9a/l105 0
2026-03-10T10:20:12.770 INFO:tasks.workunit.client.0.vm02.stdout:6/780: write d0/d8/d29/d6d/ff8 [202287,103231] 0
2026-03-10T10:20:12.779 INFO:tasks.workunit.client.0.vm02.stdout:3/819: symlink d1/d20/l10e 0
2026-03-10T10:20:12.785 INFO:tasks.workunit.client.0.vm02.stdout:9/778: creat da/d3c/d4c/d38/d7c/dde/ffc x:0 0 0
2026-03-10T10:20:12.788 INFO:tasks.workunit.client.0.vm02.stdout:4/955: fdatasync d1/d75/fe2 0
2026-03-10T10:20:12.799 INFO:tasks.workunit.client.1.vm05.stdout:2/781: truncate db/d12/f3b 1004628 0
2026-03-10T10:20:12.803 INFO:tasks.workunit.client.1.vm05.stdout:4/684: creat d1/d3/d65/db0/fe5 x:0 0 0
2026-03-10T10:20:12.803 INFO:tasks.workunit.client.1.vm05.stdout:4/685: fsync d1/d31/dc/fe1 0
2026-03-10T10:20:12.809 INFO:tasks.workunit.client.1.vm05.stdout:9/726: symlink d0/df/lf2 0
2026-03-10T10:20:12.809 INFO:tasks.workunit.client.1.vm05.stdout:9/727: chown d0/d1 50494 1
2026-03-10T10:20:12.815 INFO:tasks.workunit.client.1.vm05.stdout:4/686: sync
2026-03-10T10:20:12.828 INFO:tasks.workunit.client.1.vm05.stdout:3/859: dwrite dd/d15/d24/d2c/fd1 [0,4194304] 0
2026-03-10T10:20:12.851 INFO:tasks.workunit.client.1.vm05.stdout:1/897: dread d4/df/d1c/f9c [8388608,4194304] 0
2026-03-10T10:20:12.852 INFO:tasks.workunit.client.1.vm05.stdout:1/898: truncate d4/d37/ffa 666360 0
2026-03-10T10:20:12.866 INFO:tasks.workunit.client.1.vm05.stdout:8/758: write d7/f78 [2802771,88376] 0
2026-03-10T10:20:12.867 INFO:tasks.workunit.client.1.vm05.stdout:0/839: link d1/d2/d9/d31/d12/d20/f71 d1/dd7/f11b 0
2026-03-10T10:20:12.871 INFO:tasks.workunit.client.1.vm05.stdout:5/823: dwrite da/d9a/daf/fdf [0,4194304] 0
2026-03-10T10:20:12.878 INFO:tasks.workunit.client.1.vm05.stdout:9/728: unlink d0/df/d11/l56 0
2026-03-10T10:20:12.879 INFO:tasks.workunit.client.1.vm05.stdout:5/824: dwrite da/db/d26/d70/fd1 [4194304,4194304] 0
2026-03-10T10:20:12.881 INFO:tasks.workunit.client.1.vm05.stdout:4/687: fsync f0 0
2026-03-10T10:20:12.887 INFO:tasks.workunit.client.1.vm05.stdout:7/861: getdents d5/d1d/d20/d3b 0
2026-03-10T10:20:12.887 INFO:tasks.workunit.client.1.vm05.stdout:6/822: getdents dd/d36/d3f/d12/d44/d30 0
2026-03-10T10:20:12.888 INFO:tasks.workunit.client.1.vm05.stdout:7/862: write d5/d1d/d20/d91/fc9 [2123851,130794] 0
2026-03-10T10:20:12.896 INFO:tasks.workunit.client.1.vm05.stdout:5/825: dread da/db/d26/fdd [0,4194304] 0
2026-03-10T10:20:12.899 INFO:tasks.workunit.client.1.vm05.stdout:3/860: read dd/d20/d9e/ff9 [3458844,83125] 0
2026-03-10T10:20:12.900 INFO:tasks.workunit.client.0.vm02.stdout:8/805: read d1/d1c/d43/d6a/da8/f6e [17195,107564] 0
2026-03-10T10:20:12.900 INFO:tasks.workunit.client.0.vm02.stdout:2/842: creat d0/d71/d108/d65/dc4/dfa/d80/d10f/f11b x:0 0 0
2026-03-10T10:20:12.907 INFO:tasks.workunit.client.1.vm05.stdout:8/759: dread d7/d14/d62/f9d [0,4194304] 0
2026-03-10T10:20:12.908 INFO:tasks.workunit.client.1.vm05.stdout:9/729: dread d0/d1/d13/f27 [0,4194304] 0
2026-03-10T10:20:12.910 INFO:tasks.workunit.client.0.vm02.stdout:1/860: fdatasync d4/da/d27/d38/fad 0
2026-03-10T10:20:12.910 INFO:tasks.workunit.client.0.vm02.stdout:0/861: link d9/d18/d1a/d46/l78 d9/d34/d3d/l116 0
2026-03-10T10:20:12.911 INFO:tasks.workunit.client.0.vm02.stdout:1/861: truncate d4/da/d27/f35 5490140 0
2026-03-10T10:20:12.912 INFO:tasks.workunit.client.0.vm02.stdout:1/862: read d4/da/f13 [9024201,22959] 0
2026-03-10T10:20:12.914 INFO:tasks.workunit.client.1.vm05.stdout:1/899: unlink d4/df/d1c/l59 0
2026-03-10T10:20:12.917 INFO:tasks.workunit.client.0.vm02.stdout:3/820: rename d1/f54 to d1/d8/d21/d73/d78/d84/dfa/f10f 0
2026-03-10T10:20:12.919 INFO:tasks.workunit.client.1.vm05.stdout:7/863: symlink d5/d1d/d20/d91/l101 0
2026-03-10T10:20:12.932 INFO:tasks.workunit.client.0.vm02.stdout:9/779: write da/d3c/d4c/d38/d82/d89/f8a [330326,33825] 0
2026-03-10T10:20:12.935 INFO:tasks.workunit.client.0.vm02.stdout:4/956: symlink d1/d75/ddd/d10e/d5e/d78/d44/dd0/l13f 0
2026-03-10T10:20:12.940 INFO:tasks.workunit.client.1.vm05.stdout:3/861: rename dd/d15/f121 to dd/d20/d56/d5e/f135 0
2026-03-10T10:20:12.948 INFO:tasks.workunit.client.0.vm02.stdout:2/843: mknod d0/d71/d108/d65/dc4/de0/c11c 0
2026-03-10T10:20:12.948 INFO:tasks.workunit.client.0.vm02.stdout:8/806: unlink d1/d1c/d43/d5b/d88/dac/fa5 0
2026-03-10T10:20:12.954 INFO:tasks.workunit.client.1.vm05.stdout:6/823: symlink dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/l109 0
2026-03-10T10:20:12.959 INFO:tasks.workunit.client.0.vm02.stdout:1/863: write d4/d4a/ff8 [369679,20999] 0
2026-03-10T10:20:12.968 INFO:tasks.workunit.client.1.vm05.stdout:4/688: write d1/d64/f99 [1423703,87790] 0
2026-03-10T10:20:12.973 INFO:tasks.workunit.client.1.vm05.stdout:4/689: sync
2026-03-10T10:20:12.973 INFO:tasks.workunit.client.0.vm02.stdout:5/949: dwrite d1/db/d11/d13/f1f [4194304,4194304] 0
2026-03-10T10:20:12.975 INFO:tasks.workunit.client.1.vm05.stdout:9/730: dwrite d0/d1/f9 [0,4194304] 0
2026-03-10T10:20:12.984 INFO:tasks.workunit.client.0.vm02.stdout:0/862: rename d9/d18/l44 to d9/d34/d3d/d65/d89/dd3/l117 0
2026-03-10T10:20:12.985 INFO:tasks.workunit.client.0.vm02.stdout:0/863: dread - d9/d34/ff9 zero size
2026-03-10T10:20:13.005 INFO:tasks.workunit.client.1.vm05.stdout:0/840: truncate d1/d2/d9/d31/d13/d2f/f33 8436928 0
2026-03-10T10:20:13.014 INFO:tasks.workunit.client.0.vm02.stdout:7/833: getdents d1/dc/d55/d9a/dd9/df7 0
2026-03-10T10:20:13.014 INFO:tasks.workunit.client.0.vm02.stdout:7/834: readlink d1/d1b/d49/d98/l9e 0
2026-03-10T10:20:13.018 INFO:tasks.workunit.client.1.vm05.stdout:5/826: rename da/d9a/dc7/db4/dbd/fd0 to da/db/d28/d97/f11b 0
2026-03-10T10:20:13.019 INFO:tasks.workunit.client.1.vm05.stdout:3/862: mkdir dd/d15/d24/d2c/d6d/da7/d136 0
2026-03-10T10:20:13.020 INFO:tasks.workunit.client.1.vm05.stdout:8/760: truncate d7/d14/d24/d3f/feb 253324 0
2026-03-10T10:20:13.025 INFO:tasks.workunit.client.1.vm05.stdout:1/900: mkdir d4/d3d/ddc/d108 0
2026-03-10T10:20:13.026 INFO:tasks.workunit.client.1.vm05.stdout:4/690: unlink d1/d31/c16 0
2026-03-10T10:20:13.036 INFO:tasks.workunit.client.1.vm05.stdout:8/761: dread d7/d14/d3a/f50 [0,4194304] 0
2026-03-10T10:20:13.045 INFO:tasks.workunit.client.0.vm02.stdout:5/950: creat d1/db/d11/d13/d28/d37/dce/f144 x:0 0 0
2026-03-10T10:20:13.053 INFO:tasks.workunit.client.0.vm02.stdout:4/957: rename d1/d10/c51 to d1/d75/ddd/d10e/d7e/c140 0
2026-03-10T10:20:13.068 INFO:tasks.workunit.client.0.vm02.stdout:3/821: truncate d1/d8/d21/f47 433156 0
2026-03-10T10:20:13.068 INFO:tasks.workunit.client.1.vm05.stdout:0/841: truncate d1/d2/d9/f32 3970191 0
2026-03-10T10:20:13.070 INFO:tasks.workunit.client.1.vm05.stdout:6/824: symlink dd/d36/d3f/d12/d44/d2a/d77/d8b/l10a 0
2026-03-10T10:20:13.071 INFO:tasks.workunit.client.0.vm02.stdout:0/864: dread d9/d34/d3d/f4e [0,4194304] 0
2026-03-10T10:20:13.078 INFO:tasks.workunit.client.0.vm02.stdout:6/781: getdents d0/d8/d29/d52/de8/db2 0
2026-03-10T10:20:13.079 INFO:tasks.workunit.client.1.vm05.stdout:1/901: rename d4/df/d76/cf4 to d4/df/de0/d82/c109 0
2026-03-10T10:20:13.085 INFO:tasks.workunit.client.0.vm02.stdout:8/807: dwrite d1/f80 [0,4194304] 0
2026-03-10T10:20:13.087 INFO:tasks.workunit.client.1.vm05.stdout:4/691: dread d1/d31/d76/dac/db8/dbf/f78 [0,4194304] 0
2026-03-10T10:20:13.088 INFO:tasks.workunit.client.1.vm05.stdout:4/692: dread - d1/d3/d65/db0/fe5 zero size
2026-03-10T10:20:13.088 INFO:tasks.workunit.client.1.vm05.stdout:4/693: write d1/d31/dc/d40/d45/daa/fe4 [638707,13694] 0
2026-03-10T10:20:13.089 INFO:tasks.workunit.client.0.vm02.stdout:5/951: fdatasync d1/db/d11/d13/d28/d37/d3d/da3/d113/f115 0
2026-03-10T10:20:13.092 INFO:tasks.workunit.client.0.vm02.stdout:1/864: write d4/d2c/f77 [224750,38568] 0
2026-03-10T10:20:13.110 INFO:tasks.workunit.client.1.vm05.stdout:5/827: write da/db/dee/d38/fab [1461164,17676] 0
2026-03-10T10:20:13.117 INFO:tasks.workunit.client.0.vm02.stdout:4/958: fsync d1/d75/ddd/d10e/d5e/d78/d7f/d82/fe9 0
2026-03-10T10:20:13.119 INFO:tasks.workunit.client.0.vm02.stdout:4/959: chown d1/d75/ddd/d10e/d5e/d78/d1a/d49/d81/dc6 86191 1
2026-03-10T10:20:13.124 INFO:tasks.workunit.client.1.vm05.stdout:9/731: mkdir d0/df/d74/d8c/de4/df3 0
2026-03-10T10:20:13.132 INFO:tasks.workunit.client.1.vm05.stdout:7/864: unlink d5/d1d/d29/d3e/d8c/d82/c89 0
2026-03-10T10:20:13.132 INFO:tasks.workunit.client.1.vm05.stdout:7/865: chown d5/fe 627 1
2026-03-10T10:20:13.133 INFO:tasks.workunit.client.0.vm02.stdout:7/835: mknod d1/d1b/d8f/dad/d7e/dba/c104 0
2026-03-10T10:20:13.136 INFO:tasks.workunit.client.1.vm05.stdout:2/782: getdents db/d28/d4f/d8b/d9a/d9d 0
2026-03-10T10:20:13.136 INFO:tasks.workunit.client.1.vm05.stdout:2/783: read - db/d28/d4f/d59/da4/fca zero size
2026-03-10T10:20:13.137 INFO:tasks.workunit.client.0.vm02.stdout:7/836: dwrite d1/dc/d55/f85 [0,4194304] 0
2026-03-10T10:20:13.137 INFO:tasks.workunit.client.1.vm05.stdout:2/784: chown db/d2d/f52 160 1
2026-03-10T10:20:13.138 INFO:tasks.workunit.client.0.vm02.stdout:0/865: dread - d9/d34/d3d/d65/d89/dd3/da7/db7/de1/ff7 zero size
2026-03-10T10:20:13.143 INFO:tasks.workunit.client.1.vm05.stdout:0/842: dread d1/d2/d9/d50/f94 [0,4194304] 0
2026-03-10T10:20:13.143 INFO:tasks.workunit.client.0.vm02.stdout:7/837: chown d1/d1b/f61 96 1
2026-03-10T10:20:13.143 INFO:tasks.workunit.client.0.vm02.stdout:6/782: rename d0/d8/d29/d6d/d96/de4/def/d6f/l7b to d0/d8/d29/d94/d9a/l106 0
2026-03-10T10:20:13.143 INFO:tasks.workunit.client.1.vm05.stdout:3/863: symlink dd/d20/l137 0
2026-03-10T10:20:13.144 INFO:tasks.workunit.client.0.vm02.stdout:8/808: truncate d1/d1c/d23/f3b 5878935 0
2026-03-10T10:20:13.144 INFO:tasks.workunit.client.1.vm05.stdout:0/843: stat d1/d2/d9/d31/d13/d17/da1/df5/f117 0
2026-03-10T10:20:13.144 INFO:tasks.workunit.client.0.vm02.stdout:8/809: chown d1/d1c/d23 134215451 1
2026-03-10T10:20:13.144 INFO:tasks.workunit.client.1.vm05.stdout:6/825: dread - dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/fb6 zero size
2026-03-10T10:20:13.147 INFO:tasks.workunit.client.0.vm02.stdout:1/865: rmdir d4/d2c/d53/da6/db8/dd9 39
2026-03-10T10:20:13.161 INFO:tasks.workunit.client.0.vm02.stdout:9/780: getdents da/d3c/d4c/d38 0
2026-03-10T10:20:13.163 INFO:tasks.workunit.client.0.vm02.stdout:9/781: chown da/d3c/d53/l91 462 1
2026-03-10T10:20:13.164 INFO:tasks.workunit.client.0.vm02.stdout:9/782: readlink da/d3c/d4c/d38/d4a/d99/laa 0
2026-03-10T10:20:13.170 INFO:tasks.workunit.client.0.vm02.stdout:0/866: truncate d9/d18/d1a/d22/d24/f4f 357094 0
2026-03-10T10:20:13.170 INFO:tasks.workunit.client.0.vm02.stdout:2/844: getdents d0/d71/d108/d65/dc4/dfa/dd3 0
2026-03-10T10:20:13.175 INFO:tasks.workunit.client.0.vm02.stdout:5/952: getdents d1/db/d11/d13/d28/d11a 0
2026-03-10T10:20:13.182 INFO:tasks.workunit.client.0.vm02.stdout:9/783: symlink da/d3c/d4c/d38/d82/d89/lfd 0
2026-03-10T10:20:13.182 INFO:tasks.workunit.client.0.vm02.stdout:3/822: write d1/f81 [4696786,64678] 0
2026-03-10T10:20:13.195 INFO:tasks.workunit.client.0.vm02.stdout:4/960: dwrite d1/d75/ddd/fea [0,4194304] 0
2026-03-10T10:20:13.206 INFO:tasks.workunit.client.0.vm02.stdout:6/783: dwrite d0/d8/d29/d52/f8b [0,4194304] 0
2026-03-10T10:20:13.213 INFO:tasks.workunit.client.0.vm02.stdout:8/810: dwrite d1/d1c/d43/d5b/d88/dac/fb7 [0,4194304] 0
2026-03-10T10:20:13.222 INFO:tasks.workunit.client.0.vm02.stdout:1/866: symlink d4/da/d1a/l114 0
2026-03-10T10:20:13.242 INFO:tasks.workunit.client.1.vm05.stdout:1/902: creat d4/df/d76/f10a x:0 0 0
2026-03-10T10:20:13.242 INFO:tasks.workunit.client.0.vm02.stdout:7/838: rename d1/d1b/d8f/dad/d7e/dba/fac to d1/d1b/d8f/f105 0
2026-03-10T10:20:13.242 INFO:tasks.workunit.client.0.vm02.stdout:8/811: creat d1/d1c/d43/d5b/d88/dac/ff2 x:0 0 0
2026-03-10T10:20:13.246 INFO:tasks.workunit.client.0.vm02.stdout:1/867: rmdir d4/dc3 39
2026-03-10T10:20:13.247 INFO:tasks.workunit.client.0.vm02.stdout:2/845: link d0/c45 d0/d71/d108/d65/dc4/dfa/df1/c11d 0
2026-03-10T10:20:13.251 INFO:tasks.workunit.client.1.vm05.stdout:2/785: mknod db/d2d/dc6/cfb 0
2026-03-10T10:20:13.253 INFO:tasks.workunit.client.0.vm02.stdout:8/812: creat d1/d1c/d43/d5b/dab/ff3 x:0 0 0
2026-03-10T10:20:13.253 INFO:tasks.workunit.client.1.vm05.stdout:2/786: chown db/d2d/dc6/dc7/cd7 11 1
2026-03-10T10:20:13.253 INFO:tasks.workunit.client.1.vm05.stdout:3/864: symlink dd/d15/d24/d2c/d107/l138 0
2026-03-10T10:20:13.261 INFO:tasks.workunit.client.1.vm05.stdout:0/844: mkdir d1/d2/d9/d31/daa/d11c 0
2026-03-10T10:20:13.262 INFO:tasks.workunit.client.1.vm05.stdout:2/787: sync
2026-03-10T10:20:13.262 INFO:tasks.workunit.client.1.vm05.stdout:0/845: stat d1/d2/d9/d31/d13/d15 0
2026-03-10T10:20:13.266 INFO:tasks.workunit.client.0.vm02.stdout:7/839: creat d1/dc/d55/d9c/dfd/f106 x:0 0 0
2026-03-10T10:20:13.266 INFO:tasks.workunit.client.1.vm05.stdout:5/828: symlink da/d9a/dbe/l11c 0
2026-03-10T10:20:13.267 INFO:tasks.workunit.client.0.vm02.stdout:1/868: truncate d4/d1b/f44 374487 0
2026-03-10T10:20:13.268 INFO:tasks.workunit.client.1.vm05.stdout:9/732: creat d0/df/d11/dc6/ff4 x:0 0 0
2026-03-10T10:20:13.269 INFO:tasks.workunit.client.0.vm02.stdout:8/813: creat d1/d1c/d43/d5b/d88/dac/d83/d9f/ff4 x:0 0 0
2026-03-10T10:20:13.272 INFO:tasks.workunit.client.0.vm02.stdout:7/840: dwrite d1/d1b/d49/f4b [4194304,4194304] 0
2026-03-10T10:20:13.287 INFO:tasks.workunit.client.0.vm02.stdout:0/867: dwrite d9/d18/d1a/d22/d24/d80/d49/f53 [0,4194304] 0
2026-03-10T10:20:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:13 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:13 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:13 vm05.local ceph-mon[59051]: [10/Mar/2026:10:20:12] ENGINE Bus STARTING
2026-03-10T10:20:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:13 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:13 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:13 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:20:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:13 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:20:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:13 vm05.local ceph-mon[59051]: [10/Mar/2026:10:20:12] ENGINE Serving on http://192.168.123.102:8765
2026-03-10T10:20:13.303 INFO:tasks.workunit.client.1.vm05.stdout:3/865: symlink dd/d15/d24/d2c/d107/l139 0
2026-03-10T10:20:13.304 INFO:tasks.workunit.client.0.vm02.stdout:5/953: rename d1/db/d11/d84/d40/d4f/f132 to d1/db/d11/d13/d28/d37/f145 0
2026-03-10T10:20:13.307 INFO:tasks.workunit.client.0.vm02.stdout:4/961: dwrite d1/d52/d53/dda/df7/f129 [0,4194304] 0
2026-03-10T10:20:13.308 INFO:tasks.workunit.client.1.vm05.stdout:2/788: readlink db/l10 0
2026-03-10T10:20:13.316 INFO:tasks.workunit.client.1.vm05.stdout:0/846: mkdir d1/d2/d39/d3d/d11d 0
2026-03-10T10:20:13.328 INFO:tasks.workunit.client.0.vm02.stdout:9/784: dwrite da/d3c/d4c/f27 [0,4194304] 0
2026-03-10T10:20:13.329 INFO:tasks.workunit.client.0.vm02.stdout:6/784: write d0/d8/d29/f59 [1379766,70886] 0
2026-03-10T10:20:13.329 INFO:tasks.workunit.client.1.vm05.stdout:7/866: write d5/d1d/d29/d3e/d8c/d7f/f93 [576584,80845] 0
2026-03-10T10:20:13.332 INFO:tasks.workunit.client.0.vm02.stdout:8/814: read d1/d1c/d24/f31 [3474680,107388] 0
2026-03-10T10:20:13.337 INFO:tasks.workunit.client.1.vm05.stdout:6/826: dwrite dd/d36/d3f/d12/d44/d2a/f98 [4194304,4194304] 0
2026-03-10T10:20:13.337 INFO:tasks.workunit.client.0.vm02.stdout:2/846: dwrite d0/d1a/d49/dcc/ff3 [0,4194304] 0
2026-03-10T10:20:13.342 INFO:tasks.workunit.client.1.vm05.stdout:1/903: unlink d4/d3d/c93 0
2026-03-10T10:20:13.350 INFO:tasks.workunit.client.1.vm05.stdout:8/762: getdents d7/d14/d62 0
2026-03-10T10:20:13.351 INFO:tasks.workunit.client.1.vm05.stdout:5/829: truncate f5 102666 0
2026-03-10T10:20:13.351 INFO:tasks.workunit.client.0.vm02.stdout:3/823: rename d1/d8/d86/f87 to d1/d6/d8b/f110 0
2026-03-10T10:20:13.354 INFO:tasks.workunit.client.1.vm05.stdout:9/733: mkdir d0/d1/d13/de/ddf/df5 0
2026-03-10T10:20:13.355 INFO:tasks.workunit.client.1.vm05.stdout:9/734: write d0/d1/f6d [2649048,97236] 0
2026-03-10T10:20:13.356 INFO:tasks.workunit.client.1.vm05.stdout:9/735: chown d0/dc4/d63/caa 0 1
2026-03-10T10:20:13.363 INFO:tasks.workunit.client.1.vm05.stdout:2/789: chown db/c4b 365 1
2026-03-10T10:20:13.386 INFO:tasks.workunit.client.1.vm05.stdout:6/827: rename dd/d36/d3f/d12/d58/f65 to dd/d36/d3f/dbd/f10b 0
2026-03-10T10:20:13.388 INFO:tasks.workunit.client.0.vm02.stdout:9/785: rmdir da/d3c 39
2026-03-10T10:20:13.390 INFO:tasks.workunit.client.0.vm02.stdout:8/815: truncate d1/f7d 362143 0
2026-03-10T10:20:13.391 INFO:tasks.workunit.client.1.vm05.stdout:4/694: getdents d1/d31/d4b/d6d 0
2026-03-10T10:20:13.393 INFO:tasks.workunit.client.0.vm02.stdout:8/816: stat d1/d1c/d43/d5b/d88/fd1 0
2026-03-10T10:20:13.394 INFO:tasks.workunit.client.1.vm05.stdout:5/830: stat da/db/d28/d97/f11b 0
2026-03-10T10:20:13.396 INFO:tasks.workunit.client.0.vm02.stdout:8/817: fsync d1/d1c/fe0 0
2026-03-10T10:20:13.398 INFO:tasks.workunit.client.0.vm02.stdout:2/847: read d0/d10/fa1 [1196558,98027] 0
2026-03-10T10:20:13.398 INFO:tasks.workunit.client.0.vm02.stdout:8/818: chown d1/d1c/d23/d25/l32 58 1
2026-03-10T10:20:13.401 INFO:tasks.workunit.client.0.vm02.stdout:3/824: rename d1/d20/d52/dd3/fe7 to d1/d8/d86/f111 0
2026-03-10T10:20:13.406 INFO:tasks.workunit.client.0.vm02.stdout:5/954: truncate d1/db/d11/d84/d40/d4f/d5f/f73 2966087 0
2026-03-10T10:20:13.409 INFO:tasks.workunit.client.0.vm02.stdout:4/962: mknod d1/d75/c141 0
2026-03-10T10:20:13.409 INFO:tasks.workunit.client.0.vm02.stdout:1/869: mkdir d4/dc3/d115 0
2026-03-10T10:20:13.411 INFO:tasks.workunit.client.0.vm02.stdout:6/785: mkdir d0/d8/d8c/d107 0
2026-03-10T10:20:13.416 INFO:tasks.workunit.client.0.vm02.stdout:9/786: chown da/d3c/d4c/d38/d4a/c71 0 1
2026-03-10T10:20:13.421 INFO:tasks.workunit.client.0.vm02.stdout:8/819: creat d1/d1c/d43/d6a/d7c/ff5 x:0 0 0
2026-03-10T10:20:13.422 INFO:tasks.workunit.client.0.vm02.stdout:0/868: link d9/d34/d3d/d65/d89/fc4 d9/d34/d3d/d65/d89/dd3/f118 0
2026-03-10T10:20:13.428 INFO:tasks.workunit.client.1.vm05.stdout:9/736: symlink d0/df/d74/d8c/de4/lf6 0
2026-03-10T10:20:13.429 INFO:tasks.workunit.client.1.vm05.stdout:2/790: unlink db/d1c/l21 0
2026-03-10T10:20:13.430 INFO:tasks.workunit.client.1.vm05.stdout:7/867: fsync d5/d1d/d20/d35/fbb 0
2026-03-10T10:20:13.433 INFO:tasks.workunit.client.0.vm02.stdout:3/825: read d1/d8/d21/f88 [1988052,11242] 0
2026-03-10T10:20:13.435 INFO:tasks.workunit.client.0.vm02.stdout:3/826: readlink d1/d8/d21/d73/d78/l105 0
2026-03-10T10:20:13.437 INFO:tasks.workunit.client.0.vm02.stdout:3/827: dread - d1/d8/d21/d73/d78/d84/fb4 zero size
2026-03-10T10:20:13.442 INFO:tasks.workunit.client.0.vm02.stdout:8/820: creat d1/d1c/d43/d6a/ff6 x:0 0 0
2026-03-10T10:20:13.442 INFO:tasks.workunit.client.0.vm02.stdout:8/821: readlink d1/d2/l47 0
2026-03-10T10:20:13.446 INFO:tasks.workunit.client.1.vm05.stdout:5/831: mkdir da/d9a/dbe/d11d 0
2026-03-10T10:20:13.449 INFO:tasks.workunit.client.0.vm02.stdout:0/869: mkdir d9/d18/dc7/d119 0
2026-03-10T10:20:13.467 INFO:tasks.workunit.client.0.vm02.stdout:4/963: creat d1/d75/ddd/d10e/d5e/d78/d55/f142 x:0 0 0
2026-03-10T10:20:13.471 INFO:tasks.workunit.client.0.vm02.stdout:9/787: symlink da/d3c/d4c/d2c/lfe 0
2026-03-10T10:20:13.473 INFO:tasks.workunit.client.0.vm02.stdout:1/870: dread d4/d4a/ff8 [0,4194304] 0
2026-03-10T10:20:13.478 INFO:tasks.workunit.client.0.vm02.stdout:7/841: dwrite d1/dc/d99/ff6 [0,4194304] 0
2026-03-10T10:20:13.478 INFO:tasks.workunit.client.0.vm02.stdout:3/828: symlink d1/d20/db2/dcb/l112 0
2026-03-10T10:20:13.480 INFO:tasks.workunit.client.0.vm02.stdout:6/786: sync
2026-03-10T10:20:13.482 INFO:tasks.workunit.client.0.vm02.stdout:9/788: sync
2026-03-10T10:20:13.482 INFO:tasks.workunit.client.1.vm05.stdout:7/868: rename d5/d17/f3c to d5/d17/d66/f102 0
2026-03-10T10:20:13.482 INFO:tasks.workunit.client.0.vm02.stdout:7/842: dread - d1/d1b/d8f/d67/fc2 zero size
2026-03-10T10:20:13.483 INFO:tasks.workunit.client.0.vm02.stdout:7/843: stat d1/dc/d10/d38/l56 0
2026-03-10T10:20:13.484 INFO:tasks.workunit.client.0.vm02.stdout:7/844: chown d1/d1b/d8f/f66 273 1
2026-03-10T10:20:13.486 INFO:tasks.workunit.client.0.vm02.stdout:8/822: symlink d1/d2/lf7 0
2026-03-10T10:20:13.488 INFO:tasks.workunit.client.1.vm05.stdout:3/866: dwrite dd/d15/d24/d74/fb2 [0,4194304] 0
2026-03-10T10:20:13.489 INFO:tasks.workunit.client.1.vm05.stdout:3/867: chown dd/d20/l137 19 1
2026-03-10T10:20:13.492 INFO:tasks.workunit.client.1.vm05.stdout:4/695: readlink d1/d31/dc/d40/l5a 0
2026-03-10T10:20:13.506 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:13 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:13.506 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:13 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:13.506 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:13 vm02.local ceph-mon[50200]: [10/Mar/2026:10:20:12] ENGINE Bus STARTING
2026-03-10T10:20:13.506 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:13 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:13.506 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:13 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:13.506 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:13 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:20:13.506 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:13 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:20:13.507 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:13 vm02.local ceph-mon[50200]: [10/Mar/2026:10:20:12] ENGINE Serving on http://192.168.123.102:8765
2026-03-10T10:20:13.507 INFO:tasks.workunit.client.0.vm02.stdout:4/964: mkdir d1/d75/ddd/d10e/d5e/d78/d44/dd0/d143 0
2026-03-10T10:20:13.507 INFO:tasks.workunit.client.0.vm02.stdout:7/845: dread d1/f15 [0,4194304] 0
2026-03-10T10:20:13.512 INFO:tasks.workunit.client.1.vm05.stdout:5/832: unlink da/db/d26/d5c/l19 0
2026-03-10T10:20:13.514 INFO:tasks.workunit.client.1.vm05.stdout:0/847: getdents d1 0
2026-03-10T10:20:13.514 INFO:tasks.workunit.client.1.vm05.stdout:8/763: write d7/d2f/fe0 [458010,42977] 0
2026-03-10T10:20:13.516 INFO:tasks.workunit.client.1.vm05.stdout:0/848: chown d1/d2/d9/d31/d13/da2/dab/dce/d106/f111 1973834799 1
2026-03-10T10:20:13.533 INFO:tasks.workunit.client.1.vm05.stdout:5/833: sync
2026-03-10T10:20:13.540 INFO:tasks.workunit.client.0.vm02.stdout:6/787: unlink d0/db9/l103 0
2026-03-10T10:20:13.545 INFO:tasks.workunit.client.0.vm02.stdout:9/789: dread da/d3c/d4c/d38/d4a/f54 [0,4194304] 0
2026-03-10T10:20:13.551 INFO:tasks.workunit.client.0.vm02.stdout:3/829: truncate d1/d6/d8e/fc7 216132 0
2026-03-10T10:20:13.552 INFO:tasks.workunit.client.0.vm02.stdout:9/790: dwrite da/f15 [0,4194304] 0
2026-03-10T10:20:13.564 INFO:tasks.workunit.client.0.vm02.stdout:3/830: dread d1/d8/f3f [0,4194304] 0
2026-03-10T10:20:13.570 INFO:tasks.workunit.client.0.vm02.stdout:8/823: unlink d1/ca0 0
2026-03-10T10:20:13.577 INFO:tasks.workunit.client.0.vm02.stdout:2/848: write d0/d71/d108/fa0 [423881,11699] 0
2026-03-10T10:20:13.578 INFO:tasks.workunit.client.1.vm05.stdout:1/904: dwrite d4/d79/f8d [0,4194304] 0
2026-03-10T10:20:13.583 INFO:tasks.workunit.client.0.vm02.stdout:5/955: write d1/db/d11/d13/d28/d37/dce/f10c [725870,50896] 0
2026-03-10T10:20:13.587 INFO:tasks.workunit.client.0.vm02.stdout:5/956: chown d1/db/d11/d84/fb2 396189426 1
2026-03-10T10:20:13.588 INFO:tasks.workunit.client.1.vm05.stdout:3/868: creat dd/d15/d24/d8e/dac/f13a x:0 0 0
2026-03-10T10:20:13.589 INFO:tasks.workunit.client.1.vm05.stdout:3/869: stat dd/d15/d24/d74/fb2 0
2026-03-10T10:20:13.601 INFO:tasks.workunit.client.0.vm02.stdout:7/846: symlink d1/dc/d60/l107 0
2026-03-10T10:20:13.604 INFO:tasks.workunit.client.1.vm05.stdout:6/828: dwrite dd/d36/d3f/d12/d44/d30/f8d [0,4194304] 0
2026-03-10T10:20:13.608 INFO:tasks.workunit.client.1.vm05.stdout:8/764: rmdir d7/d14/d24/d3f/d4f 39
2026-03-10T10:20:13.616 INFO:tasks.workunit.client.0.vm02.stdout:0/870: dwrite d9/d18/d1a/d22/d24/d8e/d9b/fc2 [0,4194304] 0
2026-03-10T10:20:13.626 INFO:tasks.workunit.client.0.vm02.stdout:1/871: unlink d4/da/d1a/d47/d88/c98 0
2026-03-10T10:20:13.627 INFO:tasks.workunit.client.0.vm02.stdout:1/872: fdatasync d4/da/d1a/d47/d78/f111 0
2026-03-10T10:20:13.640 INFO:tasks.workunit.client.0.vm02.stdout:6/788: dread - d0/d8/d29/d52/de8/db2/dbb/fed zero size
2026-03-10T10:20:13.642 INFO:tasks.workunit.client.1.vm05.stdout:2/791: write db/d28/d4f/d8b/d9a/fcd [568935,99212] 0
2026-03-10T10:20:13.648 INFO:tasks.workunit.client.1.vm05.stdout:7/869: symlink d5/d1d/l103 0
2026-03-10T10:20:13.648 INFO:tasks.workunit.client.1.vm05.stdout:7/870: truncate d5/dd/ffb 90770 0
2026-03-10T10:20:13.649 INFO:tasks.workunit.client.0.vm02.stdout:9/791: fdatasync da/d3c/d4c/d2c/d34/f3d 0
2026-03-10T10:20:13.651 INFO:tasks.workunit.client.1.vm05.stdout:1/905: dread - d4/d39/d3e/db1/db8/fd6 zero size
2026-03-10T10:20:13.653 INFO:tasks.workunit.client.0.vm02.stdout:3/831: symlink d1/d8/d86/db1/dbc/l113 0
2026-03-10T10:20:13.657 INFO:tasks.workunit.client.0.vm02.stdout:8/824: read d1/d1c/d43/d5b/f79 [730868,107394] 0
2026-03-10T10:20:13.663 INFO:tasks.workunit.client.1.vm05.stdout:6/829: mknod dd/d36/d3f/d12/d44/d30/c10c 0
2026-03-10T10:20:13.672 INFO:tasks.workunit.client.0.vm02.stdout:4/965: write d1/d75/ddd/f7d [4110852,17555] 0
2026-03-10T10:20:13.678 INFO:tasks.workunit.client.1.vm05.stdout:5/834: write da/db/dee/d38/fe4 [20136,88037] 0
2026-03-10T10:20:13.679 INFO:tasks.workunit.client.1.vm05.stdout:3/870: write dd/d20/d56/fb7 [991701,7360] 0
2026-03-10T10:20:13.679 INFO:tasks.workunit.client.0.vm02.stdout:2/849: write d0/d71/d108/d65/dc4/dfa/f6e [620867,22014] 0
2026-03-10T10:20:13.681 INFO:tasks.workunit.client.0.vm02.stdout:5/957: dwrite d1/db/d11/d13/f25 [0,4194304] 0
2026-03-10T10:20:13.698 INFO:tasks.workunit.client.0.vm02.stdout:0/871: dread d9/d18/d1a/d22/d24/f4f [0,4194304] 0
2026-03-10T10:20:13.699 INFO:tasks.workunit.client.1.vm05.stdout:9/737: getdents d0/d1/d13/de/d93 0
2026-03-10T10:20:13.703 INFO:tasks.workunit.client.0.vm02.stdout:6/789: dread d0/f6b [0,4194304] 0
2026-03-10T10:20:13.705 INFO:tasks.workunit.client.1.vm05.stdout:1/906: symlink d4/df/d76/l10b 0
2026-03-10T10:20:13.711 INFO:tasks.workunit.client.1.vm05.stdout:6/830: creat dd/d36/d7d/d102/f10d x:0 0 0
2026-03-10T10:20:13.712 INFO:tasks.workunit.client.1.vm05.stdout:0/849: dwrite d1/d2/d39/d6e/dc0/fcd [0,4194304] 0
2026-03-10T10:20:13.718 INFO:tasks.workunit.client.0.vm02.stdout:8/825: rename d1/d1c/d24/dad/dbe/dda/fe5 to d1/d1c/d43/d6a/da8/d8e/ff8 0
2026-03-10T10:20:13.723 INFO:tasks.workunit.client.1.vm05.stdout:5/835: fsync da/db/d26/d70/d72/df6/d10e/f103 0
2026-03-10T10:20:13.723 INFO:tasks.workunit.client.1.vm05.stdout:5/836: chown da/d9a/dc7/db4/f113 240911615 1
2026-03-10T10:20:13.724 INFO:tasks.workunit.client.1.vm05.stdout:5/837: stat da/db/dee/f2a 0
2026-03-10T10:20:13.731 INFO:tasks.workunit.client.1.vm05.stdout:3/871: dread dd/d15/d24/d2c/f3f [0,4194304] 0
2026-03-10T10:20:13.738 INFO:tasks.workunit.client.1.vm05.stdout:3/872: write dd/d15/d24/d2c/fd1 [1820655,124435] 0
2026-03-10T10:20:13.738 INFO:tasks.workunit.client.1.vm05.stdout:8/765: mkdir d7/d14/d24/d3f/df0 0
2026-03-10T10:20:13.738 INFO:tasks.workunit.client.0.vm02.stdout:4/966: creat d1/d75/ddd/d10e/d5e/d78/d1a/d49/d81/f144 x:0 0 0
2026-03-10T10:20:13.738 INFO:tasks.workunit.client.0.vm02.stdout:4/967: stat d1/d75/ddd/d10e/d5e/d78/d7f/f13c 0
2026-03-10T10:20:13.738 INFO:tasks.workunit.client.0.vm02.stdout:4/968: dread - d1/d52/d53/dda/de0/f138 zero size
2026-03-10T10:20:13.740 INFO:tasks.workunit.client.0.vm02.stdout:5/958: read d1/db/d11/d16/d79/d85/fb0 [967728,104021] 0
2026-03-10T10:20:13.747 INFO:tasks.workunit.client.0.vm02.stdout:9/792: dwrite da/d3c/d4c/fbf [0,4194304] 0
2026-03-10T10:20:13.748 INFO:tasks.workunit.client.1.vm05.stdout:9/738: creat d0/d1/d13/de/d21/ff7 x:0 0 0
2026-03-10T10:20:13.763 INFO:tasks.workunit.client.1.vm05.stdout:6/831: unlink dd/d36/d3f/d12/d58/f5a 0
2026-03-10T10:20:13.767 INFO:tasks.workunit.client.1.vm05.stdout:0/850: fdatasync d1/d2/d9/d31/d12/d20/f81 0
2026-03-10T10:20:13.768 INFO:tasks.workunit.client.1.vm05.stdout:4/696: getdents d1/d3/d65/db0 0
2026-03-10T10:20:13.769 INFO:tasks.workunit.client.1.vm05.stdout:4/697: truncate d1/d31/dc/fe1 1036181 0
2026-03-10T10:20:13.770 INFO:tasks.workunit.client.0.vm02.stdout:5/959: sync
2026-03-10T10:20:13.774 INFO:tasks.workunit.client.1.vm05.stdout:5/838: mknod da/db/d26/d70/d72/d10b/c11e 0
2026-03-10T10:20:13.775 INFO:tasks.workunit.client.1.vm05.stdout:5/839: read da/db/f6d [976383,46665] 0
2026-03-10T10:20:13.776 INFO:tasks.workunit.client.1.vm05.stdout:4/698: dread d1/d64/f99 [0,4194304] 0
2026-03-10T10:20:13.786 INFO:tasks.workunit.client.1.vm05.stdout:3/873: fsync dd/d20/d94/fa9 0
2026-03-10T10:20:13.787 INFO:tasks.workunit.client.0.vm02.stdout:0/872: write d9/d18/d1a/d22/d24/d8e/fce [623622,94684] 0
2026-03-10T10:20:13.790 INFO:tasks.workunit.client.0.vm02.stdout:3/832: write d1/d8/f2e [4903721,88304] 0
2026-03-10T10:20:13.798 INFO:tasks.workunit.client.0.vm02.stdout:1/873: dwrite d4/d1b/f44 [0,4194304] 0
2026-03-10T10:20:13.799 INFO:tasks.workunit.client.0.vm02.stdout:0/873: sync
2026-03-10T10:20:13.800 INFO:tasks.workunit.client.1.vm05.stdout:3/874: sync
2026-03-10T10:20:13.806 INFO:tasks.workunit.client.1.vm05.stdout:2/792: rename db/d28/d4f/d8b/d9a to db/d61/dfc 0
2026-03-10T10:20:13.810 INFO:tasks.workunit.client.1.vm05.stdout:9/739: unlink d0/df/d74/d8c/fac 0
2026-03-10T10:20:13.811 INFO:tasks.workunit.client.1.vm05.stdout:7/871: link d5/d1d/d29/ccb d5/d1d/d29/d60/de8/c104 0
2026-03-10T10:20:13.814 INFO:tasks.workunit.client.1.vm05.stdout:9/740: sync
2026-03-10T10:20:13.827 INFO:tasks.workunit.client.1.vm05.stdout:6/832: write dd/d36/d3f/d12/d58/f9d [688954,115520] 0
2026-03-10T10:20:13.828 INFO:tasks.workunit.client.1.vm05.stdout:5/840: creat da/d63/f11f x:0 0 0
2026-03-10T10:20:13.835 INFO:tasks.workunit.client.1.vm05.stdout:5/841: dwrite da/db/dee/d38/fe4 [0,4194304] 0
2026-03-10T10:20:13.848 INFO:tasks.workunit.client.1.vm05.stdout:1/907: rename d4/d20/l2b to d4/d79/de6/l10c 0
2026-03-10T10:20:13.861 INFO:tasks.workunit.client.1.vm05.stdout:6/833: fdatasync dd/d36/d3f/d12/d44/d2a/d3d/d48/f82 0
2026-03-10T10:20:13.864 INFO:tasks.workunit.client.1.vm05.stdout:4/699: dwrite d1/d3/f12 [0,4194304] 0
2026-03-10T10:20:13.888 INFO:tasks.workunit.client.1.vm05.stdout:3/875: write dd/d15/d24/f8a [2643814,120875] 0
2026-03-10T10:20:13.888
INFO:tasks.workunit.client.1.vm05.stdout:4/700: truncate d1/d3/f60 206590 0 2026-03-10T10:20:13.888 INFO:tasks.workunit.client.1.vm05.stdout:8/766: dwrite d7/d14/d24/f95 [0,4194304] 0 2026-03-10T10:20:13.890 INFO:tasks.workunit.client.1.vm05.stdout:4/701: chown d1/d3/d65/ld2 55416 1 2026-03-10T10:20:13.892 INFO:tasks.workunit.client.1.vm05.stdout:5/842: write da/d9a/fa7 [2581819,28886] 0 2026-03-10T10:20:13.894 INFO:tasks.workunit.client.1.vm05.stdout:2/793: link db/d61/dfc/d9d/fc1 db/d28/d4f/d8b/ffd 0 2026-03-10T10:20:13.895 INFO:tasks.workunit.client.1.vm05.stdout:1/908: dread d4/fda [0,4194304] 0 2026-03-10T10:20:13.897 INFO:tasks.workunit.client.1.vm05.stdout:7/872: creat d5/f105 x:0 0 0 2026-03-10T10:20:13.898 INFO:tasks.workunit.client.0.vm02.stdout:9/793: creat da/d3c/d4c/d38/d82/da3/fff x:0 0 0 2026-03-10T10:20:13.909 INFO:tasks.workunit.client.1.vm05.stdout:8/767: fsync d7/d14/d15/d3b/fc5 0 2026-03-10T10:20:13.914 INFO:tasks.workunit.client.0.vm02.stdout:9/794: dread da/d3c/d4c/f8e [0,4194304] 0 2026-03-10T10:20:13.924 INFO:tasks.workunit.client.1.vm05.stdout:0/851: rename d1/d2/d9/d50/d9a/da0/ca3 to d1/d2/d9/d50/c11e 0 2026-03-10T10:20:13.930 INFO:tasks.workunit.client.0.vm02.stdout:8/826: mkdir d1/d1c/d43/df9 0 2026-03-10T10:20:13.930 INFO:tasks.workunit.client.1.vm05.stdout:5/843: stat da/db/d26/d5c/fc5 0 2026-03-10T10:20:13.940 INFO:tasks.workunit.client.1.vm05.stdout:2/794: mkdir db/d28/d4f/d59/d94/dfe 0 2026-03-10T10:20:13.942 INFO:tasks.workunit.client.1.vm05.stdout:6/834: write dd/d36/d3f/d12/d44/d2a/d77/fb4 [989831,46909] 0 2026-03-10T10:20:13.943 INFO:tasks.workunit.client.0.vm02.stdout:6/790: write d0/d8/f9b [3387959,105827] 0 2026-03-10T10:20:13.949 INFO:tasks.workunit.client.1.vm05.stdout:6/835: dwrite dd/d36/d3f/d12/d44/d63/fc5 [0,4194304] 0 2026-03-10T10:20:13.953 INFO:tasks.workunit.client.1.vm05.stdout:1/909: mknod d4/d79/d83/dc5/dcb/c10d 0 2026-03-10T10:20:13.954 INFO:tasks.workunit.client.1.vm05.stdout:3/876: dwrite 
dd/d15/d24/d2c/dd0/f102 [0,4194304] 0 2026-03-10T10:20:13.963 INFO:tasks.workunit.client.1.vm05.stdout:6/836: dwrite dd/d36/d3f/d12/d44/d2a/d77/fb4 [0,4194304] 0 2026-03-10T10:20:13.964 INFO:tasks.workunit.client.1.vm05.stdout:6/837: fdatasync dd/d36/d7d/f97 0 2026-03-10T10:20:13.971 INFO:tasks.workunit.client.1.vm05.stdout:6/838: dread dd/d36/d3f/d12/d44/d2a/d3d/d3e/f64 [0,4194304] 0 2026-03-10T10:20:13.973 INFO:tasks.workunit.client.1.vm05.stdout:7/873: symlink d5/d1d/d20/d2d/d68/l106 0 2026-03-10T10:20:13.975 INFO:tasks.workunit.client.0.vm02.stdout:4/969: mknod d1/d75/ddd/d10e/d5e/d78/c145 0 2026-03-10T10:20:13.985 INFO:tasks.workunit.client.0.vm02.stdout:0/874: write d9/d18/d1a/d22/d24/d8e/d9b/daa/ff5 [530115,10983] 0 2026-03-10T10:20:13.993 INFO:tasks.workunit.client.1.vm05.stdout:9/741: rename d0/df/d74/d8c/d8f/lc1 to d0/d1/d13/de/d93/lf8 0 2026-03-10T10:20:13.993 INFO:tasks.workunit.client.1.vm05.stdout:5/844: unlink da/db/d26/d5c/c62 0 2026-03-10T10:20:13.994 INFO:tasks.workunit.client.0.vm02.stdout:7/847: getdents d1/d1b/d8f/dad/d7e/dba/dea 0 2026-03-10T10:20:13.994 INFO:tasks.workunit.client.0.vm02.stdout:5/960: mkdir d1/db/d11/d13/d28/d11a/d146 0 2026-03-10T10:20:13.995 INFO:tasks.workunit.client.1.vm05.stdout:9/742: read d0/f28 [2887482,12867] 0 2026-03-10T10:20:14.013 INFO:tasks.workunit.client.0.vm02.stdout:8/827: symlink d1/d1c/d24/dad/dbe/lfa 0 2026-03-10T10:20:14.014 INFO:tasks.workunit.client.0.vm02.stdout:3/833: mkdir d1/d8/d21/d73/d101/d114 0 2026-03-10T10:20:14.014 INFO:tasks.workunit.client.0.vm02.stdout:4/970: creat d1/d75/ddd/d10e/d5e/d78/d44/f146 x:0 0 0 2026-03-10T10:20:14.015 INFO:tasks.workunit.client.0.vm02.stdout:2/850: getdents d0/d1a/d49/dcc 0 2026-03-10T10:20:14.022 INFO:tasks.workunit.client.1.vm05.stdout:4/702: rename d1/d31/d76/dac/db8/dbf/fca to d1/d3/d65/ddb/fe6 0 2026-03-10T10:20:14.024 INFO:tasks.workunit.client.1.vm05.stdout:9/743: symlink d0/d1/d13/d62/lf9 0 2026-03-10T10:20:14.024 
INFO:tasks.workunit.client.1.vm05.stdout:1/910: getdents d4/df/d76/d101 0 2026-03-10T10:20:14.024 INFO:tasks.workunit.client.0.vm02.stdout:2/851: dread d0/dd4/fdd [0,4194304] 0 2026-03-10T10:20:14.025 INFO:tasks.workunit.client.1.vm05.stdout:3/877: fdatasync dd/d15/d24/d2c/f60 0 2026-03-10T10:20:14.026 INFO:tasks.workunit.client.1.vm05.stdout:8/768: creat d7/d14/d15/ff1 x:0 0 0 2026-03-10T10:20:14.026 INFO:tasks.workunit.client.0.vm02.stdout:8/828: unlink d1/d1c/f66 0 2026-03-10T10:20:14.026 INFO:tasks.workunit.client.1.vm05.stdout:8/769: stat d7/d14/d24/d3f/f7d 0 2026-03-10T10:20:14.027 INFO:tasks.workunit.client.0.vm02.stdout:8/829: chown d1/d1c/d43/d5b/d88/dac 585601761 1 2026-03-10T10:20:14.027 INFO:tasks.workunit.client.1.vm05.stdout:4/703: creat d1/d3/d65/ddb/fe7 x:0 0 0 2026-03-10T10:20:14.029 INFO:tasks.workunit.client.1.vm05.stdout:4/704: read d1/d31/dc/d40/d63/f74 [3895902,86553] 0 2026-03-10T10:20:14.033 INFO:tasks.workunit.client.0.vm02.stdout:6/791: creat d0/d8/d29/d2f/d50/f108 x:0 0 0 2026-03-10T10:20:14.035 INFO:tasks.workunit.client.0.vm02.stdout:1/874: getdents d4/da/d1a/d5b/d93/de8 0 2026-03-10T10:20:14.035 INFO:tasks.workunit.client.0.vm02.stdout:6/792: chown d0/d8/d29/d2f/c42 56650 1 2026-03-10T10:20:14.035 INFO:tasks.workunit.client.0.vm02.stdout:9/795: rename da/d3c/d4c/cda to da/d3c/d4c/d2c/d34/c100 0 2026-03-10T10:20:14.035 INFO:tasks.workunit.client.0.vm02.stdout:6/793: chown d0/d8/d29/d2f/d50/f108 267815168 1 2026-03-10T10:20:14.040 INFO:tasks.workunit.client.0.vm02.stdout:2/852: truncate d0/d10/f4b 4351784 0 2026-03-10T10:20:14.044 INFO:tasks.workunit.client.1.vm05.stdout:6/839: link dd/d36/d3f/d12/d44/d2a/d3d/d3e/ld7 dd/d36/d3f/d12/d44/d2a/d3d/d48/dc6/l10e 0 2026-03-10T10:20:14.046 INFO:tasks.workunit.client.1.vm05.stdout:4/705: symlink d1/d31/d4b/le8 0 2026-03-10T10:20:14.047 INFO:tasks.workunit.client.0.vm02.stdout:8/830: symlink d1/d1c/d24/dad/dbe/dda/lfb 0 2026-03-10T10:20:14.047 INFO:tasks.workunit.client.0.vm02.stdout:3/834: link 
d1/d20/cd8 d1/d58/d104/c115 0 2026-03-10T10:20:14.047 INFO:tasks.workunit.client.0.vm02.stdout:1/875: unlink d4/l41 0 2026-03-10T10:20:14.047 INFO:tasks.workunit.client.0.vm02.stdout:9/796: rmdir da/d3c/d4c 39 2026-03-10T10:20:14.049 INFO:tasks.workunit.client.0.vm02.stdout:6/794: rmdir d0/d8/d29/d6d 39 2026-03-10T10:20:14.049 INFO:tasks.workunit.client.0.vm02.stdout:7/848: getdents d1/dc 0 2026-03-10T10:20:14.051 INFO:tasks.workunit.client.1.vm05.stdout:9/744: link d0/df/d74/l8b d0/df/d74/d8c/de4/lfa 0 2026-03-10T10:20:14.051 INFO:tasks.workunit.client.0.vm02.stdout:7/849: chown d1/dc/d16/d28 17576249 1 2026-03-10T10:20:14.051 INFO:tasks.workunit.client.1.vm05.stdout:6/840: dread - dd/d36/d3f/d12/d44/d2a/fec zero size 2026-03-10T10:20:14.052 INFO:tasks.workunit.client.0.vm02.stdout:7/850: dread - d1/d1b/d49/fe5 zero size 2026-03-10T10:20:14.055 INFO:tasks.workunit.client.1.vm05.stdout:4/706: chown d1/d31/f36 63461191 1 2026-03-10T10:20:14.056 INFO:tasks.workunit.client.1.vm05.stdout:0/852: dread d1/d2/d9/d31/d13/f7a [0,4194304] 0 2026-03-10T10:20:14.061 INFO:tasks.workunit.client.0.vm02.stdout:5/961: getdents d1/db/d11/d84/d40 0 2026-03-10T10:20:14.063 INFO:tasks.workunit.client.1.vm05.stdout:9/745: rmdir d0/d1/d16/d6e 39 2026-03-10T10:20:14.065 INFO:tasks.workunit.client.1.vm05.stdout:9/746: dread d0/d1/d16/f5c [0,4194304] 0 2026-03-10T10:20:14.080 INFO:tasks.workunit.client.1.vm05.stdout:6/841: creat dd/d36/d3f/f10f x:0 0 0 2026-03-10T10:20:14.082 INFO:tasks.workunit.client.0.vm02.stdout:0/875: write d9/d18/d1a/d22/d24/d80/f90 [6836390,33797] 0 2026-03-10T10:20:14.091 INFO:tasks.workunit.client.0.vm02.stdout:3/835: mkdir d1/d8/d86/db1/d116 0 2026-03-10T10:20:14.094 INFO:tasks.workunit.client.1.vm05.stdout:2/795: truncate db/d28/d4f/f8a 6971756 0 2026-03-10T10:20:14.096 INFO:tasks.workunit.client.1.vm05.stdout:1/911: dread d4/d39/d3e/f7d [0,4194304] 0 2026-03-10T10:20:14.096 INFO:tasks.workunit.client.0.vm02.stdout:8/831: unlink d1/d1c/d43/d5b/d88/fb9 0 
2026-03-10T10:20:14.099 INFO:tasks.workunit.client.1.vm05.stdout:7/874: dwrite d5/d1d/d20/d3b/fe5 [0,4194304] 0 2026-03-10T10:20:14.102 INFO:tasks.workunit.client.0.vm02.stdout:2/853: mknod d0/d71/dd8/c11e 0 2026-03-10T10:20:14.102 INFO:tasks.workunit.client.0.vm02.stdout:4/971: link d1/c103 d1/d10/d12b/c147 0 2026-03-10T10:20:14.105 INFO:tasks.workunit.client.0.vm02.stdout:4/972: chown d1/d75/ddd/d10e/d5e/d122 0 1 2026-03-10T10:20:14.109 INFO:tasks.workunit.client.0.vm02.stdout:3/836: dread d1/f81 [0,4194304] 0 2026-03-10T10:20:14.110 INFO:tasks.workunit.client.1.vm05.stdout:3/878: dwrite dd/d20/f35 [4194304,4194304] 0 2026-03-10T10:20:14.113 INFO:tasks.workunit.client.1.vm05.stdout:5/845: dwrite da/db/dee/d38/f48 [0,4194304] 0 2026-03-10T10:20:14.114 INFO:tasks.workunit.client.1.vm05.stdout:4/707: mkdir d1/d3/d65/de0/de9 0 2026-03-10T10:20:14.119 INFO:tasks.workunit.client.0.vm02.stdout:1/876: write d4/da/fbb [36301,21543] 0 2026-03-10T10:20:14.120 INFO:tasks.workunit.client.1.vm05.stdout:9/747: write d0/f28 [4928776,10141] 0 2026-03-10T10:20:14.125 INFO:tasks.workunit.client.0.vm02.stdout:9/797: symlink da/d3c/d4c/d75/l101 0 2026-03-10T10:20:14.125 INFO:tasks.workunit.client.0.vm02.stdout:0/876: symlink d9/d18/d1a/d22/d24/d80/d74/d7f/l11a 0 2026-03-10T10:20:14.126 INFO:tasks.workunit.client.1.vm05.stdout:8/770: dwrite d7/d14/d15/d3b/f8b [0,4194304] 0 2026-03-10T10:20:14.137 INFO:tasks.workunit.client.1.vm05.stdout:4/708: dread d1/d31/dc/d40/d63/f94 [0,4194304] 0 2026-03-10T10:20:14.147 INFO:tasks.workunit.client.1.vm05.stdout:4/709: dread d1/d3/f5f [0,4194304] 0 2026-03-10T10:20:14.149 INFO:tasks.workunit.client.0.vm02.stdout:8/832: symlink d1/d1c/d43/d6a/da8/d8e/lfc 0 2026-03-10T10:20:14.149 INFO:tasks.workunit.client.0.vm02.stdout:6/795: creat d0/d8/d29/d6d/d96/de4/def/f109 x:0 0 0 2026-03-10T10:20:14.149 INFO:tasks.workunit.client.0.vm02.stdout:5/962: truncate d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fc3 2810397 0 2026-03-10T10:20:14.150 
INFO:tasks.workunit.client.0.vm02.stdout:8/833: chown d1/d1c/c3a 22947 1 2026-03-10T10:20:14.150 INFO:tasks.workunit.client.0.vm02.stdout:2/854: truncate d0/f2c 8609899 0 2026-03-10T10:20:14.180 INFO:tasks.workunit.client.0.vm02.stdout:0/877: creat d9/d34/d3d/d65/d89/dd3/d9c/f11b x:0 0 0 2026-03-10T10:20:14.184 INFO:tasks.workunit.client.0.vm02.stdout:6/796: chown d0/d8/d29/c71 1 1 2026-03-10T10:20:14.190 INFO:tasks.workunit.client.0.vm02.stdout:4/973: getdents d1/d75/ddd/d10e/d5e/d122 0 2026-03-10T10:20:14.197 INFO:tasks.workunit.client.0.vm02.stdout:7/851: link d1/d1b/d8f/f59 d1/dc/d16/d28/f108 0 2026-03-10T10:20:14.206 INFO:tasks.workunit.client.0.vm02.stdout:5/963: unlink d1/db/d11/d84/d40/d4f/d5f/f73 0 2026-03-10T10:20:14.207 INFO:tasks.workunit.client.0.vm02.stdout:4/974: mknod d1/d32/da3/d11d/c148 0 2026-03-10T10:20:14.217 INFO:tasks.workunit.client.0.vm02.stdout:1/877: write d4/da/d1a/d47/d78/fdc [105005,15810] 0 2026-03-10T10:20:14.221 INFO:tasks.workunit.client.0.vm02.stdout:3/837: dwrite d1/d6/d8e/fa6 [0,4194304] 0 2026-03-10T10:20:14.240 INFO:tasks.workunit.client.0.vm02.stdout:9/798: creat da/f102 x:0 0 0 2026-03-10T10:20:14.252 INFO:tasks.workunit.client.0.vm02.stdout:7/852: write d1/dc/d99/fdb [2905103,127055] 0 2026-03-10T10:20:14.262 INFO:tasks.workunit.client.0.vm02.stdout:6/797: fdatasync d0/d8/d29/d2f/d50/f78 0 2026-03-10T10:20:14.264 INFO:tasks.workunit.client.0.vm02.stdout:8/834: write d1/d1c/d23/d25/fdc [34543,106838] 0 2026-03-10T10:20:14.266 INFO:tasks.workunit.client.0.vm02.stdout:2/855: link d0/l1c d0/d71/d108/d65/dc4/dfa/l11f 0 2026-03-10T10:20:14.266 INFO:tasks.workunit.client.0.vm02.stdout:5/964: read d1/db/d11/d84/d40/d4f/f60 [355136,118465] 0 2026-03-10T10:20:14.281 INFO:tasks.workunit.client.1.vm05.stdout:2/796: write db/d61/f92 [3087645,127518] 0 2026-03-10T10:20:14.281 INFO:tasks.workunit.client.0.vm02.stdout:1/878: stat d4/d4a/l5a 0 2026-03-10T10:20:14.284 INFO:tasks.workunit.client.1.vm05.stdout:3/879: dwrite dd/d39/d66/fd5 
[0,4194304] 0 2026-03-10T10:20:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:14 vm05.local ceph-mon[59051]: [10/Mar/2026:10:20:12] ENGINE Client ('192.168.123.102', 48130) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T10:20:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:14 vm05.local ceph-mon[59051]: [10/Mar/2026:10:20:12] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T10:20:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:14 vm05.local ceph-mon[59051]: [10/Mar/2026:10:20:12] ENGINE Bus STARTED 2026-03-10T10:20:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:14 vm05.local ceph-mon[59051]: mgrmap e30: vm02.zmavgl(active, since 4s), standbys: vm05.coparq 2026-03-10T10:20:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:14 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:14 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:14.289 INFO:tasks.workunit.client.1.vm05.stdout:4/710: dwrite d1/d3/f5 [4194304,4194304] 0 2026-03-10T10:20:14.295 INFO:tasks.workunit.client.1.vm05.stdout:4/711: dwrite d1/d31/dc/d40/f7d [0,4194304] 0 2026-03-10T10:20:14.302 INFO:tasks.workunit.client.0.vm02.stdout:0/878: link d9/d18/d1a/d46/l10d d9/d34/d3d/d65/d89/dd3/da8/l11c 0 2026-03-10T10:20:14.313 INFO:tasks.workunit.client.1.vm05.stdout:5/846: fdatasync da/d9a/dc7/f6a 0 2026-03-10T10:20:14.315 INFO:tasks.workunit.client.1.vm05.stdout:0/853: rename d1/d2/d9/d31/d12/d41/l4d to d1/d2/d9/d31/d13/da2/dab/l11f 0 2026-03-10T10:20:14.320 INFO:tasks.workunit.client.1.vm05.stdout:9/748: symlink d0/df/lfb 0 2026-03-10T10:20:14.321 INFO:tasks.workunit.client.0.vm02.stdout:6/798: read d0/d8/d29/d6d/d96/de4/def/fcf [1350297,58891] 0 2026-03-10T10:20:14.327 
INFO:tasks.workunit.client.1.vm05.stdout:2/797: creat db/d28/d4f/d8b/fff x:0 0 0 2026-03-10T10:20:14.329 INFO:tasks.workunit.client.0.vm02.stdout:2/856: mkdir d0/dd4/d120 0 2026-03-10T10:20:14.335 INFO:tasks.workunit.client.1.vm05.stdout:6/842: unlink dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/cfd 0 2026-03-10T10:20:14.336 INFO:tasks.workunit.client.1.vm05.stdout:6/843: stat dd/d36/d3f/d12/d44/d2a/d3d/d3e/f64 0 2026-03-10T10:20:14.339 INFO:tasks.workunit.client.1.vm05.stdout:3/880: creat dd/d39/d66/f13b x:0 0 0 2026-03-10T10:20:14.342 INFO:tasks.workunit.client.0.vm02.stdout:5/965: dread d1/db/d11/d16/d48/dcf/f10e [0,4194304] 0 2026-03-10T10:20:14.344 INFO:tasks.workunit.client.0.vm02.stdout:5/966: truncate d1/db/d11/d16/d79/d85/d93/f107 9310004 0 2026-03-10T10:20:14.346 INFO:tasks.workunit.client.1.vm05.stdout:4/712: mknod d1/d31/d4b/cea 0 2026-03-10T10:20:14.347 INFO:tasks.workunit.client.0.vm02.stdout:9/799: mknod da/d3c/d4c/d2c/c103 0 2026-03-10T10:20:14.348 INFO:tasks.workunit.client.1.vm05.stdout:7/875: link d5/dd/l10 d5/d1d/d29/d3e/d8c/d82/d90/d9a/l107 0 2026-03-10T10:20:14.360 INFO:tasks.workunit.client.1.vm05.stdout:5/847: fsync da/d96/dd9/f114 0 2026-03-10T10:20:14.363 INFO:tasks.workunit.client.1.vm05.stdout:1/912: dread d4/df/d1c/d92/f97 [4194304,4194304] 0 2026-03-10T10:20:14.363 INFO:tasks.workunit.client.1.vm05.stdout:8/771: rename d7/d2f/d57/c9f to d7/d2f/d57/de3/cf2 0 2026-03-10T10:20:14.364 INFO:tasks.workunit.client.1.vm05.stdout:8/772: readlink d7/d14/d3a/d49/l85 0 2026-03-10T10:20:14.364 INFO:tasks.workunit.client.1.vm05.stdout:1/913: dread - d4/d39/d3e/db1/db8/fd6 zero size 2026-03-10T10:20:14.364 INFO:tasks.workunit.client.1.vm05.stdout:1/914: chown d4/d39/d3e/da0/fc9 0 1 2026-03-10T10:20:14.370 INFO:tasks.workunit.client.1.vm05.stdout:2/798: creat db/d2d/d5e/f100 x:0 0 0 2026-03-10T10:20:14.379 INFO:tasks.workunit.client.0.vm02.stdout:0/879: dwrite d9/d18/dc7/dca/f95 [0,4194304] 0 2026-03-10T10:20:14.381 
INFO:tasks.workunit.client.0.vm02.stdout:0/880: write d9/d34/d3d/d67/f9f [5164247,99307] 0 2026-03-10T10:20:14.383 INFO:tasks.workunit.client.0.vm02.stdout:3/838: creat d1/d8/d21/f117 x:0 0 0 2026-03-10T10:20:14.384 INFO:tasks.workunit.client.1.vm05.stdout:0/854: dread d1/d2/d9/d31/d12/d41/fa9 [0,4194304] 0 2026-03-10T10:20:14.392 INFO:tasks.workunit.client.0.vm02.stdout:5/967: mknod d1/db/d11/d13/d28/d37/d3d/da3/c147 0 2026-03-10T10:20:14.393 INFO:tasks.workunit.client.1.vm05.stdout:5/848: truncate da/db/de9/fd2 551792 0 2026-03-10T10:20:14.394 INFO:tasks.workunit.client.0.vm02.stdout:9/800: dread - da/d3c/d4c/d38/da6/fc9 zero size 2026-03-10T10:20:14.397 INFO:tasks.workunit.client.0.vm02.stdout:7/853: rmdir d1/d1b/d49/d98/dee 0 2026-03-10T10:20:14.400 INFO:tasks.workunit.client.1.vm05.stdout:1/915: unlink d4/d37/l5e 0 2026-03-10T10:20:14.401 INFO:tasks.workunit.client.1.vm05.stdout:9/749: mknod d0/d1/cfc 0 2026-03-10T10:20:14.419 INFO:tasks.workunit.client.0.vm02.stdout:2/857: dwrite d0/d71/d108/d65/dc4/dfa/d80/d10f/fcf [0,4194304] 0 2026-03-10T10:20:14.426 INFO:tasks.workunit.client.0.vm02.stdout:4/975: getdents d1/d10 0 2026-03-10T10:20:14.431 INFO:tasks.workunit.client.1.vm05.stdout:5/849: mkdir da/d9a/d120 0 2026-03-10T10:20:14.435 INFO:tasks.workunit.client.0.vm02.stdout:0/881: dread - d9/d34/d3d/d65/d89/fd9 zero size 2026-03-10T10:20:14.447 INFO:tasks.workunit.client.1.vm05.stdout:1/916: creat d4/d39/d3e/da0/f10e x:0 0 0 2026-03-10T10:20:14.447 INFO:tasks.workunit.client.1.vm05.stdout:9/750: creat d0/df/d74/d8c/d8f/ddd/ffd x:0 0 0 2026-03-10T10:20:14.455 INFO:tasks.workunit.client.1.vm05.stdout:6/844: write dd/d36/d3f/dbd/f10b [4988239,74101] 0 2026-03-10T10:20:14.455 INFO:tasks.workunit.client.0.vm02.stdout:5/968: write d1/db/d11/d84/fb2 [2947753,100134] 0 2026-03-10T10:20:14.455 INFO:tasks.workunit.client.0.vm02.stdout:3/839: write d1/d8/d86/da2/fd2 [415495,130955] 0 2026-03-10T10:20:14.455 INFO:tasks.workunit.client.0.vm02.stdout:3/840: dread - d1/d6/ff1 
zero size 2026-03-10T10:20:14.460 INFO:tasks.workunit.client.0.vm02.stdout:6/799: creat d0/d8/d29/d6d/d96/de4/d102/f10a x:0 0 0 2026-03-10T10:20:14.470 INFO:tasks.workunit.client.1.vm05.stdout:0/855: dwrite d1/d2/d9/d31/d13/d2f/f88 [0,4194304] 0 2026-03-10T10:20:14.492 INFO:tasks.workunit.client.1.vm05.stdout:5/850: creat da/db/d28/d8a/de3/f121 x:0 0 0 2026-03-10T10:20:14.492 INFO:tasks.workunit.client.1.vm05.stdout:3/881: rename dd/d15/d24/d8e/dac/f119 to dd/d15/d24/d2c/f13c 0 2026-03-10T10:20:14.493 INFO:tasks.workunit.client.1.vm05.stdout:8/773: creat d7/d14/d3a/ff3 x:0 0 0 2026-03-10T10:20:14.493 INFO:tasks.workunit.client.1.vm05.stdout:1/917: dread - d4/d20/ff7 zero size 2026-03-10T10:20:14.495 INFO:tasks.workunit.client.0.vm02.stdout:1/879: getdents d4/da/d1a/d5b/d93 0 2026-03-10T10:20:14.496 INFO:tasks.workunit.client.1.vm05.stdout:6/845: unlink dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/lbf 0 2026-03-10T10:20:14.499 INFO:tasks.workunit.client.1.vm05.stdout:4/713: getdents d1/d64/da9 0 2026-03-10T10:20:14.504 INFO:tasks.workunit.client.1.vm05.stdout:0/856: read d1/d2/d9/d31/f109 [3364197,113132] 0 2026-03-10T10:20:14.506 INFO:tasks.workunit.client.0.vm02.stdout:8/835: rename d1/d1c/d43/d5b/d88/dac/cd8 to d1/d1c/d23/cfd 0 2026-03-10T10:20:14.506 INFO:tasks.workunit.client.0.vm02.stdout:6/800: fdatasync d0/d8/d29/d6d/d96/de4/def/d6f/f7c 0 2026-03-10T10:20:14.508 INFO:tasks.workunit.client.1.vm05.stdout:7/876: getdents d5/d1d/d29/d3e 0 2026-03-10T10:20:14.516 INFO:tasks.workunit.client.0.vm02.stdout:2/858: dwrite d0/d71/d108/fb5 [0,4194304] 0 2026-03-10T10:20:14.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:14 vm02.local ceph-mon[50200]: [10/Mar/2026:10:20:12] ENGINE Client ('192.168.123.102', 48130) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T10:20:14.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:14 vm02.local ceph-mon[50200]: 
[10/Mar/2026:10:20:12] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T10:20:14.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:14 vm02.local ceph-mon[50200]: [10/Mar/2026:10:20:12] ENGINE Bus STARTED 2026-03-10T10:20:14.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:14 vm02.local ceph-mon[50200]: mgrmap e30: vm02.zmavgl(active, since 4s), standbys: vm05.coparq 2026-03-10T10:20:14.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:14 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:14.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:14 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:14.636 INFO:tasks.workunit.client.0.vm02.stdout:0/882: write d9/d34/d3d/d65/d89/dd3/f66 [446888,33814] 0 2026-03-10T10:20:14.642 INFO:tasks.workunit.client.0.vm02.stdout:0/883: dwrite d9/d18/d1a/d22/d24/d80/d49/f53 [0,4194304] 0 2026-03-10T10:20:14.737 INFO:tasks.workunit.client.1.vm05.stdout:2/799: rename db/d61/f92 to db/d28/d4f/d59/d94/dfe/f101 0 2026-03-10T10:20:14.743 INFO:tasks.workunit.client.0.vm02.stdout:9/801: dwrite da/d3c/d4c/d2c/d34/f83 [0,4194304] 0 2026-03-10T10:20:14.759 INFO:tasks.workunit.client.1.vm05.stdout:8/774: dread - d7/d14/d15/fa8 zero size 2026-03-10T10:20:14.762 INFO:tasks.workunit.client.0.vm02.stdout:3/841: symlink d1/d58/dc9/l118 0 2026-03-10T10:20:14.764 INFO:tasks.workunit.client.1.vm05.stdout:9/751: mkdir d0/dc4/dfe 0 2026-03-10T10:20:14.769 INFO:tasks.workunit.client.0.vm02.stdout:6/801: readlink d0/d8/d29/d2f/d50/d98/l9e 0 2026-03-10T10:20:14.771 INFO:tasks.workunit.client.1.vm05.stdout:0/857: symlink d1/d2/d39/d6e/d95/l120 0 2026-03-10T10:20:14.771 INFO:tasks.workunit.client.1.vm05.stdout:6/846: creat dd/d36/d3f/dbd/dd5/f110 x:0 0 0 2026-03-10T10:20:14.778 INFO:tasks.workunit.client.1.vm05.stdout:7/877: mknod d5/d1d/d20/d2d/d5d/d7a/c108 0 2026-03-10T10:20:14.783 INFO:tasks.workunit.client.1.vm05.stdout:5/851: fdatasync 
da/db/f3b 0 2026-03-10T10:20:14.783 INFO:tasks.workunit.client.1.vm05.stdout:5/852: fdatasync da/db/d26/d70/fd1 0 2026-03-10T10:20:14.794 INFO:tasks.workunit.client.1.vm05.stdout:3/882: creat dd/d15/d1f/d116/f13d x:0 0 0 2026-03-10T10:20:14.796 INFO:tasks.workunit.client.1.vm05.stdout:8/775: mkdir d7/d14/d62/d90/dac/df4 0 2026-03-10T10:20:14.816 INFO:tasks.workunit.client.1.vm05.stdout:6/847: dread dd/d36/f71 [0,4194304] 0 2026-03-10T10:20:14.828 INFO:tasks.workunit.client.1.vm05.stdout:7/878: mkdir d5/d1d/d29/d3e/d8c/d82/d90/d9a/d109 0 2026-03-10T10:20:14.839 INFO:tasks.workunit.client.1.vm05.stdout:2/800: mknod db/d61/dfc/c102 0 2026-03-10T10:20:14.846 INFO:tasks.workunit.client.1.vm05.stdout:1/918: link d4/df/c6c d4/d3d/ddc/d108/c10f 0 2026-03-10T10:20:14.846 INFO:tasks.workunit.client.1.vm05.stdout:0/858: truncate d1/d2/d9/d31/d54/f6b 866466 0 2026-03-10T10:20:14.872 INFO:tasks.workunit.client.1.vm05.stdout:1/919: chown d4/d20/dbe/de8/lff 3897 1 2026-03-10T10:20:14.872 INFO:tasks.workunit.client.1.vm05.stdout:1/920: readlink d4/df/d76/l10b 0 2026-03-10T10:20:14.889 INFO:tasks.workunit.client.1.vm05.stdout:0/859: mknod d1/d2/d9/d31/daa/d11c/c121 0 2026-03-10T10:20:14.890 INFO:tasks.workunit.client.1.vm05.stdout:5/853: link da/db/de9/fd2 da/d9a/dbe/f122 0 2026-03-10T10:20:14.904 INFO:tasks.workunit.client.1.vm05.stdout:0/860: fsync d1/d2/d9/fbc 0 2026-03-10T10:20:14.916 INFO:tasks.workunit.client.1.vm05.stdout:4/714: symlink d1/d64/leb 0 2026-03-10T10:20:14.916 INFO:tasks.workunit.client.1.vm05.stdout:4/715: chown d1/d31/d4b/le8 781032 1 2026-03-10T10:20:14.918 INFO:tasks.workunit.client.1.vm05.stdout:4/716: chown d1/d31/dc/d40/d63 482257629 1 2026-03-10T10:20:14.941 INFO:tasks.workunit.client.1.vm05.stdout:4/717: sync 2026-03-10T10:20:14.963 INFO:tasks.workunit.client.1.vm05.stdout:0/861: link d1/d2/d9/d31/d12/d41/f101 d1/d2/d9/d31/f122 0 2026-03-10T10:20:14.965 INFO:tasks.workunit.client.0.vm02.stdout:9/802: dread da/f28 [0,4194304] 0 2026-03-10T10:20:14.967 
INFO:tasks.workunit.client.0.vm02.stdout:9/803: dread da/d3c/d4c/f8e [0,4194304] 0 2026-03-10T10:20:14.967 INFO:tasks.workunit.client.0.vm02.stdout:9/804: readlink da/d3c/d4c/lf7 0 2026-03-10T10:20:14.976 INFO:tasks.workunit.client.0.vm02.stdout:7/854: link d1/dc/cb1 d1/dc/d55/d9a/dd9/df7/c109 0 2026-03-10T10:20:14.988 INFO:tasks.workunit.client.1.vm05.stdout:4/718: dread d1/d31/dc/f3d [0,4194304] 0 2026-03-10T10:20:14.998 INFO:tasks.workunit.client.1.vm05.stdout:0/862: symlink d1/d2/d9/d31/d13/d15/d4e/d8a/l123 0 2026-03-10T10:20:15.000 INFO:tasks.workunit.client.0.vm02.stdout:2/859: dwrite d0/d10/da6/fb6 [0,4194304] 0 2026-03-10T10:20:15.044 INFO:tasks.workunit.client.0.vm02.stdout:6/802: dwrite d0/d8/d29/d2f/d50/ff0 [0,4194304] 0 2026-03-10T10:20:15.060 INFO:tasks.workunit.client.1.vm05.stdout:4/719: rmdir d1/d64/da9/dae 39 2026-03-10T10:20:15.069 INFO:tasks.workunit.client.0.vm02.stdout:8/836: unlink d1/d1c/c1f 0 2026-03-10T10:20:15.071 INFO:tasks.workunit.client.1.vm05.stdout:0/863: truncate d1/d2/d9/d31/d12/d41/fa9 2162712 0 2026-03-10T10:20:15.071 INFO:tasks.workunit.client.0.vm02.stdout:4/976: getdents d1/def 0 2026-03-10T10:20:15.073 INFO:tasks.workunit.client.0.vm02.stdout:0/884: symlink d9/d34/d3d/d65/l11d 0 2026-03-10T10:20:15.076 INFO:tasks.workunit.client.1.vm05.stdout:4/720: mknod d1/d31/d76/dac/dc5/cec 0 2026-03-10T10:20:15.076 INFO:tasks.workunit.client.0.vm02.stdout:9/805: dread - da/d3c/d4c/d38/da6/fec zero size 2026-03-10T10:20:15.079 INFO:tasks.workunit.client.1.vm05.stdout:4/721: dread d1/d3/f60 [0,4194304] 0 2026-03-10T10:20:15.084 INFO:tasks.workunit.client.1.vm05.stdout:4/722: mkdir d1/d31/dc/d40/d45/ded 0 2026-03-10T10:20:15.085 INFO:tasks.workunit.client.0.vm02.stdout:2/860: creat d0/d10/dee/f121 x:0 0 0 2026-03-10T10:20:15.087 INFO:tasks.workunit.client.0.vm02.stdout:5/969: rename d1/db/d11/d7b/l126 to d1/db/d11/d13/d28/d37/l148 0 2026-03-10T10:20:15.096 INFO:tasks.workunit.client.0.vm02.stdout:6/803: creat d0/d87/f10b x:0 0 0 
2026-03-10T10:20:15.096 INFO:tasks.workunit.client.0.vm02.stdout:6/804: chown d0/d8/d29/d94/d9a/f101 2480 1 2026-03-10T10:20:15.104 INFO:tasks.workunit.client.0.vm02.stdout:0/885: symlink d9/d34/d3d/l11e 0 2026-03-10T10:20:15.106 INFO:tasks.workunit.client.0.vm02.stdout:9/806: truncate da/d3c/d4c/d38/d4a/f54 565414 0 2026-03-10T10:20:15.107 INFO:tasks.workunit.client.0.vm02.stdout:9/807: stat da/d3c/d4c/d2c/d96/fee 0 2026-03-10T10:20:15.118 INFO:tasks.workunit.client.0.vm02.stdout:7/855: fsync d1/d1b/d8f/f59 0 2026-03-10T10:20:15.131 INFO:tasks.workunit.client.0.vm02.stdout:5/970: mknod d1/db/d11/d84/d40/d4f/d5f/d6d/d71/c149 0 2026-03-10T10:20:15.139 INFO:tasks.workunit.client.0.vm02.stdout:8/837: mknod d1/d1c/d43/cfe 0 2026-03-10T10:20:15.162 INFO:tasks.workunit.client.0.vm02.stdout:0/886: dread d9/d18/f1e [0,4194304] 0 2026-03-10T10:20:15.172 INFO:tasks.workunit.client.0.vm02.stdout:9/808: fdatasync da/d3c/d4c/d2c/d34/f81 0 2026-03-10T10:20:15.197 INFO:tasks.workunit.client.0.vm02.stdout:1/880: rename d4/da/f12 to d4/d2c/d53/da6/db8/dd9/dea/f116 0 2026-03-10T10:20:15.223 INFO:tasks.workunit.client.0.vm02.stdout:8/838: write d1/d1c/d43/f7a [2454048,90075] 0 2026-03-10T10:20:15.252 INFO:tasks.workunit.client.0.vm02.stdout:2/861: rename d0/f44 to d0/d8c/dc5/f122 0 2026-03-10T10:20:15.260 INFO:tasks.workunit.client.0.vm02.stdout:1/881: truncate d4/da/d1a/fa1 226618 0 2026-03-10T10:20:15.260 INFO:tasks.workunit.client.0.vm02.stdout:1/882: chown d4/d2c/d53/fbd 1660 1 2026-03-10T10:20:15.266 INFO:tasks.workunit.client.0.vm02.stdout:8/839: mkdir d1/d2/dff 0 2026-03-10T10:20:15.270 INFO:tasks.workunit.client.1.vm05.stdout:9/752: write d0/df/f97 [419000,8927] 0 2026-03-10T10:20:15.271 INFO:tasks.workunit.client.1.vm05.stdout:7/879: write d5/d1d/d20/d35/f78 [259957,89630] 0 2026-03-10T10:20:15.286 INFO:tasks.workunit.client.0.vm02.stdout:0/887: creat d9/d18/d1a/d22/d24/d80/d57/d107/f11f x:0 0 0 2026-03-10T10:20:15.290 INFO:tasks.workunit.client.0.vm02.stdout:7/856: rename 
d1/dc/d16/lc9 to d1/dc/d60/l10a 0 2026-03-10T10:20:15.291 INFO:tasks.workunit.client.0.vm02.stdout:1/883: mkdir d4/da/d27/d117 0 2026-03-10T10:20:15.292 INFO:tasks.workunit.client.0.vm02.stdout:8/840: mknod d1/d1c/d43/d6a/da8/d56/c100 0 2026-03-10T10:20:15.295 INFO:tasks.workunit.client.1.vm05.stdout:9/753: creat d0/d1/d57/fff x:0 0 0 2026-03-10T10:20:15.312 INFO:tasks.workunit.client.0.vm02.stdout:1/884: symlink d4/d2c/d91/l118 0 2026-03-10T10:20:15.318 INFO:tasks.workunit.client.1.vm05.stdout:7/880: mkdir d5/d1d/d29/d60/de1/d10a 0 2026-03-10T10:20:15.322 INFO:tasks.workunit.client.0.vm02.stdout:8/841: dread d1/d1c/d23/f3b [0,4194304] 0 2026-03-10T10:20:15.328 INFO:tasks.workunit.client.1.vm05.stdout:4/723: rename d1/d31/dc/lab to d1/d64/da9/dae/dcc/lee 0 2026-03-10T10:20:15.328 INFO:tasks.workunit.client.1.vm05.stdout:4/724: truncate d1/d3/f5 8993507 0 2026-03-10T10:20:15.332 INFO:tasks.workunit.client.1.vm05.stdout:7/881: dread d5/d1d/d20/d2d/d68/fc4 [0,4194304] 0 2026-03-10T10:20:15.362 INFO:tasks.workunit.client.0.vm02.stdout:7/857: link d1/dc/ff d1/dc/d10/f10b 0 2026-03-10T10:20:15.362 INFO:tasks.workunit.client.0.vm02.stdout:7/858: readlink d1/dc/d16/d28/d2d/l45 0 2026-03-10T10:20:15.363 INFO:tasks.workunit.client.1.vm05.stdout:3/883: mkdir dd/d39/d13e 0 2026-03-10T10:20:15.364 INFO:tasks.workunit.client.1.vm05.stdout:7/882: creat d5/d1d/d20/d2d/f10b x:0 0 0 2026-03-10T10:20:15.365 INFO:tasks.workunit.client.0.vm02.stdout:3/842: mknod d1/d6/c119 0 2026-03-10T10:20:15.368 INFO:tasks.workunit.client.0.vm02.stdout:8/842: link d1/d1c/f20 d1/d2/dff/f101 0 2026-03-10T10:20:15.374 INFO:tasks.workunit.client.1.vm05.stdout:3/884: mknod dd/d15/d24/d2c/d6d/da7/dbb/dbd/c13f 0 2026-03-10T10:20:15.375 INFO:tasks.workunit.client.1.vm05.stdout:3/885: stat dd/d20/d130/f131 0 2026-03-10T10:20:15.376 INFO:tasks.workunit.client.1.vm05.stdout:6/848: dwrite dd/d36/d3f/f1e [0,4194304] 0 2026-03-10T10:20:15.386 INFO:tasks.workunit.client.0.vm02.stdout:3/843: rename 
d1/d8/d21/d7d/l83 to d1/d8/d21/d73/d78/d84/dfa/l11a 0 2026-03-10T10:20:15.390 INFO:tasks.workunit.client.0.vm02.stdout:8/843: mkdir d1/d1c/d43/d5b/dab/d102 0 2026-03-10T10:20:15.406 INFO:tasks.workunit.client.0.vm02.stdout:7/859: creat d1/d1b/f10c x:0 0 0 2026-03-10T10:20:15.416 INFO:tasks.workunit.client.0.vm02.stdout:3/844: symlink d1/d8/d21/d73/d101/d114/l11b 0 2026-03-10T10:20:15.422 INFO:tasks.workunit.client.0.vm02.stdout:7/860: creat d1/d1b/d8f/dad/d7e/f10d x:0 0 0 2026-03-10T10:20:15.423 INFO:tasks.workunit.client.1.vm05.stdout:2/801: rmdir db/d12 39 2026-03-10T10:20:15.424 INFO:tasks.workunit.client.1.vm05.stdout:7/883: creat d5/d26/d9c/de7/f10c x:0 0 0 2026-03-10T10:20:15.424 INFO:tasks.workunit.client.1.vm05.stdout:7/884: stat d5/d1d/d20/d2d/fb0 0 2026-03-10T10:20:15.425 INFO:tasks.workunit.client.0.vm02.stdout:3/845: truncate d1/f1c 4486065 0 2026-03-10T10:20:15.427 INFO:tasks.workunit.client.1.vm05.stdout:8/776: link d7/d2f/d57/l8e d7/d2f/lf5 0 2026-03-10T10:20:15.430 INFO:tasks.workunit.client.1.vm05.stdout:2/802: chown db/d28/d4f/d59/ce7 18777195 1 2026-03-10T10:20:15.431 INFO:tasks.workunit.client.1.vm05.stdout:5/854: write da/d9a/fda [508701,125767] 0 2026-03-10T10:20:15.436 INFO:tasks.workunit.client.1.vm05.stdout:7/885: sync 2026-03-10T10:20:15.445 INFO:tasks.workunit.client.1.vm05.stdout:7/886: mkdir d5/d1d/d20/d35/d10d 0 2026-03-10T10:20:15.445 INFO:tasks.workunit.client.1.vm05.stdout:7/887: chown d5/d17/dae 109 1 2026-03-10T10:20:15.450 INFO:tasks.workunit.client.1.vm05.stdout:1/921: write d4/df/de0/fb5 [347353,96971] 0 2026-03-10T10:20:15.452 INFO:tasks.workunit.client.1.vm05.stdout:8/777: rename d7/d14/d3a/d49/d65/fdc to d7/d14/d24/d3f/d4f/ff6 0 2026-03-10T10:20:15.454 INFO:tasks.workunit.client.0.vm02.stdout:7/861: getdents d1/d1b/d8f 0 2026-03-10T10:20:15.461 INFO:tasks.workunit.client.0.vm02.stdout:7/862: truncate d1/d1b/d8f/f93 2851192 0 2026-03-10T10:20:15.464 INFO:tasks.workunit.client.1.vm05.stdout:5/855: mkdir da/d63/df2/d123 0 
2026-03-10T10:20:15.464 INFO:tasks.workunit.client.1.vm05.stdout:5/856: read - da/db/d28/d97/fb2 zero size 2026-03-10T10:20:15.466 INFO:tasks.workunit.client.1.vm05.stdout:7/888: creat d5/d17/d66/f10e x:0 0 0 2026-03-10T10:20:15.467 INFO:tasks.workunit.client.1.vm05.stdout:7/889: chown d5/d1d/d29/d3e/d8c 124309 1 2026-03-10T10:20:15.477 INFO:tasks.workunit.client.0.vm02.stdout:7/863: creat d1/d1b/d8f/dad/d7e/dd2/f10e x:0 0 0 2026-03-10T10:20:15.483 INFO:tasks.workunit.client.0.vm02.stdout:7/864: mknod d1/dc/d55/d9a/dd9/c10f 0 2026-03-10T10:20:15.484 INFO:tasks.workunit.client.0.vm02.stdout:7/865: chown d1/d1b/d8f/dad/fd8 119250 1 2026-03-10T10:20:15.485 INFO:tasks.workunit.client.1.vm05.stdout:2/803: symlink db/d12/l103 0 2026-03-10T10:20:15.489 INFO:tasks.workunit.client.1.vm05.stdout:5/857: creat da/db/d26/d5c/f124 x:0 0 0 2026-03-10T10:20:15.504 INFO:tasks.workunit.client.1.vm05.stdout:2/804: unlink db/d28/dbc/le9 0 2026-03-10T10:20:15.507 INFO:tasks.workunit.client.1.vm05.stdout:5/858: creat da/db/d26/d70/d72/f125 x:0 0 0 2026-03-10T10:20:15.512 INFO:tasks.workunit.client.1.vm05.stdout:0/864: dwrite d1/d2/d9/d31/d13/d15/d4e/f89 [0,4194304] 0 2026-03-10T10:20:15.515 INFO:tasks.workunit.client.1.vm05.stdout:5/859: symlink da/db/d26/d5c/l126 0 2026-03-10T10:20:15.517 INFO:tasks.workunit.client.1.vm05.stdout:5/860: read da/db/d26/d5c/fb5 [2165192,113877] 0 2026-03-10T10:20:15.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:15 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:15.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:15 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:15.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:15 vm02.local ceph-mon[50200]: pgmap v6: 65 pgs: 65 active+clean; 3.4 GiB data, 12 GiB used, 108 GiB / 120 GiB avail; 12 MiB/s rd, 16 MiB/s wr, 29 op/s 2026-03-10T10:20:15.537 INFO:tasks.workunit.client.1.vm05.stdout:0/865: mkdir 
d1/d2/d9/d31/d13/d124 0 2026-03-10T10:20:15.539 INFO:tasks.workunit.client.0.vm02.stdout:4/977: write d1/d75/ddd/d10e/fc3 [490935,104500] 0 2026-03-10T10:20:15.539 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:15 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:15.539 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:15 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:15.539 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:15 vm05.local ceph-mon[59051]: pgmap v6: 65 pgs: 65 active+clean; 3.4 GiB data, 12 GiB used, 108 GiB / 120 GiB avail; 12 MiB/s rd, 16 MiB/s wr, 29 op/s 2026-03-10T10:20:15.539 INFO:tasks.workunit.client.1.vm05.stdout:7/890: getdents d5/d1d/d29/d60 0 2026-03-10T10:20:15.543 INFO:tasks.workunit.client.1.vm05.stdout:5/861: fsync da/db/fad 0 2026-03-10T10:20:15.545 INFO:tasks.workunit.client.1.vm05.stdout:1/922: getdents d4/df/d1c/d53/daa 0 2026-03-10T10:20:15.548 INFO:tasks.workunit.client.0.vm02.stdout:4/978: unlink d1/d32/da3/fd7 0 2026-03-10T10:20:15.549 INFO:tasks.workunit.client.1.vm05.stdout:1/923: sync 2026-03-10T10:20:15.551 INFO:tasks.workunit.client.1.vm05.stdout:0/866: rmdir d1/d2/d9/d50/d9a 39 2026-03-10T10:20:15.557 INFO:tasks.workunit.client.1.vm05.stdout:7/891: symlink d5/d1d/d20/d3b/l10f 0 2026-03-10T10:20:15.560 INFO:tasks.workunit.client.0.vm02.stdout:5/971: write d1/db/d11/d62/fbf [8942,41838] 0 2026-03-10T10:20:15.563 INFO:tasks.workunit.client.0.vm02.stdout:6/805: truncate d0/f4c 3792190 0 2026-03-10T10:20:15.563 INFO:tasks.workunit.client.0.vm02.stdout:6/806: fdatasync d0/d87/f10b 0 2026-03-10T10:20:15.566 INFO:tasks.workunit.client.1.vm05.stdout:5/862: creat da/d63/df2/f127 x:0 0 0 2026-03-10T10:20:15.568 INFO:tasks.workunit.client.0.vm02.stdout:5/972: unlink d1/fdd 0 2026-03-10T10:20:15.573 INFO:tasks.workunit.client.0.vm02.stdout:9/809: dwrite da/d3c/d4c/d38/d4a/d99/fef [0,4194304] 0 2026-03-10T10:20:15.574 
INFO:tasks.workunit.client.1.vm05.stdout:5/863: dread da/db/d26/f7e [0,4194304] 0 2026-03-10T10:20:15.579 INFO:tasks.workunit.client.0.vm02.stdout:4/979: link d1/f94 d1/d75/ddd/d10e/d5e/d78/d7f/f149 0 2026-03-10T10:20:15.586 INFO:tasks.workunit.client.1.vm05.stdout:0/867: creat d1/d2/d9/d31/d13/d15/d4e/f125 x:0 0 0 2026-03-10T10:20:15.598 INFO:tasks.workunit.client.0.vm02.stdout:9/810: mknod da/d3c/d4c/d38/d4a/d70/c104 0 2026-03-10T10:20:15.600 INFO:tasks.workunit.client.1.vm05.stdout:5/864: chown da/db/dee/c34 16407769 1 2026-03-10T10:20:15.609 INFO:tasks.workunit.client.0.vm02.stdout:0/888: dwrite d9/d18/d1a/f7e [0,4194304] 0 2026-03-10T10:20:15.635 INFO:tasks.workunit.client.1.vm05.stdout:9/754: write d0/d1/d13/f6b [1171183,60833] 0 2026-03-10T10:20:15.656 INFO:tasks.workunit.client.0.vm02.stdout:6/807: mkdir d0/d8/d29/d6d/d96/de4/def/d6f/d10c 0 2026-03-10T10:20:15.659 INFO:tasks.workunit.client.1.vm05.stdout:4/725: write f0 [1287410,102247] 0 2026-03-10T10:20:15.663 INFO:tasks.workunit.client.0.vm02.stdout:1/885: truncate d4/d1b/f44 1342465 0 2026-03-10T10:20:15.673 INFO:tasks.workunit.client.0.vm02.stdout:2/862: dread d0/d10/f4b [0,4194304] 0 2026-03-10T10:20:15.678 INFO:tasks.workunit.client.0.vm02.stdout:0/889: mkdir d9/d34/d120 0 2026-03-10T10:20:15.678 INFO:tasks.workunit.client.0.vm02.stdout:0/890: truncate d9/d18/dc7/dca/f95 4593451 0 2026-03-10T10:20:15.686 INFO:tasks.workunit.client.0.vm02.stdout:6/808: write d0/d8/d9/fac [3708726,76178] 0 2026-03-10T10:20:15.686 INFO:tasks.workunit.client.0.vm02.stdout:6/809: chown d0/d8/d29/d6d 145291030 1 2026-03-10T10:20:15.691 INFO:tasks.workunit.client.0.vm02.stdout:5/973: creat d1/db/d11/d16/d79/f14a x:0 0 0 2026-03-10T10:20:15.692 INFO:tasks.workunit.client.1.vm05.stdout:3/886: dwrite dd/d15/d24/d2c/d3b/f55 [4194304,4194304] 0 2026-03-10T10:20:15.692 INFO:tasks.workunit.client.1.vm05.stdout:6/849: write dd/d36/d3f/d12/d44/d30/f9f [1046665,11181] 0 2026-03-10T10:20:15.695 
INFO:tasks.workunit.client.0.vm02.stdout:8/844: dwrite d1/f16 [0,4194304] 0 2026-03-10T10:20:15.706 INFO:tasks.workunit.client.0.vm02.stdout:9/811: mknod da/d3c/d4c/d2c/d34/c105 0 2026-03-10T10:20:15.707 INFO:tasks.workunit.client.0.vm02.stdout:9/812: chown da/d3c/fc0 768549 1 2026-03-10T10:20:15.711 INFO:tasks.workunit.client.1.vm05.stdout:9/755: mknod d0/df/d74/d8c/c100 0 2026-03-10T10:20:15.727 INFO:tasks.workunit.client.0.vm02.stdout:3/846: write d1/d8/fb [244913,35078] 0 2026-03-10T10:20:15.727 INFO:tasks.workunit.client.0.vm02.stdout:3/847: readlink d1/d8/d44/l72 0 2026-03-10T10:20:15.741 INFO:tasks.workunit.client.0.vm02.stdout:6/810: creat d0/d8/d29/d52/de8/f10d x:0 0 0 2026-03-10T10:20:15.752 INFO:tasks.workunit.client.1.vm05.stdout:4/726: creat d1/d31/fef x:0 0 0 2026-03-10T10:20:15.752 INFO:tasks.workunit.client.0.vm02.stdout:7/866: write d1/dc/d60/ff0 [791943,38411] 0 2026-03-10T10:20:15.762 INFO:tasks.workunit.client.1.vm05.stdout:8/778: truncate d7/d14/d24/f95 1611647 0 2026-03-10T10:20:15.763 INFO:tasks.workunit.client.1.vm05.stdout:3/887: mknod dd/c140 0 2026-03-10T10:20:15.764 INFO:tasks.workunit.client.1.vm05.stdout:3/888: readlink dd/d39/d5c/l62 0 2026-03-10T10:20:15.764 INFO:tasks.workunit.client.1.vm05.stdout:3/889: chown dd/d15/d24 4909 1 2026-03-10T10:20:15.766 INFO:tasks.workunit.client.0.vm02.stdout:8/845: truncate d1/f40 595010 0 2026-03-10T10:20:15.779 INFO:tasks.workunit.client.0.vm02.stdout:1/886: symlink d4/d2c/l119 0 2026-03-10T10:20:15.781 INFO:tasks.workunit.client.1.vm05.stdout:2/805: dwrite db/d1c/f1f [0,4194304] 0 2026-03-10T10:20:15.784 INFO:tasks.workunit.client.1.vm05.stdout:4/727: creat d1/d31/d4b/ff0 x:0 0 0 2026-03-10T10:20:15.794 INFO:tasks.workunit.client.1.vm05.stdout:8/779: rename d7/d14/d24/d3f/d6a/db0/cd7 to d7/d14/d24/d3f/d6a/db0/cf7 0 2026-03-10T10:20:15.804 INFO:tasks.workunit.client.0.vm02.stdout:3/848: mknod d1/d8/d21/d73/d78/d79/c11c 0 2026-03-10T10:20:15.810 INFO:tasks.workunit.client.1.vm05.stdout:5/865: 
truncate da/db/fad 304107 0 2026-03-10T10:20:15.812 INFO:tasks.workunit.client.0.vm02.stdout:0/891: unlink d9/d18/cdc 0 2026-03-10T10:20:15.814 INFO:tasks.workunit.client.1.vm05.stdout:2/806: creat db/d61/dcc/f104 x:0 0 0 2026-03-10T10:20:15.818 INFO:tasks.workunit.client.1.vm05.stdout:4/728: symlink d1/d31/d4b/lf1 0 2026-03-10T10:20:15.822 INFO:tasks.workunit.client.1.vm05.stdout:1/924: write d4/d79/d83/dc5/dcb/fde [3178376,19517] 0 2026-03-10T10:20:15.823 INFO:tasks.workunit.client.1.vm05.stdout:5/866: stat da/d9a/dc7/cd6 0 2026-03-10T10:20:15.826 INFO:tasks.workunit.client.1.vm05.stdout:7/892: dwrite d5/dd/f12 [0,4194304] 0 2026-03-10T10:20:15.837 INFO:tasks.workunit.client.1.vm05.stdout:4/729: symlink d1/d3/d65/lf2 0 2026-03-10T10:20:15.838 INFO:tasks.workunit.client.1.vm05.stdout:4/730: chown d1/d64/da9/fb9 46 1 2026-03-10T10:20:15.844 INFO:tasks.workunit.client.1.vm05.stdout:0/868: dwrite d1/d2/d9/d31/d13/d15/d4e/f60 [0,4194304] 0 2026-03-10T10:20:15.848 INFO:tasks.workunit.client.0.vm02.stdout:4/980: dwrite d1/d10/db/f116 [0,4194304] 0 2026-03-10T10:20:15.852 INFO:tasks.workunit.client.1.vm05.stdout:3/890: creat dd/d20/f141 x:0 0 0 2026-03-10T10:20:15.887 INFO:tasks.workunit.client.1.vm05.stdout:5/867: mknod da/d9a/dbe/c128 0 2026-03-10T10:20:15.896 INFO:tasks.workunit.client.1.vm05.stdout:5/868: dread da/db/d26/f101 [0,4194304] 0 2026-03-10T10:20:15.897 INFO:tasks.workunit.client.1.vm05.stdout:6/850: write dd/d36/d3f/d12/d44/d2a/d3d/d3e/f73 [334699,91657] 0 2026-03-10T10:20:15.900 INFO:tasks.workunit.client.1.vm05.stdout:9/756: dwrite d0/df/d74/f9e [0,4194304] 0 2026-03-10T10:20:15.901 INFO:tasks.workunit.client.1.vm05.stdout:4/731: truncate d1/d31/dc/d40/fc3 225969 0 2026-03-10T10:20:15.901 INFO:tasks.workunit.client.1.vm05.stdout:0/869: mknod d1/d2/d9/d31/d13/d15/d4e/df6/c126 0 2026-03-10T10:20:15.902 INFO:tasks.workunit.client.1.vm05.stdout:0/870: stat d1/d2/d39/d6e/dc0 0 2026-03-10T10:20:15.903 INFO:tasks.workunit.client.1.vm05.stdout:3/891: read - 
dd/d15/f11f zero size 2026-03-10T10:20:15.912 INFO:tasks.workunit.client.1.vm05.stdout:1/925: mknod d4/df/c110 0 2026-03-10T10:20:15.917 INFO:tasks.workunit.client.0.vm02.stdout:5/974: creat d1/db/d11/d84/d40/d4f/d5f/d6d/f14b x:0 0 0 2026-03-10T10:20:15.923 INFO:tasks.workunit.client.0.vm02.stdout:6/811: mknod d0/d8/d29/d6d/c10e 0 2026-03-10T10:20:15.923 INFO:tasks.workunit.client.0.vm02.stdout:6/812: chown d0/d8/d9/d7a/dc0/fdf 9595439 1 2026-03-10T10:20:15.931 INFO:tasks.workunit.client.1.vm05.stdout:4/732: mknod d1/d31/dc/d40/d45/daa/cf3 0 2026-03-10T10:20:15.932 INFO:tasks.workunit.client.1.vm05.stdout:4/733: write d1/d31/dc/d40/d45/daa/fe4 [850063,47461] 0 2026-03-10T10:20:15.940 INFO:tasks.workunit.client.0.vm02.stdout:9/813: creat da/f106 x:0 0 0 2026-03-10T10:20:15.951 INFO:tasks.workunit.client.0.vm02.stdout:2/863: dwrite d0/d71/d108/d65/dc4/dfa/f34 [0,4194304] 0 2026-03-10T10:20:15.953 INFO:tasks.workunit.client.0.vm02.stdout:2/864: chown d0/d71/d108/c7b 3028849 1 2026-03-10T10:20:15.960 INFO:tasks.workunit.client.1.vm05.stdout:1/926: rmdir d4/d20/dbe/de8 39 2026-03-10T10:20:15.961 INFO:tasks.workunit.client.1.vm05.stdout:2/807: getdents db/d28/d4f/d59/d94/d95 0 2026-03-10T10:20:15.963 INFO:tasks.workunit.client.0.vm02.stdout:8/846: link d1/d1c/d43/ce8 d1/d1c/d43/d6a/c103 0 2026-03-10T10:20:15.969 INFO:tasks.workunit.client.0.vm02.stdout:1/887: rename d4/d2c/d53/lda to d4/da/d27/d38/l11a 0 2026-03-10T10:20:15.973 INFO:tasks.workunit.client.0.vm02.stdout:8/847: creat d1/d1c/d24/dad/dbe/dda/f104 x:0 0 0 2026-03-10T10:20:15.973 INFO:tasks.workunit.client.0.vm02.stdout:8/848: fdatasync d1/d1c/d23/d25/fdc 0 2026-03-10T10:20:15.976 INFO:tasks.workunit.client.0.vm02.stdout:9/814: mknod da/d3c/c107 0 2026-03-10T10:20:15.993 INFO:tasks.workunit.client.1.vm05.stdout:1/927: unlink d4/df/d1c/f9c 0 2026-03-10T10:20:15.993 INFO:tasks.workunit.client.1.vm05.stdout:7/893: getdents d5/d1d/d20/d2d/d5d/d7a 0 2026-03-10T10:20:15.993 
INFO:tasks.workunit.client.0.vm02.stdout:1/888: mknod d4/da/d27/c11b 0 2026-03-10T10:20:15.993 INFO:tasks.workunit.client.0.vm02.stdout:1/889: chown d4/da/d27/d38/d3c/lc1 919951 1 2026-03-10T10:20:15.993 INFO:tasks.workunit.client.0.vm02.stdout:1/890: dwrite d4/d2c/d53/da6/db8/f101 [4194304,4194304] 0 2026-03-10T10:20:15.993 INFO:tasks.workunit.client.0.vm02.stdout:9/815: mknod da/d3c/d53/c108 0 2026-03-10T10:20:15.993 INFO:tasks.workunit.client.0.vm02.stdout:3/849: rename d1/d8/d21/df4/ff6 to d1/d20/db2/dcb/f11d 0 2026-03-10T10:20:15.993 INFO:tasks.workunit.client.0.vm02.stdout:3/850: chown d1/d8/d21/d73/d101/d114 22529372 1 2026-03-10T10:20:16.005 INFO:tasks.workunit.client.0.vm02.stdout:1/891: write d4/d4a/ff8 [1730118,10028] 0 2026-03-10T10:20:16.005 INFO:tasks.workunit.client.1.vm05.stdout:8/780: dwrite d7/d2f/d57/f66 [0,4194304] 0 2026-03-10T10:20:16.009 INFO:tasks.workunit.client.1.vm05.stdout:2/808: getdents db/d28 0 2026-03-10T10:20:16.012 INFO:tasks.workunit.client.0.vm02.stdout:1/892: getdents d4/da/d1a/d47/d88/d10b 0 2026-03-10T10:20:16.019 INFO:tasks.workunit.client.0.vm02.stdout:3/851: creat d1/d58/dc9/f11e x:0 0 0 2026-03-10T10:20:16.020 INFO:tasks.workunit.client.1.vm05.stdout:7/894: getdents d5/d1d/de3 0 2026-03-10T10:20:16.021 INFO:tasks.workunit.client.0.vm02.stdout:1/893: symlink d4/da/d27/d38/d3c/dd1/l11c 0 2026-03-10T10:20:16.023 INFO:tasks.workunit.client.1.vm05.stdout:2/809: mknod db/d1c/d40/d62/c105 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.0.vm02.stdout:3/852: creat d1/d58/f11f x:0 0 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.0.vm02.stdout:3/853: dwrite d1/d8/d86/f111 [0,4194304] 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.0.vm02.stdout:3/854: creat d1/d6/f120 x:0 0 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.0.vm02.stdout:3/855: creat d1/d8/d44/deb/f121 x:0 0 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/895: dwrite d5/d1d/f31 [0,4194304] 0 2026-03-10T10:20:16.069 
INFO:tasks.workunit.client.1.vm05.stdout:7/896: chown d5/d1d/d29/d60/fee 9145 1 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:2/810: symlink db/d28/d4f/d59/d94/d95/l106 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/897: creat d5/d1d/d20/d35/f110 x:0 0 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/898: dread d5/d26/f39 [0,4194304] 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/899: creat d5/d1d/d29/d3e/d8c/d82/f111 x:0 0 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/900: chown d5/d1d/d20/d3b/fba 3767 1 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/901: unlink d5/d1d/d20/d2d/d68/f98 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/902: creat d5/d1d/d29/dbe/f112 x:0 0 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/903: stat d5/d1d/d20/d2d/d5d/d7a/c108 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/904: creat d5/d1d/d20/d2d/d68/f113 x:0 0 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/905: readlink d5/d17/d66/lf4 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/906: mkdir d5/d1d/d20/d91/da7/d114 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/907: chown d5/dd/fa9 2724705 1 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/908: creat d5/d1d/d20/d2d/d80/f115 x:0 0 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/909: dwrite d5/d1d/d29/d3e/d8c/d7f/f93 [0,4194304] 0 2026-03-10T10:20:16.069 INFO:tasks.workunit.client.1.vm05.stdout:7/910: mkdir d5/d1d/d29/d3e/d8c/d7f/d116 0 2026-03-10T10:20:16.108 INFO:tasks.workunit.client.1.vm05.stdout:6/851: dread dd/d36/d3f/d12/d44/d63/f78 [0,4194304] 0 2026-03-10T10:20:16.114 INFO:tasks.workunit.client.1.vm05.stdout:6/852: getdents dd/d36/d7d 0 2026-03-10T10:20:16.126 INFO:tasks.workunit.client.1.vm05.stdout:6/853: creat 
dd/d36/d3f/d12/d59/df5/f111 x:0 0 0 2026-03-10T10:20:16.127 INFO:tasks.workunit.client.0.vm02.stdout:6/813: sync 2026-03-10T10:20:16.131 INFO:tasks.workunit.client.0.vm02.stdout:6/814: stat d0/d8/d29/d6d/d96/de4/d102/l2e 0 2026-03-10T10:20:16.134 INFO:tasks.workunit.client.0.vm02.stdout:6/815: mkdir d0/d8/d29/d2f/d50/d10f 0 2026-03-10T10:20:16.137 INFO:tasks.workunit.client.0.vm02.stdout:6/816: unlink d0/le7 0 2026-03-10T10:20:16.161 INFO:tasks.workunit.client.0.vm02.stdout:6/817: sync 2026-03-10T10:20:16.165 INFO:tasks.workunit.client.0.vm02.stdout:6/818: creat d0/d87/f110 x:0 0 0 2026-03-10T10:20:16.191 INFO:tasks.workunit.client.0.vm02.stdout:7/867: write d1/d1b/f72 [1539707,21945] 0 2026-03-10T10:20:16.200 INFO:tasks.workunit.client.0.vm02.stdout:7/868: creat d1/d1b/d8f/f110 x:0 0 0 2026-03-10T10:20:16.210 INFO:tasks.workunit.client.0.vm02.stdout:7/869: fdatasync d1/dc/d16/fda 0 2026-03-10T10:20:16.223 INFO:tasks.workunit.client.0.vm02.stdout:0/892: dwrite d9/d18/d1a/d22/d24/d8e/fec [0,4194304] 0 2026-03-10T10:20:16.225 INFO:tasks.workunit.client.0.vm02.stdout:0/893: stat d9/d34/d3d/d65/d89/fd9 0 2026-03-10T10:20:16.230 INFO:tasks.workunit.client.0.vm02.stdout:0/894: fdatasync d9/d34/d3d/d7b/fc0 0 2026-03-10T10:20:16.232 INFO:tasks.workunit.client.1.vm05.stdout:5/869: write da/db/d28/d32/f69 [1314653,57348] 0 2026-03-10T10:20:16.233 INFO:tasks.workunit.client.1.vm05.stdout:5/870: chown da/d63/df2/d123 46287 1 2026-03-10T10:20:16.234 INFO:tasks.workunit.client.0.vm02.stdout:4/981: dwrite d1/d32/fc4 [4194304,4194304] 0 2026-03-10T10:20:16.249 INFO:tasks.workunit.client.0.vm02.stdout:0/895: mknod d9/c121 0 2026-03-10T10:20:16.249 INFO:tasks.workunit.client.1.vm05.stdout:0/871: dwrite d1/d2/d9/d31/d54/f27 [0,4194304] 0 2026-03-10T10:20:16.254 INFO:tasks.workunit.client.1.vm05.stdout:5/871: creat da/db/dee/d109/f129 x:0 0 0 2026-03-10T10:20:16.254 INFO:tasks.workunit.client.1.vm05.stdout:3/892: write dd/d20/d130/fd4 [288068,97867] 0 2026-03-10T10:20:16.256 
INFO:tasks.workunit.client.0.vm02.stdout:0/896: creat d9/d34/d3d/d65/d89/f122 x:0 0 0 2026-03-10T10:20:16.260 INFO:tasks.workunit.client.0.vm02.stdout:2/865: dwrite d0/d71/d108/d65/db0/fcb [0,4194304] 0 2026-03-10T10:20:16.266 INFO:tasks.workunit.client.1.vm05.stdout:4/734: write d1/d3/f5f [2592101,41613] 0 2026-03-10T10:20:16.267 INFO:tasks.workunit.client.1.vm05.stdout:9/757: truncate d0/d1/d16/d6e/daf/fdb 398314 0 2026-03-10T10:20:16.268 INFO:tasks.workunit.client.1.vm05.stdout:1/928: write d4/df/d1c/d53/daa/fa9 [4275396,45527] 0 2026-03-10T10:20:16.278 INFO:tasks.workunit.client.0.vm02.stdout:8/849: write d1/d1c/d43/d6a/da8/d8e/ff8 [1605017,20519] 0 2026-03-10T10:20:16.282 INFO:tasks.workunit.client.0.vm02.stdout:5/975: dwrite d1/f68 [0,4194304] 0 2026-03-10T10:20:16.291 INFO:tasks.workunit.client.0.vm02.stdout:4/982: mknod d1/d75/ddd/d102/c14a 0 2026-03-10T10:20:16.292 INFO:tasks.workunit.client.1.vm05.stdout:8/781: write d7/d14/d62/f9d [407482,65449] 0 2026-03-10T10:20:16.301 INFO:tasks.workunit.client.0.vm02.stdout:9/816: dwrite da/ff [4194304,4194304] 0 2026-03-10T10:20:16.307 INFO:tasks.workunit.client.1.vm05.stdout:5/872: symlink da/db/d28/d97/l12a 0 2026-03-10T10:20:16.316 INFO:tasks.workunit.client.0.vm02.stdout:0/897: symlink d9/d34/d3d/d65/d89/dd3/l123 0 2026-03-10T10:20:16.324 INFO:tasks.workunit.client.1.vm05.stdout:4/735: dread d1/d31/f2f [0,4194304] 0 2026-03-10T10:20:16.325 INFO:tasks.workunit.client.1.vm05.stdout:4/736: readlink d1/d3/d65/db0/lcf 0 2026-03-10T10:20:16.328 INFO:tasks.workunit.client.0.vm02.stdout:1/894: dwrite d4/fe [4194304,4194304] 0 2026-03-10T10:20:16.336 INFO:tasks.workunit.client.1.vm05.stdout:9/758: rename d0/d1/d13/d26/l37 to d0/d1/d13/de/ddf/l101 0 2026-03-10T10:20:16.338 INFO:tasks.workunit.client.1.vm05.stdout:2/811: write db/d28/d4f/d59/da4/d81/da7/fe5 [794712,94181] 0 2026-03-10T10:20:16.341 INFO:tasks.workunit.client.0.vm02.stdout:5/976: unlink d1/db/d11/d13/d28/d37/d3d/c69 0 2026-03-10T10:20:16.344 
INFO:tasks.workunit.client.0.vm02.stdout:5/977: chown d1/db/d11/d16/d48/fb5 32426542 1 2026-03-10T10:20:16.351 INFO:tasks.workunit.client.1.vm05.stdout:3/893: unlink dd/d15/d24/d74/d88/f12b 0 2026-03-10T10:20:16.351 INFO:tasks.workunit.client.1.vm05.stdout:3/894: dwrite dd/d39/f51 [0,4194304] 0 2026-03-10T10:20:16.359 INFO:tasks.workunit.client.1.vm05.stdout:8/782: rename d7/d14/d15/f84 to d7/d14/d62/d90/ff8 0 2026-03-10T10:20:16.365 INFO:tasks.workunit.client.1.vm05.stdout:7/911: dwrite d5/d1d/d29/fa1 [0,4194304] 0 2026-03-10T10:20:16.367 INFO:tasks.workunit.client.1.vm05.stdout:4/737: sync 2026-03-10T10:20:16.367 INFO:tasks.workunit.client.1.vm05.stdout:2/812: sync 2026-03-10T10:20:16.385 INFO:tasks.workunit.client.1.vm05.stdout:5/873: mkdir da/d9a/dbe/d11d/d12b 0 2026-03-10T10:20:16.386 INFO:tasks.workunit.client.0.vm02.stdout:8/850: creat d1/d2/dff/f105 x:0 0 0 2026-03-10T10:20:16.393 INFO:tasks.workunit.client.1.vm05.stdout:1/929: creat d4/d3d/d6e/f111 x:0 0 0 2026-03-10T10:20:16.403 INFO:tasks.workunit.client.1.vm05.stdout:6/854: write dd/fdd [199068,44513] 0 2026-03-10T10:20:16.403 INFO:tasks.workunit.client.1.vm05.stdout:1/930: dwrite d4/d79/d83/dc5/dcb/fde [0,4194304] 0 2026-03-10T10:20:16.407 INFO:tasks.workunit.client.0.vm02.stdout:9/817: link da/f15 da/d3c/d4c/d38/d82/da3/f109 0 2026-03-10T10:20:16.413 INFO:tasks.workunit.client.0.vm02.stdout:6/819: write d0/d8/d29/d52/fbc [752349,105842] 0 2026-03-10T10:20:16.428 INFO:tasks.workunit.client.1.vm05.stdout:7/912: fsync d5/d1d/d20/d3b/fba 0 2026-03-10T10:20:16.430 INFO:tasks.workunit.client.0.vm02.stdout:4/983: creat d1/f14b x:0 0 0 2026-03-10T10:20:16.434 INFO:tasks.workunit.client.0.vm02.stdout:9/818: dread - da/d3c/d4c/d38/d82/d89/fe9 zero size 2026-03-10T10:20:16.434 INFO:tasks.workunit.client.1.vm05.stdout:5/874: symlink da/d9a/l12c 0 2026-03-10T10:20:16.435 INFO:tasks.workunit.client.0.vm02.stdout:9/819: stat da/d3c/d4c/d2c/lfe 0 2026-03-10T10:20:16.435 INFO:tasks.workunit.client.1.vm05.stdout:5/875: 
write da/db/d26/d70/fd1 [1140191,52498] 0 2026-03-10T10:20:16.439 INFO:tasks.workunit.client.0.vm02.stdout:5/978: link d1/db/d11/d13/dc9/ced d1/db/d11/d16/d48/dcf/d134/c14c 0 2026-03-10T10:20:16.442 INFO:tasks.workunit.client.0.vm02.stdout:4/984: rename d1/d75/ddd/d10e/d5e/d78/d7f/l9a to d1/d32/da3/d11d/l14c 0 2026-03-10T10:20:16.448 INFO:tasks.workunit.client.0.vm02.stdout:6/820: symlink d0/d8/d29/d6d/d96/de4/def/d6f/d10c/l111 0 2026-03-10T10:20:16.454 INFO:tasks.workunit.client.1.vm05.stdout:4/738: mkdir d1/d31/df4 0 2026-03-10T10:20:16.457 INFO:tasks.workunit.client.0.vm02.stdout:9/820: mkdir da/d3c/d4c/d38/d4a/d70/d10a 0 2026-03-10T10:20:16.458 INFO:tasks.workunit.client.0.vm02.stdout:9/821: chown da/d3c/d4c/d2c/d34/c42 866522 1 2026-03-10T10:20:16.461 INFO:tasks.workunit.client.0.vm02.stdout:9/822: write da/d3c/d4c/d2c/d34/f83 [4144417,78603] 0 2026-03-10T10:20:16.472 INFO:tasks.workunit.client.0.vm02.stdout:4/985: creat d1/d32/da3/d11d/f14d x:0 0 0 2026-03-10T10:20:16.491 INFO:tasks.workunit.client.0.vm02.stdout:9/823: rmdir da/d3c/d4c/d56 39 2026-03-10T10:20:16.497 INFO:tasks.workunit.client.1.vm05.stdout:1/931: dread - d4/df/de0/d82/f102 zero size 2026-03-10T10:20:16.499 INFO:tasks.workunit.client.1.vm05.stdout:0/872: dwrite d1/d2/d9/d31/f109 [0,4194304] 0 2026-03-10T10:20:16.501 INFO:tasks.workunit.client.0.vm02.stdout:4/986: creat d1/d75/ddd/d10e/d5e/d78/d1a/d49/d81/dc6/f14e x:0 0 0 2026-03-10T10:20:16.506 INFO:tasks.workunit.client.1.vm05.stdout:0/873: readlink d1/d2/d5d/l70 0 2026-03-10T10:20:16.507 INFO:tasks.workunit.client.1.vm05.stdout:0/874: readlink d1/d2/d5d/l70 0 2026-03-10T10:20:16.513 INFO:tasks.workunit.client.0.vm02.stdout:9/824: creat da/d3c/d4c/d38/da6/f10b x:0 0 0 2026-03-10T10:20:16.520 INFO:tasks.workunit.client.1.vm05.stdout:2/813: link db/d28/lda db/d1c/l107 0 2026-03-10T10:20:16.520 INFO:tasks.workunit.client.1.vm05.stdout:1/932: fdatasync d4/d39/d3e/f7d 0 2026-03-10T10:20:16.520 INFO:tasks.workunit.client.0.vm02.stdout:5/979: creat 
d1/db/d11/d16/d79/f14d x:0 0 0 2026-03-10T10:20:16.520 INFO:tasks.workunit.client.0.vm02.stdout:4/987: fsync d1/d52/d53/f9b 0 2026-03-10T10:20:16.520 INFO:tasks.workunit.client.0.vm02.stdout:4/988: chown d1/d75/ddd/d10e/d5e/d78/d1a/fad 107 1 2026-03-10T10:20:16.520 INFO:tasks.workunit.client.0.vm02.stdout:6/821: rename d0/d8/d29/d52/de8/db2/dbb/lc9 to d0/d8/d29/d6d/d96/de4/def/d6f/l112 0 2026-03-10T10:20:16.520 INFO:tasks.workunit.client.0.vm02.stdout:4/989: chown d1/d75/ddd/d10e/d5e/d78/d1a/f4c 507 1 2026-03-10T10:20:16.520 INFO:tasks.workunit.client.1.vm05.stdout:0/875: creat d1/d2/d9/d31/d13/d17/da1/dbd/f127 x:0 0 0 2026-03-10T10:20:16.522 INFO:tasks.workunit.client.1.vm05.stdout:0/876: chown d1/d2/d9/d31/d54 9628 1 2026-03-10T10:20:16.525 INFO:tasks.workunit.client.1.vm05.stdout:0/877: dread - d1/d2/d9/d31/d13/f9c zero size 2026-03-10T10:20:16.526 INFO:tasks.workunit.client.1.vm05.stdout:4/739: creat d1/d31/dc/d40/d45/ded/ff5 x:0 0 0 2026-03-10T10:20:16.527 INFO:tasks.workunit.client.0.vm02.stdout:5/980: dread - d1/db/d11/d13/dc9/f110 zero size 2026-03-10T10:20:16.535 INFO:tasks.workunit.client.1.vm05.stdout:1/933: readlink d4/df/d1c/l6d 0 2026-03-10T10:20:16.542 INFO:tasks.workunit.client.0.vm02.stdout:2/866: write d0/d10/f93 [2396959,48999] 0 2026-03-10T10:20:16.544 INFO:tasks.workunit.client.0.vm02.stdout:9/825: dread da/d3c/d4c/d2c/d96/fee [0,4194304] 0 2026-03-10T10:20:16.546 INFO:tasks.workunit.client.0.vm02.stdout:9/826: write da/d3c/d4c/d38/da6/f10b [478562,32211] 0 2026-03-10T10:20:16.546 INFO:tasks.workunit.client.0.vm02.stdout:3/856: dwrite d1/d20/db2/dcb/f11d [0,4194304] 0 2026-03-10T10:20:16.550 INFO:tasks.workunit.client.0.vm02.stdout:0/898: write d9/d18/d1a/d22/d24/ffb [501480,28522] 0 2026-03-10T10:20:16.550 INFO:tasks.workunit.client.1.vm05.stdout:4/740: mkdir d1/d3/d65/df6 0 2026-03-10T10:20:16.551 INFO:tasks.workunit.client.0.vm02.stdout:5/981: mkdir d1/db/d11/d13/dc9/d14e 0 2026-03-10T10:20:16.568 
INFO:tasks.workunit.client.0.vm02.stdout:6/822: mknod d0/d8/c113 0 2026-03-10T10:20:16.572 INFO:tasks.workunit.client.1.vm05.stdout:1/934: mkdir d4/d39/d88/d112 0 2026-03-10T10:20:16.578 INFO:tasks.workunit.client.0.vm02.stdout:2/867: chown d0/l5 11 1 2026-03-10T10:20:16.588 INFO:tasks.workunit.client.1.vm05.stdout:4/741: rename d1/d31/f2d to d1/d31/dc/d40/d45/ff7 0 2026-03-10T10:20:16.589 INFO:tasks.workunit.client.1.vm05.stdout:4/742: stat d1/d31/dc/d40/f7d 0 2026-03-10T10:20:16.595 INFO:tasks.workunit.client.0.vm02.stdout:1/895: write d4/dc3/fec [216381,40225] 0 2026-03-10T10:20:16.608 INFO:tasks.workunit.client.1.vm05.stdout:0/878: link d1/d2/d9/fbc d1/d2/d9/d31/d13/d15/f128 0 2026-03-10T10:20:16.608 INFO:tasks.workunit.client.1.vm05.stdout:0/879: chown d1/d2/d9/d31/f8c 2377681 1 2026-03-10T10:20:16.608 INFO:tasks.workunit.client.0.vm02.stdout:1/896: readlink d4/da/d27/lf2 0 2026-03-10T10:20:16.608 INFO:tasks.workunit.client.0.vm02.stdout:3/857: truncate d1/d8/d21/d7d/fdf 64615 0 2026-03-10T10:20:16.608 INFO:tasks.workunit.client.0.vm02.stdout:3/858: chown d1/d20/d52/f92 10196006 1 2026-03-10T10:20:16.608 INFO:tasks.workunit.client.0.vm02.stdout:0/899: mknod d9/d18/d1a/d22/d24/d80/d57/c124 0 2026-03-10T10:20:16.616 INFO:tasks.workunit.client.0.vm02.stdout:2/868: sync 2026-03-10T10:20:16.616 INFO:tasks.workunit.client.0.vm02.stdout:2/869: chown d0/d10/f46 524454472 1 2026-03-10T10:20:16.616 INFO:tasks.workunit.client.0.vm02.stdout:0/900: dwrite d9/d34/d3d/d65/d89/f122 [0,4194304] 0 2026-03-10T10:20:16.621 INFO:tasks.workunit.client.0.vm02.stdout:0/901: dread - d9/d18/d1a/d22/d24/d80/fe4 zero size 2026-03-10T10:20:16.622 INFO:tasks.workunit.client.0.vm02.stdout:0/902: chown d9/d34/d3d/d7b/fdb 713747 1 2026-03-10T10:20:16.624 INFO:tasks.workunit.client.1.vm05.stdout:4/743: dread d1/d31/dc/d40/d45/f48 [0,4194304] 0 2026-03-10T10:20:16.635 INFO:tasks.workunit.client.1.vm05.stdout:1/935: link d4/d39/d88/l9f d4/d39/d3e/db1/l113 0 2026-03-10T10:20:16.641 
INFO:tasks.workunit.client.0.vm02.stdout:3/859: symlink d1/d6/d8b/l122 0 2026-03-10T10:20:16.641 INFO:tasks.workunit.client.0.vm02.stdout:3/860: chown d1/f90 362439116 1 2026-03-10T10:20:16.652 INFO:tasks.workunit.client.0.vm02.stdout:2/870: creat d0/d71/d108/d65/dc4/f123 x:0 0 0 2026-03-10T10:20:16.652 INFO:tasks.workunit.client.0.vm02.stdout:2/871: write d0/d10/da6/fb6 [1531520,128910] 0 2026-03-10T10:20:16.655 INFO:tasks.workunit.client.1.vm05.stdout:4/744: creat d1/d3/d65/df6/ff8 x:0 0 0 2026-03-10T10:20:16.663 INFO:tasks.workunit.client.0.vm02.stdout:0/903: creat d9/d34/d3d/d65/d89/dd3/f125 x:0 0 0 2026-03-10T10:20:16.669 INFO:tasks.workunit.client.1.vm05.stdout:4/745: read - d1/d3/d65/ddb/fe6 zero size 2026-03-10T10:20:16.673 INFO:tasks.workunit.client.0.vm02.stdout:9/827: truncate da/f15 3556839 0 2026-03-10T10:20:16.675 INFO:tasks.workunit.client.0.vm02.stdout:8/851: write d1/d1c/d23/f75 [1399436,6011] 0 2026-03-10T10:20:16.675 INFO:tasks.workunit.client.1.vm05.stdout:8/783: write d7/d14/d15/fa8 [393255,3078] 0 2026-03-10T10:20:16.677 INFO:tasks.workunit.client.1.vm05.stdout:6/855: write dd/d36/d3f/d12/d44/d2a/d3d/d48/fb2 [403730,30430] 0 2026-03-10T10:20:16.685 INFO:tasks.workunit.client.1.vm05.stdout:3/895: dwrite dd/d20/d130/fef [0,4194304] 0 2026-03-10T10:20:16.687 INFO:tasks.workunit.client.1.vm05.stdout:3/896: chown dd/dbe/l120 2354 1 2026-03-10T10:20:16.688 INFO:tasks.workunit.client.1.vm05.stdout:3/897: write dd/d15/d24/d8e/dac/f13a [12025,128915] 0 2026-03-10T10:20:16.692 INFO:tasks.workunit.client.1.vm05.stdout:9/759: truncate d0/d1/f9 190724 0 2026-03-10T10:20:16.692 INFO:tasks.workunit.client.1.vm05.stdout:7/913: dwrite d5/d26/f33 [4194304,4194304] 0 2026-03-10T10:20:16.692 INFO:tasks.workunit.client.1.vm05.stdout:1/936: link d4/d39/cca d4/d3d/ddc/c114 0 2026-03-10T10:20:16.698 INFO:tasks.workunit.client.1.vm05.stdout:5/876: dwrite da/db/d26/d5c/fb5 [0,4194304] 0 2026-03-10T10:20:16.702 INFO:tasks.workunit.client.1.vm05.stdout:8/784: truncate 
d7/d14/d3a/f5f 923407 0 2026-03-10T10:20:16.706 INFO:tasks.workunit.client.1.vm05.stdout:6/856: mkdir dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/d112 0 2026-03-10T10:20:16.708 INFO:tasks.workunit.client.0.vm02.stdout:2/872: creat d0/d10/dee/d116/f124 x:0 0 0 2026-03-10T10:20:16.714 INFO:tasks.workunit.client.0.vm02.stdout:9/828: symlink da/d3c/d4c/d38/d82/dd9/l10c 0 2026-03-10T10:20:16.722 INFO:tasks.workunit.client.0.vm02.stdout:7/870: dwrite d1/d1b/d8f/dad/f75 [0,4194304] 0 2026-03-10T10:20:16.734 INFO:tasks.workunit.client.0.vm02.stdout:1/897: rename d4/d2c to d4/da/d1a/d11d 0 2026-03-10T10:20:16.740 INFO:tasks.workunit.client.0.vm02.stdout:0/904: getdents d9/d34/d3d/d7b 0 2026-03-10T10:20:16.742 INFO:tasks.workunit.client.0.vm02.stdout:0/905: stat d9/d18/d1a/d22/d24/d8e/d91 0 2026-03-10T10:20:16.742 INFO:tasks.workunit.client.0.vm02.stdout:0/906: stat d9/d18/d1a/cfc 0 2026-03-10T10:20:16.746 INFO:tasks.workunit.client.0.vm02.stdout:2/873: truncate d0/d10/d81/f9b 1618122 0 2026-03-10T10:20:16.746 INFO:tasks.workunit.client.0.vm02.stdout:8/852: dread d1/d1c/d43/d6a/da8/d56/fd9 [0,4194304] 0 2026-03-10T10:20:16.754 INFO:tasks.workunit.client.1.vm05.stdout:5/877: dread - da/db/d26/d70/d72/df6/d10e/f103 zero size 2026-03-10T10:20:16.766 INFO:tasks.workunit.client.1.vm05.stdout:1/937: dread d4/df/d1c/d53/d66/fb3 [0,4194304] 0 2026-03-10T10:20:16.766 INFO:tasks.workunit.client.1.vm05.stdout:7/914: dread d5/d1d/d20/d91/fbd [0,4194304] 0 2026-03-10T10:20:16.767 INFO:tasks.workunit.client.1.vm05.stdout:7/915: chown d5/d1d/d20/d2d/d5d/d7a/laf 24 1 2026-03-10T10:20:16.771 INFO:tasks.workunit.client.1.vm05.stdout:2/814: write db/d1c/f9b [1274920,115191] 0 2026-03-10T10:20:16.773 INFO:tasks.workunit.client.0.vm02.stdout:4/990: write d1/d75/ddd/d10e/d5e/d78/f3f [7297980,34348] 0 2026-03-10T10:20:16.775 INFO:tasks.workunit.client.1.vm05.stdout:7/916: dwrite d5/d1d/d20/d91/fc9 [0,4194304] 0 2026-03-10T10:20:16.794 INFO:tasks.workunit.client.0.vm02.stdout:1/898: creat 
d4/dc3/df0/f11e x:0 0 0 2026-03-10T10:20:16.794 INFO:tasks.workunit.client.0.vm02.stdout:1/899: chown d4/dc3/ffe 14666 1 2026-03-10T10:20:16.797 INFO:tasks.workunit.client.0.vm02.stdout:5/982: write d1/db/d11/d13/d28/da7/ffa [532552,24618] 0 2026-03-10T10:20:16.805 INFO:tasks.workunit.client.1.vm05.stdout:9/760: mknod d0/d1/d13/de/c102 0 2026-03-10T10:20:16.810 INFO:tasks.workunit.client.0.vm02.stdout:6/823: dwrite d0/d8/d29/d6d/d96/de4/def/fcf [0,4194304] 0 2026-03-10T10:20:16.812 INFO:tasks.workunit.client.0.vm02.stdout:6/824: readlink d0/d8/d29/d6d/d96/de4/def/ld8 0 2026-03-10T10:20:16.819 INFO:tasks.workunit.client.1.vm05.stdout:0/880: write d1/d2/d9/f98 [1572830,11069] 0 2026-03-10T10:20:16.820 INFO:tasks.workunit.client.1.vm05.stdout:0/881: dread - d1/d2/d9/d31/d13/d17/da1/df5/f117 zero size 2026-03-10T10:20:16.838 INFO:tasks.workunit.client.1.vm05.stdout:1/938: unlink d4/df/d76/f10a 0 2026-03-10T10:20:16.843 INFO:tasks.workunit.client.0.vm02.stdout:3/861: write d1/f3 [2469805,108345] 0 2026-03-10T10:20:16.854 INFO:tasks.workunit.client.1.vm05.stdout:7/917: fdatasync d5/f6c 0 2026-03-10T10:20:16.863 INFO:tasks.workunit.client.1.vm05.stdout:3/898: dwrite dd/dbe/f109 [0,4194304] 0 2026-03-10T10:20:16.869 INFO:tasks.workunit.client.1.vm05.stdout:2/815: dread db/d12/f1d [0,4194304] 0 2026-03-10T10:20:16.883 INFO:tasks.workunit.client.1.vm05.stdout:9/761: symlink d0/df/d74/d90/l103 0 2026-03-10T10:20:16.884 INFO:tasks.workunit.client.0.vm02.stdout:9/829: write da/d3c/d4c/d2c/d34/f57 [4130788,570] 0 2026-03-10T10:20:16.886 INFO:tasks.workunit.client.0.vm02.stdout:0/907: mkdir d9/d34/d3d/d65/d89/dd3/d9c/d126 0 2026-03-10T10:20:16.891 INFO:tasks.workunit.client.1.vm05.stdout:4/746: rename d1/d31/d72/ld8 to d1/d64/da9/lf9 0 2026-03-10T10:20:16.891 INFO:tasks.workunit.client.0.vm02.stdout:7/871: write d1/d1b/d8f/dad/d7e/dba/dea/fed [523526,27440] 0 2026-03-10T10:20:16.891 INFO:tasks.workunit.client.0.vm02.stdout:7/872: write d1/f5 [1521520,56639] 0 
2026-03-10T10:20:16.891 INFO:tasks.workunit.client.1.vm05.stdout:8/785: write d7/f59 [2300450,23200] 0 2026-03-10T10:20:16.897 INFO:tasks.workunit.client.1.vm05.stdout:4/747: dwrite d1/d3/d65/db0/fe5 [0,4194304] 0 2026-03-10T10:20:16.900 INFO:tasks.workunit.client.0.vm02.stdout:4/991: symlink d1/d75/ddd/d10e/d5e/d78/d7f/d82/l14f 0 2026-03-10T10:20:16.912 INFO:tasks.workunit.client.1.vm05.stdout:7/918: mknod d5/d1d/d29/d3e/d8c/d82/d90/d9a/c117 0 2026-03-10T10:20:16.917 INFO:tasks.workunit.client.0.vm02.stdout:5/983: fsync d1/d9c/fa9 0 2026-03-10T10:20:16.919 INFO:tasks.workunit.client.0.vm02.stdout:1/900: dread d4/dc3/dd6/ffc [0,4194304] 0 2026-03-10T10:20:16.954 INFO:tasks.workunit.client.1.vm05.stdout:3/899: creat dd/d15/d24/d2c/d107/f142 x:0 0 0 2026-03-10T10:20:16.954 INFO:tasks.workunit.client.1.vm05.stdout:3/900: chown f9 1775249 1 2026-03-10T10:20:16.954 INFO:tasks.workunit.client.0.vm02.stdout:3/862: creat d1/d8/d21/d73/d78/d79/f123 x:0 0 0 2026-03-10T10:20:16.955 INFO:tasks.workunit.client.0.vm02.stdout:9/830: mknod da/d3c/d4c/d38/d82/d8c/c10d 0 2026-03-10T10:20:16.956 INFO:tasks.workunit.client.0.vm02.stdout:3/863: chown d1/d20/l59 27280 1 2026-03-10T10:20:16.962 INFO:tasks.workunit.client.1.vm05.stdout:6/857: rename dd/d36/d7d/f97 to dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/f113 0 2026-03-10T10:20:16.967 INFO:tasks.workunit.client.1.vm05.stdout:5/878: dwrite da/db/d26/d70/ff0 [0,4194304] 0 2026-03-10T10:20:16.985 INFO:tasks.workunit.client.1.vm05.stdout:1/939: symlink d4/d79/l115 0 2026-03-10T10:20:16.986 INFO:tasks.workunit.client.0.vm02.stdout:2/874: fdatasync d0/d1a/d49/deb/de6/f106 0 2026-03-10T10:20:16.986 INFO:tasks.workunit.client.0.vm02.stdout:7/873: fdatasync d1/dc/d55/f8d 0 2026-03-10T10:20:16.987 INFO:tasks.workunit.client.0.vm02.stdout:8/853: truncate d1/f7d 348732 0 2026-03-10T10:20:16.990 INFO:tasks.workunit.client.1.vm05.stdout:2/816: write db/d2d/f65 [85006,95917] 0 2026-03-10T10:20:16.990 INFO:tasks.workunit.client.0.vm02.stdout:8/854: 
chown d1/d1c/d43/laa 205 1 2026-03-10T10:20:16.992 INFO:tasks.workunit.client.1.vm05.stdout:2/817: dread db/d1c/d40/f73 [0,4194304] 0 2026-03-10T10:20:16.997 INFO:tasks.workunit.client.1.vm05.stdout:3/901: mkdir dd/d20/d130/d143 0 2026-03-10T10:20:17.001 INFO:tasks.workunit.client.0.vm02.stdout:6/825: mkdir d0/d8/d29/d2f/d50/d98/df6/d114 0 2026-03-10T10:20:17.001 INFO:tasks.workunit.client.1.vm05.stdout:6/858: sync 2026-03-10T10:20:17.002 INFO:tasks.workunit.client.0.vm02.stdout:6/826: truncate d0/d8/d29/d52/f8b 4914698 0 2026-03-10T10:20:17.004 INFO:tasks.workunit.client.0.vm02.stdout:3/864: rmdir d1/d8/d21/d73/d78/d84 39 2026-03-10T10:20:17.005 INFO:tasks.workunit.client.0.vm02.stdout:3/865: truncate d1/d6/f120 473630 0 2026-03-10T10:20:17.007 INFO:tasks.workunit.client.0.vm02.stdout:3/866: write d1/d6/d8e/fa6 [2800635,128235] 0 2026-03-10T10:20:17.015 INFO:tasks.workunit.client.0.vm02.stdout:3/867: dwrite d1/d8/f2e [4194304,4194304] 0 2026-03-10T10:20:17.016 INFO:tasks.workunit.client.1.vm05.stdout:9/762: dwrite d0/df/d11/f8d [4194304,4194304] 0 2026-03-10T10:20:17.038 INFO:tasks.workunit.client.0.vm02.stdout:0/908: write d9/d34/d3d/fae [473561,66449] 0 2026-03-10T10:20:17.050 INFO:tasks.workunit.client.0.vm02.stdout:2/875: fdatasync d0/d10/dee/fff 0 2026-03-10T10:20:17.050 INFO:tasks.workunit.client.1.vm05.stdout:0/882: truncate d1/d2/d9/f98 845092 0 2026-03-10T10:20:17.050 INFO:tasks.workunit.client.1.vm05.stdout:2/818: mknod db/d28/d4f/d59/da4/c108 0 2026-03-10T10:20:17.050 INFO:tasks.workunit.client.1.vm05.stdout:7/919: creat d5/d17/dae/f118 x:0 0 0 2026-03-10T10:20:17.059 INFO:tasks.workunit.client.1.vm05.stdout:3/902: symlink dd/d15/d24/d2c/d3b/l144 0 2026-03-10T10:20:17.061 INFO:tasks.workunit.client.1.vm05.stdout:9/763: read d0/f2f [17053,26994] 0 2026-03-10T10:20:17.076 INFO:tasks.workunit.client.1.vm05.stdout:1/940: unlink d4/df/de0/d82/lb4 0 2026-03-10T10:20:17.076 INFO:tasks.workunit.client.1.vm05.stdout:2/819: creat db/d2d/dc6/f109 x:0 0 0 
2026-03-10T10:20:17.078 INFO:tasks.workunit.client.1.vm05.stdout:7/920: unlink d5/d1d/d29/d3e/d8c/f81 0 2026-03-10T10:20:17.078 INFO:tasks.workunit.client.1.vm05.stdout:2/820: sync 2026-03-10T10:20:17.086 INFO:tasks.workunit.client.0.vm02.stdout:7/874: symlink d1/d1b/d8f/d67/da7/l111 0 2026-03-10T10:20:17.093 INFO:tasks.workunit.client.0.vm02.stdout:4/992: write d1/d32/fd3 [388813,95165] 0 2026-03-10T10:20:17.102 INFO:tasks.workunit.client.0.vm02.stdout:4/993: dwrite d1/d10/db/f116 [0,4194304] 0 2026-03-10T10:20:17.108 INFO:tasks.workunit.client.1.vm05.stdout:5/879: dwrite da/d96/fcd [0,4194304] 0 2026-03-10T10:20:17.109 INFO:tasks.workunit.client.1.vm05.stdout:8/786: dwrite d7/d2f/f7e [0,4194304] 0 2026-03-10T10:20:17.136 INFO:tasks.workunit.client.1.vm05.stdout:1/941: truncate d4/d3d/f77 623947 0 2026-03-10T10:20:17.136 INFO:tasks.workunit.client.0.vm02.stdout:8/855: write d1/d1c/d43/d6a/da8/fbf [1866627,128837] 0 2026-03-10T10:20:17.137 INFO:tasks.workunit.client.1.vm05.stdout:6/859: write dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/fb6 [201124,18876] 0 2026-03-10T10:20:17.141 INFO:tasks.workunit.client.1.vm05.stdout:0/883: write d1/d2/d9/d31/d13/da2/fd6 [783373,60307] 0 2026-03-10T10:20:17.143 INFO:tasks.workunit.client.0.vm02.stdout:3/868: dwrite d1/d8/f3d [4194304,4194304] 0 2026-03-10T10:20:17.148 INFO:tasks.workunit.client.0.vm02.stdout:0/909: dwrite d9/d34/d3d/d65/d89/fcd [0,4194304] 0 2026-03-10T10:20:17.155 INFO:tasks.workunit.client.0.vm02.stdout:1/901: link d4/d4a/cb6 d4/da/d1a/d47/dbc/dcb/c11f 0 2026-03-10T10:20:17.158 INFO:tasks.workunit.client.1.vm05.stdout:7/921: mkdir d5/d17/dae/d119 0 2026-03-10T10:20:17.164 INFO:tasks.workunit.client.0.vm02.stdout:7/875: truncate d1/dc/d16/d28/fca 521007 0 2026-03-10T10:20:17.174 INFO:tasks.workunit.client.0.vm02.stdout:4/994: truncate d1/d75/ddd/d10e/f113 216044 0 2026-03-10T10:20:17.174 INFO:tasks.workunit.client.0.vm02.stdout:4/995: truncate d1/d75/f13d 455760 0 2026-03-10T10:20:17.174 
INFO:tasks.workunit.client.0.vm02.stdout:5/984: rename d1/f12 to d1/db/d11/d62/f14f 0 2026-03-10T10:20:17.174 INFO:tasks.workunit.client.0.vm02.stdout:9/831: rename da/d3c/d4c to da/d3c/d4c/d38/d82/da3/d10e 22 2026-03-10T10:20:17.174 INFO:tasks.workunit.client.0.vm02.stdout:2/876: read d0/d8c/fab [1328177,77933] 0 2026-03-10T10:20:17.185 INFO:tasks.workunit.client.0.vm02.stdout:3/869: fsync d1/d8/d21/d73/d78/d79/fea 0 2026-03-10T10:20:17.189 INFO:tasks.workunit.client.0.vm02.stdout:6/827: link d0/d8/d9/c2c d0/d8/d29/d52/de8/db2/dbb/de5/c115 0 2026-03-10T10:20:17.197 INFO:tasks.workunit.client.0.vm02.stdout:7/876: creat d1/dc/d16/f112 x:0 0 0 2026-03-10T10:20:17.199 INFO:tasks.workunit.client.0.vm02.stdout:5/985: stat d1/f119 0 2026-03-10T10:20:17.200 INFO:tasks.workunit.client.1.vm05.stdout:0/884: dread d1/d2/d9/d31/d12/d20/f71 [0,4194304] 0 2026-03-10T10:20:17.202 INFO:tasks.workunit.client.1.vm05.stdout:4/748: rename d1/d31/dc/f1f to d1/d31/dc/d40/ffa 0 2026-03-10T10:20:17.214 INFO:tasks.workunit.client.0.vm02.stdout:2/877: truncate d0/d71/d108/ff0 497236 0 2026-03-10T10:20:17.217 INFO:tasks.workunit.client.0.vm02.stdout:8/856: dread d1/d1c/d23/d25/f76 [0,4194304] 0 2026-03-10T10:20:17.224 INFO:tasks.workunit.client.0.vm02.stdout:3/870: symlink d1/d6/d8e/l124 0 2026-03-10T10:20:17.232 INFO:tasks.workunit.client.1.vm05.stdout:8/787: creat d7/d2f/d57/ff9 x:0 0 0 2026-03-10T10:20:17.235 INFO:tasks.workunit.client.1.vm05.stdout:1/942: fsync d4/dd/f60 0 2026-03-10T10:20:17.238 INFO:tasks.workunit.client.1.vm05.stdout:6/860: dread - dd/d36/d3f/d12/d96/fcd zero size 2026-03-10T10:20:17.238 INFO:tasks.workunit.client.1.vm05.stdout:6/861: stat dd/f14 0 2026-03-10T10:20:17.263 INFO:tasks.workunit.client.0.vm02.stdout:2/878: sync 2026-03-10T10:20:17.263 INFO:tasks.workunit.client.1.vm05.stdout:9/764: rename d0/d1/d57 to d0/df/d74/d8c/de4/d104 0 2026-03-10T10:20:17.276 INFO:tasks.workunit.client.0.vm02.stdout:7/877: mknod d1/dc/d16/c113 0 2026-03-10T10:20:17.281 
INFO:tasks.workunit.client.0.vm02.stdout:5/986: fsync d1/db/d11/d13/d28/d37/f3c 0 2026-03-10T10:20:17.300 INFO:tasks.workunit.client.0.vm02.stdout:1/902: dwrite d4/da/f13 [4194304,4194304] 0 2026-03-10T10:20:17.301 INFO:tasks.workunit.client.1.vm05.stdout:8/788: fdatasync d7/d14/d24/d3f/d4f/fbf 0 2026-03-10T10:20:17.301 INFO:tasks.workunit.client.0.vm02.stdout:1/903: chown d4/da/d27/d38/lca 441631 1 2026-03-10T10:20:17.306 INFO:tasks.workunit.client.1.vm05.stdout:1/943: dread - d4/d79/d83/dc5/dcb/ff5 zero size 2026-03-10T10:20:17.313 INFO:tasks.workunit.client.0.vm02.stdout:0/910: truncate d9/d18/dc7/dca/f95 3777604 0 2026-03-10T10:20:17.313 INFO:tasks.workunit.client.0.vm02.stdout:4/996: rmdir d1/d52/d53/dda/df7/d135 0 2026-03-10T10:20:17.314 INFO:tasks.workunit.client.0.vm02.stdout:4/997: stat d1/de8/d109/l10c 0 2026-03-10T10:20:17.314 INFO:tasks.workunit.client.0.vm02.stdout:2/879: fdatasync d0/d10/f46 0 2026-03-10T10:20:17.315 INFO:tasks.workunit.client.1.vm05.stdout:5/880: dwrite da/db/d28/f107 [0,4194304] 0 2026-03-10T10:20:17.321 INFO:tasks.workunit.client.1.vm05.stdout:5/881: write da/db/d26/d5c/f124 [876508,121082] 0 2026-03-10T10:20:17.324 INFO:tasks.workunit.client.0.vm02.stdout:6/828: dwrite d0/d8/d29/d2f/f8e [0,4194304] 0 2026-03-10T10:20:17.331 INFO:tasks.workunit.client.1.vm05.stdout:7/922: dwrite d5/d1d/d20/d35/fbb [0,4194304] 0 2026-03-10T10:20:17.337 INFO:tasks.workunit.client.1.vm05.stdout:9/765: rmdir d0/dc4 39 2026-03-10T10:20:17.337 INFO:tasks.workunit.client.1.vm05.stdout:7/923: fdatasync d5/dd/ffb 0 2026-03-10T10:20:17.344 INFO:tasks.workunit.client.1.vm05.stdout:3/903: link dd/d20/d56/f68 dd/d20/d56/db3/f145 0 2026-03-10T10:20:17.357 INFO:tasks.workunit.client.1.vm05.stdout:1/944: chown d4/df/d1c/f38 83889 1 2026-03-10T10:20:17.357 INFO:tasks.workunit.client.1.vm05.stdout:0/885: dwrite d1/dd7/fe8 [4194304,4194304] 0 2026-03-10T10:20:17.357 INFO:tasks.workunit.client.1.vm05.stdout:1/945: dread d4/df/d1c/d53/d66/fb3 [0,4194304] 0 
2026-03-10T10:20:17.357 INFO:tasks.workunit.client.0.vm02.stdout:3/871: dread d1/d8/f46 [0,4194304] 0 2026-03-10T10:20:17.357 INFO:tasks.workunit.client.0.vm02.stdout:9/832: dwrite da/d3c/d4c/d38/d82/da3/f109 [0,4194304] 0 2026-03-10T10:20:17.358 INFO:tasks.workunit.client.0.vm02.stdout:7/878: truncate d1/d1b/d8f/f8c 730980 0 2026-03-10T10:20:17.362 INFO:tasks.workunit.client.0.vm02.stdout:6/829: creat d0/d8/d9/d7a/dc0/f116 x:0 0 0 2026-03-10T10:20:17.363 INFO:tasks.workunit.client.0.vm02.stdout:1/904: symlink d4/df1/l120 0 2026-03-10T10:20:17.381 INFO:tasks.workunit.client.1.vm05.stdout:2/821: rename db/d28/dd4 to db/d61/d10a 0 2026-03-10T10:20:17.381 INFO:tasks.workunit.client.1.vm05.stdout:0/886: rmdir d1 39 2026-03-10T10:20:17.381 INFO:tasks.workunit.client.0.vm02.stdout:4/998: mknod d1/d75/c150 0 2026-03-10T10:20:17.381 INFO:tasks.workunit.client.0.vm02.stdout:7/879: read - d1/d1b/d8e/ffb zero size 2026-03-10T10:20:17.381 INFO:tasks.workunit.client.0.vm02.stdout:6/830: creat d0/d8/d29/d6d/d96/dfd/f117 x:0 0 0 2026-03-10T10:20:17.381 INFO:tasks.workunit.client.0.vm02.stdout:3/872: mkdir d1/d58/d125 0 2026-03-10T10:20:17.382 INFO:tasks.workunit.client.1.vm05.stdout:5/882: mknod da/db/d28/d32/def/c12d 0 2026-03-10T10:20:17.384 INFO:tasks.workunit.client.0.vm02.stdout:7/880: rename d1/dc/d99 to d1/dc/d16/d28/d2d/d114 0 2026-03-10T10:20:17.384 INFO:tasks.workunit.client.1.vm05.stdout:6/862: rename dd/d36/d3f/d12/d96/f9a to dd/d36/d3f/dbd/f114 0 2026-03-10T10:20:17.388 INFO:tasks.workunit.client.1.vm05.stdout:8/789: getdents d7/d14/d15/d3b/da0 0 2026-03-10T10:20:17.390 INFO:tasks.workunit.client.1.vm05.stdout:6/863: symlink dd/d36/d3f/d12/d96/l115 0 2026-03-10T10:20:17.390 INFO:tasks.workunit.client.1.vm05.stdout:5/883: rename da/db/d26/d5c/cb9 to da/db/d28/d8a/c12e 0 2026-03-10T10:20:17.391 INFO:tasks.workunit.client.1.vm05.stdout:6/864: dread - dd/d36/d3f/d12/d44/d2a/d77/f108 zero size 2026-03-10T10:20:17.391 INFO:tasks.workunit.client.1.vm05.stdout:1/946: dread 
d4/dd/f64 [0,4194304] 0 2026-03-10T10:20:17.392 INFO:tasks.workunit.client.1.vm05.stdout:8/790: mkdir d7/d14/d3a/d49/d65/db8/dfa 0 2026-03-10T10:20:17.393 INFO:tasks.workunit.client.0.vm02.stdout:9/833: creat da/d3c/d4c/f10f x:0 0 0 2026-03-10T10:20:17.394 INFO:tasks.workunit.client.0.vm02.stdout:5/987: dread d1/d6a/f133 [0,4194304] 0 2026-03-10T10:20:17.394 INFO:tasks.workunit.client.0.vm02.stdout:9/834: chown da/d3c/d4c/d38/d82/d89/lfd 76212 1 2026-03-10T10:20:17.394 INFO:tasks.workunit.client.0.vm02.stdout:9/835: chown da/d3c/d4c/d75/fbb 76070 1 2026-03-10T10:20:17.396 INFO:tasks.workunit.client.1.vm05.stdout:0/887: rename d1/d2/d9/d31/daa/d11c/c121 to d1/d2/d9/d31/d13/d15/d4e/df6/d112/c129 0 2026-03-10T10:20:17.397 INFO:tasks.workunit.client.1.vm05.stdout:0/888: readlink d1/d2/d9/d31/d13/d15/l7d 0 2026-03-10T10:20:17.397 INFO:tasks.workunit.client.1.vm05.stdout:9/766: dread d0/df/d11/f50 [0,4194304] 0 2026-03-10T10:20:17.403 INFO:tasks.workunit.client.0.vm02.stdout:7/881: creat d1/dc/d55/f115 x:0 0 0 2026-03-10T10:20:17.404 INFO:tasks.workunit.client.0.vm02.stdout:7/882: chown d1/d1b/c40 108684 1 2026-03-10T10:20:17.407 INFO:tasks.workunit.client.1.vm05.stdout:3/904: dread dd/d15/f23 [0,4194304] 0 2026-03-10T10:20:17.407 INFO:tasks.workunit.client.1.vm05.stdout:6/865: creat dd/d36/d3f/d12/d44/daa/f116 x:0 0 0 2026-03-10T10:20:17.407 INFO:tasks.workunit.client.0.vm02.stdout:2/880: getdents d0/d71/d108/d65/dc4/de0/dfc 0 2026-03-10T10:20:17.407 INFO:tasks.workunit.client.0.vm02.stdout:6/831: truncate d0/d8/d29/d94/ff9 1909385 0 2026-03-10T10:20:17.408 INFO:tasks.workunit.client.0.vm02.stdout:9/836: creat da/d3c/d53/f110 x:0 0 0 2026-03-10T10:20:17.412 INFO:tasks.workunit.client.0.vm02.stdout:5/988: fdatasync d1/db/d11/d84/d40/d4f/d5f/d6d/d71/fd8 0 2026-03-10T10:20:17.422 INFO:tasks.workunit.client.0.vm02.stdout:5/989: readlink d1/db/d11/l34 0 2026-03-10T10:20:17.423 INFO:tasks.workunit.client.0.vm02.stdout:5/990: dread - d1/db/d11/d16/d79/f14a zero size 
2026-03-10T10:20:17.423 INFO:tasks.workunit.client.0.vm02.stdout:4/999: link d1/d75/ddd/d10e/d5e/d78/d1a/d49/la2 d1/d10/d88/db2/l151 0 2026-03-10T10:20:17.423 INFO:tasks.workunit.client.0.vm02.stdout:2/881: mknod d0/d10/da6/c125 0 2026-03-10T10:20:17.423 INFO:tasks.workunit.client.0.vm02.stdout:2/882: fsync d0/d71/d108/d65/dc4/dfa/d80/d10f/fcf 0 2026-03-10T10:20:17.426 INFO:tasks.workunit.client.0.vm02.stdout:9/837: symlink da/d3c/d4c/d38/da6/l111 0 2026-03-10T10:20:17.427 INFO:tasks.workunit.client.0.vm02.stdout:8/857: write d1/d1c/d23/d25/fc2 [371351,85841] 0 2026-03-10T10:20:17.427 INFO:tasks.workunit.client.0.vm02.stdout:8/858: chown d1/d1c/d43/d5b 10 1 2026-03-10T10:20:17.429 INFO:tasks.workunit.client.0.vm02.stdout:9/838: dread da/d3c/d4c/d2c/d96/fee [0,4194304] 0 2026-03-10T10:20:17.432 INFO:tasks.workunit.client.0.vm02.stdout:6/832: truncate d0/d8/d29/d2f/d50/d98/f9f 1276812 0 2026-03-10T10:20:17.439 INFO:tasks.workunit.client.1.vm05.stdout:3/905: dread dd/d39/d66/f6e [0,4194304] 0 2026-03-10T10:20:17.441 INFO:tasks.workunit.client.1.vm05.stdout:8/791: link d7/d14/d15/da7/fbe d7/d14/d24/d3f/d6a/d8a/d96/db7/ffb 0 2026-03-10T10:20:17.441 INFO:tasks.workunit.client.0.vm02.stdout:8/859: fsync d1/dc7/dd2/fe7 0 2026-03-10T10:20:17.442 INFO:tasks.workunit.client.1.vm05.stdout:8/792: chown d7/d14/d3a/d49/d65/ce2 1019769035 1 2026-03-10T10:20:17.446 INFO:tasks.workunit.client.1.vm05.stdout:6/866: mknod dd/d36/d3f/d12/d44/d2a/d3d/c117 0 2026-03-10T10:20:17.447 INFO:tasks.workunit.client.0.vm02.stdout:2/883: sync 2026-03-10T10:20:17.448 INFO:tasks.workunit.client.1.vm05.stdout:3/906: unlink dd/d20/cf1 0 2026-03-10T10:20:17.448 INFO:tasks.workunit.client.1.vm05.stdout:3/907: chown dd/d20/d130/d143 13 1 2026-03-10T10:20:17.450 INFO:tasks.workunit.client.1.vm05.stdout:8/793: creat d7/d14/d24/d3f/d6a/d8a/d96/ffc x:0 0 0 2026-03-10T10:20:17.453 INFO:tasks.workunit.client.0.vm02.stdout:2/884: mkdir d0/d71/d126 0 2026-03-10T10:20:17.453 
INFO:tasks.workunit.client.1.vm05.stdout:6/867: rmdir dd/d36/d3f/dbd/dd5 39 2026-03-10T10:20:17.459 INFO:tasks.workunit.client.0.vm02.stdout:6/833: unlink d0/d8/d9/f8a 0 2026-03-10T10:20:17.459 INFO:tasks.workunit.client.0.vm02.stdout:8/860: mkdir d1/d1c/d106 0 2026-03-10T10:20:17.460 INFO:tasks.workunit.client.0.vm02.stdout:6/834: chown d0/d8/d8c/l5b 5921 1 2026-03-10T10:20:17.465 INFO:tasks.workunit.client.0.vm02.stdout:0/911: dwrite d9/d18/d1a/f6f [0,4194304] 0 2026-03-10T10:20:17.472 INFO:tasks.workunit.client.0.vm02.stdout:2/885: dread d0/d1a/d49/deb/de6/fe7 [0,4194304] 0 2026-03-10T10:20:17.486 INFO:tasks.workunit.client.0.vm02.stdout:1/905: dwrite d4/da/d1a/d11d/d53/fc5 [0,4194304] 0 2026-03-10T10:20:17.491 INFO:tasks.workunit.client.0.vm02.stdout:3/873: dwrite d1/d20/ff7 [0,4194304] 0 2026-03-10T10:20:17.496 INFO:tasks.workunit.client.1.vm05.stdout:4/749: write d1/d3/f10 [3855325,52520] 0 2026-03-10T10:20:17.499 INFO:tasks.workunit.client.1.vm05.stdout:7/924: dwrite d5/d1d/d20/d35/fbb [4194304,4194304] 0 2026-03-10T10:20:17.521 INFO:tasks.workunit.client.1.vm05.stdout:6/868: rename dd/d36/d3f/dbd/fd0 to dd/d36/d3f/d12/d44/d2a/d3d/d48/dc6/f118 0 2026-03-10T10:20:17.521 INFO:tasks.workunit.client.1.vm05.stdout:5/884: write da/db/de9/fe5 [1095410,77879] 0 2026-03-10T10:20:17.521 INFO:tasks.workunit.client.0.vm02.stdout:6/835: mknod d0/d8/d29/d94/c118 0 2026-03-10T10:20:17.522 INFO:tasks.workunit.client.1.vm05.stdout:6/869: chown dd/d36/d3f 56034 1 2026-03-10T10:20:17.522 INFO:tasks.workunit.client.1.vm05.stdout:6/870: chown dd/d36/d3f/d12/d96/fcd 3 1 2026-03-10T10:20:17.524 INFO:tasks.workunit.client.0.vm02.stdout:0/912: mkdir d9/d34/d3d/d65/d89/dd3/da7/db7/d127 0 2026-03-10T10:20:17.529 INFO:tasks.workunit.client.0.vm02.stdout:7/883: fdatasync d1/dc/d16/d28/fca 0 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:17.541 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: pgmap v7: 65 pgs: 65 active+clean; 3.6 GiB data, 12 GiB used, 108 GiB / 120 GiB 
avail; 22 MiB/s rd, 49 MiB/s wr, 157 op/s 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:17 vm02.local ceph-mon[50200]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:20:17.541 INFO:tasks.workunit.client.1.vm05.stdout:1/947: dwrite d4/df/f73 [0,4194304] 0 2026-03-10T10:20:17.541 INFO:tasks.workunit.client.1.vm05.stdout:9/767: write d0/d1/d16/f40 [1247687,30979] 0 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: 
Updating vm02:/etc/ceph/ceph.conf 2026-03-10T10:20:17.541 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T10:20:17.542 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:20:17.542 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:20:17.542 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: pgmap v7: 65 pgs: 65 active+clean; 3.6 GiB data, 12 GiB used, 108 GiB / 120 GiB avail; 22 MiB/s rd, 49 MiB/s wr, 157 op/s 2026-03-10T10:20:17.542 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:20:17.542 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:17 vm05.local ceph-mon[59051]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:20:17.542 INFO:tasks.workunit.client.0.vm02.stdout:7/884: chown d1/d1b/d8e/fe0 1 1 2026-03-10T10:20:17.542 INFO:tasks.workunit.client.0.vm02.stdout:7/885: dwrite d1/dc/d16/f48 [0,4194304] 0 2026-03-10T10:20:17.543 INFO:tasks.workunit.client.1.vm05.stdout:8/794: dread d7/d14/d15/da7/fbe [0,4194304] 0 2026-03-10T10:20:17.548 INFO:tasks.workunit.client.1.vm05.stdout:2/822: stat db/d28/d4f/d59/dce/fd8 0 2026-03-10T10:20:17.552 INFO:tasks.workunit.client.1.vm05.stdout:4/750: mkdir d1/d31/dc/d40/d45/ded/dfb 0 2026-03-10T10:20:17.559 INFO:tasks.workunit.client.1.vm05.stdout:4/751: write d1/d3/f12 [3221888,65866] 0 2026-03-10T10:20:17.559 INFO:tasks.workunit.client.1.vm05.stdout:0/889: dwrite d1/d2/d9/f40 [0,4194304] 0 2026-03-10T10:20:17.561 INFO:tasks.workunit.client.0.vm02.stdout:1/906: rmdir d4/da/d27/d38/d80 39 2026-03-10T10:20:17.569 
INFO:tasks.workunit.client.0.vm02.stdout:5/991: write d1/d6a/f133 [1056931,72583] 0 2026-03-10T10:20:17.570 INFO:tasks.workunit.client.0.vm02.stdout:5/992: readlink d1/db/d11/l50 0 2026-03-10T10:20:17.571 INFO:tasks.workunit.client.0.vm02.stdout:5/993: read - d1/db/d11/d13/dc9/f110 zero size 2026-03-10T10:20:17.575 INFO:tasks.workunit.client.1.vm05.stdout:6/871: creat dd/d36/d3f/d12/d96/f119 x:0 0 0 2026-03-10T10:20:17.580 INFO:tasks.workunit.client.1.vm05.stdout:1/948: fsync d4/fda 0 2026-03-10T10:20:17.580 INFO:tasks.workunit.client.0.vm02.stdout:9/839: dwrite da/d3c/d4c/d38/d82/d8c/fca [0,4194304] 0 2026-03-10T10:20:17.580 INFO:tasks.workunit.client.0.vm02.stdout:6/836: dread - d0/d8/d29/d52/de8/db2/dbb/ff7 zero size 2026-03-10T10:20:17.581 INFO:tasks.workunit.client.1.vm05.stdout:9/768: readlink d0/d1/d13/de/d93/lf8 0 2026-03-10T10:20:17.599 INFO:tasks.workunit.client.0.vm02.stdout:0/913: dread d9/d18/d1a/d3c/f113 [0,4194304] 0 2026-03-10T10:20:17.607 INFO:tasks.workunit.client.0.vm02.stdout:1/907: fsync d4/da/d1a/d47/fa0 0 2026-03-10T10:20:17.611 INFO:tasks.workunit.client.1.vm05.stdout:4/752: dread d1/d31/f7a [0,4194304] 0 2026-03-10T10:20:17.613 INFO:tasks.workunit.client.0.vm02.stdout:5/994: mknod d1/db/d11/d16/d79/d85/d93/c150 0 2026-03-10T10:20:17.616 INFO:tasks.workunit.client.0.vm02.stdout:3/874: dread - d1/d8/d21/d73/d78/d84/fe5 zero size 2026-03-10T10:20:17.621 INFO:tasks.workunit.client.0.vm02.stdout:9/840: rename da/d3c/d4c/de1 to da/d3c/d4c/db1/d112 0 2026-03-10T10:20:17.625 INFO:tasks.workunit.client.1.vm05.stdout:9/769: creat d0/d1/d13/d26/f105 x:0 0 0 2026-03-10T10:20:17.625 INFO:tasks.workunit.client.0.vm02.stdout:9/841: write da/f102 [526579,93181] 0 2026-03-10T10:20:17.626 INFO:tasks.workunit.client.1.vm05.stdout:8/795: creat d7/d14/d24/d3f/dc4/ffd x:0 0 0 2026-03-10T10:20:17.632 INFO:tasks.workunit.client.0.vm02.stdout:0/914: fdatasync d9/f28 0 2026-03-10T10:20:17.637 INFO:tasks.workunit.client.1.vm05.stdout:8/796: dread d7/d14/f5b 
[0,4194304] 0 2026-03-10T10:20:17.645 INFO:tasks.workunit.client.1.vm05.stdout:0/890: symlink d1/d2/d9/d50/d9a/da0/l12a 0 2026-03-10T10:20:17.646 INFO:tasks.workunit.client.1.vm05.stdout:0/891: fsync d1/d2/d9/d31/d13/d17/da1/dbd/f127 0 2026-03-10T10:20:17.651 INFO:tasks.workunit.client.1.vm05.stdout:7/925: getdents d5/d1d/d29/d3e 0 2026-03-10T10:20:17.651 INFO:tasks.workunit.client.1.vm05.stdout:7/926: fdatasync d5/d1d/d29/fa1 0 2026-03-10T10:20:17.654 INFO:tasks.workunit.client.1.vm05.stdout:3/908: write dd/d15/d69/f99 [3684183,11768] 0 2026-03-10T10:20:17.657 INFO:tasks.workunit.client.1.vm05.stdout:4/753: rename d1/d3/d65/db0 to d1/d64/da9/dae/dfc 0 2026-03-10T10:20:17.662 INFO:tasks.workunit.client.0.vm02.stdout:5/995: mknod d1/db/d11/d13/d28/da7/c151 0 2026-03-10T10:20:17.675 INFO:tasks.workunit.client.0.vm02.stdout:8/861: write d1/d2/f36 [5227423,67215] 0 2026-03-10T10:20:17.675 INFO:tasks.workunit.client.0.vm02.stdout:8/862: stat d1/d1c/d24/dcf/lea 0 2026-03-10T10:20:17.677 INFO:tasks.workunit.client.0.vm02.stdout:6/837: creat d0/d8/d29/d94/d9a/dc2/f119 x:0 0 0 2026-03-10T10:20:17.680 INFO:tasks.workunit.client.0.vm02.stdout:5/996: sync 2026-03-10T10:20:17.703 INFO:tasks.workunit.client.0.vm02.stdout:2/886: dwrite d0/d71/d108/d65/dc4/dfa/f48 [0,4194304] 0 2026-03-10T10:20:17.706 INFO:tasks.workunit.client.0.vm02.stdout:2/887: chown d0/d71/d108/d65/dc4/dfa/d80/c112 71734 1 2026-03-10T10:20:17.713 INFO:tasks.workunit.client.1.vm05.stdout:7/927: truncate d5/d26/f5a 676306 0 2026-03-10T10:20:17.713 INFO:tasks.workunit.client.1.vm05.stdout:7/928: chown d5/d1d/d29/d60/de1 1962178 1 2026-03-10T10:20:17.714 INFO:tasks.workunit.client.1.vm05.stdout:7/929: dread - d5/d1d/d20/d2d/f10b zero size 2026-03-10T10:20:17.715 INFO:tasks.workunit.client.1.vm05.stdout:3/909: creat dd/d20/d130/f146 x:0 0 0 2026-03-10T10:20:17.715 INFO:tasks.workunit.client.1.vm05.stdout:7/930: write d5/d1d/d20/d2d/d68/f113 [525955,46834] 0 2026-03-10T10:20:17.715 
INFO:tasks.workunit.client.1.vm05.stdout:5/885: write da/d9a/fae [138751,125307] 0 2026-03-10T10:20:17.716 INFO:tasks.workunit.client.1.vm05.stdout:5/886: readlink da/d9a/dc7/lb1 0 2026-03-10T10:20:17.719 INFO:tasks.workunit.client.0.vm02.stdout:6/838: creat d0/d87/f11a x:0 0 0 2026-03-10T10:20:17.724 INFO:tasks.workunit.client.0.vm02.stdout:7/886: dwrite d1/dc/d16/fda [0,4194304] 0 2026-03-10T10:20:17.733 INFO:tasks.workunit.client.0.vm02.stdout:0/915: mkdir d9/d18/d1a/d22/d24/d8e/d128 0 2026-03-10T10:20:17.734 INFO:tasks.workunit.client.1.vm05.stdout:2/823: link db/d1c/l45 db/d28/d4f/d59/l10b 0 2026-03-10T10:20:17.737 INFO:tasks.workunit.client.1.vm05.stdout:8/797: mkdir d7/d14/d15/da7/def/dfe 0 2026-03-10T10:20:17.738 INFO:tasks.workunit.client.0.vm02.stdout:2/888: creat d0/d71/d108/d65/db0/f127 x:0 0 0 2026-03-10T10:20:17.747 INFO:tasks.workunit.client.1.vm05.stdout:1/949: dwrite d4/d39/f7b [0,4194304] 0 2026-03-10T10:20:17.750 INFO:tasks.workunit.client.1.vm05.stdout:0/892: unlink d1/d2/d9/d50/c11e 0 2026-03-10T10:20:17.753 INFO:tasks.workunit.client.1.vm05.stdout:7/931: rmdir d5/d1d/d20/d35/dd2 39 2026-03-10T10:20:17.753 INFO:tasks.workunit.client.1.vm05.stdout:7/932: chown d5/d26/d9c/de7 181 1 2026-03-10T10:20:17.754 INFO:tasks.workunit.client.0.vm02.stdout:6/839: chown d0/d8/d29/d6d/d96/de4/d102/f17 0 1 2026-03-10T10:20:17.756 INFO:tasks.workunit.client.1.vm05.stdout:3/910: rename dd/d15/c82 to dd/d15/d24/d8e/c147 0 2026-03-10T10:20:17.758 INFO:tasks.workunit.client.0.vm02.stdout:1/908: dwrite d4/d4a/fd2 [0,4194304] 0 2026-03-10T10:20:17.763 INFO:tasks.workunit.client.0.vm02.stdout:1/909: write d4/da/d1a/d47/d78/fdc [1036227,40956] 0 2026-03-10T10:20:17.763 INFO:tasks.workunit.client.0.vm02.stdout:1/910: fsync d4/fe 0 2026-03-10T10:20:17.772 INFO:tasks.workunit.client.0.vm02.stdout:0/916: mkdir d9/d34/d3d/d65/d89/dd3/da7/db9/d129 0 2026-03-10T10:20:17.773 INFO:tasks.workunit.client.1.vm05.stdout:9/770: write d0/d1/f7b [4178767,112069] 0 
2026-03-10T10:20:17.775 INFO:tasks.workunit.client.0.vm02.stdout:3/875: dwrite d1/d8/d21/f88 [0,4194304] 0 2026-03-10T10:20:17.779 INFO:tasks.workunit.client.1.vm05.stdout:6/872: getdents dd/d36/d3f/d12/d96 0 2026-03-10T10:20:17.780 INFO:tasks.workunit.client.1.vm05.stdout:6/873: readlink dd/d36/d3f/d12/d59/l5b 0 2026-03-10T10:20:17.781 INFO:tasks.workunit.client.1.vm05.stdout:6/874: dread - dd/d36/d3f/d12/d44/d2a/d7f/fea zero size 2026-03-10T10:20:17.784 INFO:tasks.workunit.client.1.vm05.stdout:0/893: truncate d1/d2/d39/f69 163666 0 2026-03-10T10:20:17.785 INFO:tasks.workunit.client.1.vm05.stdout:7/933: mkdir d5/d1d/d20/d2d/d80/dd6/d11a 0 2026-03-10T10:20:17.792 INFO:tasks.workunit.client.1.vm05.stdout:9/771: creat d0/d1/d13/f106 x:0 0 0 2026-03-10T10:20:17.798 INFO:tasks.workunit.client.1.vm05.stdout:1/950: creat d4/d39/d88/d112/f116 x:0 0 0 2026-03-10T10:20:17.798 INFO:tasks.workunit.client.1.vm05.stdout:1/951: stat d4/dd 0 2026-03-10T10:20:17.798 INFO:tasks.workunit.client.1.vm05.stdout:1/952: write d4/d79/f8d [4351962,99132] 0 2026-03-10T10:20:17.798 INFO:tasks.workunit.client.1.vm05.stdout:0/894: getdents d1/d2/d9/d31/d12/d41/d10b 0 2026-03-10T10:20:17.799 INFO:tasks.workunit.client.0.vm02.stdout:6/840: sync 2026-03-10T10:20:17.801 INFO:tasks.workunit.client.1.vm05.stdout:7/934: creat d5/d1d/d20/d2d/d80/dd6/df9/f11b x:0 0 0 2026-03-10T10:20:17.802 INFO:tasks.workunit.client.1.vm05.stdout:1/953: creat d4/dd/f117 x:0 0 0 2026-03-10T10:20:17.805 INFO:tasks.workunit.client.0.vm02.stdout:3/876: sync 2026-03-10T10:20:17.805 INFO:tasks.workunit.client.1.vm05.stdout:3/911: sync 2026-03-10T10:20:17.808 INFO:tasks.workunit.client.0.vm02.stdout:2/889: creat d0/d71/d108/d65/dc4/de0/f128 x:0 0 0 2026-03-10T10:20:17.809 INFO:tasks.workunit.client.1.vm05.stdout:1/954: creat d4/df/d1c/d53/d66/f118 x:0 0 0 2026-03-10T10:20:17.809 INFO:tasks.workunit.client.0.vm02.stdout:9/842: rename da/fae to da/d3c/f113 0 2026-03-10T10:20:17.815 
INFO:tasks.workunit.client.1.vm05.stdout:9/772: dread d0/df/d11/f24 [0,4194304] 0 2026-03-10T10:20:17.816 INFO:tasks.workunit.client.1.vm05.stdout:0/895: dread d1/d2/d9/d31/f84 [0,4194304] 0 2026-03-10T10:20:17.820 INFO:tasks.workunit.client.1.vm05.stdout:3/912: read dd/d15/d24/d2c/f13c [286259,76873] 0 2026-03-10T10:20:17.820 INFO:tasks.workunit.client.1.vm05.stdout:6/875: read f3 [5438240,80878] 0 2026-03-10T10:20:17.821 INFO:tasks.workunit.client.0.vm02.stdout:1/911: mknod d4/da/d1a/d47/dbc/dcb/c121 0 2026-03-10T10:20:17.821 INFO:tasks.workunit.client.1.vm05.stdout:1/955: unlink d4/d3d/d6e/l8f 0 2026-03-10T10:20:17.822 INFO:tasks.workunit.client.0.vm02.stdout:1/912: chown d4/da/d1a/d11d/d53/da6/fbe 5 1 2026-03-10T10:20:17.823 INFO:tasks.workunit.client.1.vm05.stdout:9/773: dwrite d0/d1/f7b [0,4194304] 0 2026-03-10T10:20:17.824 INFO:tasks.workunit.client.0.vm02.stdout:5/997: link d1/db/d11/d13/d28/da7/c151 d1/db/d11/d13/d28/d37/dce/c152 0 2026-03-10T10:20:17.834 INFO:tasks.workunit.client.1.vm05.stdout:3/913: dread dd/d15/d69/f99 [0,4194304] 0 2026-03-10T10:20:17.836 INFO:tasks.workunit.client.1.vm05.stdout:3/914: readlink dd/lf 0 2026-03-10T10:20:17.836 INFO:tasks.workunit.client.1.vm05.stdout:3/915: chown dd/d15/d24/d74/d88 1252113 1 2026-03-10T10:20:17.840 INFO:tasks.workunit.client.0.vm02.stdout:6/841: unlink d0/d8/d8c/f36 0 2026-03-10T10:20:17.844 INFO:tasks.workunit.client.1.vm05.stdout:7/935: dread d5/d1d/d20/d35/f47 [4194304,4194304] 0 2026-03-10T10:20:17.855 INFO:tasks.workunit.client.1.vm05.stdout:4/754: dwrite d1/d31/dc/d40/d45/f66 [0,4194304] 0 2026-03-10T10:20:17.863 INFO:tasks.workunit.client.0.vm02.stdout:9/843: readlink da/d3c/d4c/d56/lb7 0 2026-03-10T10:20:17.866 INFO:tasks.workunit.client.1.vm05.stdout:5/887: write da/db/dee/d38/f94 [1003002,91121] 0 2026-03-10T10:20:17.866 INFO:tasks.workunit.client.1.vm05.stdout:2/824: write db/d1c/d40/f73 [1034093,12259] 0 2026-03-10T10:20:17.869 INFO:tasks.workunit.client.1.vm05.stdout:8/798: write 
d7/d14/d15/d3b/f43 [658070,86312] 0 2026-03-10T10:20:17.884 INFO:tasks.workunit.client.1.vm05.stdout:9/774: mkdir d0/df/d74/d8c/de4/d104/d107 0 2026-03-10T10:20:17.886 INFO:tasks.workunit.client.1.vm05.stdout:2/825: sync 2026-03-10T10:20:17.886 INFO:tasks.workunit.client.1.vm05.stdout:2/826: write db/d2d/dc6/f109 [801123,14874] 0 2026-03-10T10:20:17.887 INFO:tasks.workunit.client.0.vm02.stdout:5/998: read - d1/db/d11/d13/d28/d37/d3d/da3/d113/f131 zero size 2026-03-10T10:20:17.897 INFO:tasks.workunit.client.0.vm02.stdout:0/917: mknod d9/d18/d1a/d22/d24/c12a 0 2026-03-10T10:20:17.902 INFO:tasks.workunit.client.1.vm05.stdout:0/896: creat d1/d2/dc6/de7/f12b x:0 0 0 2026-03-10T10:20:17.903 INFO:tasks.workunit.client.0.vm02.stdout:6/842: read d0/d8/d9/f54 [968303,75614] 0 2026-03-10T10:20:17.909 INFO:tasks.workunit.client.0.vm02.stdout:8/863: rename d1/d1c/l4a to d1/d1c/d23/d25/df1/l107 0 2026-03-10T10:20:17.910 INFO:tasks.workunit.client.0.vm02.stdout:3/877: write d1/d20/d52/f6c [3150904,56686] 0 2026-03-10T10:20:17.919 INFO:tasks.workunit.client.0.vm02.stdout:5/999: creat d1/db/d11/d13/d28/da7/dd9/f153 x:0 0 0 2026-03-10T10:20:17.920 INFO:tasks.workunit.client.1.vm05.stdout:5/888: dread - da/d9a/dc7/db4/f104 zero size 2026-03-10T10:20:17.921 INFO:tasks.workunit.client.1.vm05.stdout:5/889: chown da/db/d28/d32/f79 122 1 2026-03-10T10:20:17.921 INFO:tasks.workunit.client.0.vm02.stdout:0/918: truncate d9/d18/d1a/d22/d24/d8e/d9b/daa/fe2 807556 0 2026-03-10T10:20:17.921 INFO:tasks.workunit.client.1.vm05.stdout:3/916: dwrite dd/d20/d130/fdc [0,4194304] 0 2026-03-10T10:20:17.922 INFO:tasks.workunit.client.1.vm05.stdout:5/890: readlink da/db/d28/d97/l12a 0 2026-03-10T10:20:17.923 INFO:tasks.workunit.client.1.vm05.stdout:1/956: symlink d4/d39/d3e/db1/df0/l119 0 2026-03-10T10:20:17.925 INFO:tasks.workunit.client.0.vm02.stdout:6/843: symlink d0/d8/d29/dce/l11b 0 2026-03-10T10:20:17.929 INFO:tasks.workunit.client.1.vm05.stdout:2/827: read db/d1c/f56 [7396798,117794] 0 
2026-03-10T10:20:17.932 INFO:tasks.workunit.client.0.vm02.stdout:7/887: rename d1/d1b/d8f/f110 to d1/d1b/d8e/f116 0 2026-03-10T10:20:17.936 INFO:tasks.workunit.client.0.vm02.stdout:9/844: write da/d3c/d4c/d2c/f93 [1879467,49430] 0 2026-03-10T10:20:17.945 INFO:tasks.workunit.client.0.vm02.stdout:9/845: write da/d3c/d4c/d38/da6/f10b [1155895,69146] 0 2026-03-10T10:20:17.945 INFO:tasks.workunit.client.0.vm02.stdout:1/913: write d4/da/d27/d38/d80/fb7 [565295,112125] 0 2026-03-10T10:20:17.948 INFO:tasks.workunit.client.1.vm05.stdout:8/799: write d7/d14/d24/d3f/d6a/fe6 [1034597,110268] 0 2026-03-10T10:20:17.950 INFO:tasks.workunit.client.0.vm02.stdout:8/864: symlink d1/d2/dff/l108 0 2026-03-10T10:20:17.951 INFO:tasks.workunit.client.1.vm05.stdout:8/800: sync 2026-03-10T10:20:17.954 INFO:tasks.workunit.client.0.vm02.stdout:3/878: unlink d1/d8/f46 0 2026-03-10T10:20:17.959 INFO:tasks.workunit.client.0.vm02.stdout:2/890: getdents d0/d10/dee/d116 0 2026-03-10T10:20:17.961 INFO:tasks.workunit.client.0.vm02.stdout:1/914: dread d4/da/d1a/d47/d78/fdc [0,4194304] 0 2026-03-10T10:20:17.966 INFO:tasks.workunit.client.0.vm02.stdout:6/844: fdatasync d0/d8/d29/d6d/d96/de4/d102/fd5 0 2026-03-10T10:20:17.968 INFO:tasks.workunit.client.1.vm05.stdout:1/957: creat d4/d39/d3e/f11a x:0 0 0 2026-03-10T10:20:17.973 INFO:tasks.workunit.client.1.vm05.stdout:2/828: fsync db/d28/d4f/d59/f8d 0 2026-03-10T10:20:17.974 INFO:tasks.workunit.client.0.vm02.stdout:9/846: mkdir da/d3c/d4c/db1/d112/d114 0 2026-03-10T10:20:17.976 INFO:tasks.workunit.client.0.vm02.stdout:9/847: read da/d3c/d4c/d2c/d34/f81 [596752,124456] 0 2026-03-10T10:20:17.977 INFO:tasks.workunit.client.0.vm02.stdout:2/891: sync 2026-03-10T10:20:17.979 INFO:tasks.workunit.client.1.vm05.stdout:8/801: truncate d7/d14/d24/d3f/d4f/fbf 260521 0 2026-03-10T10:20:17.980 INFO:tasks.workunit.client.1.vm05.stdout:9/775: write d0/df/d11/f84 [1664068,49806] 0 2026-03-10T10:20:17.984 INFO:tasks.workunit.client.0.vm02.stdout:8/865: rename d1/d1c/d43/f46 
to d1/d1c/d43/d5b/d88/dac/f109 0 2026-03-10T10:20:17.987 INFO:tasks.workunit.client.1.vm05.stdout:4/755: rename d1/d31/d76/dac/dc5 to d1/dfd 0 2026-03-10T10:20:17.989 INFO:tasks.workunit.client.0.vm02.stdout:3/879: truncate d1/d6/f53 4496292 0 2026-03-10T10:20:17.990 INFO:tasks.workunit.client.1.vm05.stdout:4/756: dwrite f0 [4194304,4194304] 0 2026-03-10T10:20:17.991 INFO:tasks.workunit.client.1.vm05.stdout:6/876: link dd/d36/d3f/lac dd/d36/d3f/d12/d44/daa/de4/l11a 0 2026-03-10T10:20:17.991 INFO:tasks.workunit.client.1.vm05.stdout:6/877: chown dd/d36/d3f/d12 1 1 2026-03-10T10:20:18.005 INFO:tasks.workunit.client.1.vm05.stdout:5/891: symlink da/d96/d117/l12f 0 2026-03-10T10:20:18.007 INFO:tasks.workunit.client.1.vm05.stdout:1/958: readlink d4/df/d1c/d92/lce 0 2026-03-10T10:20:18.019 INFO:tasks.workunit.client.1.vm05.stdout:7/936: getdents d5/d1d/d29/d3e/d8c/d82/d90 0 2026-03-10T10:20:18.020 INFO:tasks.workunit.client.0.vm02.stdout:6/845: dread - d0/d8/d29/d6d/d96/fe0 zero size 2026-03-10T10:20:18.020 INFO:tasks.workunit.client.0.vm02.stdout:2/892: mkdir d0/d1a/d49/deb/d129 0 2026-03-10T10:20:18.020 INFO:tasks.workunit.client.0.vm02.stdout:2/893: stat d0/d1a/d49/l7d 0 2026-03-10T10:20:18.022 INFO:tasks.workunit.client.1.vm05.stdout:9/776: dread d0/d1/d16/d6e/daf/fdb [0,4194304] 0 2026-03-10T10:20:18.027 INFO:tasks.workunit.client.0.vm02.stdout:0/919: creat d9/d18/d1a/f12b x:0 0 0 2026-03-10T10:20:18.030 INFO:tasks.workunit.client.1.vm05.stdout:0/897: rename d1/d2/d9/d50/d9a/fbf to d1/d2/d9/d31/d13/d15/d4e/d8a/f12c 0 2026-03-10T10:20:18.033 INFO:tasks.workunit.client.0.vm02.stdout:1/915: write d4/da/fde [1103139,112154] 0 2026-03-10T10:20:18.034 INFO:tasks.workunit.client.1.vm05.stdout:0/898: sync 2026-03-10T10:20:18.036 INFO:tasks.workunit.client.0.vm02.stdout:3/880: dwrite d1/d8/d21/f5e [0,4194304] 0 2026-03-10T10:20:18.037 INFO:tasks.workunit.client.0.vm02.stdout:3/881: chown d1/d20/ff2 0 1 2026-03-10T10:20:18.051 INFO:tasks.workunit.client.1.vm05.stdout:4/757: 
dread d1/d31/dc/d40/d45/f48 [4194304,4194304] 0 2026-03-10T10:20:18.052 INFO:tasks.workunit.client.0.vm02.stdout:7/888: mknod d1/d1b/d8f/dad/d7e/c117 0 2026-03-10T10:20:18.054 INFO:tasks.workunit.client.1.vm05.stdout:5/892: fsync da/db/d28/d97/f87 0 2026-03-10T10:20:18.056 INFO:tasks.workunit.client.0.vm02.stdout:9/848: mkdir da/d3c/d4c/d2c/d34/dc2/d115 0 2026-03-10T10:20:18.061 INFO:tasks.workunit.client.0.vm02.stdout:9/849: dwrite da/f106 [0,4194304] 0 2026-03-10T10:20:18.062 INFO:tasks.workunit.client.0.vm02.stdout:2/894: mkdir d0/d71/d12a 0 2026-03-10T10:20:18.081 INFO:tasks.workunit.client.1.vm05.stdout:9/777: creat d0/d1/dcc/f108 x:0 0 0 2026-03-10T10:20:18.087 INFO:tasks.workunit.client.1.vm05.stdout:6/878: creat dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/d4d/d112/f11b x:0 0 0 2026-03-10T10:20:18.091 INFO:tasks.workunit.client.1.vm05.stdout:3/917: getdents dd/d20/d130 0 2026-03-10T10:20:18.091 INFO:tasks.workunit.client.1.vm05.stdout:3/918: chown dd/d15/d24/d74/d88/l91 255 1 2026-03-10T10:20:18.103 INFO:tasks.workunit.client.1.vm05.stdout:5/893: read da/db/d28/f44 [978377,104765] 0 2026-03-10T10:20:18.103 INFO:tasks.workunit.client.0.vm02.stdout:7/889: write d1/dc/d16/d28/f73 [3660559,6711] 0 2026-03-10T10:20:18.112 INFO:tasks.workunit.client.0.vm02.stdout:2/895: dwrite d0/d1a/d49/dcc/ff4 [0,4194304] 0 2026-03-10T10:20:18.125 INFO:tasks.workunit.client.0.vm02.stdout:0/920: write d9/d18/d1a/d22/d24/d8e/d9b/daa/f105 [42446,67263] 0 2026-03-10T10:20:18.126 INFO:tasks.workunit.client.1.vm05.stdout:8/802: creat d7/d14/d24/d3f/fff x:0 0 0 2026-03-10T10:20:18.127 INFO:tasks.workunit.client.1.vm05.stdout:8/803: chown d7/d14/d24/d3f/d6a/d8a/d96/db7/ffb 4 1 2026-03-10T10:20:18.130 INFO:tasks.workunit.client.0.vm02.stdout:3/882: symlink d1/d8/l126 0 2026-03-10T10:20:18.136 INFO:tasks.workunit.client.0.vm02.stdout:9/850: mknod da/d3c/d4c/d2c/d34/dc2/dcc/c116 0 2026-03-10T10:20:18.139 INFO:tasks.workunit.client.0.vm02.stdout:8/866: getdents d1/d1c/d24/dad 0 
2026-03-10T10:20:18.148 INFO:tasks.workunit.client.0.vm02.stdout:9/851: dread da/d3c/d4c/f41 [0,4194304] 0 2026-03-10T10:20:18.148 INFO:tasks.workunit.client.0.vm02.stdout:1/916: creat d4/da/d1a/f122 x:0 0 0 2026-03-10T10:20:18.153 INFO:tasks.workunit.client.0.vm02.stdout:3/883: sync 2026-03-10T10:20:18.153 INFO:tasks.workunit.client.0.vm02.stdout:1/917: sync 2026-03-10T10:20:18.165 INFO:tasks.workunit.client.0.vm02.stdout:1/918: chown d4/da/d1a/d47/d88/d10b 5414842 1 2026-03-10T10:20:18.165 INFO:tasks.workunit.client.0.vm02.stdout:6/846: link d0/d8/d29/d6d/d96/de4/def/d6f/l112 d0/d8/d9/d7a/l11c 0 2026-03-10T10:20:18.165 INFO:tasks.workunit.client.0.vm02.stdout:0/921: symlink d9/d18/dc7/dca/l12c 0 2026-03-10T10:20:18.166 INFO:tasks.workunit.client.0.vm02.stdout:6/847: stat d0/d8/d29/d6d/d96/de4/def/ld8 0 2026-03-10T10:20:18.167 INFO:tasks.workunit.client.1.vm05.stdout:0/899: dwrite d1/d2/d39/f69 [0,4194304] 0 2026-03-10T10:20:18.172 INFO:tasks.workunit.client.0.vm02.stdout:0/922: read d9/d18/d1a/d22/d24/d80/d74/f96 [302239,53302] 0 2026-03-10T10:20:18.183 INFO:tasks.workunit.client.0.vm02.stdout:8/867: dwrite d1/d1c/d43/d5b/fbc [0,4194304] 0 2026-03-10T10:20:18.184 INFO:tasks.workunit.client.0.vm02.stdout:2/896: dwrite d0/d10/ff6 [0,4194304] 0 2026-03-10T10:20:18.184 INFO:tasks.workunit.client.0.vm02.stdout:1/919: read - d4/dc3/fd0 zero size 2026-03-10T10:20:18.190 INFO:tasks.workunit.client.0.vm02.stdout:1/920: write d4/dc3/fec [52702,90517] 0 2026-03-10T10:20:18.193 INFO:tasks.workunit.client.0.vm02.stdout:9/852: rename da/d3c/d4c/d38/d4a/d70/lbe to da/d3c/d4c/d2c/d34/d35/l117 0 2026-03-10T10:20:18.201 INFO:tasks.workunit.client.0.vm02.stdout:9/853: dwrite da/d3c/d4c/d38/f47 [0,4194304] 0 2026-03-10T10:20:18.202 INFO:tasks.workunit.client.0.vm02.stdout:8/868: dwrite d1/d1c/d24/dad/dbe/dda/f104 [0,4194304] 0 2026-03-10T10:20:18.213 INFO:tasks.workunit.client.0.vm02.stdout:7/890: creat d1/dc/d16/d28/f118 x:0 0 0 2026-03-10T10:20:18.218 
INFO:tasks.workunit.client.0.vm02.stdout:6/848: read - d0/d8/d29/d6d/d96/de4/d102/fd5 zero size 2026-03-10T10:20:18.233 INFO:tasks.workunit.client.1.vm05.stdout:3/919: chown dd/dbe/d106/f10d 24299560 1 2026-03-10T10:20:18.237 INFO:tasks.workunit.client.1.vm05.stdout:4/758: mknod d1/d31/dc/d40/cfe 0 2026-03-10T10:20:18.243 INFO:tasks.workunit.client.1.vm05.stdout:5/894: mkdir da/d9a/dc7/d130 0 2026-03-10T10:20:18.243 INFO:tasks.workunit.client.1.vm05.stdout:5/895: stat da/db/dee/d38/l45 0 2026-03-10T10:20:18.244 INFO:tasks.workunit.client.0.vm02.stdout:6/849: sync 2026-03-10T10:20:18.247 INFO:tasks.workunit.client.1.vm05.stdout:2/829: getdents db/d28/d4f/d8b 0 2026-03-10T10:20:18.250 INFO:tasks.workunit.client.1.vm05.stdout:7/937: creat d5/d1d/d20/d35/f11c x:0 0 0 2026-03-10T10:20:18.257 INFO:tasks.workunit.client.1.vm05.stdout:8/804: creat d7/d14/d62/d90/dac/f100 x:0 0 0 2026-03-10T10:20:18.257 INFO:tasks.workunit.client.0.vm02.stdout:9/854: mkdir da/d3c/d4c/d38/da6/d118 0 2026-03-10T10:20:18.257 INFO:tasks.workunit.client.1.vm05.stdout:8/805: chown d7/d2f/c58 22603806 1 2026-03-10T10:20:18.260 INFO:tasks.workunit.client.1.vm05.stdout:9/778: mknod d0/df/d74/d8c/c109 0 2026-03-10T10:20:18.270 INFO:tasks.workunit.client.0.vm02.stdout:8/869: dread d1/d1c/d43/d5b/d88/dac/d83/f99 [0,4194304] 0 2026-03-10T10:20:18.276 INFO:tasks.workunit.client.1.vm05.stdout:3/920: symlink dd/d15/d24/d2c/d6d/da7/dbb/dbd/l148 0 2026-03-10T10:20:18.279 INFO:tasks.workunit.client.0.vm02.stdout:9/855: fsync da/d3c/d4c/d56/fd3 0 2026-03-10T10:20:18.281 INFO:tasks.workunit.client.1.vm05.stdout:5/896: write da/d9a/daf/fdf [3506974,26634] 0 2026-03-10T10:20:18.284 INFO:tasks.workunit.client.1.vm05.stdout:1/959: getdents d4/d39/d3e/db1 0 2026-03-10T10:20:18.286 INFO:tasks.workunit.client.0.vm02.stdout:0/923: symlink d9/d18/d1a/d22/db4/l12d 0 2026-03-10T10:20:18.291 INFO:tasks.workunit.client.0.vm02.stdout:0/924: chown d9/d34/d3d/d65/d89/dd3/d9c/cbb 251 1 2026-03-10T10:20:18.293 
INFO:tasks.workunit.client.1.vm05.stdout:7/938: creat d5/d1d/d20/d2d/d80/f11d x:0 0 0 2026-03-10T10:20:18.298 INFO:tasks.workunit.client.1.vm05.stdout:6/879: write dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/fe1 [1992722,6249] 0 2026-03-10T10:20:18.299 INFO:tasks.workunit.client.0.vm02.stdout:3/884: write d1/f12 [3219760,36255] 0 2026-03-10T10:20:18.299 INFO:tasks.workunit.client.0.vm02.stdout:1/921: write d4/d1b/f4c [32174,126275] 0 2026-03-10T10:20:18.303 INFO:tasks.workunit.client.1.vm05.stdout:8/806: creat d7/d14/d24/d3f/d6a/d8a/d96/f101 x:0 0 0 2026-03-10T10:20:18.303 INFO:tasks.workunit.client.0.vm02.stdout:2/897: write d0/f36 [7770925,60877] 0 2026-03-10T10:20:18.311 INFO:tasks.workunit.client.1.vm05.stdout:0/900: symlink d1/d2/d9/d31/l12d 0 2026-03-10T10:20:18.311 INFO:tasks.workunit.client.1.vm05.stdout:3/921: mknod dd/d15/d24/d2c/dd0/dd9/c149 0 2026-03-10T10:20:18.311 INFO:tasks.workunit.client.0.vm02.stdout:9/856: symlink da/d3c/d4c/d2c/d34/d35/l119 0 2026-03-10T10:20:18.313 INFO:tasks.workunit.client.0.vm02.stdout:7/891: creat d1/d1b/d8f/dad/d7e/dba/f119 x:0 0 0 2026-03-10T10:20:18.313 INFO:tasks.workunit.client.1.vm05.stdout:3/922: dwrite dd/d15/d24/f8a [0,4194304] 0 2026-03-10T10:20:18.328 INFO:tasks.workunit.client.0.vm02.stdout:8/870: dwrite d1/d1c/d24/d71/fdd [0,4194304] 0 2026-03-10T10:20:18.335 INFO:tasks.workunit.client.1.vm05.stdout:1/960: truncate d4/d39/fb2 2921981 0 2026-03-10T10:20:18.348 INFO:tasks.workunit.client.0.vm02.stdout:3/885: creat d1/d8/d44/f127 x:0 0 0 2026-03-10T10:20:18.348 INFO:tasks.workunit.client.0.vm02.stdout:2/898: creat d0/d71/f12b x:0 0 0 2026-03-10T10:20:18.349 INFO:tasks.workunit.client.0.vm02.stdout:3/886: fsync d1/d8/fb 0 2026-03-10T10:20:18.353 INFO:tasks.workunit.client.1.vm05.stdout:8/807: dread - d7/d14/d24/d3f/fab zero size 2026-03-10T10:20:18.353 INFO:tasks.workunit.client.0.vm02.stdout:6/850: write d0/d8/f9b [1795841,62257] 0 2026-03-10T10:20:18.355 INFO:tasks.workunit.client.1.vm05.stdout:9/779: symlink 
d0/df/d74/d8c/l10a 0 2026-03-10T10:20:18.359 INFO:tasks.workunit.client.0.vm02.stdout:2/899: dwrite d0/d71/d108/d65/dc4/dfa/f6e [4194304,4194304] 0 2026-03-10T10:20:18.364 INFO:tasks.workunit.client.0.vm02.stdout:9/857: unlink da/d3c/d4c/f3b 0 2026-03-10T10:20:18.380 INFO:tasks.workunit.client.1.vm05.stdout:0/901: mknod d1/d2/d39/d3d/d9f/c12e 0 2026-03-10T10:20:18.382 INFO:tasks.workunit.client.0.vm02.stdout:7/892: chown d1/d1b/d8f/d67/fc2 1764295 1 2026-03-10T10:20:18.388 INFO:tasks.workunit.client.0.vm02.stdout:0/925: write d9/d34/d3d/d7b/fc0 [3120024,109130] 0 2026-03-10T10:20:18.388 INFO:tasks.workunit.client.0.vm02.stdout:0/926: chown d9/d18/d1a/d46/ce5 494 1 2026-03-10T10:20:18.390 INFO:tasks.workunit.client.1.vm05.stdout:4/759: dwrite d1/f19 [0,4194304] 0 2026-03-10T10:20:18.392 INFO:tasks.workunit.client.1.vm05.stdout:4/760: read - d1/d31/d4b/d6d/fbc zero size 2026-03-10T10:20:18.418 INFO:tasks.workunit.client.1.vm05.stdout:2/830: dwrite db/d28/fc5 [0,4194304] 0 2026-03-10T10:20:18.419 INFO:tasks.workunit.client.1.vm05.stdout:5/897: mkdir da/d131 0 2026-03-10T10:20:18.419 INFO:tasks.workunit.client.0.vm02.stdout:1/922: mkdir d4/da/d27/d117/d123 0 2026-03-10T10:20:18.419 INFO:tasks.workunit.client.1.vm05.stdout:2/831: chown db/d28/fc5 1526 1 2026-03-10T10:20:18.423 INFO:tasks.workunit.client.1.vm05.stdout:7/939: dwrite d5/d1d/d20/d2d/d5d/f75 [0,4194304] 0 2026-03-10T10:20:18.435 INFO:tasks.workunit.client.1.vm05.stdout:7/940: sync 2026-03-10T10:20:18.435 INFO:tasks.workunit.client.1.vm05.stdout:7/941: dread - d5/d17/fb1 zero size 2026-03-10T10:20:18.435 INFO:tasks.workunit.client.1.vm05.stdout:6/880: symlink dd/d36/d3f/d12/d44/d2a/d3d/l11c 0 2026-03-10T10:20:18.439 INFO:tasks.workunit.client.1.vm05.stdout:8/808: rename d7/d14/cbb to d7/d14/d24/d3f/df0/c102 0 2026-03-10T10:20:18.454 INFO:tasks.workunit.client.1.vm05.stdout:0/902: fsync d1/d2/d9/d31/d12/d20/f37 0 2026-03-10T10:20:18.459 INFO:tasks.workunit.client.0.vm02.stdout:7/893: symlink 
d1/dc/d55/d9c/dfd/l11a 0 2026-03-10T10:20:18.464 INFO:tasks.workunit.client.0.vm02.stdout:7/894: chown d1/dc/d16/d28/d2d/ce7 16656364 1 2026-03-10T10:20:18.464 INFO:tasks.workunit.client.0.vm02.stdout:8/871: creat d1/d1c/d43/d5b/dab/d102/f10a x:0 0 0 2026-03-10T10:20:18.464 INFO:tasks.workunit.client.0.vm02.stdout:8/872: read - d1/d1c/d24/dad/fc3 zero size 2026-03-10T10:20:18.465 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:18 vm02.local ceph-mon[50200]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:20:18.465 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:18 vm02.local ceph-mon[50200]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:20:18.465 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:18 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:18.465 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:18 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:18.465 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:18 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:18.465 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:18 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:18.465 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:18 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:18.466 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:18 vm02.local ceph-mon[50200]: Reconfiguring prometheus.vm02 (dependencies changed)... 
2026-03-10T10:20:18.466 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:18 vm02.local ceph-mon[50200]: Reconfiguring daemon prometheus.vm02 on vm02 2026-03-10T10:20:18.474 INFO:tasks.workunit.client.1.vm05.stdout:1/961: dwrite d4/d39/d3e/f3f [0,4194304] 0 2026-03-10T10:20:18.475 INFO:tasks.workunit.client.1.vm05.stdout:6/881: symlink dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/l11d 0 2026-03-10T10:20:18.477 INFO:tasks.workunit.client.0.vm02.stdout:9/858: dwrite da/d3c/d4c/d38/f88 [4194304,4194304] 0 2026-03-10T10:20:18.487 INFO:tasks.workunit.client.0.vm02.stdout:0/927: mknod d9/d18/d1a/d22/d24/c12e 0 2026-03-10T10:20:18.487 INFO:tasks.workunit.client.0.vm02.stdout:2/900: write d0/d1a/d49/f54 [2309122,52364] 0 2026-03-10T10:20:18.491 INFO:tasks.workunit.client.1.vm05.stdout:9/780: unlink d0/dc4/ce2 0 2026-03-10T10:20:18.492 INFO:tasks.workunit.client.1.vm05.stdout:4/761: write d1/d31/d76/faf [648017,117414] 0 2026-03-10T10:20:18.494 INFO:tasks.workunit.client.1.vm05.stdout:3/923: dwrite dd/d20/d56/f129 [0,4194304] 0 2026-03-10T10:20:18.495 INFO:tasks.workunit.client.0.vm02.stdout:8/873: symlink d1/dc7/l10b 0 2026-03-10T10:20:18.496 INFO:tasks.workunit.client.0.vm02.stdout:6/851: link d0/d8/d9/c58 d0/d8/d29/d2f/d50/d98/df6/c11d 0 2026-03-10T10:20:18.497 INFO:tasks.workunit.client.0.vm02.stdout:6/852: chown d0/d8/d9/l5e 352 1 2026-03-10T10:20:18.516 INFO:tasks.workunit.client.0.vm02.stdout:9/859: truncate da/d3c/d4c/d56/f77 3293497 0 2026-03-10T10:20:18.522 INFO:tasks.workunit.client.1.vm05.stdout:5/898: creat da/db/d28/d6e/f132 x:0 0 0 2026-03-10T10:20:18.528 INFO:tasks.workunit.client.0.vm02.stdout:3/887: truncate d1/d8/d21/f5e 1300989 0 2026-03-10T10:20:18.528 INFO:tasks.workunit.client.0.vm02.stdout:1/923: write d4/da/d1a/d11d/fa2 [7042484,105999] 0 2026-03-10T10:20:18.528 INFO:tasks.workunit.client.1.vm05.stdout:2/832: mkdir db/d1c/d40/d62/d10c 0 2026-03-10T10:20:18.528 INFO:tasks.workunit.client.1.vm05.stdout:2/833: dread - db/d12/fb2 zero size 
2026-03-10T10:20:18.535 INFO:tasks.workunit.client.1.vm05.stdout:7/942: getdents d5/d1d/d20/d2d/d80/dd6/d11a 0 2026-03-10T10:20:18.535 INFO:tasks.workunit.client.1.vm05.stdout:7/943: chown d5/d1d/d20/d3b 26 1 2026-03-10T10:20:18.536 INFO:tasks.workunit.client.1.vm05.stdout:7/944: dread - d5/d1d/d20/d2d/d80/f11d zero size 2026-03-10T10:20:18.538 INFO:tasks.workunit.client.0.vm02.stdout:3/888: dwrite d1/d8/d21/f117 [0,4194304] 0 2026-03-10T10:20:18.554 INFO:tasks.workunit.client.0.vm02.stdout:1/924: dread d4/da/d1a/d11d/d53/f99 [0,4194304] 0 2026-03-10T10:20:18.558 INFO:tasks.workunit.client.0.vm02.stdout:6/853: dread d0/d8/d29/d6d/d96/de4/def/d6f/fc6 [0,4194304] 0 2026-03-10T10:20:18.575 INFO:tasks.workunit.client.0.vm02.stdout:9/860: creat da/d3c/d53/f11a x:0 0 0 2026-03-10T10:20:18.588 INFO:tasks.workunit.client.1.vm05.stdout:5/899: mkdir da/db/d26/d70/d72/df6/d133 0 2026-03-10T10:20:18.598 INFO:tasks.workunit.client.0.vm02.stdout:6/854: unlink d0/d8/d29/d6d/d96/de4/def/d6f/fc6 0 2026-03-10T10:20:18.608 INFO:tasks.workunit.client.1.vm05.stdout:2/834: creat db/d2d/dc6/f10d x:0 0 0 2026-03-10T10:20:18.613 INFO:tasks.workunit.client.0.vm02.stdout:0/928: dread d9/d18/d1a/d22/d24/d80/f72 [0,4194304] 0 2026-03-10T10:20:18.619 INFO:tasks.workunit.client.1.vm05.stdout:0/903: write d1/d2/d9/d31/d13/f7a [1517759,42344] 0 2026-03-10T10:20:18.622 INFO:tasks.workunit.client.0.vm02.stdout:7/895: write d1/d1b/d8f/f105 [1243327,88606] 0 2026-03-10T10:20:18.625 INFO:tasks.workunit.client.0.vm02.stdout:7/896: chown d1/dc/ld7 371182527 1 2026-03-10T10:20:18.630 INFO:tasks.workunit.client.0.vm02.stdout:2/901: write d0/d1a/d49/deb/de6/f106 [341999,55866] 0 2026-03-10T10:20:18.635 INFO:tasks.workunit.client.0.vm02.stdout:8/874: dwrite d1/d1c/d43/d5b/d88/dac/d83/d9f/fb8 [0,4194304] 0 2026-03-10T10:20:18.636 INFO:tasks.workunit.client.0.vm02.stdout:8/875: chown d1 32513 1 2026-03-10T10:20:18.637 INFO:tasks.workunit.client.0.vm02.stdout:3/889: dwrite d1/f50 [0,4194304] 0 
2026-03-10T10:20:18.645 INFO:tasks.workunit.client.1.vm05.stdout:6/882: write dd/d36/d3f/d12/d59/df5/fda [533530,67827] 0 2026-03-10T10:20:18.650 INFO:tasks.workunit.client.1.vm05.stdout:9/781: write d0/df/d11/f24 [1047482,110785] 0 2026-03-10T10:20:18.650 INFO:tasks.workunit.client.1.vm05.stdout:2/835: dread db/d28/d4f/d8b/fa8 [0,4194304] 0 2026-03-10T10:20:18.651 INFO:tasks.workunit.client.0.vm02.stdout:1/925: dwrite d4/dc3/ff4 [0,4194304] 0 2026-03-10T10:20:18.652 INFO:tasks.workunit.client.0.vm02.stdout:1/926: chown d4/d1b/ce5 0 1 2026-03-10T10:20:18.654 INFO:tasks.workunit.client.0.vm02.stdout:2/902: sync 2026-03-10T10:20:18.662 INFO:tasks.workunit.client.0.vm02.stdout:6/855: truncate d0/d8/d29/d52/de8/db2/dbb/ff7 1028518 0 2026-03-10T10:20:18.668 INFO:tasks.workunit.client.1.vm05.stdout:3/924: mknod dd/c14a 0 2026-03-10T10:20:18.671 INFO:tasks.workunit.client.0.vm02.stdout:0/929: mknod d9/d34/d3d/d67/c12f 0 2026-03-10T10:20:18.671 INFO:tasks.workunit.client.1.vm05.stdout:4/762: rename d1/d31/dc/f25 to d1/d31/d76/dac/db8/dbf/fff 0 2026-03-10T10:20:18.673 INFO:tasks.workunit.client.0.vm02.stdout:7/897: creat d1/d1b/d49/d98/f11b x:0 0 0 2026-03-10T10:20:18.677 INFO:tasks.workunit.client.1.vm05.stdout:7/945: mkdir d5/d1d/d20/d2d/d80/dd6/d11a/d11e 0 2026-03-10T10:20:18.678 INFO:tasks.workunit.client.0.vm02.stdout:3/890: rename d1/d8/d86/db1/lda to d1/d20/db2/l128 0 2026-03-10T10:20:18.680 INFO:tasks.workunit.client.1.vm05.stdout:0/904: symlink d1/d2/d9/d31/d54/l12f 0 2026-03-10T10:20:18.681 INFO:tasks.workunit.client.1.vm05.stdout:1/962: link d4/l9b d4/d3d/ddc/d108/l11b 0 2026-03-10T10:20:18.686 INFO:tasks.workunit.client.1.vm05.stdout:8/809: unlink d7/d14/d62/d90/dac/ccf 0 2026-03-10T10:20:18.687 INFO:tasks.workunit.client.1.vm05.stdout:2/836: creat db/d61/d10a/f10e x:0 0 0 2026-03-10T10:20:18.690 INFO:tasks.workunit.client.0.vm02.stdout:3/891: fsync d1/d6/d8e/fb5 0 2026-03-10T10:20:18.691 INFO:tasks.workunit.client.1.vm05.stdout:0/905: mknod d1/d2/d39/d6e/c130 0 
2026-03-10T10:20:18.693 INFO:tasks.workunit.client.0.vm02.stdout:9/861: getdents da/d9d 0 2026-03-10T10:20:18.694 INFO:tasks.workunit.client.0.vm02.stdout:0/930: mkdir d9/d18/d130 0 2026-03-10T10:20:18.695 INFO:tasks.workunit.client.0.vm02.stdout:8/876: creat d1/d1c/d23/f10c x:0 0 0 2026-03-10T10:20:18.698 INFO:tasks.workunit.client.0.vm02.stdout:8/877: dwrite d1/d1c/d43/d6a/da8/fbf [0,4194304] 0 2026-03-10T10:20:18.709 INFO:tasks.workunit.client.1.vm05.stdout:1/963: dread - d4/d79/d83/dc5/dcb/f103 zero size 2026-03-10T10:20:18.710 INFO:tasks.workunit.client.1.vm05.stdout:2/837: mknod db/d61/dcc/c10f 0 2026-03-10T10:20:18.711 INFO:tasks.workunit.client.0.vm02.stdout:7/898: rmdir d1/d1b/d49/d98 39 2026-03-10T10:20:18.715 INFO:tasks.workunit.client.1.vm05.stdout:5/900: dwrite da/d9a/dc7/f6a [0,4194304] 0 2026-03-10T10:20:18.728 INFO:tasks.workunit.client.0.vm02.stdout:3/892: fdatasync d1/d8/d21/f4c 0 2026-03-10T10:20:18.728 INFO:tasks.workunit.client.0.vm02.stdout:3/893: write d1/d8/f3d [5134776,100956] 0 2026-03-10T10:20:18.728 INFO:tasks.workunit.client.0.vm02.stdout:3/894: dwrite d1/d8/d86/da2/fd2 [0,4194304] 0 2026-03-10T10:20:18.728 INFO:tasks.workunit.client.1.vm05.stdout:5/901: readlink da/d9a/dc7/l57 0 2026-03-10T10:20:18.728 INFO:tasks.workunit.client.1.vm05.stdout:7/946: mkdir d5/d1d/d20/d35/d11f 0 2026-03-10T10:20:18.728 INFO:tasks.workunit.client.1.vm05.stdout:3/925: dread dd/d20/d56/fb7 [0,4194304] 0 2026-03-10T10:20:18.728 INFO:tasks.workunit.client.1.vm05.stdout:3/926: dread - dd/d20/d130/f131 zero size 2026-03-10T10:20:18.731 INFO:tasks.workunit.client.0.vm02.stdout:9/862: fsync da/f28 0 2026-03-10T10:20:18.734 INFO:tasks.workunit.client.1.vm05.stdout:6/883: creat dd/d36/d3f/d12/d44/f11e x:0 0 0 2026-03-10T10:20:18.744 INFO:tasks.workunit.client.1.vm05.stdout:8/810: write d7/d14/d24/d3f/fc9 [1257381,39676] 0 2026-03-10T10:20:18.749 INFO:tasks.workunit.client.0.vm02.stdout:2/903: rename d0/d71/d108/f60 to d0/f12c 0 2026-03-10T10:20:18.749 
INFO:tasks.workunit.client.1.vm05.stdout:2/838: unlink db/d61/dcc/ff1 0 2026-03-10T10:20:18.757 INFO:tasks.workunit.client.0.vm02.stdout:7/899: fsync d1/f15 0 2026-03-10T10:20:18.758 INFO:tasks.workunit.client.1.vm05.stdout:5/902: mknod da/d96/dd9/c134 0 2026-03-10T10:20:18.759 INFO:tasks.workunit.client.1.vm05.stdout:7/947: unlink d5/d1d/d20/d3b/l6a 0 2026-03-10T10:20:18.760 INFO:tasks.workunit.client.1.vm05.stdout:3/927: truncate dd/dbe/d106/f10d 891984 0 2026-03-10T10:20:18.765 INFO:tasks.workunit.client.1.vm05.stdout:6/884: chown dd/d36/d3f/d12/d44/d2a/d3d/d48/dc6/l10e 1 1 2026-03-10T10:20:18.766 INFO:tasks.workunit.client.1.vm05.stdout:8/811: dread - d7/d14/d15/fd9 zero size 2026-03-10T10:20:18.768 INFO:tasks.workunit.client.1.vm05.stdout:2/839: chown db/d1c/d40/d80/fd3 15585 1 2026-03-10T10:20:18.769 INFO:tasks.workunit.client.1.vm05.stdout:2/840: write db/d1c/d40/d62/f83 [2641870,128064] 0 2026-03-10T10:20:18.770 INFO:tasks.workunit.client.1.vm05.stdout:4/763: rename d1/cc6 to d1/d31/dc/c100 0 2026-03-10T10:20:18.775 INFO:tasks.workunit.client.1.vm05.stdout:2/841: symlink db/d61/d10a/l110 0 2026-03-10T10:20:18.778 INFO:tasks.workunit.client.1.vm05.stdout:8/812: rename d7/l27 to d7/d2f/d57/de3/l103 0 2026-03-10T10:20:18.779 INFO:tasks.workunit.client.1.vm05.stdout:5/903: mknod da/d9a/dc7/db4/dfe/c135 0 2026-03-10T10:20:18.781 INFO:tasks.workunit.client.1.vm05.stdout:9/782: write d0/d1/dcc/dd0/fd9 [502789,113378] 0 2026-03-10T10:20:18.781 INFO:tasks.workunit.client.1.vm05.stdout:4/764: truncate d1/d31/d76/dac/db8/fd1 981394 0 2026-03-10T10:20:18.785 INFO:tasks.workunit.client.1.vm05.stdout:6/885: getdents dd/d36/d3f/d12/d44/d2a/d7f/dff 0 2026-03-10T10:20:18.786 INFO:tasks.workunit.client.1.vm05.stdout:2/842: mknod db/d28/d4f/d8b/c111 0 2026-03-10T10:20:18.802 INFO:tasks.workunit.client.0.vm02.stdout:1/927: write d4/da/d1a/d47/d78/fcf [967581,38369] 0 2026-03-10T10:20:18.802 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:18 vm05.local ceph-mon[59051]: 
Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:20:18.802 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:18 vm05.local ceph-mon[59051]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:20:18.802 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:18 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:18.802 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:18 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:18.802 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:18 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:18.802 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:18 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:18.802 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:18 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:18.802 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:18 vm05.local ceph-mon[59051]: Reconfiguring prometheus.vm02 (dependencies changed)... 
2026-03-10T10:20:18.802 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:18 vm05.local ceph-mon[59051]: Reconfiguring daemon prometheus.vm02 on vm02 2026-03-10T10:20:18.803 INFO:tasks.workunit.client.1.vm05.stdout:2/843: chown db/d28/d4f/fb0 13928 1 2026-03-10T10:20:18.803 INFO:tasks.workunit.client.1.vm05.stdout:2/844: write db/d28/d4f/d59/da4/d81/da7/fe5 [917366,18532] 0 2026-03-10T10:20:18.803 INFO:tasks.workunit.client.1.vm05.stdout:8/813: symlink d7/d14/d24/d3f/df0/l104 0 2026-03-10T10:20:18.803 INFO:tasks.workunit.client.1.vm05.stdout:6/886: dwrite dd/d36/d3f/d12/d44/d30/f8d [0,4194304] 0 2026-03-10T10:20:18.803 INFO:tasks.workunit.client.1.vm05.stdout:8/814: symlink d7/d14/d24/d3f/d6a/l105 0 2026-03-10T10:20:18.804 INFO:tasks.workunit.client.1.vm05.stdout:5/904: sync 2026-03-10T10:20:18.808 INFO:tasks.workunit.client.1.vm05.stdout:7/948: dread d5/d1d/d29/d3e/d8c/d82/f86 [0,4194304] 0 2026-03-10T10:20:18.809 INFO:tasks.workunit.client.1.vm05.stdout:2/845: unlink db/d2d/l2f 0 2026-03-10T10:20:18.809 INFO:tasks.workunit.client.1.vm05.stdout:3/928: getdents dd/d15/d24/d2c/d6d/da7/dbb/dbd/dff 0 2026-03-10T10:20:18.814 INFO:tasks.workunit.client.1.vm05.stdout:9/783: creat d0/d1/d13/d55/f10b x:0 0 0 2026-03-10T10:20:18.815 INFO:tasks.workunit.client.0.vm02.stdout:8/878: rmdir d1/d1c/d23 39 2026-03-10T10:20:18.822 INFO:tasks.workunit.client.1.vm05.stdout:8/815: truncate d7/d14/d15/f1f 4150262 0 2026-03-10T10:20:18.823 INFO:tasks.workunit.client.1.vm05.stdout:8/816: write d7/d14/d24/d3f/d6a/d8a/d96/ffc [229648,74051] 0 2026-03-10T10:20:18.827 INFO:tasks.workunit.client.1.vm05.stdout:1/964: dwrite d4/d79/d83/dc5/dcb/f103 [0,4194304] 0 2026-03-10T10:20:18.841 INFO:tasks.workunit.client.1.vm05.stdout:8/817: dread d7/d14/d62/f9d [0,4194304] 0 2026-03-10T10:20:18.850 INFO:tasks.workunit.client.0.vm02.stdout:2/904: fsync d0/d10/ff6 0 2026-03-10T10:20:18.853 INFO:tasks.workunit.client.1.vm05.stdout:2/846: creat db/d61/d10a/f112 x:0 0 0 2026-03-10T10:20:18.854 
INFO:tasks.workunit.client.1.vm05.stdout:2/847: chown db/d28/d4f/da3/lc3 105925520 1 2026-03-10T10:20:18.869 INFO:tasks.workunit.client.1.vm05.stdout:2/848: read db/d12/fb5 [3167533,57981] 0 2026-03-10T10:20:18.869 INFO:tasks.workunit.client.1.vm05.stdout:2/849: chown db/d61/d10a/l110 4665930 1 2026-03-10T10:20:18.870 INFO:tasks.workunit.client.1.vm05.stdout:4/765: getdents d1/d64/da9 0 2026-03-10T10:20:18.871 INFO:tasks.workunit.client.1.vm05.stdout:0/906: write d1/d2/d9/d31/d13/d15/d4e/d8a/dfc/f11a [293595,99872] 0 2026-03-10T10:20:18.875 INFO:tasks.workunit.client.0.vm02.stdout:9/863: dwrite da/d3c/f8b [0,4194304] 0 2026-03-10T10:20:18.876 INFO:tasks.workunit.client.1.vm05.stdout:4/766: dwrite d1/d3/d65/ddb/fe7 [0,4194304] 0 2026-03-10T10:20:18.876 INFO:tasks.workunit.client.0.vm02.stdout:0/931: dwrite d9/d34/d3d/d65/d89/fc4 [0,4194304] 0 2026-03-10T10:20:18.878 INFO:tasks.workunit.client.1.vm05.stdout:0/907: sync 2026-03-10T10:20:18.889 INFO:tasks.workunit.client.0.vm02.stdout:0/932: dread d9/d34/d3d/f4e [4194304,4194304] 0 2026-03-10T10:20:18.894 INFO:tasks.workunit.client.0.vm02.stdout:1/928: creat d4/da/d1a/d5b/d93/de8/f124 x:0 0 0 2026-03-10T10:20:18.900 INFO:tasks.workunit.client.0.vm02.stdout:6/856: rename d0/d8/d29/d6d/d96/de4/d102/c10 to d0/d8/d29/dce/c11e 0 2026-03-10T10:20:18.908 INFO:tasks.workunit.client.1.vm05.stdout:8/818: truncate d7/d2f/f7f 467881 0 2026-03-10T10:20:18.908 INFO:tasks.workunit.client.0.vm02.stdout:7/900: creat d1/dc/d16/dfc/f11c x:0 0 0 2026-03-10T10:20:18.911 INFO:tasks.workunit.client.1.vm05.stdout:9/784: unlink d0/dc4/c60 0 2026-03-10T10:20:18.911 INFO:tasks.workunit.client.1.vm05.stdout:5/905: write da/d96/fea [405867,101620] 0 2026-03-10T10:20:18.912 INFO:tasks.workunit.client.1.vm05.stdout:3/929: write f2 [1381691,112280] 0 2026-03-10T10:20:18.913 INFO:tasks.workunit.client.1.vm05.stdout:2/850: chown db/d28/l38 132113 1 2026-03-10T10:20:18.915 INFO:tasks.workunit.client.0.vm02.stdout:1/929: unlink d4/da/d1a/d22/l51 0 
2026-03-10T10:20:18.921 INFO:tasks.workunit.client.1.vm05.stdout:4/767: rename d1/d3/f62 to d1/d64/da9/f101 0 2026-03-10T10:20:18.921 INFO:tasks.workunit.client.0.vm02.stdout:0/933: rename d9/c32 to d9/d18/d1a/d22/d24/d79/c131 0 2026-03-10T10:20:18.924 INFO:tasks.workunit.client.1.vm05.stdout:7/949: creat d5/d1d/d20/d2d/f120 x:0 0 0 2026-03-10T10:20:18.924 INFO:tasks.workunit.client.0.vm02.stdout:9/864: creat da/d3c/d4c/db1/d112/d114/f11b x:0 0 0 2026-03-10T10:20:18.931 INFO:tasks.workunit.client.0.vm02.stdout:0/934: truncate d9/d18/d1a/f88 593979 0 2026-03-10T10:20:18.943 INFO:tasks.workunit.client.0.vm02.stdout:2/905: link d0/d1a/fb4 d0/d10/d81/f12d 0 2026-03-10T10:20:18.943 INFO:tasks.workunit.client.0.vm02.stdout:0/935: symlink d9/d34/d3d/d65/d89/dd3/da8/l132 0 2026-03-10T10:20:18.943 INFO:tasks.workunit.client.0.vm02.stdout:0/936: chown d9/d18/le3 7451 1 2026-03-10T10:20:18.949 INFO:tasks.workunit.client.1.vm05.stdout:7/950: symlink d5/dd/l121 0 2026-03-10T10:20:18.950 INFO:tasks.workunit.client.0.vm02.stdout:1/930: dread d4/da/f73 [0,4194304] 0 2026-03-10T10:20:18.953 INFO:tasks.workunit.client.0.vm02.stdout:7/901: truncate d1/f15 1402335 0 2026-03-10T10:20:18.954 INFO:tasks.workunit.client.0.vm02.stdout:3/895: write d1/d20/d52/f76 [380591,15562] 0 2026-03-10T10:20:18.955 INFO:tasks.workunit.client.1.vm05.stdout:1/965: write d4/dd/f60 [6903258,63985] 0 2026-03-10T10:20:18.959 INFO:tasks.workunit.client.1.vm05.stdout:6/887: link dd/ca4 dd/d36/d3f/d12/d44/d2a/d3d/c11f 0 2026-03-10T10:20:18.963 INFO:tasks.workunit.client.1.vm05.stdout:9/785: mknod d0/df/c10c 0 2026-03-10T10:20:18.964 INFO:tasks.workunit.client.0.vm02.stdout:0/937: mkdir d9/d18/d1a/d22/d24/d51/d133 0 2026-03-10T10:20:18.966 INFO:tasks.workunit.client.1.vm05.stdout:5/906: creat da/d131/f136 x:0 0 0 2026-03-10T10:20:18.968 INFO:tasks.workunit.client.0.vm02.stdout:2/906: mknod d0/d71/d108/d65/dc4/dfa/dd3/de8/c12e 0 2026-03-10T10:20:18.970 INFO:tasks.workunit.client.0.vm02.stdout:1/931: creat 
d4/dc3/df0/f125 x:0 0 0 2026-03-10T10:20:18.972 INFO:tasks.workunit.client.0.vm02.stdout:8/879: write d1/d1c/d43/d5b/d88/fd1 [4349999,102514] 0 2026-03-10T10:20:18.973 INFO:tasks.workunit.client.0.vm02.stdout:7/902: mkdir d1/dc/d55/d11d 0 2026-03-10T10:20:18.975 INFO:tasks.workunit.client.1.vm05.stdout:4/768: symlink d1/d3/d65/de0/de9/l102 0 2026-03-10T10:20:18.979 INFO:tasks.workunit.client.1.vm05.stdout:0/908: dwrite d1/d2/d9/d31/d54/f7f [0,4194304] 0 2026-03-10T10:20:18.979 INFO:tasks.workunit.client.0.vm02.stdout:6/857: dwrite d0/d8/d9/f54 [4194304,4194304] 0 2026-03-10T10:20:18.985 INFO:tasks.workunit.client.0.vm02.stdout:0/938: creat d9/d18/d1a/d22/d24/d8e/d91/f134 x:0 0 0 2026-03-10T10:20:18.986 INFO:tasks.workunit.client.0.vm02.stdout:0/939: dread - d9/d18/d1a/d22/d24/d8e/d91/f134 zero size 2026-03-10T10:20:18.997 INFO:tasks.workunit.client.1.vm05.stdout:8/819: dwrite d7/d14/d24/d3f/d6a/d8a/d96/db7/fb9 [0,4194304] 0 2026-03-10T10:20:19.001 INFO:tasks.workunit.client.0.vm02.stdout:9/865: dwrite da/d3c/d4c/f49 [0,4194304] 0 2026-03-10T10:20:19.008 INFO:tasks.workunit.client.0.vm02.stdout:2/907: readlink d0/laa 0 2026-03-10T10:20:19.014 INFO:tasks.workunit.client.0.vm02.stdout:2/908: dwrite d0/d1a/d49/dcc/ff4 [0,4194304] 0 2026-03-10T10:20:19.020 INFO:tasks.workunit.client.1.vm05.stdout:5/907: rmdir da/d63 39 2026-03-10T10:20:19.029 INFO:tasks.workunit.client.1.vm05.stdout:5/908: stat da/db/d26/d70/d72/d10b/f11a 0 2026-03-10T10:20:19.029 INFO:tasks.workunit.client.1.vm05.stdout:2/851: creat db/d61/f113 x:0 0 0 2026-03-10T10:20:19.029 INFO:tasks.workunit.client.1.vm05.stdout:3/930: link dd/d15/d24/d2c/d107/l138 dd/d39/d5f/df7/l14b 0 2026-03-10T10:20:19.029 INFO:tasks.workunit.client.0.vm02.stdout:7/903: rename d1/d1b/d8f/d67/da7/fec to d1/dc/d10/df5/f11e 0 2026-03-10T10:20:19.029 INFO:tasks.workunit.client.0.vm02.stdout:7/904: write d1/dc/d60/ff0 [1039538,85254] 0 2026-03-10T10:20:19.029 INFO:tasks.workunit.client.0.vm02.stdout:6/858: creat d0/d87/f11f x:0 0 0 
2026-03-10T10:20:19.029 INFO:tasks.workunit.client.0.vm02.stdout:6/859: dread - d0/d8/d29/d6d/d96/de4/d102/f10a zero size 2026-03-10T10:20:19.030 INFO:tasks.workunit.client.0.vm02.stdout:8/880: dread d1/d1c/d24/f6b [0,4194304] 0 2026-03-10T10:20:19.033 INFO:tasks.workunit.client.0.vm02.stdout:8/881: dread d1/d1c/d43/d5b/d88/dac/d83/d9f/fb8 [0,4194304] 0 2026-03-10T10:20:19.041 INFO:tasks.workunit.client.1.vm05.stdout:0/909: truncate d1/d2/d9/d31/d13/da2/dab/dce/d106/f111 863210 0 2026-03-10T10:20:19.044 INFO:tasks.workunit.client.0.vm02.stdout:2/909: creat d0/d71/dfb/f12f x:0 0 0 2026-03-10T10:20:19.046 INFO:tasks.workunit.client.1.vm05.stdout:1/966: fsync d4/d3d/f77 0 2026-03-10T10:20:19.047 INFO:tasks.workunit.client.1.vm05.stdout:5/909: creat da/d9a/daf/f137 x:0 0 0 2026-03-10T10:20:19.048 INFO:tasks.workunit.client.1.vm05.stdout:0/910: read d1/f11 [4033382,9941] 0 2026-03-10T10:20:19.049 INFO:tasks.workunit.client.0.vm02.stdout:3/896: link d1/f50 d1/d6/f129 0 2026-03-10T10:20:19.050 INFO:tasks.workunit.client.0.vm02.stdout:6/860: creat d0/d8/d9/d7a/dc0/f120 x:0 0 0 2026-03-10T10:20:19.054 INFO:tasks.workunit.client.1.vm05.stdout:3/931: creat dd/d20/d9e/f14c x:0 0 0 2026-03-10T10:20:19.055 INFO:tasks.workunit.client.1.vm05.stdout:3/932: fdatasync dd/d20/d9e/f14c 0 2026-03-10T10:20:19.056 INFO:tasks.workunit.client.1.vm05.stdout:4/769: dread d1/d31/d4b/d6d/f9f [0,4194304] 0 2026-03-10T10:20:19.056 INFO:tasks.workunit.client.1.vm05.stdout:7/951: rename d5/dd/f2f to d5/f122 0 2026-03-10T10:20:19.060 INFO:tasks.workunit.client.1.vm05.stdout:1/967: fsync d4/d3d/d6e/fee 0 2026-03-10T10:20:19.061 INFO:tasks.workunit.client.1.vm05.stdout:5/910: unlink da/d9a/dc7/db4/dbd/le7 0 2026-03-10T10:20:19.063 INFO:tasks.workunit.client.0.vm02.stdout:2/910: creat d0/d10/da6/d107/f130 x:0 0 0 2026-03-10T10:20:19.065 INFO:tasks.workunit.client.1.vm05.stdout:9/786: rename d0/d1/d16/d6e to d0/d70/d10d 0 2026-03-10T10:20:19.066 INFO:tasks.workunit.client.1.vm05.stdout:7/952: creat 
d5/d26/d9c/f123 x:0 0 0 2026-03-10T10:20:19.069 INFO:tasks.workunit.client.1.vm05.stdout:3/933: creat dd/d20/d94/dba/f14d x:0 0 0 2026-03-10T10:20:19.069 INFO:tasks.workunit.client.1.vm05.stdout:3/934: readlink dd/l13 0 2026-03-10T10:20:19.070 INFO:tasks.workunit.client.1.vm05.stdout:5/911: dread da/db/dee/d38/f65 [0,4194304] 0 2026-03-10T10:20:19.070 INFO:tasks.workunit.client.1.vm05.stdout:9/787: symlink d0/df/d74/d8c/d8f/ddd/l10e 0 2026-03-10T10:20:19.071 INFO:tasks.workunit.client.1.vm05.stdout:0/911: rmdir d1/d2/d9/d116 0 2026-03-10T10:20:19.071 INFO:tasks.workunit.client.1.vm05.stdout:9/788: readlink d0/df/d74/l8b 0 2026-03-10T10:20:19.072 INFO:tasks.workunit.client.1.vm05.stdout:9/789: dread - d0/df/d74/d8c/de4/d104/fff zero size 2026-03-10T10:20:19.073 INFO:tasks.workunit.client.1.vm05.stdout:4/770: link d1/d31/dc/d40/d63/l87 d1/d31/dc/d40/d45/daa/l103 0 2026-03-10T10:20:19.073 INFO:tasks.workunit.client.1.vm05.stdout:0/912: mkdir d1/d2/d39/d6e/d8e/d131 0 2026-03-10T10:20:19.073 INFO:tasks.workunit.client.1.vm05.stdout:3/935: getdents dd/d20/d125 0 2026-03-10T10:20:19.074 INFO:tasks.workunit.client.1.vm05.stdout:5/912: rename da/db/d26/d70/ff0 to da/d63/df2/d123/f138 0 2026-03-10T10:20:19.079 INFO:tasks.workunit.client.0.vm02.stdout:2/911: dread d0/f91 [0,4194304] 0 2026-03-10T10:20:19.080 INFO:tasks.workunit.client.1.vm05.stdout:5/913: dread da/d9a/dc7/f6a [0,4194304] 0 2026-03-10T10:20:19.080 INFO:tasks.workunit.client.1.vm05.stdout:7/953: getdents d5/d17/d85 0 2026-03-10T10:20:19.080 INFO:tasks.workunit.client.1.vm05.stdout:3/936: creat dd/d15/d24/d2c/d6d/da7/dbb/dbd/dff/f14e x:0 0 0 2026-03-10T10:20:19.089 INFO:tasks.workunit.client.1.vm05.stdout:0/913: dread d1/d2/d9/d31/d13/d15/d4e/f89 [0,4194304] 0 2026-03-10T10:20:19.089 INFO:tasks.workunit.client.1.vm05.stdout:0/914: chown d1/d2/d9/d31/d13/d15/d4e/f125 802 1 2026-03-10T10:20:19.091 INFO:tasks.workunit.client.1.vm05.stdout:0/915: dread d1/d2/d9/d31/d54/f7f [0,4194304] 0 2026-03-10T10:20:19.103 
INFO:tasks.workunit.client.1.vm05.stdout:9/790: unlink d0/d1/d13/de/c102 0 2026-03-10T10:20:19.108 INFO:tasks.workunit.client.0.vm02.stdout:1/932: write d4/f8 [1529144,65648] 0 2026-03-10T10:20:19.112 INFO:tasks.workunit.client.1.vm05.stdout:5/914: fsync da/db/d28/fec 0 2026-03-10T10:20:19.112 INFO:tasks.workunit.client.1.vm05.stdout:6/888: dwrite f2 [0,4194304] 0 2026-03-10T10:20:19.112 INFO:tasks.workunit.client.0.vm02.stdout:6/861: sync 2026-03-10T10:20:19.119 INFO:tasks.workunit.client.1.vm05.stdout:5/915: stat da/d9a/lbf 0 2026-03-10T10:20:19.129 INFO:tasks.workunit.client.1.vm05.stdout:3/937: creat dd/d15/d1f/d116/f14f x:0 0 0 2026-03-10T10:20:19.130 INFO:tasks.workunit.client.0.vm02.stdout:0/940: write d9/d34/d3d/d65/d89/fd9 [311091,27307] 0 2026-03-10T10:20:19.130 INFO:tasks.workunit.client.1.vm05.stdout:3/938: chown dd/d15/d24/d8e/dac/f13a 89518684 1 2026-03-10T10:20:19.135 INFO:tasks.workunit.client.0.vm02.stdout:9/866: dwrite da/d3c/d4c/d2c/fb8 [0,4194304] 0 2026-03-10T10:20:19.142 INFO:tasks.workunit.client.1.vm05.stdout:8/820: write d7/d2f/d57/de3/fe9 [371320,71847] 0 2026-03-10T10:20:19.146 INFO:tasks.workunit.client.0.vm02.stdout:7/905: dwrite d1/dc/d55/f85 [0,4194304] 0 2026-03-10T10:20:19.154 INFO:tasks.workunit.client.1.vm05.stdout:2/852: dwrite db/d2d/f5d [4194304,4194304] 0 2026-03-10T10:20:19.156 INFO:tasks.workunit.client.0.vm02.stdout:8/882: dwrite d1/d1c/d43/d6a/da8/d56/f85 [0,4194304] 0 2026-03-10T10:20:19.162 INFO:tasks.workunit.client.0.vm02.stdout:3/897: dwrite d1/d20/f51 [0,4194304] 0 2026-03-10T10:20:19.173 INFO:tasks.workunit.client.1.vm05.stdout:4/771: truncate d1/d31/f13 3275960 0 2026-03-10T10:20:19.175 INFO:tasks.workunit.client.1.vm05.stdout:1/968: write d4/df/d1c/d53/f65 [2167150,83325] 0 2026-03-10T10:20:19.183 INFO:tasks.workunit.client.0.vm02.stdout:8/883: rename d1/d1c/d43/d5b/d88/dac/d83/d9f/cc4 to d1/d1c/d43/d6a/c10d 0 2026-03-10T10:20:19.189 INFO:tasks.workunit.client.0.vm02.stdout:2/912: dwrite d0/fe2 [4194304,4194304] 0 
2026-03-10T10:20:19.190 INFO:tasks.workunit.client.0.vm02.stdout:2/913: readlink d0/d71/d108/d65/dc4/le9 0 2026-03-10T10:20:19.192 INFO:tasks.workunit.client.0.vm02.stdout:3/898: chown d1/d8/d21/d73/d78/d84/dfa/l11a 969 1 2026-03-10T10:20:19.195 INFO:tasks.workunit.client.0.vm02.stdout:6/862: mkdir d0/d8/d29/d2f/d50/d121 0 2026-03-10T10:20:19.202 INFO:tasks.workunit.client.0.vm02.stdout:0/941: creat d9/d34/d3d/d65/d89/dd3/da7/db7/d127/f135 x:0 0 0 2026-03-10T10:20:19.202 INFO:tasks.workunit.client.0.vm02.stdout:0/942: chown d9/d34/d3d/d65/d89/dd3/da7/db9/fba 15877 1 2026-03-10T10:20:19.202 INFO:tasks.workunit.client.0.vm02.stdout:8/884: creat d1/d1c/d43/d6a/da8/d56/f10e x:0 0 0 2026-03-10T10:20:19.202 INFO:tasks.workunit.client.0.vm02.stdout:8/885: dread - d1/d1c/d43/d5b/dab/d102/f10a zero size 2026-03-10T10:20:19.202 INFO:tasks.workunit.client.0.vm02.stdout:8/886: chown d1/d2/lf7 107062983 1 2026-03-10T10:20:19.205 INFO:tasks.workunit.client.0.vm02.stdout:3/899: creat d1/d8/d44/deb/f12a x:0 0 0 2026-03-10T10:20:19.207 INFO:tasks.workunit.client.0.vm02.stdout:0/943: unlink d9/d18/d1a/d22/d24/d80/f90 0 2026-03-10T10:20:19.208 INFO:tasks.workunit.client.0.vm02.stdout:8/887: mknod d1/d1c/d24/dad/dbe/dda/c10f 0 2026-03-10T10:20:19.213 INFO:tasks.workunit.client.0.vm02.stdout:0/944: creat d9/d34/d3d/d65/d89/df3/f136 x:0 0 0 2026-03-10T10:20:19.216 INFO:tasks.workunit.client.0.vm02.stdout:0/945: dwrite d9/d18/d1a/d22/d24/d8e/fec [0,4194304] 0 2026-03-10T10:20:19.217 INFO:tasks.workunit.client.0.vm02.stdout:8/888: rename d1/d1c/d43/d5b/dab/cc5 to d1/d1c/d43/d6a/d7c/c110 0 2026-03-10T10:20:19.220 INFO:tasks.workunit.client.0.vm02.stdout:8/889: read d1/d1c/d43/d5b/d88/dac/d83/d9f/fdb [3688151,43769] 0 2026-03-10T10:20:19.231 INFO:tasks.workunit.client.1.vm05.stdout:9/791: mknod d0/d1/d13/de/ddf/c10f 0 2026-03-10T10:20:19.242 INFO:tasks.workunit.client.1.vm05.stdout:6/889: truncate dd/d36/d3f/d12/d44/d2a/f84 364017 0 2026-03-10T10:20:19.243 
INFO:tasks.workunit.client.1.vm05.stdout:6/890: chown dd/d36/d3f/d12/d44/lc1 58066 1 2026-03-10T10:20:19.246 INFO:tasks.workunit.client.1.vm05.stdout:8/821: symlink d7/d14/d62/d90/l106 0 2026-03-10T10:20:19.246 INFO:tasks.workunit.client.0.vm02.stdout:8/890: getdents d1/d1c/d43/d5b/d88/dac/d83 0 2026-03-10T10:20:19.248 INFO:tasks.workunit.client.1.vm05.stdout:2/853: mkdir db/d28/d4f/d59/da4/d114 0 2026-03-10T10:20:19.248 INFO:tasks.workunit.client.1.vm05.stdout:4/772: mknod d1/dfd/c104 0 2026-03-10T10:20:19.249 INFO:tasks.workunit.client.0.vm02.stdout:8/891: symlink d1/d1c/d24/dad/l111 0 2026-03-10T10:20:19.251 INFO:tasks.workunit.client.0.vm02.stdout:8/892: chown d1/d1c/f20 95163 1 2026-03-10T10:20:19.256 INFO:tasks.workunit.client.1.vm05.stdout:0/916: symlink d1/d2/d9/d31/d13/da2/l132 0 2026-03-10T10:20:19.268 INFO:tasks.workunit.client.0.vm02.stdout:8/893: symlink d1/d1c/d106/l112 0 2026-03-10T10:20:19.268 INFO:tasks.workunit.client.1.vm05.stdout:4/773: chown d1/d64/da9 3591082 1 2026-03-10T10:20:19.268 INFO:tasks.workunit.client.1.vm05.stdout:6/891: creat dd/d36/d3f/d12/d44/d30/d4a/df4/f120 x:0 0 0 2026-03-10T10:20:19.268 INFO:tasks.workunit.client.1.vm05.stdout:1/969: link d4/d37/l9a d4/df/d76/d101/l11c 0 2026-03-10T10:20:19.268 INFO:tasks.workunit.client.1.vm05.stdout:0/917: symlink d1/d2/d9/d31/d13/d15/d114/l133 0 2026-03-10T10:20:19.268 INFO:tasks.workunit.client.1.vm05.stdout:6/892: rename dd to dd/d36/d3f/dbd/d121 22 2026-03-10T10:20:19.268 INFO:tasks.workunit.client.1.vm05.stdout:4/774: mkdir d1/d31/dc/d40/d45/ded/dfb/d105 0 2026-03-10T10:20:19.268 INFO:tasks.workunit.client.1.vm05.stdout:0/918: mknod d1/d2/d9/d31/d13/da2/c134 0 2026-03-10T10:20:19.268 INFO:tasks.workunit.client.1.vm05.stdout:9/792: link d0/d70/lb4 d0/d1/d13/de/d93/l110 0 2026-03-10T10:20:19.271 INFO:tasks.workunit.client.1.vm05.stdout:6/893: dwrite dd/d36/d3f/dbd/f10b [0,4194304] 0 2026-03-10T10:20:19.271 INFO:tasks.workunit.client.1.vm05.stdout:1/970: dwrite d4/d39/f7b [0,4194304] 0 
2026-03-10T10:20:19.288 INFO:tasks.workunit.client.1.vm05.stdout:4/775: rename d1/d3/d65/df6 to d1/d31/d72/d106 0 2026-03-10T10:20:19.294 INFO:tasks.workunit.client.1.vm05.stdout:9/793: unlink d0/dc4/f7e 0 2026-03-10T10:20:19.295 INFO:tasks.workunit.client.1.vm05.stdout:7/954: write d5/d26/f39 [679662,81342] 0 2026-03-10T10:20:19.297 INFO:tasks.workunit.client.1.vm05.stdout:9/794: dread d0/d1/fa7 [0,4194304] 0 2026-03-10T10:20:19.297 INFO:tasks.workunit.client.0.vm02.stdout:1/933: dwrite d4/da/d27/d38/d3c/fa7 [0,4194304] 0 2026-03-10T10:20:19.298 INFO:tasks.workunit.client.0.vm02.stdout:1/934: readlink d4/da/d27/d38/d3c/l5f 0 2026-03-10T10:20:19.308 INFO:tasks.workunit.client.1.vm05.stdout:0/919: unlink d1/d2/d9/d31/d13/d15/d4e/d8a/fd8 0 2026-03-10T10:20:19.308 INFO:tasks.workunit.client.1.vm05.stdout:0/920: chown d1/d2/d9/d31/d54/l12f 781580 1 2026-03-10T10:20:19.308 INFO:tasks.workunit.client.1.vm05.stdout:0/921: stat d1/d2/d9/d50/d9a/da0/ff7 0 2026-03-10T10:20:19.308 INFO:tasks.workunit.client.1.vm05.stdout:1/971: creat d4/df/d1c/d53/d66/f11d x:0 0 0 2026-03-10T10:20:19.308 INFO:tasks.workunit.client.1.vm05.stdout:4/776: creat d1/d31/d72/f107 x:0 0 0 2026-03-10T10:20:19.309 INFO:tasks.workunit.client.1.vm05.stdout:6/894: dread dd/d36/d3f/dbd/f114 [0,4194304] 0 2026-03-10T10:20:19.311 INFO:tasks.workunit.client.1.vm05.stdout:1/972: dread d4/df/d1c/d53/daa/fdd [0,4194304] 0 2026-03-10T10:20:19.312 INFO:tasks.workunit.client.1.vm05.stdout:7/955: symlink d5/d1d/d20/d91/da7/l124 0 2026-03-10T10:20:19.315 INFO:tasks.workunit.client.0.vm02.stdout:1/935: link d4/d4a/fd2 d4/da/d1a/d5b/d93/f126 0 2026-03-10T10:20:19.319 INFO:tasks.workunit.client.1.vm05.stdout:1/973: fdatasync d4/d39/f54 0 2026-03-10T10:20:19.320 INFO:tasks.workunit.client.1.vm05.stdout:4/777: dwrite d1/d31/dc/fe1 [0,4194304] 0 2026-03-10T10:20:19.323 INFO:tasks.workunit.client.1.vm05.stdout:9/795: symlink d0/df/df1/l111 0 2026-03-10T10:20:19.325 INFO:tasks.workunit.client.0.vm02.stdout:8/894: read 
d1/d1c/d43/f7a [1463431,50688] 0 2026-03-10T10:20:19.326 INFO:tasks.workunit.client.0.vm02.stdout:8/895: stat d1/d2/fef 0 2026-03-10T10:20:19.331 INFO:tasks.workunit.client.0.vm02.stdout:1/936: creat d4/da/d1a/f127 x:0 0 0 2026-03-10T10:20:19.331 INFO:tasks.workunit.client.0.vm02.stdout:9/867: write da/d3c/d4c/d2c/d34/f81 [1879763,42546] 0 2026-03-10T10:20:19.339 INFO:tasks.workunit.client.0.vm02.stdout:7/906: write d1/d1b/d8f/f66 [3735487,114011] 0 2026-03-10T10:20:19.342 INFO:tasks.workunit.client.0.vm02.stdout:2/914: dwrite d0/d10/f6a [0,4194304] 0 2026-03-10T10:20:19.344 INFO:tasks.workunit.client.0.vm02.stdout:6/863: write d0/d8/d29/d52/de8/db2/dbb/ff7 [1452917,81537] 0 2026-03-10T10:20:19.347 INFO:tasks.workunit.client.0.vm02.stdout:3/900: write d1/d8/fd6 [701358,23102] 0 2026-03-10T10:20:19.347 INFO:tasks.workunit.client.0.vm02.stdout:0/946: write d9/d18/d1a/d22/d24/f26 [3698533,75333] 0 2026-03-10T10:20:19.353 INFO:tasks.workunit.client.1.vm05.stdout:4/778: read d1/d64/da9/f101 [5088114,86555] 0 2026-03-10T10:20:19.353 INFO:tasks.workunit.client.1.vm05.stdout:1/974: truncate d4/d39/d3e/f7d 2351080 0 2026-03-10T10:20:19.354 INFO:tasks.workunit.client.1.vm05.stdout:5/916: dwrite da/d9a/dc7/db4/fe2 [0,4194304] 0 2026-03-10T10:20:19.354 INFO:tasks.workunit.client.0.vm02.stdout:2/915: fsync d0/d71/d108/d65/f9e 0 2026-03-10T10:20:19.356 INFO:tasks.workunit.client.1.vm05.stdout:4/779: chown d1/d31/d76/fd7 77 1 2026-03-10T10:20:19.362 INFO:tasks.workunit.client.0.vm02.stdout:3/901: truncate d1/d20/fa4 468660 0 2026-03-10T10:20:19.362 INFO:tasks.workunit.client.0.vm02.stdout:3/902: stat d1/d20/d52/f92 0 2026-03-10T10:20:19.362 INFO:tasks.workunit.client.1.vm05.stdout:3/939: dwrite dd/d15/f10b [0,4194304] 0 2026-03-10T10:20:19.362 INFO:tasks.workunit.client.1.vm05.stdout:6/895: rename dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/ce2 to dd/d36/d3f/dbd/c122 0 2026-03-10T10:20:19.362 INFO:tasks.workunit.client.1.vm05.stdout:3/940: chown dd/f41 23720 1 2026-03-10T10:20:19.363 
INFO:tasks.workunit.client.1.vm05.stdout:1/975: fdatasync d4/fda 0 2026-03-10T10:20:19.368 INFO:tasks.workunit.client.1.vm05.stdout:3/941: read dd/d15/d24/d8e/dac/fd7 [170427,26557] 0 2026-03-10T10:20:19.371 INFO:tasks.workunit.client.1.vm05.stdout:9/796: rename d0/d1/d13/de/l9c to d0/df/d74/d8c/d8f/ddd/de6/l112 0 2026-03-10T10:20:19.373 INFO:tasks.workunit.client.1.vm05.stdout:6/896: rmdir dd/d36/d3f/d12/d44/d2a/d3d/d48/dc6 39 2026-03-10T10:20:19.377 INFO:tasks.workunit.client.0.vm02.stdout:3/903: mkdir d1/d8/d86/da2/d12b 0 2026-03-10T10:20:19.393 INFO:tasks.workunit.client.1.vm05.stdout:4/780: mkdir d1/d64/da9/dae/dfc/d108 0 2026-03-10T10:20:19.393 INFO:tasks.workunit.client.1.vm05.stdout:3/942: creat dd/d15/d24/d2c/d6d/da7/f150 x:0 0 0 2026-03-10T10:20:19.393 INFO:tasks.workunit.client.0.vm02.stdout:1/937: rename d4/da/d27/d38/d3c/f8f to d4/da/d1a/d5b/f128 0 2026-03-10T10:20:19.393 INFO:tasks.workunit.client.0.vm02.stdout:0/947: creat d9/d18/d1a/f137 x:0 0 0 2026-03-10T10:20:19.393 INFO:tasks.workunit.client.0.vm02.stdout:3/904: symlink d1/d6/d8e/l12c 0 2026-03-10T10:20:19.393 INFO:tasks.workunit.client.0.vm02.stdout:8/896: sync 2026-03-10T10:20:19.393 INFO:tasks.workunit.client.0.vm02.stdout:7/907: sync 2026-03-10T10:20:19.393 INFO:tasks.workunit.client.1.vm05.stdout:5/917: rename da/db/dee/d109/f129 to da/d9a/d120/f139 0 2026-03-10T10:20:19.401 INFO:tasks.workunit.client.0.vm02.stdout:1/938: symlink d4/da/d27/d117/l129 0 2026-03-10T10:20:19.405 INFO:tasks.workunit.client.1.vm05.stdout:1/976: link d4/dd/f60 d4/d3d/ddc/f11e 0 2026-03-10T10:20:19.408 INFO:tasks.workunit.client.1.vm05.stdout:4/781: mkdir d1/d31/d76/d109 0 2026-03-10T10:20:19.409 INFO:tasks.workunit.client.1.vm05.stdout:3/943: symlink dd/d15/d1f/d116/l151 0 2026-03-10T10:20:19.410 INFO:tasks.workunit.client.1.vm05.stdout:7/956: dread d5/d26/f39 [0,4194304] 0 2026-03-10T10:20:19.417 INFO:tasks.workunit.client.0.vm02.stdout:8/897: symlink d1/d1c/d24/d71/l113 0 2026-03-10T10:20:19.418 
INFO:tasks.workunit.client.0.vm02.stdout:9/868: link da/d3c/d4c/d2c/d96/cba da/d3c/c11c 0 2026-03-10T10:20:19.419 INFO:tasks.workunit.client.0.vm02.stdout:9/869: read da/d3c/d4c/d2c/d34/fed [1164181,21492] 0 2026-03-10T10:20:19.424 INFO:tasks.workunit.client.1.vm05.stdout:6/897: dread dd/d36/d3f/d12/d44/d2a/d3d/fa2 [0,4194304] 0 2026-03-10T10:20:19.425 INFO:tasks.workunit.client.1.vm05.stdout:9/797: symlink d0/d1/d13/d26/l113 0 2026-03-10T10:20:19.428 INFO:tasks.workunit.client.1.vm05.stdout:2/854: dwrite db/d12/d74/fef [0,4194304] 0 2026-03-10T10:20:19.432 INFO:tasks.workunit.client.0.vm02.stdout:8/898: rmdir d1/dc7/dd2 39 2026-03-10T10:20:19.432 INFO:tasks.workunit.client.1.vm05.stdout:8/822: dwrite d7/d2f/fd6 [4194304,4194304] 0 2026-03-10T10:20:19.432 INFO:tasks.workunit.client.1.vm05.stdout:4/782: unlink d1/d64/da9/f101 0 2026-03-10T10:20:19.432 INFO:tasks.workunit.client.1.vm05.stdout:1/977: creat d4/d39/d3e/f11f x:0 0 0 2026-03-10T10:20:19.432 INFO:tasks.workunit.client.1.vm05.stdout:3/944: creat dd/d20/d94/dba/f152 x:0 0 0 2026-03-10T10:20:19.440 INFO:tasks.workunit.client.0.vm02.stdout:1/939: mkdir d4/da/d1a/d47/d88/d12a 0 2026-03-10T10:20:19.442 INFO:tasks.workunit.client.1.vm05.stdout:5/918: fdatasync da/f41 0 2026-03-10T10:20:19.446 INFO:tasks.workunit.client.0.vm02.stdout:3/905: rename d1/d20/db2/lb9 to d1/l12d 0 2026-03-10T10:20:19.448 INFO:tasks.workunit.client.0.vm02.stdout:3/906: dwrite d1/d20/d52/f76 [0,4194304] 0 2026-03-10T10:20:19.449 INFO:tasks.workunit.client.0.vm02.stdout:3/907: chown d1/d8/l126 55 1 2026-03-10T10:20:19.452 INFO:tasks.workunit.client.0.vm02.stdout:0/948: dread d9/d34/d3d/d65/d89/fcd [0,4194304] 0 2026-03-10T10:20:19.454 INFO:tasks.workunit.client.0.vm02.stdout:3/908: dwrite d1/d8/f3d [0,4194304] 0 2026-03-10T10:20:19.457 INFO:tasks.workunit.client.0.vm02.stdout:3/909: write d1/d8/d44/deb/f12a [112534,126780] 0 2026-03-10T10:20:19.458 INFO:tasks.workunit.client.0.vm02.stdout:3/910: truncate d1/fe 810207 0 
2026-03-10T10:20:19.463 INFO:tasks.workunit.client.1.vm05.stdout:6/898: mknod dd/d36/d3f/d12/d44/d63/c123 0 2026-03-10T10:20:19.466 INFO:tasks.workunit.client.0.vm02.stdout:1/940: chown d4/da/d1a/d11d/d53/c59 3526028 1 2026-03-10T10:20:19.470 INFO:tasks.workunit.client.0.vm02.stdout:9/870: rename da/d3c/d4c/d2c/d34/f81 to da/d3c/d4c/d38/d4a/d70/d10a/f11d 0 2026-03-10T10:20:19.471 INFO:tasks.workunit.client.1.vm05.stdout:2/855: creat db/d2d/dc6/f115 x:0 0 0 2026-03-10T10:20:19.471 INFO:tasks.workunit.client.0.vm02.stdout:9/871: truncate da/d3c/d4c/d38/fb2 4467315 0 2026-03-10T10:20:19.475 INFO:tasks.workunit.client.1.vm05.stdout:4/783: symlink d1/dfd/l10a 0 2026-03-10T10:20:19.479 INFO:tasks.workunit.client.0.vm02.stdout:0/949: dread - d9/d18/d1a/d22/d24/f106 zero size 2026-03-10T10:20:19.489 INFO:tasks.workunit.client.1.vm05.stdout:9/798: truncate d0/d70/fb6 4665209 0 2026-03-10T10:20:19.496 INFO:tasks.workunit.client.0.vm02.stdout:0/950: dread d9/d34/d3d/f69 [0,4194304] 0 2026-03-10T10:20:19.496 INFO:tasks.workunit.client.0.vm02.stdout:0/951: chown d9/d18/d1a/d46/fef 431112 1 2026-03-10T10:20:19.499 INFO:tasks.workunit.client.1.vm05.stdout:4/784: creat d1/d64/f10b x:0 0 0 2026-03-10T10:20:19.500 INFO:tasks.workunit.client.0.vm02.stdout:9/872: fsync da/f65 0 2026-03-10T10:20:19.501 INFO:tasks.workunit.client.1.vm05.stdout:4/785: read d1/d31/d76/faf [285431,128284] 0 2026-03-10T10:20:19.502 INFO:tasks.workunit.client.1.vm05.stdout:4/786: chown d1/d3/lb4 0 1 2026-03-10T10:20:19.504 INFO:tasks.workunit.client.1.vm05.stdout:6/899: rmdir dd/d36/d3f/dbd/dd5 39 2026-03-10T10:20:19.513 INFO:tasks.workunit.client.1.vm05.stdout:2/856: rename db/d61/dfc/fd2 to db/d28/d4f/da3/f116 0 2026-03-10T10:20:19.514 INFO:tasks.workunit.client.1.vm05.stdout:2/857: read - db/d61/f113 zero size 2026-03-10T10:20:19.514 INFO:tasks.workunit.client.0.vm02.stdout:1/941: creat d4/da/d1a/d5b/f12b x:0 0 0 2026-03-10T10:20:19.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:19 vm02.local 
ceph-mon[50200]: pgmap v8: 65 pgs: 65 active+clean; 3.6 GiB data, 12 GiB used, 108 GiB / 120 GiB avail; 17 MiB/s rd, 38 MiB/s wr, 122 op/s 2026-03-10T10:20:19.570 INFO:tasks.workunit.client.0.vm02.stdout:9/873: dread da/f14 [0,4194304] 0 2026-03-10T10:20:19.570 INFO:tasks.workunit.client.0.vm02.stdout:9/874: readlink da/d3c/d4c/d56/l7d 0 2026-03-10T10:20:19.589 INFO:tasks.workunit.client.1.vm05.stdout:0/922: dwrite d1/d2/d9/f32 [0,4194304] 0 2026-03-10T10:20:19.592 INFO:tasks.workunit.client.0.vm02.stdout:6/864: dwrite d0/d87/fe2 [0,4194304] 0 2026-03-10T10:20:19.605 INFO:tasks.workunit.client.0.vm02.stdout:3/911: rename d1/d58/dc9/c10c to d1/d58/dc9/c12e 0 2026-03-10T10:20:19.608 INFO:tasks.workunit.client.0.vm02.stdout:2/916: dwrite d0/d10/f1f [0,4194304] 0 2026-03-10T10:20:19.611 INFO:tasks.workunit.client.1.vm05.stdout:6/900: creat dd/d36/d3f/d12/d44/d2a/d7f/f124 x:0 0 0 2026-03-10T10:20:19.624 INFO:tasks.workunit.client.0.vm02.stdout:1/942: mknod d4/da/d1a/d47/dbc/c12c 0 2026-03-10T10:20:19.624 INFO:tasks.workunit.client.1.vm05.stdout:2/858: chown db/d1c/d40/d62/d85 1011485698 1 2026-03-10T10:20:19.629 INFO:tasks.workunit.client.0.vm02.stdout:7/908: write d1/dc/f26 [2661310,110542] 0 2026-03-10T10:20:19.632 INFO:tasks.workunit.client.1.vm05.stdout:5/919: getdents da/d96/d117 0 2026-03-10T10:20:19.636 INFO:tasks.workunit.client.1.vm05.stdout:3/945: rmdir dd/d20 39 2026-03-10T10:20:19.641 INFO:tasks.workunit.client.0.vm02.stdout:9/875: truncate da/d3c/d4c/fe0 145975 0 2026-03-10T10:20:19.643 INFO:tasks.workunit.client.1.vm05.stdout:2/859: fdatasync db/d1c/d40/d80/fd3 0 2026-03-10T10:20:19.644 INFO:tasks.workunit.client.1.vm05.stdout:7/957: dwrite d5/d1d/d20/fa2 [0,4194304] 0 2026-03-10T10:20:19.644 INFO:tasks.workunit.client.1.vm05.stdout:2/860: chown db/d28/d4f/d59/d94 2060950315 1 2026-03-10T10:20:19.662 INFO:tasks.workunit.client.1.vm05.stdout:5/920: rmdir da/d9a/dc7/db4/dfe 39 2026-03-10T10:20:19.662 INFO:tasks.workunit.client.0.vm02.stdout:6/865: mkdir 
d0/d122 0 2026-03-10T10:20:19.665 INFO:tasks.workunit.client.0.vm02.stdout:3/912: mkdir d1/d8/d86/db1/d102/d12f 0 2026-03-10T10:20:19.673 INFO:tasks.workunit.client.1.vm05.stdout:8/823: write d7/d14/d15/f2e [7086808,109291] 0 2026-03-10T10:20:19.673 INFO:tasks.workunit.client.1.vm05.stdout:2/861: fdatasync db/d28/d4f/d8b/fa8 0 2026-03-10T10:20:19.675 INFO:tasks.workunit.client.0.vm02.stdout:3/913: truncate d1/d8/d44/ff9 4615954 0 2026-03-10T10:20:19.681 INFO:tasks.workunit.client.1.vm05.stdout:6/901: creat dd/d36/d3f/dbd/f125 x:0 0 0 2026-03-10T10:20:19.681 INFO:tasks.workunit.client.1.vm05.stdout:1/978: dwrite d4/d20/dbe/f107 [0,4194304] 0 2026-03-10T10:20:19.681 INFO:tasks.workunit.client.0.vm02.stdout:8/899: write d1/f91 [3293089,41218] 0 2026-03-10T10:20:19.683 INFO:tasks.workunit.client.0.vm02.stdout:7/909: dread d1/dc/ff [0,4194304] 0 2026-03-10T10:20:19.684 INFO:tasks.workunit.client.0.vm02.stdout:0/952: dwrite d9/d34/d3d/d65/d89/dd3/d9c/f101 [0,4194304] 0 2026-03-10T10:20:19.687 INFO:tasks.workunit.client.1.vm05.stdout:9/799: dwrite d0/df/d11/f2c [4194304,4194304] 0 2026-03-10T10:20:19.693 INFO:tasks.workunit.client.0.vm02.stdout:3/914: rename d1/d6/f3a to d1/d58/d104/f130 0 2026-03-10T10:20:19.696 INFO:tasks.workunit.client.0.vm02.stdout:8/900: symlink d1/d1c/d24/d71/l114 0 2026-03-10T10:20:19.701 INFO:tasks.workunit.client.1.vm05.stdout:3/946: sync 2026-03-10T10:20:19.701 INFO:tasks.workunit.client.0.vm02.stdout:7/910: creat d1/dc/d55/d9c/dfd/f11f x:0 0 0 2026-03-10T10:20:19.704 INFO:tasks.workunit.client.0.vm02.stdout:0/953: unlink d9/d34/d3d/f112 0 2026-03-10T10:20:19.708 INFO:tasks.workunit.client.1.vm05.stdout:5/921: dread da/db/d28/d6e/f89 [0,4194304] 0 2026-03-10T10:20:19.710 INFO:tasks.workunit.client.1.vm05.stdout:8/824: creat d7/d14/d15/da7/def/f107 x:0 0 0 2026-03-10T10:20:19.710 INFO:tasks.workunit.client.1.vm05.stdout:7/958: link d5/d1d/d29/c63 d5/d1d/d29/d60/de8/c125 0 2026-03-10T10:20:19.713 INFO:tasks.workunit.client.0.vm02.stdout:0/954: 
rename d9/d18/d1a/d46/l10d to d9/d18/d1a/d22/db4/l138 0 2026-03-10T10:20:19.724 INFO:tasks.workunit.client.0.vm02.stdout:1/943: dread d4/da/d1a/f1c [0,4194304] 0 2026-03-10T10:20:19.724 INFO:tasks.workunit.client.0.vm02.stdout:1/944: dread - d4/da/d1a/d5b/f12b zero size 2026-03-10T10:20:19.729 INFO:tasks.workunit.client.1.vm05.stdout:5/922: read da/db/d26/d5c/f92 [173598,67734] 0 2026-03-10T10:20:19.738 INFO:tasks.workunit.client.0.vm02.stdout:3/915: link d1/d6/d8e/l93 d1/d8/d44/deb/l131 0 2026-03-10T10:20:19.755 INFO:tasks.workunit.client.0.vm02.stdout:3/916: dread d1/d20/d52/f6c [0,4194304] 0 2026-03-10T10:20:19.756 INFO:tasks.workunit.client.1.vm05.stdout:8/825: truncate d7/d14/d15/ff1 565055 0 2026-03-10T10:20:19.773 INFO:tasks.workunit.client.0.vm02.stdout:6/866: symlink d0/d8/d29/d2f/l123 0 2026-03-10T10:20:19.786 INFO:tasks.workunit.client.1.vm05.stdout:3/947: creat dd/d20/d12c/f153 x:0 0 0 2026-03-10T10:20:19.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:19 vm05.local ceph-mon[59051]: pgmap v8: 65 pgs: 65 active+clean; 3.6 GiB data, 12 GiB used, 108 GiB / 120 GiB avail; 17 MiB/s rd, 38 MiB/s wr, 122 op/s 2026-03-10T10:20:19.790 INFO:tasks.workunit.client.1.vm05.stdout:8/826: dwrite d7/d14/f5b [0,4194304] 0 2026-03-10T10:20:19.826 INFO:tasks.workunit.client.1.vm05.stdout:8/827: mknod d7/d14/d15/da7/def/dfe/c108 0 2026-03-10T10:20:19.830 INFO:tasks.workunit.client.0.vm02.stdout:1/945: symlink d4/da/d1a/d47/d88/d12a/l12d 0 2026-03-10T10:20:19.830 INFO:tasks.workunit.client.0.vm02.stdout:1/946: dread - d4/da/d1a/f122 zero size 2026-03-10T10:20:19.833 INFO:tasks.workunit.client.0.vm02.stdout:1/947: dwrite d4/ff [0,4194304] 0 2026-03-10T10:20:19.844 INFO:tasks.workunit.client.1.vm05.stdout:8/828: creat d7/f109 x:0 0 0 2026-03-10T10:20:19.851 INFO:tasks.workunit.client.0.vm02.stdout:6/867: mkdir d0/d8/d9/d7a/dc0/d124 0 2026-03-10T10:20:19.855 INFO:tasks.workunit.client.0.vm02.stdout:1/948: rmdir d4/da/d1a/d47/dbc/dcb 39 2026-03-10T10:20:19.858 
INFO:tasks.workunit.client.0.vm02.stdout:3/917: truncate d1/d6/d8e/fc7 944127 0 2026-03-10T10:20:19.861 INFO:tasks.workunit.client.1.vm05.stdout:8/829: symlink d7/dd5/l10a 0 2026-03-10T10:20:19.878 INFO:tasks.workunit.client.0.vm02.stdout:6/868: mknod d0/c125 0 2026-03-10T10:20:19.886 INFO:tasks.workunit.client.1.vm05.stdout:4/787: dwrite d1/fc7 [0,4194304] 0 2026-03-10T10:20:19.898 INFO:tasks.workunit.client.1.vm05.stdout:8/830: mkdir d7/d14/d10b 0 2026-03-10T10:20:19.909 INFO:tasks.workunit.client.1.vm05.stdout:4/788: mknod d1/d64/da9/dae/dfc/d108/c10c 0 2026-03-10T10:20:19.909 INFO:tasks.workunit.client.0.vm02.stdout:1/949: truncate d4/da/d1a/d11d/d53/f74 6501858 0 2026-03-10T10:20:19.909 INFO:tasks.workunit.client.0.vm02.stdout:1/950: rename d4 to d4/da/d1a/d47/d78/d12e 22 2026-03-10T10:20:19.910 INFO:tasks.workunit.client.0.vm02.stdout:6/869: creat d0/d8/d29/dce/f126 x:0 0 0 2026-03-10T10:20:19.910 INFO:tasks.workunit.client.1.vm05.stdout:8/831: dread d7/d14/d24/d3f/fb3 [0,4194304] 0 2026-03-10T10:20:19.913 INFO:tasks.workunit.client.1.vm05.stdout:8/832: readlink d7/d14/d15/d3b/da0/lc6 0 2026-03-10T10:20:19.918 INFO:tasks.workunit.client.0.vm02.stdout:6/870: unlink d0/d8/d29/d94/c118 0 2026-03-10T10:20:19.926 INFO:tasks.workunit.client.1.vm05.stdout:4/789: symlink d1/d31/l10d 0 2026-03-10T10:20:19.927 INFO:tasks.workunit.client.1.vm05.stdout:4/790: write d1/d31/dc/d40/d45/ded/ff5 [831975,35499] 0 2026-03-10T10:20:19.927 INFO:tasks.workunit.client.0.vm02.stdout:6/871: rename d0/d8/d29/d6d/d96/f97 to d0/d8/d29/dce/f127 0 2026-03-10T10:20:19.937 INFO:tasks.workunit.client.0.vm02.stdout:2/917: write d0/f70 [1834834,59596] 0 2026-03-10T10:20:19.941 INFO:tasks.workunit.client.0.vm02.stdout:9/876: dwrite da/de5/ff3 [0,4194304] 0 2026-03-10T10:20:19.960 INFO:tasks.workunit.client.1.vm05.stdout:4/791: symlink d1/d31/dc/d40/d45/l10e 0 2026-03-10T10:20:19.961 INFO:tasks.workunit.client.1.vm05.stdout:0/923: dread d1/d2/d9/d31/d12/d41/fa9 [0,4194304] 0 
2026-03-10T10:20:19.962 INFO:tasks.workunit.client.1.vm05.stdout:8/833: mkdir d7/d14/d62/d90/dac/df4/d10c 0 2026-03-10T10:20:19.964 INFO:tasks.workunit.client.0.vm02.stdout:9/877: dread - da/d3c/d4c/df6/ffb zero size 2026-03-10T10:20:19.966 INFO:tasks.workunit.client.1.vm05.stdout:4/792: rmdir d1/d64/da9 39 2026-03-10T10:20:19.989 INFO:tasks.workunit.client.0.vm02.stdout:9/878: mkdir da/d3c/d4c/d38/d4a/d70/d11e 0 2026-03-10T10:20:19.989 INFO:tasks.workunit.client.0.vm02.stdout:7/911: dwrite d1/d1b/d49/ff9 [0,4194304] 0 2026-03-10T10:20:19.989 INFO:tasks.workunit.client.0.vm02.stdout:7/912: getdents d1/dc/d55/d9c 0 2026-03-10T10:20:19.989 INFO:tasks.workunit.client.0.vm02.stdout:7/913: truncate d1/f17 4479936 0 2026-03-10T10:20:19.989 INFO:tasks.workunit.client.0.vm02.stdout:7/914: readlink d1/d1b/d8f/dad/l47 0 2026-03-10T10:20:19.990 INFO:tasks.workunit.client.0.vm02.stdout:7/915: mkdir d1/d1b/d8f/d67/d120 0 2026-03-10T10:20:19.995 INFO:tasks.workunit.client.1.vm05.stdout:1/979: truncate d4/d39/f7b 2008052 0 2026-03-10T10:20:20.013 INFO:tasks.workunit.client.1.vm05.stdout:1/980: creat d4/df/d1c/d53/daa/dfc/f120 x:0 0 0 2026-03-10T10:20:20.013 INFO:tasks.workunit.client.0.vm02.stdout:8/901: dwrite d1/d1c/d23/d25/f5d [0,4194304] 0 2026-03-10T10:20:20.013 INFO:tasks.workunit.client.0.vm02.stdout:7/916: dwrite d1/d1b/f10c [0,4194304] 0 2026-03-10T10:20:20.013 INFO:tasks.workunit.client.0.vm02.stdout:8/902: read d1/d1c/d43/d5b/fbc [2224449,59148] 0 2026-03-10T10:20:20.013 INFO:tasks.workunit.client.0.vm02.stdout:0/955: dwrite d9/d18/d1a/d22/d24/d51/ffe [0,4194304] 0 2026-03-10T10:20:20.014 INFO:tasks.workunit.client.1.vm05.stdout:0/924: sync 2026-03-10T10:20:20.031 INFO:tasks.workunit.client.0.vm02.stdout:9/879: dread da/d3c/d4c/f2b [0,4194304] 0 2026-03-10T10:20:20.036 INFO:tasks.workunit.client.1.vm05.stdout:1/981: mkdir d4/df/d1c/d53/d66/d121 0 2026-03-10T10:20:20.036 INFO:tasks.workunit.client.0.vm02.stdout:0/956: mknod d9/d34/d3d/d65/d89/dd3/da7/db7/c139 0 
2026-03-10T10:20:20.039 INFO:tasks.workunit.client.1.vm05.stdout:1/982: mkdir d4/df/d1c/d53/d66/d122 0 2026-03-10T10:20:20.039 INFO:tasks.workunit.client.1.vm05.stdout:7/959: write d5/dd/f28 [1060232,81705] 0 2026-03-10T10:20:20.042 INFO:tasks.workunit.client.0.vm02.stdout:0/957: rename d9/d34/d3d/d65/d89/dd3/f66 to d9/d18/d1a/d22/d24/d80/d49/f13a 0 2026-03-10T10:20:20.045 INFO:tasks.workunit.client.1.vm05.stdout:0/925: truncate d1/d2/d9/f98 1847851 0 2026-03-10T10:20:20.047 INFO:tasks.workunit.client.0.vm02.stdout:6/872: sync 2026-03-10T10:20:20.047 INFO:tasks.workunit.client.0.vm02.stdout:8/903: creat d1/d1c/d43/f115 x:0 0 0 2026-03-10T10:20:20.049 INFO:tasks.workunit.client.0.vm02.stdout:7/917: sync 2026-03-10T10:20:20.051 INFO:tasks.workunit.client.0.vm02.stdout:8/904: dwrite d1/d1c/d24/dad/dbe/dda/f104 [0,4194304] 0 2026-03-10T10:20:20.055 INFO:tasks.workunit.client.1.vm05.stdout:7/960: read d5/d1d/f7c [1009290,11118] 0 2026-03-10T10:20:20.056 INFO:tasks.workunit.client.1.vm05.stdout:7/961: write d5/d1d/d20/d35/fbb [2525806,38511] 0 2026-03-10T10:20:20.056 INFO:tasks.workunit.client.1.vm05.stdout:7/962: readlink d5/d1d/de3/lfe 0 2026-03-10T10:20:20.059 INFO:tasks.workunit.client.0.vm02.stdout:0/958: write d9/d34/d3d/d67/f9f [5496997,4469] 0 2026-03-10T10:20:20.062 INFO:tasks.workunit.client.1.vm05.stdout:0/926: truncate d1/d2/d9/d50/d9a/da0/f105 101729 0 2026-03-10T10:20:20.065 INFO:tasks.workunit.client.0.vm02.stdout:6/873: dwrite d0/d8/d29/d6d/d96/de4/d102/f53 [0,4194304] 0 2026-03-10T10:20:20.075 INFO:tasks.workunit.client.0.vm02.stdout:6/874: chown d0/d8/d29/d2f/d50/d121 12774 1 2026-03-10T10:20:20.075 INFO:tasks.workunit.client.0.vm02.stdout:6/875: dread - d0/d8/d9/d7a/dc0/f116 zero size 2026-03-10T10:20:20.085 INFO:tasks.workunit.client.1.vm05.stdout:0/927: chown d1/d2/d9/d31/d12/d20/lc9 3415 1 2026-03-10T10:20:20.087 INFO:tasks.workunit.client.0.vm02.stdout:6/876: creat d0/d8/d29/dce/f128 x:0 0 0 2026-03-10T10:20:20.088 
INFO:tasks.workunit.client.1.vm05.stdout:7/963: symlink d5/d1d/d20/d2d/d5d/l126 0 2026-03-10T10:20:20.090 INFO:tasks.workunit.client.1.vm05.stdout:0/928: rmdir d1/d2/d9/d31/d13/d15/d4e/df6/d112 39 2026-03-10T10:20:20.090 INFO:tasks.workunit.client.0.vm02.stdout:6/877: readlink d0/d8/d29/d52/l104 0 2026-03-10T10:20:20.094 INFO:tasks.workunit.client.0.vm02.stdout:7/918: getdents d1/dc 0 2026-03-10T10:20:20.096 INFO:tasks.workunit.client.1.vm05.stdout:3/948: write dd/d15/d24/d8e/dac/fd7 [608298,2143] 0 2026-03-10T10:20:20.101 INFO:tasks.workunit.client.0.vm02.stdout:7/919: dwrite d1/d1b/d8e/f116 [0,4194304] 0 2026-03-10T10:20:20.117 INFO:tasks.workunit.client.0.vm02.stdout:6/878: link d0/d8/d29/d6d/d96/ff5 d0/d8/d29/d2f/d50/f129 0 2026-03-10T10:20:20.117 INFO:tasks.workunit.client.0.vm02.stdout:6/879: write d0/d87/f11a [791694,52720] 0 2026-03-10T10:20:20.123 INFO:tasks.workunit.client.0.vm02.stdout:6/880: dread - d0/d8/d29/d6d/d96/ff5 zero size 2026-03-10T10:20:20.127 INFO:tasks.workunit.client.0.vm02.stdout:3/918: write d1/d20/ff2 [1048074,121966] 0 2026-03-10T10:20:20.127 INFO:tasks.workunit.client.0.vm02.stdout:7/920: getdents d1/dc/d16/d28 0 2026-03-10T10:20:20.130 INFO:tasks.workunit.client.0.vm02.stdout:7/921: dread d1/d1b/f10c [0,4194304] 0 2026-03-10T10:20:20.142 INFO:tasks.workunit.client.0.vm02.stdout:1/951: dwrite d4/da/d27/d38/fad [0,4194304] 0 2026-03-10T10:20:20.142 INFO:tasks.workunit.client.1.vm05.stdout:3/949: dread - dd/d15/d24/d74/ffb zero size 2026-03-10T10:20:20.147 INFO:tasks.workunit.client.1.vm05.stdout:7/964: symlink d5/d1d/d29/d60/de8/l127 0 2026-03-10T10:20:20.150 INFO:tasks.workunit.client.1.vm05.stdout:4/793: getdents d1/d31 0 2026-03-10T10:20:20.155 INFO:tasks.workunit.client.0.vm02.stdout:6/881: read d0/d8/d29/d6d/d96/de4/def/d6f/fa2 [3178933,31993] 0 2026-03-10T10:20:20.156 INFO:tasks.workunit.client.1.vm05.stdout:7/965: chown d5/d1d/d29/d3e/d8c/c7e 73261 1 2026-03-10T10:20:20.168 INFO:tasks.workunit.client.1.vm05.stdout:3/950: chown 
dd/d15/d24/d8e/dac/l124 5464 1 2026-03-10T10:20:20.170 INFO:tasks.workunit.client.0.vm02.stdout:1/952: creat d4/f12f x:0 0 0 2026-03-10T10:20:20.171 INFO:tasks.workunit.client.0.vm02.stdout:1/953: mknod d4/dc3/dd4/c130 0 2026-03-10T10:20:20.172 INFO:tasks.workunit.client.0.vm02.stdout:1/954: readlink d4/da/d1a/d5b/d93/de8/lf6 0 2026-03-10T10:20:20.174 INFO:tasks.workunit.client.1.vm05.stdout:7/966: mkdir d5/d1d/d29/d60/de1/d10a/d128 0 2026-03-10T10:20:20.183 INFO:tasks.workunit.client.1.vm05.stdout:7/967: mkdir d5/d1d/d20/d2d/d129 0 2026-03-10T10:20:20.184 INFO:tasks.workunit.client.1.vm05.stdout:7/968: dread - d5/d1d/d29/d3e/d8c/d82/f111 zero size 2026-03-10T10:20:20.191 INFO:tasks.workunit.client.1.vm05.stdout:7/969: symlink d5/d1d/d29/d3e/d8c/d82/d90/l12a 0 2026-03-10T10:20:20.198 INFO:tasks.workunit.client.1.vm05.stdout:7/970: dwrite d5/d26/f33 [0,4194304] 0 2026-03-10T10:20:20.199 INFO:tasks.workunit.client.1.vm05.stdout:7/971: stat d5/d1d/d20/d2d/f10b 0 2026-03-10T10:20:20.210 INFO:tasks.workunit.client.1.vm05.stdout:7/972: unlink d5/d1d/d20/d2d/fdb 0 2026-03-10T10:20:20.226 INFO:tasks.workunit.client.1.vm05.stdout:9/800: creat d0/df/f114 x:0 0 0 2026-03-10T10:20:20.227 INFO:tasks.workunit.client.0.vm02.stdout:2/918: dwrite d0/d1a/f31 [0,4194304] 0 2026-03-10T10:20:20.229 INFO:tasks.workunit.client.0.vm02.stdout:2/919: readlink d0/l1c 0 2026-03-10T10:20:20.230 INFO:tasks.workunit.client.0.vm02.stdout:2/920: mkdir d0/d10/dee/d116/d131 0 2026-03-10T10:20:20.231 INFO:tasks.workunit.client.1.vm05.stdout:9/801: creat d0/df/d11/dc6/f115 x:0 0 0 2026-03-10T10:20:20.232 INFO:tasks.workunit.client.0.vm02.stdout:2/921: symlink d0/d71/d108/d65/dc4/dfa/dd3/de8/l132 0 2026-03-10T10:20:20.233 INFO:tasks.workunit.client.0.vm02.stdout:2/922: creat d0/d71/dfb/f133 x:0 0 0 2026-03-10T10:20:20.234 INFO:tasks.workunit.client.0.vm02.stdout:2/923: chown d0/d1a/d49/lca 5796042 1 2026-03-10T10:20:20.237 INFO:tasks.workunit.client.0.vm02.stdout:2/924: symlink d0/d8c/l134 0 
2026-03-10T10:20:20.245 INFO:tasks.workunit.client.1.vm05.stdout:2/862: rename db/d1c/d40/d62/d85/ce0 to db/d28/d4f/c117 0 2026-03-10T10:20:20.246 INFO:tasks.workunit.client.0.vm02.stdout:2/925: truncate d0/d71/d108/d65/dc4/dfa/d80/d10f/fcf 3601330 0 2026-03-10T10:20:20.253 INFO:tasks.workunit.client.0.vm02.stdout:2/926: creat d0/d71/d108/d65/dc4/dfa/d80/f135 x:0 0 0 2026-03-10T10:20:20.253 INFO:tasks.workunit.client.0.vm02.stdout:2/927: mknod d0/d8c/c136 0 2026-03-10T10:20:20.253 INFO:tasks.workunit.client.0.vm02.stdout:2/928: rename d0/d1a/f33 to d0/d1a/d49/deb/de6/f137 0 2026-03-10T10:20:20.253 INFO:tasks.workunit.client.1.vm05.stdout:7/973: dread d5/d1d/d29/d3e/f65 [0,4194304] 0 2026-03-10T10:20:20.253 INFO:tasks.workunit.client.1.vm05.stdout:6/902: rename dd/d36/d3f/d12/d96/l115 to dd/d36/d3f/d12/d44/d2a/d77/l126 0 2026-03-10T10:20:20.253 INFO:tasks.workunit.client.1.vm05.stdout:6/903: chown dd 32792209 1 2026-03-10T10:20:20.254 INFO:tasks.workunit.client.1.vm05.stdout:6/904: chown f3 197540 1 2026-03-10T10:20:20.255 INFO:tasks.workunit.client.1.vm05.stdout:8/834: rename d7/d14/d3a/d49/d65/l88 to d7/d14/d24/d3f/d6a/db0/l10d 0 2026-03-10T10:20:20.257 INFO:tasks.workunit.client.0.vm02.stdout:2/929: rmdir d0/d10/da6 39 2026-03-10T10:20:20.259 INFO:tasks.workunit.client.1.vm05.stdout:5/923: rmdir da/d131 39 2026-03-10T10:20:20.260 INFO:tasks.workunit.client.1.vm05.stdout:6/905: mkdir dd/d36/d7d/d127 0 2026-03-10T10:20:20.262 INFO:tasks.workunit.client.0.vm02.stdout:2/930: creat d0/d1a/d49/deb/de6/f138 x:0 0 0 2026-03-10T10:20:20.262 INFO:tasks.workunit.client.0.vm02.stdout:2/931: write d0/d10/ff6 [2363230,121574] 0 2026-03-10T10:20:20.267 INFO:tasks.workunit.client.0.vm02.stdout:2/932: creat d0/d10/dee/d116/f139 x:0 0 0 2026-03-10T10:20:20.271 INFO:tasks.workunit.client.1.vm05.stdout:6/906: read dd/d36/d3f/d12/d44/d2a/d3d/d48/fe9 [99534,24142] 0 2026-03-10T10:20:20.271 INFO:tasks.workunit.client.1.vm05.stdout:5/924: symlink da/db/d28/l13a 0 2026-03-10T10:20:20.274 
INFO:tasks.workunit.client.1.vm05.stdout:4/794: rename d1/d3/c14 to d1/d31/dc/c10f 0 2026-03-10T10:20:20.277 INFO:tasks.workunit.client.0.vm02.stdout:9/880: write da/f14 [1473805,52556] 0 2026-03-10T10:20:20.278 INFO:tasks.workunit.client.1.vm05.stdout:6/907: fsync dd/d36/d3f/d12/d44/daa/fbc 0 2026-03-10T10:20:20.278 INFO:tasks.workunit.client.1.vm05.stdout:2/863: rename db/d28/d4f/d59/d94/d95 to db/d28/d4f/d8b/de3/d118 0 2026-03-10T10:20:20.281 INFO:tasks.workunit.client.0.vm02.stdout:9/881: symlink da/d3c/d4c/d2c/d96/l11f 0 2026-03-10T10:20:20.282 INFO:tasks.workunit.client.0.vm02.stdout:2/933: dread d0/d10/da6/fb6 [0,4194304] 0 2026-03-10T10:20:20.283 INFO:tasks.workunit.client.1.vm05.stdout:7/974: rename d5/d17/ce4 to d5/d1d/d29/d60/de8/c12b 0 2026-03-10T10:20:20.283 INFO:tasks.workunit.client.0.vm02.stdout:9/882: rmdir da/d3c/d4c/d38 39 2026-03-10T10:20:20.287 INFO:tasks.workunit.client.0.vm02.stdout:2/934: creat d0/d71/d108/d65/dc4/dfa/d80/f13a x:0 0 0 2026-03-10T10:20:20.289 INFO:tasks.workunit.client.0.vm02.stdout:2/935: creat d0/d1a/d49/deb/de6/f13b x:0 0 0 2026-03-10T10:20:20.289 INFO:tasks.workunit.client.0.vm02.stdout:2/936: chown d0/d1a/d49/deb/de6/f13b 33496 1 2026-03-10T10:20:20.290 INFO:tasks.workunit.client.0.vm02.stdout:2/937: write d0/f70 [2566515,90437] 0 2026-03-10T10:20:20.291 INFO:tasks.workunit.client.0.vm02.stdout:2/938: dread - d0/d71/d108/d65/dc4/dfa/d80/f13a zero size 2026-03-10T10:20:20.297 INFO:tasks.workunit.client.0.vm02.stdout:8/905: write d1/d1c/d43/d5b/d88/dac/ff2 [629636,36319] 0 2026-03-10T10:20:20.297 INFO:tasks.workunit.client.1.vm05.stdout:1/983: write d4/d3d/d6e/fd1 [627878,56094] 0 2026-03-10T10:20:20.300 INFO:tasks.workunit.client.0.vm02.stdout:0/959: dwrite d9/f28 [0,4194304] 0 2026-03-10T10:20:20.303 INFO:tasks.workunit.client.1.vm05.stdout:8/835: dread d7/d2f/f4b [0,4194304] 0 2026-03-10T10:20:20.303 INFO:tasks.workunit.client.1.vm05.stdout:8/836: dread - d7/d2f/d57/fae zero size 2026-03-10T10:20:20.320 
INFO:tasks.workunit.client.1.vm05.stdout:2/864: creat db/d28/d4f/d59/da4/d114/f119 x:0 0 0 2026-03-10T10:20:20.322 INFO:tasks.workunit.client.0.vm02.stdout:8/906: symlink d1/d1c/d43/d6a/da8/d56/db5/l116 0 2026-03-10T10:20:20.323 INFO:tasks.workunit.client.0.vm02.stdout:9/883: sync 2026-03-10T10:20:20.323 INFO:tasks.workunit.client.0.vm02.stdout:8/907: truncate d1/d1c/d43/d6a/da8/d56/f10e 20906 0 2026-03-10T10:20:20.324 INFO:tasks.workunit.client.1.vm05.stdout:5/925: dread da/db/dee/d38/f6c [0,4194304] 0 2026-03-10T10:20:20.324 INFO:tasks.workunit.client.0.vm02.stdout:9/884: stat da/d3c/d4c/d2c/d34/d35/fd2 0 2026-03-10T10:20:20.324 INFO:tasks.workunit.client.1.vm05.stdout:5/926: chown da/d9a/dc7/l5d 209 1 2026-03-10T10:20:20.326 INFO:tasks.workunit.client.0.vm02.stdout:8/908: creat d1/f117 x:0 0 0 2026-03-10T10:20:20.332 INFO:tasks.workunit.client.1.vm05.stdout:5/927: creat da/d96/f13b x:0 0 0 2026-03-10T10:20:20.333 INFO:tasks.workunit.client.0.vm02.stdout:9/885: symlink da/d3c/d4c/d38/d82/da3/l120 0 2026-03-10T10:20:20.336 INFO:tasks.workunit.client.0.vm02.stdout:3/919: write d1/f50 [738605,105366] 0 2026-03-10T10:20:20.336 INFO:tasks.workunit.client.1.vm05.stdout:1/984: symlink d4/df/de0/d82/dea/l123 0 2026-03-10T10:20:20.337 INFO:tasks.workunit.client.1.vm05.stdout:1/985: stat d4/d3d/ddc/f11e 0 2026-03-10T10:20:20.339 INFO:tasks.workunit.client.0.vm02.stdout:3/920: dwrite d1/f3 [0,4194304] 0 2026-03-10T10:20:20.346 INFO:tasks.workunit.client.0.vm02.stdout:7/922: write d1/d1b/d49/fc4 [1412565,6435] 0 2026-03-10T10:20:20.347 INFO:tasks.workunit.client.0.vm02.stdout:9/886: truncate da/d3c/d4c/d38/d82/f90 1053113 0 2026-03-10T10:20:20.348 INFO:tasks.workunit.client.1.vm05.stdout:8/837: unlink d7/ld4 0 2026-03-10T10:20:20.348 INFO:tasks.workunit.client.1.vm05.stdout:2/865: mknod db/d1c/d40/c11a 0 2026-03-10T10:20:20.348 INFO:tasks.workunit.client.1.vm05.stdout:0/929: write d1/d2/d9/d50/d9a/da0/ff7 [3158678,124121] 0 2026-03-10T10:20:20.350 
INFO:tasks.workunit.client.0.vm02.stdout:9/887: dwrite da/d3c/d4c/f49 [0,4194304] 0 2026-03-10T10:20:20.351 INFO:tasks.workunit.client.1.vm05.stdout:0/930: chown d1/d2/d39/d3d/f64 1 1 2026-03-10T10:20:20.353 INFO:tasks.workunit.client.1.vm05.stdout:8/838: dread d7/d14/d24/d3f/d6a/d8a/d96/ffc [0,4194304] 0 2026-03-10T10:20:20.362 INFO:tasks.workunit.client.0.vm02.stdout:3/921: rmdir d1/d8/d21/d73/d78/d84 39 2026-03-10T10:20:20.362 INFO:tasks.workunit.client.1.vm05.stdout:5/928: creat da/db/d26/d70/d72/df6/f13c x:0 0 0 2026-03-10T10:20:20.363 INFO:tasks.workunit.client.0.vm02.stdout:6/882: write d0/d8/d29/d6d/d96/de4/d102/f39 [749577,75181] 0 2026-03-10T10:20:20.365 INFO:tasks.workunit.client.0.vm02.stdout:8/909: rename d1/d1c/d43/d5b/dab/d102/f10a to d1/d1c/d43/d5b/d88/f118 0 2026-03-10T10:20:20.367 INFO:tasks.workunit.client.0.vm02.stdout:1/955: write d4/da/d1a/fa1 [278515,128539] 0 2026-03-10T10:20:20.373 INFO:tasks.workunit.client.0.vm02.stdout:7/923: creat d1/d1b/d8e/f121 x:0 0 0 2026-03-10T10:20:20.378 INFO:tasks.workunit.client.1.vm05.stdout:3/951: dwrite dd/f41 [0,4194304] 0 2026-03-10T10:20:20.378 INFO:tasks.workunit.client.1.vm05.stdout:0/931: dread - d1/d2/d9/d31/d13/da2/dab/dce/fdf zero size 2026-03-10T10:20:20.379 INFO:tasks.workunit.client.1.vm05.stdout:9/802: dwrite d0/d1/d13/de/f38 [4194304,4194304] 0 2026-03-10T10:20:20.380 INFO:tasks.workunit.client.1.vm05.stdout:3/952: write dd/dbe/f109 [974337,2701] 0 2026-03-10T10:20:20.390 INFO:tasks.workunit.client.1.vm05.stdout:8/839: rename d7/d14/d3a/d49/d65/db8/dfa to d7/dd5/d10e 0 2026-03-10T10:20:20.390 INFO:tasks.workunit.client.1.vm05.stdout:1/986: mkdir d4/df/d1c/d92/d124 0 2026-03-10T10:20:20.395 INFO:tasks.workunit.client.0.vm02.stdout:6/883: fdatasync d0/d8/d9/f6a 0 2026-03-10T10:20:20.398 INFO:tasks.workunit.client.0.vm02.stdout:8/910: fdatasync d1/d1c/fe0 0 2026-03-10T10:20:20.400 INFO:tasks.workunit.client.0.vm02.stdout:1/956: truncate d4/d1b/fc6 3358857 0 2026-03-10T10:20:20.402 
INFO:tasks.workunit.client.1.vm05.stdout:4/795: write d1/d31/dc/d40/d63/f94 [2194017,122091] 0 2026-03-10T10:20:20.419 INFO:tasks.workunit.client.0.vm02.stdout:2/939: write d0/d8c/fab [3637416,82532] 0 2026-03-10T10:20:20.419 INFO:tasks.workunit.client.0.vm02.stdout:0/960: write d9/d18/d1a/f88 [668882,114230] 0 2026-03-10T10:20:20.419 INFO:tasks.workunit.client.1.vm05.stdout:9/803: creat d0/d1/dcc/f116 x:0 0 0 2026-03-10T10:20:20.419 INFO:tasks.workunit.client.1.vm05.stdout:6/908: dwrite dd/d36/d3f/d12/d44/d2a/d77/fb4 [0,4194304] 0 2026-03-10T10:20:20.419 INFO:tasks.workunit.client.1.vm05.stdout:7/975: write d5/d1d/d29/d3e/d8c/fef [70092,67045] 0 2026-03-10T10:20:20.419 INFO:tasks.workunit.client.1.vm05.stdout:7/976: readlink d5/d1d/d20/d2d/d5d/l126 0 2026-03-10T10:20:20.419 INFO:tasks.workunit.client.1.vm05.stdout:8/840: creat d7/d14/d24/d3f/dc4/f10f x:0 0 0 2026-03-10T10:20:20.419 INFO:tasks.workunit.client.1.vm05.stdout:8/841: dread d7/d2f/f4b [0,4194304] 0 2026-03-10T10:20:20.420 INFO:tasks.workunit.client.1.vm05.stdout:7/977: dwrite d5/d1d/d29/d3e/d8c/d7f/f93 [0,4194304] 0 2026-03-10T10:20:20.424 INFO:tasks.workunit.client.0.vm02.stdout:0/961: dwrite d9/d34/d3d/d65/d89/dd3/d9c/f101 [0,4194304] 0 2026-03-10T10:20:20.425 INFO:tasks.workunit.client.1.vm05.stdout:3/953: sync 2026-03-10T10:20:20.438 INFO:tasks.workunit.client.0.vm02.stdout:1/957: mknod d4/da/d1a/d11d/d91/c131 0 2026-03-10T10:20:20.443 INFO:tasks.workunit.client.0.vm02.stdout:7/924: mknod d1/d1b/d49/c122 0 2026-03-10T10:20:20.443 INFO:tasks.workunit.client.1.vm05.stdout:4/796: truncate d1/d31/dc/d40/d63/f90 5165580 0 2026-03-10T10:20:20.443 INFO:tasks.workunit.client.1.vm05.stdout:9/804: creat d0/df/d11/f117 x:0 0 0 2026-03-10T10:20:20.448 INFO:tasks.workunit.client.0.vm02.stdout:2/940: creat d0/d71/d108/d65/f13c x:0 0 0 2026-03-10T10:20:20.455 INFO:tasks.workunit.client.1.vm05.stdout:6/909: truncate dd/d1b/fa8 140755 0 2026-03-10T10:20:20.457 INFO:tasks.workunit.client.1.vm05.stdout:7/978: dread 
d5/d17/f52 [0,4194304] 0 2026-03-10T10:20:20.458 INFO:tasks.workunit.client.0.vm02.stdout:6/884: mkdir d0/d8/d29/d2f/d50/d10f/d12a 0 2026-03-10T10:20:20.463 INFO:tasks.workunit.client.1.vm05.stdout:5/929: creat da/db/d26/f13d x:0 0 0 2026-03-10T10:20:20.463 INFO:tasks.workunit.client.1.vm05.stdout:2/866: getdents db/d28/d4f/d59/d94/dfe 0 2026-03-10T10:20:20.464 INFO:tasks.workunit.client.0.vm02.stdout:9/888: getdents da/d3c/d53 0 2026-03-10T10:20:20.467 INFO:tasks.workunit.client.1.vm05.stdout:1/987: creat d4/d79/d83/dc5/f125 x:0 0 0 2026-03-10T10:20:20.470 INFO:tasks.workunit.client.0.vm02.stdout:3/922: link d1/d6/ff1 d1/d8/d21/d73/d78/d84/dfa/f132 0 2026-03-10T10:20:20.473 INFO:tasks.workunit.client.1.vm05.stdout:9/805: creat d0/df/d74/d8c/de4/d104/f118 x:0 0 0 2026-03-10T10:20:20.474 INFO:tasks.workunit.client.0.vm02.stdout:2/941: creat d0/d10/d81/f13d x:0 0 0 2026-03-10T10:20:20.475 INFO:tasks.workunit.client.0.vm02.stdout:2/942: chown d0/d71/d108/d65/dc4/dfa/d80/d10f 2685 1 2026-03-10T10:20:20.479 INFO:tasks.workunit.client.0.vm02.stdout:2/943: dwrite d0/d71/d108/d65/dc4/de0/f128 [0,4194304] 0 2026-03-10T10:20:20.482 INFO:tasks.workunit.client.1.vm05.stdout:6/910: dwrite dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/f113 [8388608,4194304] 0 2026-03-10T10:20:20.489 INFO:tasks.workunit.client.0.vm02.stdout:1/958: unlink d4/da/d1a/d47/dbc/dcb/c11f 0 2026-03-10T10:20:20.491 INFO:tasks.workunit.client.1.vm05.stdout:5/930: unlink da/db/d28/d97/f8e 0 2026-03-10T10:20:20.497 INFO:tasks.workunit.client.1.vm05.stdout:4/797: dread d1/d64/fa1 [0,4194304] 0 2026-03-10T10:20:20.497 INFO:tasks.workunit.client.1.vm05.stdout:2/867: dread db/d1c/d40/d62/fed [0,4194304] 0 2026-03-10T10:20:20.497 INFO:tasks.workunit.client.1.vm05.stdout:2/868: dread - db/d2d/d5e/fae zero size 2026-03-10T10:20:20.498 INFO:tasks.workunit.client.1.vm05.stdout:2/869: fsync db/f26 0 2026-03-10T10:20:20.500 INFO:tasks.workunit.client.0.vm02.stdout:7/925: unlink d1/dc/d16/d28/d2c/lb3 0 2026-03-10T10:20:20.500 
INFO:tasks.workunit.client.0.vm02.stdout:7/926: chown d1/d1b/d8f 11791076 1 2026-03-10T10:20:20.501 INFO:tasks.workunit.client.1.vm05.stdout:1/988: chown d4/d3d/d6e/l80 0 1 2026-03-10T10:20:20.503 INFO:tasks.workunit.client.1.vm05.stdout:7/979: symlink d5/d1d/d20/d35/d11f/l12c 0 2026-03-10T10:20:20.503 INFO:tasks.workunit.client.0.vm02.stdout:6/885: creat d0/d8/d29/d2f/d50/d10f/d12a/f12b x:0 0 0 2026-03-10T10:20:20.504 INFO:tasks.workunit.client.0.vm02.stdout:6/886: chown d0/d8/d9/f4f 53 1 2026-03-10T10:20:20.504 INFO:tasks.workunit.client.0.vm02.stdout:2/944: creat d0/d71/dfb/f13e x:0 0 0 2026-03-10T10:20:20.505 INFO:tasks.workunit.client.0.vm02.stdout:2/945: chown d0/f70 1353 1 2026-03-10T10:20:20.507 INFO:tasks.workunit.client.0.vm02.stdout:8/911: getdents d1/d1c/d43/d6a/da8/d56 0 2026-03-10T10:20:20.508 INFO:tasks.workunit.client.0.vm02.stdout:6/887: dwrite d0/d8/d29/d6d/d96/de4/def/fcf [0,4194304] 0 2026-03-10T10:20:20.516 INFO:tasks.workunit.client.0.vm02.stdout:1/959: mkdir d4/dc3/d115/d132 0 2026-03-10T10:20:20.517 INFO:tasks.workunit.client.1.vm05.stdout:6/911: link dd/d36/d3f/d12/d44/d30/d4a/df4/f120 dd/d36/d3f/d12/d44/d2a/d3d/d3e/f128 0 2026-03-10T10:20:20.518 INFO:tasks.workunit.client.0.vm02.stdout:7/927: chown d1/dc/le 14571967 1 2026-03-10T10:20:20.518 INFO:tasks.workunit.client.0.vm02.stdout:7/928: dread - d1/d1b/d8e/f121 zero size 2026-03-10T10:20:20.519 INFO:tasks.workunit.client.1.vm05.stdout:6/912: write dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/fe1 [2977359,13073] 0 2026-03-10T10:20:20.519 INFO:tasks.workunit.client.0.vm02.stdout:6/888: creat d0/d8/d29/d94/d9a/f12c x:0 0 0 2026-03-10T10:20:20.519 INFO:tasks.workunit.client.0.vm02.stdout:7/929: chown d1/dc/d16/d28/d2d/ce7 35 1 2026-03-10T10:20:20.519 INFO:tasks.workunit.client.1.vm05.stdout:6/913: stat dd/d36/d3f/d12/f35 0 2026-03-10T10:20:20.520 INFO:tasks.workunit.client.0.vm02.stdout:2/946: fsync d0/f2c 0 2026-03-10T10:20:20.529 INFO:tasks.workunit.client.0.vm02.stdout:1/960: rmdir d4/d4a 39 
2026-03-10T10:20:20.533 INFO:tasks.workunit.client.1.vm05.stdout:4/798: rmdir d1/d31/df4 0 2026-03-10T10:20:20.548 INFO:tasks.workunit.client.0.vm02.stdout:6/889: rmdir d0/d8/d29/d2f/d50/d98/df6/d114 0 2026-03-10T10:20:20.548 INFO:tasks.workunit.client.0.vm02.stdout:7/930: link d1/dc/d16/l6e d1/dc/d55/d9a/l123 0 2026-03-10T10:20:20.549 INFO:tasks.workunit.client.1.vm05.stdout:4/799: fsync d1/d3/d65/ddb/fe7 0 2026-03-10T10:20:20.549 INFO:tasks.workunit.client.1.vm05.stdout:1/989: link d4/df/d1c/d53/f6b d4/d37/f126 0 2026-03-10T10:20:20.549 INFO:tasks.workunit.client.1.vm05.stdout:4/800: dread - d1/d31/d4b/d6d/fbc zero size 2026-03-10T10:20:20.549 INFO:tasks.workunit.client.1.vm05.stdout:7/980: getdents d5/d1d/d29/d3e/d8c/d82 0 2026-03-10T10:20:20.549 INFO:tasks.workunit.client.1.vm05.stdout:1/990: chown d4/df/la6 62033 1 2026-03-10T10:20:20.549 INFO:tasks.workunit.client.1.vm05.stdout:6/914: link dd/d36/d3f/d12/d44/d30/c39 dd/d36/d7d/d102/c129 0 2026-03-10T10:20:20.549 INFO:tasks.workunit.client.1.vm05.stdout:4/801: rmdir d1/d31/dc/d40/d45/ded/dfb 39 2026-03-10T10:20:20.549 INFO:tasks.workunit.client.1.vm05.stdout:4/802: truncate d1/d3/f5 3766995 0 2026-03-10T10:20:20.549 INFO:tasks.workunit.client.1.vm05.stdout:4/803: dread d1/d64/fa1 [0,4194304] 0 2026-03-10T10:20:20.549 INFO:tasks.workunit.client.1.vm05.stdout:7/981: dread d5/d1d/d29/d3e/d8c/d7f/f93 [0,4194304] 0 2026-03-10T10:20:20.551 INFO:tasks.workunit.client.1.vm05.stdout:4/804: creat d1/d31/d4b/f110 x:0 0 0 2026-03-10T10:20:20.553 INFO:tasks.workunit.client.1.vm05.stdout:4/805: mknod d1/d31/d76/c111 0 2026-03-10T10:20:20.556 INFO:tasks.workunit.client.0.vm02.stdout:3/923: dread d1/f90 [0,4194304] 0 2026-03-10T10:20:20.560 INFO:tasks.workunit.client.0.vm02.stdout:2/947: sync 2026-03-10T10:20:20.563 INFO:tasks.workunit.client.1.vm05.stdout:9/806: dread d0/df/d11/f8d [4194304,4194304] 0 2026-03-10T10:20:20.563 INFO:tasks.workunit.client.1.vm05.stdout:1/991: dread d4/dd/f60 [0,4194304] 0 2026-03-10T10:20:20.563 
INFO:tasks.workunit.client.0.vm02.stdout:2/948: truncate d0/d10/d81/f12d 155185 0 2026-03-10T10:20:20.564 INFO:tasks.workunit.client.1.vm05.stdout:1/992: write d4/df/d1c/d53/f65 [1088690,10907] 0 2026-03-10T10:20:20.571 INFO:tasks.workunit.client.1.vm05.stdout:9/807: stat d0/d1/d16/la5 0 2026-03-10T10:20:20.582 INFO:tasks.workunit.client.1.vm05.stdout:7/982: sync 2026-03-10T10:20:20.589 INFO:tasks.workunit.client.1.vm05.stdout:7/983: creat d5/d1d/d29/d3e/d8c/d7f/d116/f12d x:0 0 0 2026-03-10T10:20:20.593 INFO:tasks.workunit.client.1.vm05.stdout:7/984: link d5/d17/d66/ce9 d5/d1d/d29/d3e/c12e 0 2026-03-10T10:20:20.628 INFO:tasks.workunit.client.1.vm05.stdout:0/932: dwrite d1/d2/d9/f6c [4194304,4194304] 0 2026-03-10T10:20:20.630 INFO:tasks.workunit.client.0.vm02.stdout:0/962: dwrite d9/d18/d1a/d22/d24/d51/ff4 [0,4194304] 0 2026-03-10T10:20:20.639 INFO:tasks.workunit.client.1.vm05.stdout:8/842: dwrite d7/f9 [0,4194304] 0 2026-03-10T10:20:20.650 INFO:tasks.workunit.client.0.vm02.stdout:0/963: fdatasync d9/d34/d3d/d65/d89/fea 0 2026-03-10T10:20:20.652 INFO:tasks.workunit.client.1.vm05.stdout:0/933: rename d1/d2/d39/d3d to d1/d2/d9/d31/d13/da2/dab/dce/d106/d135 0 2026-03-10T10:20:20.654 INFO:tasks.workunit.client.1.vm05.stdout:0/934: chown d1/d2/d9/d31/d13/d17/da1/df5/c10c 16 1 2026-03-10T10:20:20.665 INFO:tasks.workunit.client.0.vm02.stdout:7/931: dread d1/dc/d16/d28/f108 [0,4194304] 0 2026-03-10T10:20:20.669 INFO:tasks.workunit.client.0.vm02.stdout:0/964: creat d9/d18/d1a/d22/d24/d8e/d9b/d115/f13b x:0 0 0 2026-03-10T10:20:20.672 INFO:tasks.workunit.client.0.vm02.stdout:7/932: mknod d1/def/c124 0 2026-03-10T10:20:20.676 INFO:tasks.workunit.client.1.vm05.stdout:0/935: rename d1/d2/d39/l47 to d1/d2/d9/d31/d13/da2/dab/dce/d106/l136 0 2026-03-10T10:20:20.681 INFO:tasks.workunit.client.1.vm05.stdout:0/936: chown d1/d2/d9/d31/d13/d15/l2c 5 1 2026-03-10T10:20:20.684 INFO:tasks.workunit.client.1.vm05.stdout:6/915: dread f2 [4194304,4194304] 0 2026-03-10T10:20:20.684 
INFO:tasks.workunit.client.1.vm05.stdout:8/843: link d7/d14/f4c d7/d14/d15/f110 0 2026-03-10T10:20:20.684 INFO:tasks.workunit.client.0.vm02.stdout:0/965: creat d9/d18/d1a/d22/f13c x:0 0 0 2026-03-10T10:20:20.684 INFO:tasks.workunit.client.0.vm02.stdout:0/966: stat d9/d18/d1a/d22/d24/d80/dcc 0 2026-03-10T10:20:20.684 INFO:tasks.workunit.client.0.vm02.stdout:0/967: stat d9/d18/d1a/cfc 0 2026-03-10T10:20:20.684 INFO:tasks.workunit.client.0.vm02.stdout:0/968: dread - d9/d18/d1a/d22/d24/f106 zero size 2026-03-10T10:20:20.689 INFO:tasks.workunit.client.1.vm05.stdout:6/916: mkdir dd/d12a 0 2026-03-10T10:20:20.689 INFO:tasks.workunit.client.1.vm05.stdout:0/937: dread d1/d2/d9/d31/d13/d17/fd3 [0,4194304] 0 2026-03-10T10:20:20.691 INFO:tasks.workunit.client.1.vm05.stdout:8/844: dread d7/d14/d24/d3f/fc9 [0,4194304] 0 2026-03-10T10:20:20.696 INFO:tasks.workunit.client.1.vm05.stdout:8/845: mkdir d7/d14/d24/d3f/d4f/d111 0 2026-03-10T10:20:20.696 INFO:tasks.workunit.client.1.vm05.stdout:0/938: creat d1/d2/d9/d50/d9a/f137 x:0 0 0 2026-03-10T10:20:20.696 INFO:tasks.workunit.client.1.vm05.stdout:6/917: mknod dd/d36/d3f/dbd/dd5/c12b 0 2026-03-10T10:20:20.697 INFO:tasks.workunit.client.1.vm05.stdout:6/918: chown dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3 44544 1 2026-03-10T10:20:20.704 INFO:tasks.workunit.client.0.vm02.stdout:9/889: write da/d3c/d4c/d38/d82/d8c/f98 [2763324,28568] 0 2026-03-10T10:20:20.704 INFO:tasks.workunit.client.0.vm02.stdout:9/890: fsync da/f106 0 2026-03-10T10:20:20.708 INFO:tasks.workunit.client.1.vm05.stdout:0/939: dwrite d1/d2/d9/d31/d13/d15/d4e/d8a/dfc/f11a [0,4194304] 0 2026-03-10T10:20:20.708 INFO:tasks.workunit.client.0.vm02.stdout:9/891: creat da/d9d/f121 x:0 0 0 2026-03-10T10:20:20.710 INFO:tasks.workunit.client.0.vm02.stdout:9/892: truncate da/d3c/d4c/f1d 1116114 0 2026-03-10T10:20:20.710 INFO:tasks.workunit.client.1.vm05.stdout:2/870: write db/d61/d10a/fee [869222,55423] 0 2026-03-10T10:20:20.710 INFO:tasks.workunit.client.1.vm05.stdout:6/919: dread 
dd/d36/d3f/d12/d44/d2a/d3d/d48/dc6/f118 [0,4194304] 0 2026-03-10T10:20:20.710 INFO:tasks.workunit.client.0.vm02.stdout:8/912: write d1/d1c/d43/f52 [1283808,105118] 0 2026-03-10T10:20:20.713 INFO:tasks.workunit.client.1.vm05.stdout:3/954: dwrite dd/d39/d66/f7e [0,4194304] 0 2026-03-10T10:20:20.720 INFO:tasks.workunit.client.1.vm05.stdout:2/871: truncate db/d2d/f52 667146 0 2026-03-10T10:20:20.720 INFO:tasks.workunit.client.1.vm05.stdout:5/931: dwrite da/db/d28/d97/fb2 [0,4194304] 0 2026-03-10T10:20:20.726 INFO:tasks.workunit.client.0.vm02.stdout:1/961: write d4/da/f73 [1439757,38554] 0 2026-03-10T10:20:20.730 INFO:tasks.workunit.client.1.vm05.stdout:0/940: sync 2026-03-10T10:20:20.730 INFO:tasks.workunit.client.1.vm05.stdout:6/920: mknod dd/d36/d3f/d12/d44/d2a/d7f/c12c 0 2026-03-10T10:20:20.730 INFO:tasks.workunit.client.1.vm05.stdout:3/955: creat dd/d15/d24/d74/f154 x:0 0 0 2026-03-10T10:20:20.732 INFO:tasks.workunit.client.1.vm05.stdout:6/921: write dd/d36/d3f/d12/d44/d2a/d3d/d3e/f73 [3269068,7637] 0 2026-03-10T10:20:20.733 INFO:tasks.workunit.client.0.vm02.stdout:6/890: dwrite d0/d8/d9/f84 [0,4194304] 0 2026-03-10T10:20:20.734 INFO:tasks.workunit.client.0.vm02.stdout:6/891: chown d0/d8/d9/d7a/l88 21303422 1 2026-03-10T10:20:20.738 INFO:tasks.workunit.client.1.vm05.stdout:2/872: truncate db/d28/d4f/d59/da4/fca 817913 0 2026-03-10T10:20:20.739 INFO:tasks.workunit.client.0.vm02.stdout:8/913: sync 2026-03-10T10:20:20.740 INFO:tasks.workunit.client.0.vm02.stdout:6/892: stat d0/d8/d29/d2f/f55 0 2026-03-10T10:20:20.742 INFO:tasks.workunit.client.0.vm02.stdout:1/962: rename d4/da/d1a/fc8 to d4/da/d1a/d47/f133 0 2026-03-10T10:20:20.748 INFO:tasks.workunit.client.1.vm05.stdout:3/956: getdents dd/d15/d24/dc8 0 2026-03-10T10:20:20.748 INFO:tasks.workunit.client.0.vm02.stdout:6/893: mknod d0/d8/d29/d6d/d96/de4/def/d6f/d10c/c12d 0 2026-03-10T10:20:20.748 INFO:tasks.workunit.client.0.vm02.stdout:6/894: chown d0/d8/d9/f54 2950640 1 2026-03-10T10:20:20.749 
INFO:tasks.workunit.client.1.vm05.stdout:2/873: read db/d28/d4f/d59/f7c [506153,124023] 0 2026-03-10T10:20:20.750 INFO:tasks.workunit.client.1.vm05.stdout:0/941: creat d1/d2/d9/d31/d12/f138 x:0 0 0 2026-03-10T10:20:20.760 INFO:tasks.workunit.client.1.vm05.stdout:3/957: dread dd/d39/d66/f6e [0,4194304] 0 2026-03-10T10:20:20.762 INFO:tasks.workunit.client.0.vm02.stdout:1/963: mknod d4/dc3/dd6/c134 0 2026-03-10T10:20:20.764 INFO:tasks.workunit.client.0.vm02.stdout:6/895: mknod d0/d87/c12e 0 2026-03-10T10:20:20.768 INFO:tasks.workunit.client.0.vm02.stdout:1/964: stat d4/da/d1a/d47/d88/fdd 0 2026-03-10T10:20:20.768 INFO:tasks.workunit.client.1.vm05.stdout:3/958: symlink dd/d20/d94/l155 0 2026-03-10T10:20:20.768 INFO:tasks.workunit.client.1.vm05.stdout:0/942: dwrite d1/d2/d9/f40 [0,4194304] 0 2026-03-10T10:20:20.769 INFO:tasks.workunit.client.1.vm05.stdout:3/959: fsync dd/d20/d130/fdc 0 2026-03-10T10:20:20.772 INFO:tasks.workunit.client.0.vm02.stdout:3/924: write d1/d8/d21/f29 [2972030,82323] 0 2026-03-10T10:20:20.772 INFO:tasks.workunit.client.0.vm02.stdout:3/925: readlink d1/d20/db2/dcb/lec 0 2026-03-10T10:20:20.775 INFO:tasks.workunit.client.1.vm05.stdout:4/806: write d1/d31/d4b/d6d/fbc [454985,18624] 0 2026-03-10T10:20:20.775 INFO:tasks.workunit.client.1.vm05.stdout:0/943: stat d1/d2/d9/d31/d13/da2/l132 0 2026-03-10T10:20:20.775 INFO:tasks.workunit.client.1.vm05.stdout:0/944: stat d1/d2/d9/d31/d13/d17/da1/dbd/f127 0 2026-03-10T10:20:20.787 INFO:tasks.workunit.client.0.vm02.stdout:1/965: creat d4/d4a/da5/f135 x:0 0 0 2026-03-10T10:20:20.788 INFO:tasks.workunit.client.0.vm02.stdout:3/926: mkdir d1/d20/d133 0 2026-03-10T10:20:20.790 INFO:tasks.workunit.client.0.vm02.stdout:6/896: link d0/d8/f8f d0/d8/d29/d6d/d96/de4/def/d6f/da1/da8/f12f 0 2026-03-10T10:20:20.792 INFO:tasks.workunit.client.1.vm05.stdout:4/807: fsync d1/d31/d4b/f9b 0 2026-03-10T10:20:20.793 INFO:tasks.workunit.client.0.vm02.stdout:6/897: mkdir d0/d8/d9/d130 0 2026-03-10T10:20:20.793 
INFO:tasks.workunit.client.1.vm05.stdout:3/960: chown dd/d15/d1f/le0 367250 1 2026-03-10T10:20:20.794 INFO:tasks.workunit.client.1.vm05.stdout:3/961: stat dd/d15/d1f/d116/f14f 0 2026-03-10T10:20:20.794 INFO:tasks.workunit.client.0.vm02.stdout:1/966: rename d4/da/d1a/d47/d88/la4 to d4/da/d1a/d47/dbc/dcb/l136 0 2026-03-10T10:20:20.795 INFO:tasks.workunit.client.1.vm05.stdout:3/962: readlink dd/d39/d5f/l93 0 2026-03-10T10:20:20.796 INFO:tasks.workunit.client.0.vm02.stdout:6/898: creat d0/d8/d9/d7a/dc0/d124/f131 x:0 0 0 2026-03-10T10:20:20.796 INFO:tasks.workunit.client.0.vm02.stdout:6/899: fdatasync d0/d8/d29/d2f/f8e 0 2026-03-10T10:20:20.799 INFO:tasks.workunit.client.0.vm02.stdout:1/967: creat d4/da/d27/d117/d123/f137 x:0 0 0 2026-03-10T10:20:20.811 INFO:tasks.workunit.client.1.vm05.stdout:0/945: rename d1/d2/d9/d31/d13/d2f/d49/df3 to d1/d2/d9/d31/d13/d17/da1/dbd/d139 0 2026-03-10T10:20:20.813 INFO:tasks.workunit.client.0.vm02.stdout:2/949: write d0/d1a/f25 [278001,29133] 0 2026-03-10T10:20:20.815 INFO:tasks.workunit.client.0.vm02.stdout:3/927: dread d1/d8/d21/f2a [0,4194304] 0 2026-03-10T10:20:20.821 INFO:tasks.workunit.client.1.vm05.stdout:1/993: write d4/fdf [1910328,36942] 0 2026-03-10T10:20:20.827 INFO:tasks.workunit.client.1.vm05.stdout:9/808: truncate d0/df/d74/f9e 4091031 0 2026-03-10T10:20:20.831 INFO:tasks.workunit.client.0.vm02.stdout:3/928: fsync d1/d6/d8e/fa0 0 2026-03-10T10:20:20.833 INFO:tasks.workunit.client.0.vm02.stdout:8/914: dread d1/d1c/d43/f52 [0,4194304] 0 2026-03-10T10:20:20.834 INFO:tasks.workunit.client.0.vm02.stdout:8/915: stat d1/d1c/d43/d6a/d7c/da6/fb1 0 2026-03-10T10:20:20.834 INFO:tasks.workunit.client.0.vm02.stdout:8/916: dread - d1/d1c/d23/f10c zero size 2026-03-10T10:20:20.835 INFO:tasks.workunit.client.0.vm02.stdout:8/917: chown d1/d1c/d24/dad/fc3 3086 1 2026-03-10T10:20:20.853 INFO:tasks.workunit.client.0.vm02.stdout:1/968: dread d4/f26 [0,4194304] 0 2026-03-10T10:20:20.854 INFO:tasks.workunit.client.1.vm05.stdout:4/808: creat 
d1/d64/f112 x:0 0 0 2026-03-10T10:20:20.855 INFO:tasks.workunit.client.0.vm02.stdout:6/900: rename d0/d8/d29/d52/de8/db2/dbb/de5/c115 to d0/d87/c132 0 2026-03-10T10:20:20.855 INFO:tasks.workunit.client.0.vm02.stdout:6/901: write d0/d8/d29/d2f/f8e [1958300,10938] 0 2026-03-10T10:20:20.864 INFO:tasks.workunit.client.0.vm02.stdout:1/969: dread d4/dc3/fd8 [0,4194304] 0 2026-03-10T10:20:20.866 INFO:tasks.workunit.client.1.vm05.stdout:7/985: write d5/d1d/d20/d2d/fb0 [5221865,102016] 0 2026-03-10T10:20:20.866 INFO:tasks.workunit.client.1.vm05.stdout:7/986: readlink d5/d1d/d20/d2d/d5d/d7a/laf 0 2026-03-10T10:20:20.868 INFO:tasks.workunit.client.1.vm05.stdout:2/874: dread db/d1c/f3d [0,4194304] 0 2026-03-10T10:20:20.869 INFO:tasks.workunit.client.0.vm02.stdout:6/902: sync 2026-03-10T10:20:20.869 INFO:tasks.workunit.client.1.vm05.stdout:4/809: write d1/d64/da9/dae/dfc/fe5 [900618,24116] 0 2026-03-10T10:20:20.869 INFO:tasks.workunit.client.0.vm02.stdout:7/933: write d1/dc/d16/d28/f108 [350505,113410] 0 2026-03-10T10:20:20.870 INFO:tasks.workunit.client.0.vm02.stdout:7/934: chown d1/dc/d16/d28/fca 658005429 1 2026-03-10T10:20:20.875 INFO:tasks.workunit.client.1.vm05.stdout:9/809: fsync d0/df/d11/f64 0 2026-03-10T10:20:20.876 INFO:tasks.workunit.client.1.vm05.stdout:7/987: truncate d5/d1d/d29/d3e/d8c/d82/fda 957581 0 2026-03-10T10:20:20.876 INFO:tasks.workunit.client.0.vm02.stdout:8/918: chown d1/d1c/d23/f9d 101710 1 2026-03-10T10:20:20.879 INFO:tasks.workunit.client.1.vm05.stdout:2/875: dread db/d28/f7f [0,4194304] 0 2026-03-10T10:20:20.880 INFO:tasks.workunit.client.0.vm02.stdout:0/969: write d9/d18/d1a/d22/d24/d8e/d9b/fc2 [4926148,21566] 0 2026-03-10T10:20:20.883 INFO:tasks.workunit.client.1.vm05.stdout:9/810: sync 2026-03-10T10:20:20.887 INFO:tasks.workunit.client.0.vm02.stdout:7/935: mknod d1/d1b/d8e/c125 0 2026-03-10T10:20:20.890 INFO:tasks.workunit.client.0.vm02.stdout:7/936: dread d1/dc/d55/f85 [0,4194304] 0 2026-03-10T10:20:20.891 
INFO:tasks.workunit.client.0.vm02.stdout:0/970: dread d9/d18/d1a/d22/d24/d80/d74/f96 [0,4194304] 0 2026-03-10T10:20:20.894 INFO:tasks.workunit.client.1.vm05.stdout:8/846: write d7/d14/d15/d3b/da0/fc8 [1540912,111649] 0 2026-03-10T10:20:20.905 INFO:tasks.workunit.client.0.vm02.stdout:9/893: dwrite da/f1f [4194304,4194304] 0 2026-03-10T10:20:20.914 INFO:tasks.workunit.client.0.vm02.stdout:1/970: creat d4/da/d1a/d47/d88/da8/f138 x:0 0 0 2026-03-10T10:20:20.924 INFO:tasks.workunit.client.1.vm05.stdout:5/932: write da/db/de9/feb [888880,56155] 0 2026-03-10T10:20:20.925 INFO:tasks.workunit.client.1.vm05.stdout:6/922: write dd/d36/d3f/d12/f8f [671019,86838] 0 2026-03-10T10:20:20.927 INFO:tasks.workunit.client.1.vm05.stdout:6/923: chown dd/d1b/l54 171 1 2026-03-10T10:20:20.928 INFO:tasks.workunit.client.1.vm05.stdout:6/924: stat dd/d36/d3f/d12/d44/daa/de4/l11a 0 2026-03-10T10:20:20.931 INFO:tasks.workunit.client.0.vm02.stdout:7/937: fdatasync d1/dc/d60/f79 0 2026-03-10T10:20:20.938 INFO:tasks.workunit.client.1.vm05.stdout:2/876: dread db/d28/f30 [0,4194304] 0 2026-03-10T10:20:20.938 INFO:tasks.workunit.client.1.vm05.stdout:8/847: mknod d7/d14/d62/d90/dd3/c112 0 2026-03-10T10:20:20.940 INFO:tasks.workunit.client.1.vm05.stdout:4/810: fdatasync d1/d31/dc/f33 0 2026-03-10T10:20:20.940 INFO:tasks.workunit.client.0.vm02.stdout:3/929: getdents d1/d8 0 2026-03-10T10:20:20.940 INFO:tasks.workunit.client.1.vm05.stdout:2/877: dread db/d1c/f3d [0,4194304] 0 2026-03-10T10:20:20.940 INFO:tasks.workunit.client.0.vm02.stdout:3/930: readlink d1/d6/d8e/lc5 0 2026-03-10T10:20:20.943 INFO:tasks.workunit.client.0.vm02.stdout:9/894: creat da/d3c/d4c/d38/d7c/f122 x:0 0 0 2026-03-10T10:20:20.946 INFO:tasks.workunit.client.1.vm05.stdout:5/933: rename da/db/dee/d38/fa2 to da/d9a/dc7/d130/f13e 0 2026-03-10T10:20:20.956 INFO:tasks.workunit.client.1.vm05.stdout:3/963: write dd/d39/d5c/fb9 [1119871,47348] 0 2026-03-10T10:20:20.962 INFO:tasks.workunit.client.0.vm02.stdout:2/950: write d0/d10/dee/fff 
[295853,86475] 0 2026-03-10T10:20:20.963 INFO:tasks.workunit.client.0.vm02.stdout:2/951: stat d0/d1a/d49/deb/de6/f106 0 2026-03-10T10:20:20.967 INFO:tasks.workunit.client.1.vm05.stdout:0/946: dwrite d1/d2/d9/d31/d54/f24 [0,4194304] 0 2026-03-10T10:20:20.974 INFO:tasks.workunit.client.0.vm02.stdout:7/938: dread d1/d1b/d8f/d67/f70 [0,4194304] 0 2026-03-10T10:20:20.976 INFO:tasks.workunit.client.1.vm05.stdout:4/811: creat d1/dfd/f113 x:0 0 0 2026-03-10T10:20:20.976 INFO:tasks.workunit.client.0.vm02.stdout:3/931: symlink d1/d20/d52/l134 0 2026-03-10T10:20:20.977 INFO:tasks.workunit.client.0.vm02.stdout:3/932: write d1/d8/d86/f111 [175438,54820] 0 2026-03-10T10:20:20.983 INFO:tasks.workunit.client.1.vm05.stdout:1/994: dwrite d4/f36 [4194304,4194304] 0 2026-03-10T10:20:20.992 INFO:tasks.workunit.client.0.vm02.stdout:6/903: dwrite d0/d8/d29/d6d/d96/de4/def/d6f/fa2 [0,4194304] 0 2026-03-10T10:20:20.992 INFO:tasks.workunit.client.0.vm02.stdout:6/904: write d0/d8/d29/d94/d9a/dc2/f119 [518803,2753] 0 2026-03-10T10:20:20.996 INFO:tasks.workunit.client.1.vm05.stdout:1/995: chown d4/d39/d3e/da0 54 1 2026-03-10T10:20:20.997 INFO:tasks.workunit.client.1.vm05.stdout:5/934: mknod da/d9a/daf/ded/c13f 0 2026-03-10T10:20:20.999 INFO:tasks.workunit.client.0.vm02.stdout:8/919: link d1/d1c/d24/dcf/lea d1/l119 0 2026-03-10T10:20:21.004 INFO:tasks.workunit.client.0.vm02.stdout:2/952: creat d0/d71/d108/d65/dc4/dfa/dbf/f13f x:0 0 0 2026-03-10T10:20:21.005 INFO:tasks.workunit.client.1.vm05.stdout:7/988: link d5/d26/db2/fd8 d5/d1d/d29/d3e/d8c/d82/d90/f12f 0 2026-03-10T10:20:21.005 INFO:tasks.workunit.client.0.vm02.stdout:7/939: truncate d1/dc/f2e 2186321 0 2026-03-10T10:20:21.006 INFO:tasks.workunit.client.0.vm02.stdout:7/940: fdatasync d1/dc/f26 0 2026-03-10T10:20:21.010 INFO:tasks.workunit.client.1.vm05.stdout:6/925: mknod dd/d36/d7d/d127/c12d 0 2026-03-10T10:20:21.012 INFO:tasks.workunit.client.0.vm02.stdout:3/933: mknod d1/d8/d21/d7d/c135 0 2026-03-10T10:20:21.013 
INFO:tasks.workunit.client.1.vm05.stdout:9/811: dwrite d0/d1/d13/d55/fc9 [0,4194304] 0 2026-03-10T10:20:21.017 INFO:tasks.workunit.client.0.vm02.stdout:6/905: symlink d0/d8/d29/d52/de8/l133 0 2026-03-10T10:20:21.017 INFO:tasks.workunit.client.1.vm05.stdout:0/947: mknod d1/d2/d9/d31/daa/d11c/c13a 0 2026-03-10T10:20:21.017 INFO:tasks.workunit.client.1.vm05.stdout:4/812: creat d1/d31/d4b/f114 x:0 0 0 2026-03-10T10:20:21.023 INFO:tasks.workunit.client.0.vm02.stdout:1/971: link d4/da/d1a/d11d/fc7 d4/da/d27/d38/d3c/f139 0 2026-03-10T10:20:21.024 INFO:tasks.workunit.client.1.vm05.stdout:1/996: truncate d4/d37/ffa 635231 0 2026-03-10T10:20:21.025 INFO:tasks.workunit.client.0.vm02.stdout:2/953: rename d0/d71/d108/d65/db0/cf7 to d0/d10/da6/c140 0 2026-03-10T10:20:21.026 INFO:tasks.workunit.client.1.vm05.stdout:1/997: dread - d4/d39/d3e/f11a zero size 2026-03-10T10:20:21.026 INFO:tasks.workunit.client.1.vm05.stdout:1/998: chown d4/fe9 49759 1 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:20 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:20 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:20 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:20 vm02.local ceph-mon[50200]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:20 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:20 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.coparq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:20 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.coparq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:20 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:20 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:21.037 INFO:tasks.workunit.client.1.vm05.stdout:3/964: symlink dd/d15/d24/l156 0 2026-03-10T10:20:21.037 INFO:tasks.workunit.client.1.vm05.stdout:3/965: stat dd/d20/d9e/dc0/f115 0 2026-03-10T10:20:21.037 INFO:tasks.workunit.client.1.vm05.stdout:3/966: stat dd/d20/d130/lf0 0 2026-03-10T10:20:21.037 INFO:tasks.workunit.client.1.vm05.stdout:9/812: creat d0/d1/dcc/f119 x:0 0 0 2026-03-10T10:20:21.037 INFO:tasks.workunit.client.0.vm02.stdout:2/954: mknod d0/d10/d81/c141 0 2026-03-10T10:20:21.037 INFO:tasks.workunit.client.0.vm02.stdout:3/934: mknod d1/d8/d86/db1/d102/d12f/c136 0 
2026-03-10T10:20:21.037 INFO:tasks.workunit.client.0.vm02.stdout:3/935: chown d1/l27 517 1 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:20 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:20 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:20 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:20 vm05.local ceph-mon[59051]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:20 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:20 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.coparq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:20 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.coparq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:20:21.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:20 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:20:21.037 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:20 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:21.037 INFO:tasks.workunit.client.1.vm05.stdout:1/999: symlink d4/df/de0/d82/l127 0 2026-03-10T10:20:21.040 INFO:tasks.workunit.client.0.vm02.stdout:2/955: creat d0/d71/d108/d65/dc4/dfa/dd3/de8/f142 x:0 0 0 2026-03-10T10:20:21.043 INFO:tasks.workunit.client.0.vm02.stdout:0/971: write d9/d18/d1a/d22/d24/d80/d49/f109 [984771,55945] 0 2026-03-10T10:20:21.045 INFO:tasks.workunit.client.0.vm02.stdout:9/895: write da/d3c/d4c/d2c/d34/d35/fd2 [59178,119997] 0 2026-03-10T10:20:21.049 INFO:tasks.workunit.client.0.vm02.stdout:3/936: unlink d1/d20/d52/f92 0 2026-03-10T10:20:21.049 INFO:tasks.workunit.client.1.vm05.stdout:4/813: mkdir d1/d3/d115 0 2026-03-10T10:20:21.049 INFO:tasks.workunit.client.1.vm05.stdout:3/967: creat dd/d20/d9e/dc0/ddd/f157 x:0 0 0 2026-03-10T10:20:21.050 INFO:tasks.workunit.client.1.vm05.stdout:4/814: chown d1/d31/d4b/d6d 45 1 2026-03-10T10:20:21.052 INFO:tasks.workunit.client.0.vm02.stdout:2/956: rename d0/d71/d108/d65/dc4/dfa/d80/la7 to d0/d71/d108/d65/dc4/dfa/l143 0 2026-03-10T10:20:21.053 INFO:tasks.workunit.client.1.vm05.stdout:4/815: chown d1/d31/d4b/ff0 466287586 1 2026-03-10T10:20:21.054 INFO:tasks.workunit.client.1.vm05.stdout:0/948: getdents d1/d2/d9/d31/d13/da2/dab/dce/d106/d135/d11d 0 2026-03-10T10:20:21.055 INFO:tasks.workunit.client.0.vm02.stdout:0/972: rmdir d9/d18/d1a/d3c 39 2026-03-10T10:20:21.056 INFO:tasks.workunit.client.0.vm02.stdout:1/972: rmdir d4/da/d27/d38/d3c 39 2026-03-10T10:20:21.061 INFO:tasks.workunit.client.0.vm02.stdout:6/906: link d0/d8/d9/c16 d0/d8/d29/d6d/c134 0 2026-03-10T10:20:21.062 INFO:tasks.workunit.client.1.vm05.stdout:3/968: sync 2026-03-10T10:20:21.063 INFO:tasks.workunit.client.1.vm05.stdout:9/813: truncate d0/d1/d13/de/d93/fbd 639179 0 2026-03-10T10:20:21.064 
INFO:tasks.workunit.client.1.vm05.stdout:8/848: write d7/d14/d24/f34 [551006,92574] 0 2026-03-10T10:20:21.064 INFO:tasks.workunit.client.1.vm05.stdout:2/878: write db/d28/f98 [530945,94067] 0 2026-03-10T10:20:21.072 INFO:tasks.workunit.client.1.vm05.stdout:0/949: rename d1/d2/d9/d31/d13/d17/da1/dbd/f127 to d1/dd7/f13b 0 2026-03-10T10:20:21.075 INFO:tasks.workunit.client.0.vm02.stdout:6/907: mknod d0/d87/c135 0 2026-03-10T10:20:21.077 INFO:tasks.workunit.client.0.vm02.stdout:9/896: rename c0 to da/d3c/d4c/db1/d112/c123 0 2026-03-10T10:20:21.082 INFO:tasks.workunit.client.1.vm05.stdout:7/989: write d5/f76 [684472,92296] 0 2026-03-10T10:20:21.082 INFO:tasks.workunit.client.1.vm05.stdout:5/935: dwrite da/d9a/dc7/db4/f104 [0,4194304] 0 2026-03-10T10:20:21.083 INFO:tasks.workunit.client.0.vm02.stdout:7/941: write d1/d1b/fe8 [845234,112398] 0 2026-03-10T10:20:21.091 INFO:tasks.workunit.client.0.vm02.stdout:3/937: rename d1/d8/d21/d73/d78/d84/dfa/l11a to d1/d8/d21/d73/d78/d84/l137 0 2026-03-10T10:20:21.092 INFO:tasks.workunit.client.0.vm02.stdout:3/938: readlink d1/d8/d86/db1/dbc/l113 0 2026-03-10T10:20:21.093 INFO:tasks.workunit.client.1.vm05.stdout:4/816: truncate d1/d64/da9/fc0 402712 0 2026-03-10T10:20:21.096 INFO:tasks.workunit.client.0.vm02.stdout:8/920: truncate d1/d2/f36 2992823 0 2026-03-10T10:20:21.097 INFO:tasks.workunit.client.1.vm05.stdout:4/817: read d1/d64/fa1 [1324477,16615] 0 2026-03-10T10:20:21.097 INFO:tasks.workunit.client.1.vm05.stdout:4/818: readlink d1/d3/l71 0 2026-03-10T10:20:21.097 INFO:tasks.workunit.client.1.vm05.stdout:6/926: dwrite dd/d36/d3f/d12/d44/d2a/d3d/ffa [0,4194304] 0 2026-03-10T10:20:21.098 INFO:tasks.workunit.client.0.vm02.stdout:2/957: dread d0/d71/d108/d65/dc4/dfa/dd3/de8/d105/f10e [0,4194304] 0 2026-03-10T10:20:21.099 INFO:tasks.workunit.client.0.vm02.stdout:1/973: dread d4/da/d1a/d11d/d53/f74 [4194304,4194304] 0 2026-03-10T10:20:21.100 INFO:tasks.workunit.client.0.vm02.stdout:1/974: chown d4/laf 206 1 2026-03-10T10:20:21.108 
INFO:tasks.workunit.client.0.vm02.stdout:8/921: symlink d1/d1c/d43/d6a/da8/d56/l11a 0 2026-03-10T10:20:21.109 INFO:tasks.workunit.client.1.vm05.stdout:0/950: truncate d1/dd7/f11b 2857838 0 2026-03-10T10:20:21.113 INFO:tasks.workunit.client.0.vm02.stdout:9/897: dread da/d3c/d4c/d56/f61 [0,4194304] 0 2026-03-10T10:20:21.118 INFO:tasks.workunit.client.0.vm02.stdout:2/958: chown d0/d10/f6b 845077 1 2026-03-10T10:20:21.123 INFO:tasks.workunit.client.1.vm05.stdout:7/990: rename d5/d26/fec to d5/d1d/d29/d3e/d8c/d82/d90/f130 0 2026-03-10T10:20:21.123 INFO:tasks.workunit.client.0.vm02.stdout:1/975: creat d4/dc3/dd6/f13a x:0 0 0 2026-03-10T10:20:21.124 INFO:tasks.workunit.client.1.vm05.stdout:5/936: creat da/d96/d117/f140 x:0 0 0 2026-03-10T10:20:21.128 INFO:tasks.workunit.client.0.vm02.stdout:9/898: creat da/d3c/d4c/d38/d4a/d99/f124 x:0 0 0 2026-03-10T10:20:21.132 INFO:tasks.workunit.client.0.vm02.stdout:1/976: creat d4/dc3/df0/f13b x:0 0 0 2026-03-10T10:20:21.135 INFO:tasks.workunit.client.0.vm02.stdout:8/922: dread d1/d1c/d43/d5b/d88/dac/f5a [0,4194304] 0 2026-03-10T10:20:21.136 INFO:tasks.workunit.client.0.vm02.stdout:6/908: link d0/d8/d9/d7a/l11c d0/d8/d29/d94/d9a/l136 0 2026-03-10T10:20:21.138 INFO:tasks.workunit.client.0.vm02.stdout:0/973: write d9/d18/d1a/d22/d24/d8e/d9b/f102 [706128,24432] 0 2026-03-10T10:20:21.139 INFO:tasks.workunit.client.0.vm02.stdout:0/974: chown d9/d18/dc7/dca 10357186 1 2026-03-10T10:20:21.141 INFO:tasks.workunit.client.1.vm05.stdout:2/879: write db/d12/f31 [3090232,29745] 0 2026-03-10T10:20:21.142 INFO:tasks.workunit.client.1.vm05.stdout:9/814: dwrite d0/d1/d13/de/f5b [0,4194304] 0 2026-03-10T10:20:21.156 INFO:tasks.workunit.client.0.vm02.stdout:3/939: write d1/f6a [132876,64841] 0 2026-03-10T10:20:21.161 INFO:tasks.workunit.client.0.vm02.stdout:8/923: symlink d1/d1c/d43/d6a/da8/d56/l11b 0 2026-03-10T10:20:21.163 INFO:tasks.workunit.client.1.vm05.stdout:8/849: dwrite d7/d14/d3a/d49/f72 [0,4194304] 0 2026-03-10T10:20:21.165 
INFO:tasks.workunit.client.0.vm02.stdout:6/909: creat d0/d87/f137 x:0 0 0 2026-03-10T10:20:21.165 INFO:tasks.workunit.client.1.vm05.stdout:8/850: truncate d7/d14/d15/da7/def/f107 526212 0 2026-03-10T10:20:21.177 INFO:tasks.workunit.client.0.vm02.stdout:2/959: dwrite d0/f2c [0,4194304] 0 2026-03-10T10:20:21.180 INFO:tasks.workunit.client.0.vm02.stdout:9/899: symlink da/d3c/d4c/d38/da6/d118/l125 0 2026-03-10T10:20:21.184 INFO:tasks.workunit.client.0.vm02.stdout:3/940: creat d1/d8/d86/db1/f138 x:0 0 0 2026-03-10T10:20:21.184 INFO:tasks.workunit.client.0.vm02.stdout:3/941: readlink d1/d8/d21/d73/d78/l105 0 2026-03-10T10:20:21.190 INFO:tasks.workunit.client.0.vm02.stdout:1/977: rename d4/da/d27/d38/d3c/fa7 to d4/da/d1a/d22/f13c 0 2026-03-10T10:20:21.193 INFO:tasks.workunit.client.0.vm02.stdout:6/910: symlink d0/d8/d29/d52/de8/db2/l138 0 2026-03-10T10:20:21.197 INFO:tasks.workunit.client.0.vm02.stdout:2/960: rmdir d0/d10/d81 39 2026-03-10T10:20:21.205 INFO:tasks.workunit.client.0.vm02.stdout:9/900: write da/d3c/d4c/d38/d4a/d70/d10a/f11d [4124286,8633] 0 2026-03-10T10:20:21.211 INFO:tasks.workunit.client.0.vm02.stdout:7/942: getdents d1/dc 0 2026-03-10T10:20:21.215 INFO:tasks.workunit.client.1.vm05.stdout:3/969: link dd/d39/f6f dd/d15/d24/d2c/dd0/dd9/d103/f158 0 2026-03-10T10:20:21.219 INFO:tasks.workunit.client.0.vm02.stdout:0/975: dwrite d9/d18/d1a/d22/d24/d80/fe4 [0,4194304] 0 2026-03-10T10:20:21.220 INFO:tasks.workunit.client.0.vm02.stdout:8/924: write d1/d1c/d23/d25/f3d [1143096,1564] 0 2026-03-10T10:20:21.222 INFO:tasks.workunit.client.0.vm02.stdout:1/978: write d4/da/d1a/d47/d78/fdc [538289,31294] 0 2026-03-10T10:20:21.223 INFO:tasks.workunit.client.0.vm02.stdout:6/911: truncate d0/d8/d9/f82 2554195 0 2026-03-10T10:20:21.232 INFO:tasks.workunit.client.0.vm02.stdout:3/942: symlink d1/d8/d86/db1/d116/l139 0 2026-03-10T10:20:21.232 INFO:tasks.workunit.client.1.vm05.stdout:6/927: dwrite dd/d36/d3f/dbd/f114 [0,4194304] 0 2026-03-10T10:20:21.232 
INFO:tasks.workunit.client.0.vm02.stdout:3/943: read - d1/d8/d44/f127 zero size 2026-03-10T10:20:21.234 INFO:tasks.workunit.client.1.vm05.stdout:5/937: unlink da/db/d26/c37 0 2026-03-10T10:20:21.241 INFO:tasks.workunit.client.0.vm02.stdout:0/976: creat d9/f13d x:0 0 0 2026-03-10T10:20:21.245 INFO:tasks.workunit.client.0.vm02.stdout:1/979: chown d4/d4a/c64 27932 1 2026-03-10T10:20:21.247 INFO:tasks.workunit.client.1.vm05.stdout:4/819: symlink d1/d31/dc/l116 0 2026-03-10T10:20:21.247 INFO:tasks.workunit.client.1.vm05.stdout:0/951: symlink d1/d2/d9/d31/d13/da2/dab/l13c 0 2026-03-10T10:20:21.250 INFO:tasks.workunit.client.0.vm02.stdout:0/977: creat d9/d18/d1a/d22/d24/d80/d57/d107/f13e x:0 0 0 2026-03-10T10:20:21.250 INFO:tasks.workunit.client.1.vm05.stdout:7/991: mkdir d5/d17/dae/d119/d131 0 2026-03-10T10:20:21.251 INFO:tasks.workunit.client.1.vm05.stdout:7/992: read d5/d26/f33 [6491503,24288] 0 2026-03-10T10:20:21.255 INFO:tasks.workunit.client.0.vm02.stdout:8/925: rename d1/d1c/c1a to d1/d1c/d23/d25/c11c 0 2026-03-10T10:20:21.255 INFO:tasks.workunit.client.1.vm05.stdout:8/851: symlink d7/d14/d3a/l113 0 2026-03-10T10:20:21.258 INFO:tasks.workunit.client.0.vm02.stdout:3/944: creat d1/d8/d21/d73/d78/f13a x:0 0 0 2026-03-10T10:20:21.259 INFO:tasks.workunit.client.0.vm02.stdout:3/945: truncate d1/d8/d21/d73/d78/f13a 489799 0 2026-03-10T10:20:21.259 INFO:tasks.workunit.client.0.vm02.stdout:3/946: read - d1/d8/d86/db1/f138 zero size 2026-03-10T10:20:21.263 INFO:tasks.workunit.client.0.vm02.stdout:2/961: dread d0/d8c/dc5/f122 [0,4194304] 0 2026-03-10T10:20:21.263 INFO:tasks.workunit.client.0.vm02.stdout:2/962: write d0/d8c/fa2 [1003804,32831] 0 2026-03-10T10:20:21.264 INFO:tasks.workunit.client.0.vm02.stdout:2/963: chown d0/d71/d108/d65/dc4/le9 6155 1 2026-03-10T10:20:21.265 INFO:tasks.workunit.client.1.vm05.stdout:0/952: truncate d1/d2/d9/d31/d13/d15/d4e/df6/f102 875836 0 2026-03-10T10:20:21.272 INFO:tasks.workunit.client.1.vm05.stdout:3/970: rename dd/d15/d24/d8e to 
dd/d20/d159 0 2026-03-10T10:20:21.272 INFO:tasks.workunit.client.1.vm05.stdout:3/971: stat dd/d15/d4c/f73 0 2026-03-10T10:20:21.273 INFO:tasks.workunit.client.1.vm05.stdout:3/972: chown dd/d15/d24/dc8 1396104933 1 2026-03-10T10:20:21.277 INFO:tasks.workunit.client.1.vm05.stdout:0/953: sync 2026-03-10T10:20:21.278 INFO:tasks.workunit.client.0.vm02.stdout:2/964: dread d0/d10/f6b [4194304,4194304] 0 2026-03-10T10:20:21.283 INFO:tasks.workunit.client.1.vm05.stdout:6/928: getdents dd/d36/d3f/d12/d59 0 2026-03-10T10:20:21.284 INFO:tasks.workunit.client.1.vm05.stdout:4/820: creat d1/d3/f117 x:0 0 0 2026-03-10T10:20:21.284 INFO:tasks.workunit.client.0.vm02.stdout:2/965: rmdir d0/d71/d108/d65/dc4/dfa/df1 39 2026-03-10T10:20:21.284 INFO:tasks.workunit.client.0.vm02.stdout:2/966: stat d0/d10/cb8 0 2026-03-10T10:20:21.285 INFO:tasks.workunit.client.1.vm05.stdout:4/821: chown d1/d31/d76/dac/db8 9 1 2026-03-10T10:20:21.286 INFO:tasks.workunit.client.1.vm05.stdout:5/938: getdents da/d9a/dc7 0 2026-03-10T10:20:21.290 INFO:tasks.workunit.client.0.vm02.stdout:2/967: dwrite d0/f36 [8388608,4194304] 0 2026-03-10T10:20:21.294 INFO:tasks.workunit.client.1.vm05.stdout:0/954: dread d1/d2/d9/d31/d12/d41/f6d [0,4194304] 0 2026-03-10T10:20:21.295 INFO:tasks.workunit.client.1.vm05.stdout:5/939: dread - da/db/d28/d8a/de3/f121 zero size 2026-03-10T10:20:21.296 INFO:tasks.workunit.client.1.vm05.stdout:0/955: stat d1/d2/d9/d31/d13/da2/dab/lbb 0 2026-03-10T10:20:21.298 INFO:tasks.workunit.client.0.vm02.stdout:2/968: dwrite d0/d71/d108/d65/dc4/dfa/f34 [0,4194304] 0 2026-03-10T10:20:21.302 INFO:tasks.workunit.client.0.vm02.stdout:2/969: readlink d0/d71/d108/d65/dc4/dfa/d80/d10f/l9c 0 2026-03-10T10:20:21.303 INFO:tasks.workunit.client.1.vm05.stdout:3/973: creat dd/f15a x:0 0 0 2026-03-10T10:20:21.303 INFO:tasks.workunit.client.1.vm05.stdout:3/974: stat dd/d15/d69/f99 0 2026-03-10T10:20:21.303 INFO:tasks.workunit.client.1.vm05.stdout:4/822: truncate d1/d31/dc/d40/d45/f48 1993703 0 
2026-03-10T10:20:21.303 INFO:tasks.workunit.client.0.vm02.stdout:2/970: truncate d0/d71/d108/d65/dc4/dfa/dbf/f13f 46590 0 2026-03-10T10:20:21.304 INFO:tasks.workunit.client.0.vm02.stdout:2/971: chown d0/d71/d108/d65/dc4/dfa/d80/d10f/c73 0 1 2026-03-10T10:20:21.305 INFO:tasks.workunit.client.0.vm02.stdout:2/972: chown d0/d1a/d49/l9a 21 1 2026-03-10T10:20:21.313 INFO:tasks.workunit.client.0.vm02.stdout:9/901: dwrite da/d3c/d4c/d38/d82/d89/fe9 [0,4194304] 0 2026-03-10T10:20:21.314 INFO:tasks.workunit.client.1.vm05.stdout:4/823: dwrite f0 [4194304,4194304] 0 2026-03-10T10:20:21.321 INFO:tasks.workunit.client.1.vm05.stdout:4/824: dwrite d1/d64/f112 [0,4194304] 0 2026-03-10T10:20:21.325 INFO:tasks.workunit.client.1.vm05.stdout:0/956: unlink d1/d2/d39/d6e/d95/la6 0 2026-03-10T10:20:21.329 INFO:tasks.workunit.client.1.vm05.stdout:3/975: mknod dd/d39/d5f/df7/c15b 0 2026-03-10T10:20:21.329 INFO:tasks.workunit.client.1.vm05.stdout:2/880: write db/d28/d4f/f75 [896477,25338] 0 2026-03-10T10:20:21.330 INFO:tasks.workunit.client.0.vm02.stdout:9/902: fsync da/f5c 0 2026-03-10T10:20:21.330 INFO:tasks.workunit.client.0.vm02.stdout:9/903: stat da/f5c 0 2026-03-10T10:20:21.336 INFO:tasks.workunit.client.1.vm05.stdout:9/815: write d0/df/f97 [850995,60917] 0 2026-03-10T10:20:21.337 INFO:tasks.workunit.client.1.vm05.stdout:9/816: readlink d0/d1/d13/d26/l41 0 2026-03-10T10:20:21.337 INFO:tasks.workunit.client.1.vm05.stdout:4/825: unlink d1/fc7 0 2026-03-10T10:20:21.337 INFO:tasks.workunit.client.1.vm05.stdout:0/957: mknod d1/d2/d9/d31/d13/d15/d114/c13d 0 2026-03-10T10:20:21.338 INFO:tasks.workunit.client.0.vm02.stdout:7/943: truncate d1/dc/d16/d28/f108 1077706 0 2026-03-10T10:20:21.340 INFO:tasks.workunit.client.1.vm05.stdout:2/881: dread - db/d2d/d5e/f100 zero size 2026-03-10T10:20:21.341 INFO:tasks.workunit.client.0.vm02.stdout:7/944: mkdir d1/dc/d16/d28/d2d/dae/d126 0 2026-03-10T10:20:21.341 INFO:tasks.workunit.client.1.vm05.stdout:2/882: read db/d61/d67/f77 [8710014,52361] 0 
2026-03-10T10:20:21.343 INFO:tasks.workunit.client.0.vm02.stdout:6/912: truncate d0/d8/d29/d6d/d96/de4/def/f73 1242657 0 2026-03-10T10:20:21.349 INFO:tasks.workunit.client.0.vm02.stdout:7/945: dread d1/d1b/d8f/f8c [0,4194304] 0 2026-03-10T10:20:21.352 INFO:tasks.workunit.client.1.vm05.stdout:5/940: dread da/db/d28/d32/f79 [0,4194304] 0 2026-03-10T10:20:21.355 INFO:tasks.workunit.client.0.vm02.stdout:6/913: truncate d0/d8/d29/d52/f63 391561 0 2026-03-10T10:20:21.355 INFO:tasks.workunit.client.1.vm05.stdout:0/958: unlink d1/d2/d9/d31/d54/f16 0 2026-03-10T10:20:21.357 INFO:tasks.workunit.client.0.vm02.stdout:7/946: unlink d1/d1b/d8f/d67/lfa 0 2026-03-10T10:20:21.358 INFO:tasks.workunit.client.0.vm02.stdout:6/914: symlink d0/d8/d9/d7a/dc0/l139 0 2026-03-10T10:20:21.366 INFO:tasks.workunit.client.0.vm02.stdout:6/915: dread - d0/d8/d29/d2f/ffe zero size 2026-03-10T10:20:21.366 INFO:tasks.workunit.client.0.vm02.stdout:6/916: chown d0/d8/d8c/l5b 1975782692 1 2026-03-10T10:20:21.367 INFO:tasks.workunit.client.0.vm02.stdout:6/917: chown d0/d8/d9/d7a/dc0 1688 1 2026-03-10T10:20:21.370 INFO:tasks.workunit.client.0.vm02.stdout:6/918: symlink d0/d8/d29/d52/l13a 0 2026-03-10T10:20:21.370 INFO:tasks.workunit.client.0.vm02.stdout:0/978: write d9/d34/d3d/d65/d89/df3/fbe [5220207,106000] 0 2026-03-10T10:20:21.371 INFO:tasks.workunit.client.1.vm05.stdout:0/959: creat d1/d2/d9/d31/d13/d17/da1/dee/f13e x:0 0 0 2026-03-10T10:20:21.371 INFO:tasks.workunit.client.1.vm05.stdout:0/960: fdatasync d1/d2/d9/d31/d13/da2/fd6 0 2026-03-10T10:20:21.375 INFO:tasks.workunit.client.0.vm02.stdout:0/979: dwrite d9/d18/d1a/d22/d24/f26 [0,4194304] 0 2026-03-10T10:20:21.377 INFO:tasks.workunit.client.1.vm05.stdout:7/993: dread d5/d1d/d29/f5c [0,4194304] 0 2026-03-10T10:20:21.390 INFO:tasks.workunit.client.0.vm02.stdout:0/980: creat d9/d34/d3d/d65/d89/dd3/da7/db7/f13f x:0 0 0 2026-03-10T10:20:21.393 INFO:tasks.workunit.client.0.vm02.stdout:7/947: dread d1/dc/d10/df5/f11e [0,4194304] 0 
2026-03-10T10:20:21.395 INFO:tasks.workunit.client.0.vm02.stdout:6/919: dread d0/d8/d29/d2f/f67 [0,4194304] 0 2026-03-10T10:20:21.399 INFO:tasks.workunit.client.0.vm02.stdout:6/920: read - d0/d8/d29/dce/f126 zero size 2026-03-10T10:20:21.399 INFO:tasks.workunit.client.0.vm02.stdout:0/981: dwrite d9/d18/d1a/f88 [0,4194304] 0 2026-03-10T10:20:21.399 INFO:tasks.workunit.client.0.vm02.stdout:6/921: fsync d0/d8/d29/d2f/d50/d10f/d12a/f12b 0 2026-03-10T10:20:21.402 INFO:tasks.workunit.client.1.vm05.stdout:4/826: link d1/d31/c91 d1/d31/d76/c118 0 2026-03-10T10:20:21.403 INFO:tasks.workunit.client.0.vm02.stdout:0/982: dwrite d9/d34/d3d/d65/d89/fcd [0,4194304] 0 2026-03-10T10:20:21.410 INFO:tasks.workunit.client.1.vm05.stdout:2/883: link db/d28/l38 db/d28/dbc/l11b 0 2026-03-10T10:20:21.412 INFO:tasks.workunit.client.0.vm02.stdout:7/948: mknod d1/dc/d16/d28/d2d/dae/d126/c127 0 2026-03-10T10:20:21.412 INFO:tasks.workunit.client.1.vm05.stdout:9/817: link d0/d70/fb6 d0/d1/d13/f11a 0 2026-03-10T10:20:21.413 INFO:tasks.workunit.client.0.vm02.stdout:7/949: stat d1/d1b/d8f/c65 0 2026-03-10T10:20:21.416 INFO:tasks.workunit.client.0.vm02.stdout:0/983: mknod d9/d18/d1a/d22/d24/d8e/c140 0 2026-03-10T10:20:21.418 INFO:tasks.workunit.client.1.vm05.stdout:7/994: link d5/d1d/d29/fb7 d5/d1d/d29/d3e/d8c/d82/d90/d9a/d109/f132 0 2026-03-10T10:20:21.418 INFO:tasks.workunit.client.1.vm05.stdout:9/818: mkdir d0/df/df1/d11b 0 2026-03-10T10:20:21.419 INFO:tasks.workunit.client.0.vm02.stdout:0/984: unlink d9/d18/d1a/d22/d24/d8e/l98 0 2026-03-10T10:20:21.420 INFO:tasks.workunit.client.1.vm05.stdout:2/884: fdatasync db/d28/d4f/d59/da4/fca 0 2026-03-10T10:20:21.421 INFO:tasks.workunit.client.1.vm05.stdout:7/995: symlink d5/d1d/d20/d2d/d80/dd6/d11a/l133 0 2026-03-10T10:20:21.423 INFO:tasks.workunit.client.1.vm05.stdout:4/827: rename d1/dfd/l10a to d1/d31/dc/l119 0 2026-03-10T10:20:21.425 INFO:tasks.workunit.client.1.vm05.stdout:2/885: creat db/d2d/dc6/dc7/f11c x:0 0 0 2026-03-10T10:20:21.426 
INFO:tasks.workunit.client.1.vm05.stdout:9/819: truncate d0/d1/d13/de/d21/ff7 773821 0 2026-03-10T10:20:21.426 INFO:tasks.workunit.client.1.vm05.stdout:4/828: creat d1/d31/dc/d40/d63/f11a x:0 0 0 2026-03-10T10:20:21.434 INFO:tasks.workunit.client.1.vm05.stdout:2/886: truncate db/d28/d4f/da3/f116 933949 0 2026-03-10T10:20:21.447 INFO:tasks.workunit.client.1.vm05.stdout:4/829: dread d1/d31/dc/d40/d63/f94 [0,4194304] 0 2026-03-10T10:20:21.448 INFO:tasks.workunit.client.1.vm05.stdout:2/887: sync 2026-03-10T10:20:21.449 INFO:tasks.workunit.client.0.vm02.stdout:1/980: write d4/f7a [632838,80704] 0 2026-03-10T10:20:21.452 INFO:tasks.workunit.client.0.vm02.stdout:8/926: dwrite d1/d1c/f9a [0,4194304] 0 2026-03-10T10:20:21.455 INFO:tasks.workunit.client.1.vm05.stdout:2/888: symlink db/d28/d4f/d59/d94/dfe/l11d 0 2026-03-10T10:20:21.455 INFO:tasks.workunit.client.1.vm05.stdout:2/889: chown db/d28/d4f/d59/da4/c108 32467736 1 2026-03-10T10:20:21.457 INFO:tasks.workunit.client.1.vm05.stdout:2/890: creat db/d28/d4f/d59/da4/d114/f11e x:0 0 0 2026-03-10T10:20:21.461 INFO:tasks.workunit.client.0.vm02.stdout:7/950: dread d1/d1b/d8f/dad/f75 [0,4194304] 0 2026-03-10T10:20:21.461 INFO:tasks.workunit.client.0.vm02.stdout:7/951: chown d1/dc/d60/cc7 133166 1 2026-03-10T10:20:21.467 INFO:tasks.workunit.client.1.vm05.stdout:2/891: unlink db/d28/d4f/d59/da4/d81/da7/cde 0 2026-03-10T10:20:21.470 INFO:tasks.workunit.client.0.vm02.stdout:1/981: getdents d4/da/d1a/d47 0 2026-03-10T10:20:21.499 INFO:tasks.workunit.client.1.vm05.stdout:7/996: dread d5/d1d/d20/d2d/f30 [0,4194304] 0 2026-03-10T10:20:21.502 INFO:tasks.workunit.client.1.vm05.stdout:7/997: mkdir d5/d26/db2/d134 0 2026-03-10T10:20:21.503 INFO:tasks.workunit.client.1.vm05.stdout:7/998: write d5/d1d/d20/fa2 [217882,56268] 0 2026-03-10T10:20:21.504 INFO:tasks.workunit.client.1.vm05.stdout:7/999: write d5/d1d/d20/d35/f11c [834805,72717] 0 2026-03-10T10:20:21.523 INFO:tasks.workunit.client.0.vm02.stdout:8/927: sync 2026-03-10T10:20:21.524 
INFO:tasks.workunit.client.0.vm02.stdout:1/982: sync 2026-03-10T10:20:21.524 INFO:tasks.workunit.client.0.vm02.stdout:8/928: stat d1/d1c/d43/d6a/da8/d56/db5/l116 0 2026-03-10T10:20:21.530 INFO:tasks.workunit.client.0.vm02.stdout:8/929: fdatasync d1/d1c/d43/d5b/f63 0 2026-03-10T10:20:21.534 INFO:tasks.workunit.client.0.vm02.stdout:8/930: truncate d1/d1c/d23/f9d 156674 0 2026-03-10T10:20:21.536 INFO:tasks.workunit.client.0.vm02.stdout:8/931: symlink d1/d1c/d43/d6a/d7c/l11d 0 2026-03-10T10:20:21.540 INFO:tasks.workunit.client.0.vm02.stdout:8/932: mkdir d1/d1c/d43/d5b/d88/dac/d11e 0 2026-03-10T10:20:21.573 INFO:tasks.workunit.client.0.vm02.stdout:3/947: write d1/ff5 [421970,59972] 0 2026-03-10T10:20:21.574 INFO:tasks.workunit.client.1.vm05.stdout:8/852: write d7/d14/d62/d90/ff8 [940436,45850] 0 2026-03-10T10:20:21.577 INFO:tasks.workunit.client.1.vm05.stdout:6/929: dwrite dd/d36/f71 [0,4194304] 0 2026-03-10T10:20:21.579 INFO:tasks.workunit.client.0.vm02.stdout:3/948: creat d1/d8/d86/db1/f13b x:0 0 0 2026-03-10T10:20:21.582 INFO:tasks.workunit.client.1.vm05.stdout:8/853: dwrite d7/d14/d15/f51 [4194304,4194304] 0 2026-03-10T10:20:21.583 INFO:tasks.workunit.client.0.vm02.stdout:2/973: dwrite d0/d1a/d49/fc8 [0,4194304] 0 2026-03-10T10:20:21.588 INFO:tasks.workunit.client.1.vm05.stdout:4/830: write d1/d64/f112 [4784903,12924] 0 2026-03-10T10:20:21.591 INFO:tasks.workunit.client.0.vm02.stdout:9/904: dwrite da/d3c/d4c/df6/ffb [0,4194304] 0 2026-03-10T10:20:21.602 INFO:tasks.workunit.client.0.vm02.stdout:3/949: symlink d1/d20/db2/dcb/l13c 0 2026-03-10T10:20:21.603 INFO:tasks.workunit.client.1.vm05.stdout:3/976: write dd/d15/d24/d2c/f3f [1617496,127616] 0 2026-03-10T10:20:21.608 INFO:tasks.workunit.client.1.vm05.stdout:8/854: dread - d7/d14/d3a/d49/d65/db8/fe1 zero size 2026-03-10T10:20:21.616 INFO:tasks.workunit.client.1.vm05.stdout:5/941: dwrite da/db/d26/f7e [4194304,4194304] 0 2026-03-10T10:20:21.620 INFO:tasks.workunit.client.1.vm05.stdout:3/977: rmdir dd/d15/d24/d74 39 
2026-03-10T10:20:21.626 INFO:tasks.workunit.client.1.vm05.stdout:6/930: symlink dd/d36/d3f/d12/d44/d2a/d77/l12e 0 2026-03-10T10:20:21.627 INFO:tasks.workunit.client.1.vm05.stdout:8/855: chown d7/d2f/d57/l8e 20 1 2026-03-10T10:20:21.632 INFO:tasks.workunit.client.0.vm02.stdout:6/922: dwrite d0/db9/fc5 [0,4194304] 0 2026-03-10T10:20:21.642 INFO:tasks.workunit.client.1.vm05.stdout:8/856: write d7/d14/d24/d3f/dc4/ffd [302519,102962] 0 2026-03-10T10:20:21.643 INFO:tasks.workunit.client.1.vm05.stdout:0/961: dwrite d1/d2/d9/d31/d54/f7f [0,4194304] 0 2026-03-10T10:20:21.651 INFO:tasks.workunit.client.1.vm05.stdout:5/942: fdatasync da/db/d28/d32/f105 0 2026-03-10T10:20:21.655 INFO:tasks.workunit.client.0.vm02.stdout:0/985: dwrite d9/d18/d1a/d3c/f113 [4194304,4194304] 0 2026-03-10T10:20:21.659 INFO:tasks.workunit.client.1.vm05.stdout:6/931: mknod dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/c12f 0 2026-03-10T10:20:21.660 INFO:tasks.workunit.client.1.vm05.stdout:6/932: chown dd/d1b/l54 1179612290 1 2026-03-10T10:20:21.665 INFO:tasks.workunit.client.1.vm05.stdout:8/857: creat d7/d14/d62/d90/dd3/f114 x:0 0 0 2026-03-10T10:20:21.666 INFO:tasks.workunit.client.1.vm05.stdout:3/978: symlink dd/d20/d9e/dc0/l15c 0 2026-03-10T10:20:21.668 INFO:tasks.workunit.client.0.vm02.stdout:6/923: rename d0/d8/d29/d52/de8 to d0/d8/d29/d2f/d13b 0 2026-03-10T10:20:21.668 INFO:tasks.workunit.client.1.vm05.stdout:9/820: dwrite d0/df/d11/dc6/ff4 [0,4194304] 0 2026-03-10T10:20:21.670 INFO:tasks.workunit.client.0.vm02.stdout:0/986: mknod d9/d34/d120/c141 0 2026-03-10T10:20:21.675 INFO:tasks.workunit.client.1.vm05.stdout:4/831: getdents d1/d64/da9/dae 0 2026-03-10T10:20:21.677 INFO:tasks.workunit.client.0.vm02.stdout:7/952: dwrite d1/dc/d16/d28/d2d/fb0 [0,4194304] 0 2026-03-10T10:20:21.690 INFO:tasks.workunit.client.0.vm02.stdout:3/950: dread d1/d6/d8e/fa0 [0,4194304] 0 2026-03-10T10:20:21.692 INFO:tasks.workunit.client.0.vm02.stdout:0/987: sync 2026-03-10T10:20:21.692 
INFO:tasks.workunit.client.0.vm02.stdout:0/988: read d9/d18/d1a/d22/d24/d80/fe4 [32491,99717] 0 2026-03-10T10:20:21.698 INFO:tasks.workunit.client.1.vm05.stdout:6/933: rename dd/d36/d3f/d12/d96/fcd to dd/d36/d3f/d12/d58/db8/f130 0 2026-03-10T10:20:21.702 INFO:tasks.workunit.client.0.vm02.stdout:7/953: creat d1/dc/d16/d28/d2d/f128 x:0 0 0 2026-03-10T10:20:21.702 INFO:tasks.workunit.client.1.vm05.stdout:8/858: symlink d7/d14/d24/d3f/d4f/l115 0 2026-03-10T10:20:21.705 INFO:tasks.workunit.client.0.vm02.stdout:1/983: dwrite d4/dc3/dd4/f110 [0,4194304] 0 2026-03-10T10:20:21.706 INFO:tasks.workunit.client.1.vm05.stdout:2/892: dwrite db/f25 [4194304,4194304] 0 2026-03-10T10:20:21.709 INFO:tasks.workunit.client.0.vm02.stdout:0/989: mknod d9/d34/d3d/d65/d89/df3/c142 0 2026-03-10T10:20:21.710 INFO:tasks.workunit.client.0.vm02.stdout:0/990: chown d9/d34/d3d/d65/d89/dd3/f118 154390 1 2026-03-10T10:20:21.712 INFO:tasks.workunit.client.0.vm02.stdout:6/924: dread d0/f20 [0,4194304] 0 2026-03-10T10:20:21.721 INFO:tasks.workunit.client.0.vm02.stdout:7/954: mkdir d1/d1b/d8f/dad/d7e/dd2/d129 0 2026-03-10T10:20:21.724 INFO:tasks.workunit.client.0.vm02.stdout:3/951: dread d1/d8/d86/f111 [0,4194304] 0 2026-03-10T10:20:21.732 INFO:tasks.workunit.client.0.vm02.stdout:1/984: creat d4/da/d1a/d22/f13d x:0 0 0 2026-03-10T10:20:21.732 INFO:tasks.workunit.client.0.vm02.stdout:1/985: readlink d4/ld 0 2026-03-10T10:20:21.736 INFO:tasks.workunit.client.1.vm05.stdout:9/821: creat d0/d1/f11c x:0 0 0 2026-03-10T10:20:21.745 INFO:tasks.workunit.client.0.vm02.stdout:0/991: creat d9/d34/f143 x:0 0 0 2026-03-10T10:20:21.745 INFO:tasks.workunit.client.0.vm02.stdout:7/955: symlink d1/dc/d55/d9c/l12a 0 2026-03-10T10:20:21.745 INFO:tasks.workunit.client.1.vm05.stdout:3/979: getdents dd/d15/d24/d2c/d6d/da7/dbb 0 2026-03-10T10:20:21.745 INFO:tasks.workunit.client.1.vm05.stdout:3/980: chown dd/d39/d5c/l5d 9083491 1 2026-03-10T10:20:21.745 INFO:tasks.workunit.client.1.vm05.stdout:3/981: chown dd/d15/f6a 3211812 1 
2026-03-10T10:20:21.746 INFO:tasks.workunit.client.1.vm05.stdout:9/822: mkdir d0/d1/d13/de/ddf/df5/d11d 0 2026-03-10T10:20:21.750 INFO:tasks.workunit.client.1.vm05.stdout:3/982: write dd/d15/d1f/f53 [1010826,117974] 0 2026-03-10T10:20:21.751 INFO:tasks.workunit.client.1.vm05.stdout:3/983: chown dd/d15/d24/d2c/d3b 238 1 2026-03-10T10:20:21.755 INFO:tasks.workunit.client.1.vm05.stdout:3/984: dwrite dd/d39/d66/f7e [0,4194304] 0 2026-03-10T10:20:21.755 INFO:tasks.workunit.client.0.vm02.stdout:3/952: mkdir d1/d8/d86/da2/d12b/d13d 0 2026-03-10T10:20:21.758 INFO:tasks.workunit.client.1.vm05.stdout:9/823: symlink d0/d1/d13/l11e 0 2026-03-10T10:20:21.763 INFO:tasks.workunit.client.1.vm05.stdout:3/985: fsync dd/d15/d24/d2c/dd0/f102 0 2026-03-10T10:20:21.766 INFO:tasks.workunit.client.1.vm05.stdout:3/986: dread dd/d39/d5f/fa2 [0,4194304] 0 2026-03-10T10:20:21.779 INFO:tasks.workunit.client.1.vm05.stdout:9/824: creat d0/d1/d13/d55/f11f x:0 0 0 2026-03-10T10:20:21.781 INFO:tasks.workunit.client.1.vm05.stdout:3/987: creat dd/d15/d24/d74/f15d x:0 0 0 2026-03-10T10:20:21.785 INFO:tasks.workunit.client.1.vm05.stdout:9/825: link d0/dc4/d63/cbc d0/d1/d13/d26/c120 0 2026-03-10T10:20:21.790 INFO:tasks.workunit.client.0.vm02.stdout:0/992: truncate d9/d34/d3d/d65/d89/fbc 292670 0 2026-03-10T10:20:21.791 INFO:tasks.workunit.client.1.vm05.stdout:3/988: read dd/d15/d4c/f73 [4179826,129262] 0 2026-03-10T10:20:21.793 INFO:tasks.workunit.client.0.vm02.stdout:0/993: dwrite d9/d18/d1a/f12b [0,4194304] 0 2026-03-10T10:20:21.796 INFO:tasks.workunit.client.0.vm02.stdout:6/925: getdents d0/d8/d9 0 2026-03-10T10:20:21.799 INFO:tasks.workunit.client.0.vm02.stdout:6/926: mkdir d0/d8/d29/d6d/d13c 0 2026-03-10T10:20:21.811 INFO:tasks.workunit.client.1.vm05.stdout:3/989: dread dd/d15/d24/d2c/f32 [0,4194304] 0 2026-03-10T10:20:21.814 INFO:tasks.workunit.client.0.vm02.stdout:6/927: symlink d0/d8/d29/d2f/d50/d98/l13d 0 2026-03-10T10:20:21.816 INFO:tasks.workunit.client.1.vm05.stdout:3/990: mkdir 
dd/d20/d94/dba/d15e 0 2026-03-10T10:20:21.816 INFO:tasks.workunit.client.1.vm05.stdout:3/991: chown dd/d20/d56/d5e/f135 31253 1 2026-03-10T10:20:21.817 INFO:tasks.workunit.client.0.vm02.stdout:6/928: rmdir d0/d8/d29/d6d/d13c 0 2026-03-10T10:20:21.819 INFO:tasks.workunit.client.1.vm05.stdout:3/992: creat dd/d15/d24/d2c/d6d/da7/dbb/f15f x:0 0 0 2026-03-10T10:20:21.825 INFO:tasks.workunit.client.1.vm05.stdout:3/993: creat dd/d39/d13e/f160 x:0 0 0 2026-03-10T10:20:21.861 INFO:tasks.workunit.client.0.vm02.stdout:8/933: dwrite d1/f80 [0,4194304] 0 2026-03-10T10:20:21.864 INFO:tasks.workunit.client.1.vm05.stdout:6/934: dread dd/d36/d3f/d12/d44/daa/fbc [0,4194304] 0 2026-03-10T10:20:21.867 INFO:tasks.workunit.client.1.vm05.stdout:6/935: write dd/d36/d3f/d12/d44/d2a/d3d/d48/fb2 [231202,66733] 0 2026-03-10T10:20:21.870 INFO:tasks.workunit.client.1.vm05.stdout:6/936: symlink dd/d36/l131 0 2026-03-10T10:20:21.870 INFO:tasks.workunit.client.0.vm02.stdout:8/934: fdatasync d1/d2/dff/f105 0 2026-03-10T10:20:21.871 INFO:tasks.workunit.client.1.vm05.stdout:6/937: dread - dd/d36/d3f/d12/d44/d2a/fec zero size 2026-03-10T10:20:21.871 INFO:tasks.workunit.client.0.vm02.stdout:8/935: chown d1/d1c/d43/d6a/da8/d56/db5/l116 106 1 2026-03-10T10:20:21.871 INFO:tasks.workunit.client.1.vm05.stdout:6/938: stat dd/d36/d3f/d12/d44/daa/fbc 0 2026-03-10T10:20:21.873 INFO:tasks.workunit.client.1.vm05.stdout:6/939: symlink dd/d36/d3f/dbd/dd5/l132 0 2026-03-10T10:20:21.920 INFO:tasks.workunit.client.0.vm02.stdout:2/974: truncate d0/d1a/f31 1391814 0 2026-03-10T10:20:21.921 INFO:tasks.workunit.client.0.vm02.stdout:9/905: write da/d3c/d4c/fe0 [355031,82868] 0 2026-03-10T10:20:21.929 INFO:tasks.workunit.client.1.vm05.stdout:5/943: dwrite da/db/d28/d32/f105 [0,4194304] 0 2026-03-10T10:20:21.929 INFO:tasks.workunit.client.0.vm02.stdout:2/975: rename d0/d71/d108/d65/dc4/fdc to d0/d10/dee/f144 0 2026-03-10T10:20:21.929 INFO:tasks.workunit.client.0.vm02.stdout:2/976: stat d0/d8c/dc5 0 2026-03-10T10:20:21.930 
INFO:tasks.workunit.client.0.vm02.stdout:2/977: chown d0/d71/d108/d65/dc4/dfa/dd3/de8/d105 227622196 1 2026-03-10T10:20:21.932 INFO:tasks.workunit.client.1.vm05.stdout:0/962: write d1/d2/d9/d31/d13/fd2 [2850986,29760] 0 2026-03-10T10:20:21.932 INFO:tasks.workunit.client.0.vm02.stdout:9/906: symlink da/d3c/d4c/d2c/d34/dc2/l126 0 2026-03-10T10:20:21.933 INFO:tasks.workunit.client.0.vm02.stdout:2/978: read d0/d1a/d49/f4f [3216290,78596] 0 2026-03-10T10:20:21.936 INFO:tasks.workunit.client.1.vm05.stdout:0/963: dread - d1/d2/d9/d31/d12/d20/dbe/fc5 zero size 2026-03-10T10:20:21.936 INFO:tasks.workunit.client.1.vm05.stdout:4/832: dwrite d1/d3/f5 [0,4194304] 0 2026-03-10T10:20:21.947 INFO:tasks.workunit.client.0.vm02.stdout:9/907: creat da/d3c/d4c/d38/d4a/d70/d10a/f127 x:0 0 0 2026-03-10T10:20:21.951 INFO:tasks.workunit.client.0.vm02.stdout:1/986: write d4/d4a/fd2 [3039390,27454] 0 2026-03-10T10:20:21.951 INFO:tasks.workunit.client.1.vm05.stdout:8/859: dwrite d7/d14/d62/f9d [0,4194304] 0 2026-03-10T10:20:21.954 INFO:tasks.workunit.client.1.vm05.stdout:2/893: dwrite db/d2d/dc6/ff7 [0,4194304] 0 2026-03-10T10:20:21.967 INFO:tasks.workunit.client.0.vm02.stdout:1/987: creat d4/da/d27/d38/d3c/f13e x:0 0 0 2026-03-10T10:20:21.968 INFO:tasks.workunit.client.0.vm02.stdout:1/988: symlink d4/d1b/l13f 0 2026-03-10T10:20:21.968 INFO:tasks.workunit.client.1.vm05.stdout:8/860: dwrite d7/d14/d62/f9d [0,4194304] 0 2026-03-10T10:20:21.968 INFO:tasks.workunit.client.1.vm05.stdout:5/944: sync 2026-03-10T10:20:21.968 INFO:tasks.workunit.client.1.vm05.stdout:2/894: mknod db/d1c/d40/d80/c11f 0 2026-03-10T10:20:21.969 INFO:tasks.workunit.client.1.vm05.stdout:5/945: creat da/d96/dd9/f141 x:0 0 0 2026-03-10T10:20:21.969 INFO:tasks.workunit.client.1.vm05.stdout:4/833: mkdir d1/d31/dc/d40/d45/ded/d11b 0 2026-03-10T10:20:21.977 INFO:tasks.workunit.client.1.vm05.stdout:2/895: rename db/d12/d74 to db/d1c/d40/d62/d10c/d120 0 2026-03-10T10:20:21.978 INFO:tasks.workunit.client.1.vm05.stdout:2/896: chown 
db/d61/d67 575 1 2026-03-10T10:20:21.978 INFO:tasks.workunit.client.1.vm05.stdout:5/946: creat da/d9a/d120/f142 x:0 0 0 2026-03-10T10:20:21.978 INFO:tasks.workunit.client.1.vm05.stdout:4/834: symlink d1/d31/d4b/d6d/l11c 0 2026-03-10T10:20:21.979 INFO:tasks.workunit.client.1.vm05.stdout:8/861: sync 2026-03-10T10:20:21.980 INFO:tasks.workunit.client.1.vm05.stdout:2/897: creat db/d28/d4f/d8b/f121 x:0 0 0 2026-03-10T10:20:21.981 INFO:tasks.workunit.client.1.vm05.stdout:4/835: creat d1/d3/d65/de0/f11d x:0 0 0 2026-03-10T10:20:21.985 INFO:tasks.workunit.client.0.vm02.stdout:1/989: creat d4/da/d1a/d47/d88/f140 x:0 0 0 2026-03-10T10:20:21.999 INFO:tasks.workunit.client.1.vm05.stdout:8/862: link d7/d14/d24/l81 d7/d14/d24/d3f/l116 0 2026-03-10T10:20:21.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:21 vm05.local ceph-mon[59051]: pgmap v9: 65 pgs: 65 active+clean; 3.8 GiB data, 13 GiB used, 107 GiB / 120 GiB avail; 32 MiB/s rd, 78 MiB/s wr, 219 op/s 2026-03-10T10:20:21.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:21 vm05.local ceph-mon[59051]: Upgrade: Updating mgr.vm05.coparq 2026-03-10T10:20:21.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:21 vm05.local ceph-mon[59051]: Deploying daemon mgr.vm05.coparq on vm05 2026-03-10T10:20:21.999 INFO:tasks.workunit.client.1.vm05.stdout:8/863: rename d7/d14/d3a/d49/l85 to d7/d14/d62/l117 0 2026-03-10T10:20:21.999 INFO:tasks.workunit.client.1.vm05.stdout:8/864: dread d7/d14/d3a/d49/f72 [0,4194304] 0 2026-03-10T10:20:21.999 INFO:tasks.workunit.client.1.vm05.stdout:8/865: dread d7/d14/d24/d3f/feb [0,4194304] 0 2026-03-10T10:20:21.999 INFO:tasks.workunit.client.1.vm05.stdout:8/866: mkdir d7/d14/d24/d3f/d6a/d8a/d118 0 2026-03-10T10:20:22.000 INFO:tasks.workunit.client.1.vm05.stdout:4/836: sync 2026-03-10T10:20:22.001 INFO:tasks.workunit.client.1.vm05.stdout:4/837: stat d1/d64/f84 0 2026-03-10T10:20:22.004 INFO:tasks.workunit.client.1.vm05.stdout:4/838: fsync d1/d31/d76/faf 0 2026-03-10T10:20:22.005 
INFO:tasks.workunit.client.1.vm05.stdout:4/839: stat d1/d31/d4b/l61 0 2026-03-10T10:20:22.006 INFO:tasks.workunit.client.1.vm05.stdout:4/840: unlink d1/d3/fcb 0 2026-03-10T10:20:22.013 INFO:tasks.workunit.client.1.vm05.stdout:8/867: dread d7/d2f/f7e [0,4194304] 0 2026-03-10T10:20:22.015 INFO:tasks.workunit.client.1.vm05.stdout:8/868: mkdir d7/d14/d15/da7/d119 0 2026-03-10T10:20:22.017 INFO:tasks.workunit.client.1.vm05.stdout:8/869: rmdir d7/d14/d62/d90/dac 39 2026-03-10T10:20:22.018 INFO:tasks.workunit.client.1.vm05.stdout:8/870: mknod d7/d14/d15/da7/d119/c11a 0 2026-03-10T10:20:22.020 INFO:tasks.workunit.client.1.vm05.stdout:8/871: symlink d7/d14/d62/d90/dac/df4/d10c/l11b 0 2026-03-10T10:20:22.022 INFO:tasks.workunit.client.1.vm05.stdout:8/872: chown d7/d14/d24/d3f/feb 5009871 1 2026-03-10T10:20:22.024 INFO:tasks.workunit.client.1.vm05.stdout:8/873: truncate d7/d14/d24/d3f/d4f/fbf 570345 0 2026-03-10T10:20:22.027 INFO:tasks.workunit.client.1.vm05.stdout:8/874: dread d7/d14/d62/f9d [0,4194304] 0 2026-03-10T10:20:22.028 INFO:tasks.workunit.client.1.vm05.stdout:8/875: write d7/d14/d24/d3f/d6a/d8a/d96/f101 [576200,80038] 0 2026-03-10T10:20:22.030 INFO:tasks.workunit.client.1.vm05.stdout:8/876: chown d7/d14/d24/d3f/f7d 49648682 1 2026-03-10T10:20:22.030 INFO:tasks.workunit.client.1.vm05.stdout:8/877: stat d7/d14/d15/l30 0 2026-03-10T10:20:22.035 INFO:tasks.workunit.client.1.vm05.stdout:8/878: dwrite d7/d14/d15/f3c [12582912,4194304] 0 2026-03-10T10:20:22.038 INFO:tasks.workunit.client.1.vm05.stdout:8/879: mknod d7/d2f/d57/c11c 0 2026-03-10T10:20:22.045 INFO:tasks.workunit.client.1.vm05.stdout:8/880: mknod d7/d14/d10b/c11d 0 2026-03-10T10:20:22.047 INFO:tasks.workunit.client.1.vm05.stdout:8/881: unlink d7/d14/d24/d3f/d6a/l105 0 2026-03-10T10:20:22.048 INFO:tasks.workunit.client.1.vm05.stdout:8/882: fsync d7/d14/d62/d90/dd3/f114 0 2026-03-10T10:20:22.049 INFO:tasks.workunit.client.0.vm02.stdout:3/953: write d1/d8/f3f [918554,8375] 0 2026-03-10T10:20:22.050 
INFO:tasks.workunit.client.0.vm02.stdout:0/994: write d9/d18/d1a/d22/d24/d80/d49/feb [3956326,50621] 0 2026-03-10T10:20:22.051 INFO:tasks.workunit.client.1.vm05.stdout:9/826: dwrite d0/df/d74/f8a [0,4194304] 0 2026-03-10T10:20:22.055 INFO:tasks.workunit.client.0.vm02.stdout:3/954: dwrite d1/d8/d44/f127 [0,4194304] 0 2026-03-10T10:20:22.071 INFO:tasks.workunit.client.0.vm02.stdout:7/956: link d1/d1b/d8f/f59 d1/dc/d16/dfc/f12b 0 2026-03-10T10:20:22.072 INFO:tasks.workunit.client.0.vm02.stdout:9/908: write da/d3c/d53/fdc [112175,72077] 0 2026-03-10T10:20:22.072 INFO:tasks.workunit.client.0.vm02.stdout:2/979: write d0/d71/d108/d65/db0/fbd [777984,42390] 0 2026-03-10T10:20:22.073 INFO:tasks.workunit.client.1.vm05.stdout:3/994: dwrite dd/d15/d24/d2c/f7b [0,4194304] 0 2026-03-10T10:20:22.073 INFO:tasks.workunit.client.1.vm05.stdout:0/964: dwrite d1/d2/d9/d31/d13/da2/dab/dce/d106/d135/f44 [0,4194304] 0 2026-03-10T10:20:22.073 INFO:tasks.workunit.client.0.vm02.stdout:6/929: dwrite d0/d8/fff [0,4194304] 0 2026-03-10T10:20:22.074 INFO:tasks.workunit.client.0.vm02.stdout:8/936: dwrite d1/d1c/f33 [8388608,4194304] 0 2026-03-10T10:20:22.089 INFO:tasks.workunit.client.1.vm05.stdout:8/883: symlink d7/d14/d24/d3f/df0/l11e 0 2026-03-10T10:20:22.093 INFO:tasks.workunit.client.0.vm02.stdout:3/955: chown d1/d58/d104/c115 16257 1 2026-03-10T10:20:22.099 INFO:tasks.workunit.client.1.vm05.stdout:0/965: rename d1/d2/d9/d31/d13/c76 to d1/d2/d9/d31/d13/d17/da1/df5/c13f 0 2026-03-10T10:20:22.106 INFO:tasks.workunit.client.0.vm02.stdout:2/980: read d0/f9 [2971960,51618] 0 2026-03-10T10:20:22.109 INFO:tasks.workunit.client.0.vm02.stdout:9/909: truncate da/d3c/d4c/d38/d82/da3/fff 607633 0 2026-03-10T10:20:22.110 INFO:tasks.workunit.client.0.vm02.stdout:9/910: stat da/d3c/d4c/d38/d82/d8c 0 2026-03-10T10:20:22.115 INFO:tasks.workunit.client.0.vm02.stdout:8/937: creat d1/d1c/d43/d6a/da8/d56/db5/f11f x:0 0 0 2026-03-10T10:20:22.117 INFO:tasks.workunit.client.0.vm02.stdout:0/995: symlink 
d9/d18/d1a/d22/d24/d51/d133/l144 0 2026-03-10T10:20:22.117 INFO:tasks.workunit.client.0.vm02.stdout:0/996: read - d9/f13d zero size 2026-03-10T10:20:22.118 INFO:tasks.workunit.client.0.vm02.stdout:0/997: readlink d9/d34/d3d/d65/d89/dd3/l123 0 2026-03-10T10:20:22.120 INFO:tasks.workunit.client.0.vm02.stdout:3/956: symlink d1/d20/db2/l13e 0 2026-03-10T10:20:22.126 INFO:tasks.workunit.client.0.vm02.stdout:2/981: dread d0/d71/d108/fad [0,4194304] 0 2026-03-10T10:20:22.126 INFO:tasks.workunit.client.0.vm02.stdout:9/911: rename da/d3c/d4c/c12 to da/d3c/d4c/db1/d112/d114/c128 0 2026-03-10T10:20:22.127 INFO:tasks.workunit.client.0.vm02.stdout:6/930: mkdir d0/d8/d29/d6d/d13e 0 2026-03-10T10:20:22.128 INFO:tasks.workunit.client.0.vm02.stdout:8/938: mkdir d1/d1c/d43/d5b/dab/d120 0 2026-03-10T10:20:22.138 INFO:tasks.workunit.client.1.vm05.stdout:6/940: dread dd/d36/d3f/f41 [0,4194304] 0 2026-03-10T10:20:22.138 INFO:tasks.workunit.client.0.vm02.stdout:9/912: fdatasync da/d3c/fc0 0 2026-03-10T10:20:22.139 INFO:tasks.workunit.client.0.vm02.stdout:6/931: fsync d0/d8/d29/d2f/d13b/f10d 0 2026-03-10T10:20:22.140 INFO:tasks.workunit.client.0.vm02.stdout:8/939: mknod d1/d1c/d43/d6a/c121 0 2026-03-10T10:20:22.150 INFO:tasks.workunit.client.0.vm02.stdout:9/913: creat da/d3c/d4c/d38/d82/d89/f129 x:0 0 0 2026-03-10T10:20:22.150 INFO:tasks.workunit.client.1.vm05.stdout:5/947: write da/db/d28/f8d [4321847,68294] 0 2026-03-10T10:20:22.156 INFO:tasks.workunit.client.1.vm05.stdout:3/995: mknod dd/d15/d24/d2c/d6d/da7/d136/c161 0 2026-03-10T10:20:22.157 INFO:tasks.workunit.client.0.vm02.stdout:1/990: write d4/da/d27/d38/d3c/f139 [4121290,72836] 0 2026-03-10T10:20:22.159 INFO:tasks.workunit.client.1.vm05.stdout:2/898: dwrite db/d1c/d40/f50 [0,4194304] 0 2026-03-10T10:20:22.160 INFO:tasks.workunit.client.1.vm05.stdout:9/827: truncate d0/d1/d16/f3d 951939 0 2026-03-10T10:20:22.162 INFO:tasks.workunit.client.0.vm02.stdout:0/998: link d9/d34/d3d/d65/d89/dd3/da8/fd7 d9/d18/d1a/d22/d24/d8e/d9b/daa/f145 
0 2026-03-10T10:20:22.164 INFO:tasks.workunit.client.1.vm05.stdout:4/841: write d1/d31/dc/d40/d45/f50 [4273555,90674] 0 2026-03-10T10:20:22.166 INFO:tasks.workunit.client.1.vm05.stdout:8/884: rename d7/d14/d24/d3f/d6a/d8a/d96/fc3 to d7/d14/d15/da7/d119/f11f 0 2026-03-10T10:20:22.167 INFO:tasks.workunit.client.1.vm05.stdout:6/941: mkdir dd/d36/d3f/d12/d44/daa/d133 0 2026-03-10T10:20:22.175 INFO:tasks.workunit.client.0.vm02.stdout:9/914: mkdir da/d3c/d4c/db1/d112/d114/d12a 0 2026-03-10T10:20:22.176 INFO:tasks.workunit.client.0.vm02.stdout:9/915: write da/d3c/d4c/d2c/f93 [1767375,38710] 0 2026-03-10T10:20:22.176 INFO:tasks.workunit.client.0.vm02.stdout:9/916: stat da/d3c/d4c/d38/d82/d8c/fca 0 2026-03-10T10:20:22.179 INFO:tasks.workunit.client.1.vm05.stdout:3/996: truncate dd/d20/d56/d5e/dab/fc4 4045177 0 2026-03-10T10:20:22.182 INFO:tasks.workunit.client.0.vm02.stdout:1/991: mknod d4/da/d1a/d11d/d53/da6/db8/dd9/dea/c141 0 2026-03-10T10:20:22.182 INFO:tasks.workunit.client.0.vm02.stdout:0/999: symlink d9/d34/d3d/l146 0 2026-03-10T10:20:22.183 INFO:tasks.workunit.client.1.vm05.stdout:4/842: mknod d1/d31/dc/d40/d45/c11e 0 2026-03-10T10:20:22.186 INFO:tasks.workunit.client.1.vm05.stdout:8/885: unlink d7/ccb 0 2026-03-10T10:20:22.186 INFO:tasks.workunit.client.1.vm05.stdout:6/942: unlink dd/d36/d3f/d12/d96/f107 0 2026-03-10T10:20:22.186 INFO:tasks.workunit.client.0.vm02.stdout:1/992: rmdir d4/da/d1a/d11d/d53/da6/db8/dd9 39 2026-03-10T10:20:22.188 INFO:tasks.workunit.client.1.vm05.stdout:3/997: creat dd/d20/d56/f162 x:0 0 0 2026-03-10T10:20:22.188 INFO:tasks.workunit.client.1.vm05.stdout:2/899: mknod db/c122 0 2026-03-10T10:20:22.189 INFO:tasks.workunit.client.0.vm02.stdout:1/993: creat d4/da/d1a/d47/dbc/dcb/f142 x:0 0 0 2026-03-10T10:20:22.190 INFO:tasks.workunit.client.0.vm02.stdout:1/994: write d4/da/f73 [1190916,69639] 0 2026-03-10T10:20:22.190 INFO:tasks.workunit.client.0.vm02.stdout:1/995: dread - d4/da/d1a/d47/dbc/dcb/f142 zero size 2026-03-10T10:20:22.191 
INFO:tasks.workunit.client.0.vm02.stdout:1/996: stat d4/da/d27/cd3 0 2026-03-10T10:20:22.191 INFO:tasks.workunit.client.0.vm02.stdout:1/997: chown d4/da/d1a/d11d/d53 0 1 2026-03-10T10:20:22.194 INFO:tasks.workunit.client.1.vm05.stdout:6/943: chown dd/d36/d3f/d12/d44/d30/d4a/d6e/ff8 0 1 2026-03-10T10:20:22.194 INFO:tasks.workunit.client.1.vm05.stdout:3/998: fdatasync dd/d15/d69/f99 0 2026-03-10T10:20:22.194 INFO:tasks.workunit.client.1.vm05.stdout:3/999: stat dd/d15/c43 0 2026-03-10T10:20:22.194 INFO:tasks.workunit.client.1.vm05.stdout:6/944: chown dd/d36/f69 4212037 1 2026-03-10T10:20:22.197 INFO:tasks.workunit.client.0.vm02.stdout:1/998: creat d4/da/d1a/d11d/d91/f143 x:0 0 0 2026-03-10T10:20:22.197 INFO:tasks.workunit.client.0.vm02.stdout:1/999: stat d4/da/d1a/d47/dbc/c12c 0 2026-03-10T10:20:22.198 INFO:tasks.workunit.client.1.vm05.stdout:0/966: rename d1/d2/d9/d31/d13/d17/c3a to d1/d2/dc6/c140 0 2026-03-10T10:20:22.199 INFO:tasks.workunit.client.1.vm05.stdout:4/843: truncate d1/d64/da9/fb9 75725 0 2026-03-10T10:20:22.200 INFO:tasks.workunit.client.1.vm05.stdout:6/945: mknod dd/d36/d3f/d12/d59/df5/c134 0 2026-03-10T10:20:22.201 INFO:tasks.workunit.client.1.vm05.stdout:4/844: rename d1/d64/da9/dae/dfc/fe5 to d1/d31/dc/d40/d45/f11f 0 2026-03-10T10:20:22.205 INFO:tasks.workunit.client.1.vm05.stdout:6/946: creat dd/d36/d3f/d12/d44/d30/d4a/d6e/f135 x:0 0 0 2026-03-10T10:20:22.206 INFO:tasks.workunit.client.1.vm05.stdout:4/845: creat d1/d31/dc/d40/d45/ded/f120 x:0 0 0 2026-03-10T10:20:22.206 INFO:tasks.workunit.client.1.vm05.stdout:6/947: truncate dd/d36/d3f/d12/d58/f9d 1508899 0 2026-03-10T10:20:22.207 INFO:tasks.workunit.client.1.vm05.stdout:4/846: mknod d1/d31/dc/d40/d45/c121 0 2026-03-10T10:20:22.207 INFO:tasks.workunit.client.1.vm05.stdout:6/948: read - dd/d36/d3f/d12/d44/d2a/d7f/fea zero size 2026-03-10T10:20:22.217 INFO:tasks.workunit.client.0.vm02.stdout:9/917: dread da/d3c/d4c/d38/f88 [0,4194304] 0 2026-03-10T10:20:22.217 
INFO:tasks.workunit.client.0.vm02.stdout:9/918: chown da/d3c/d4c/d38/d4a/d70/c104 7039429 1 2026-03-10T10:20:22.221 INFO:tasks.workunit.client.1.vm05.stdout:6/949: read dd/d36/d3f/d12/d44/d2a/fa5 [1388753,32246] 0 2026-03-10T10:20:22.229 INFO:tasks.workunit.client.0.vm02.stdout:9/919: dread da/d3c/d4c/d56/fac [0,4194304] 0 2026-03-10T10:20:22.233 INFO:tasks.workunit.client.0.vm02.stdout:7/957: write d1/d1b/faf [981278,30621] 0 2026-03-10T10:20:22.238 INFO:tasks.workunit.client.0.vm02.stdout:9/920: mknod da/d3c/d4c/d38/d82/dd9/c12b 0 2026-03-10T10:20:22.238 INFO:tasks.workunit.client.1.vm05.stdout:6/950: dread dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/fe1 [0,4194304] 0 2026-03-10T10:20:22.240 INFO:tasks.workunit.client.0.vm02.stdout:7/958: truncate d1/dc/d16/f6d 2420680 0 2026-03-10T10:20:22.243 INFO:tasks.workunit.client.0.vm02.stdout:2/982: write d0/d1a/d49/fb2 [917186,43308] 0 2026-03-10T10:20:22.244 INFO:tasks.workunit.client.0.vm02.stdout:9/921: unlink da/d3c/c107 0 2026-03-10T10:20:22.245 INFO:tasks.workunit.client.0.vm02.stdout:9/922: truncate da/d3c/d4c/d2c/d34/f57 5104315 0 2026-03-10T10:20:22.247 INFO:tasks.workunit.client.0.vm02.stdout:7/959: creat d1/dc/d16/d28/d2d/f12c x:0 0 0 2026-03-10T10:20:22.248 INFO:tasks.workunit.client.0.vm02.stdout:2/983: symlink d0/d10/dee/l145 0 2026-03-10T10:20:22.249 INFO:tasks.workunit.client.0.vm02.stdout:3/957: write d1/d8/d21/d73/d78/d84/dfa/f10f [1114587,72503] 0 2026-03-10T10:20:22.252 INFO:tasks.workunit.client.1.vm05.stdout:6/951: link dd/d36/d3f/d12/d44/d2a/d3d/d3e/db7/da3/fb6 dd/d36/d3f/d12/d44/daa/d133/f136 0 2026-03-10T10:20:22.254 INFO:tasks.workunit.client.0.vm02.stdout:7/960: fdatasync d1/dc/d55/f85 0 2026-03-10T10:20:22.262 INFO:tasks.workunit.client.0.vm02.stdout:2/984: symlink d0/d71/d108/d65/dc4/dfa/d80/l146 0 2026-03-10T10:20:22.263 INFO:tasks.workunit.client.0.vm02.stdout:8/940: dwrite d1/d1c/d23/f3b [0,4194304] 0 2026-03-10T10:20:22.272 INFO:tasks.workunit.client.0.vm02.stdout:6/932: write 
d0/d8/d29/d2f/d50/d98/f9f [754256,119799] 0 2026-03-10T10:20:22.274 INFO:tasks.workunit.client.1.vm05.stdout:5/948: dwrite da/d9a/dc7/db4/f113 [0,4194304] 0 2026-03-10T10:20:22.275 INFO:tasks.workunit.client.0.vm02.stdout:7/961: rmdir d1/d1b/d8f/dad/d7e/dba/ddf 39 2026-03-10T10:20:22.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:21 vm02.local ceph-mon[50200]: pgmap v9: 65 pgs: 65 active+clean; 3.8 GiB data, 13 GiB used, 107 GiB / 120 GiB avail; 32 MiB/s rd, 78 MiB/s wr, 219 op/s 2026-03-10T10:20:22.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:21 vm02.local ceph-mon[50200]: Upgrade: Updating mgr.vm05.coparq 2026-03-10T10:20:22.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:21 vm02.local ceph-mon[50200]: Deploying daemon mgr.vm05.coparq on vm05 2026-03-10T10:20:22.282 INFO:tasks.workunit.client.1.vm05.stdout:6/952: link dd/d36/d3f/d12/d44/d30/d4a/d6e/f135 dd/d36/d3f/f137 0 2026-03-10T10:20:22.293 INFO:tasks.workunit.client.0.vm02.stdout:3/958: unlink d1/d6/d8b/f110 0 2026-03-10T10:20:22.297 INFO:tasks.workunit.client.1.vm05.stdout:5/949: unlink da/db/d28/d8a/fa0 0 2026-03-10T10:20:22.299 INFO:tasks.workunit.client.1.vm05.stdout:8/886: write d7/d14/d3a/d49/f6b [37122,104841] 0 2026-03-10T10:20:22.299 INFO:tasks.workunit.client.1.vm05.stdout:8/887: stat d7/d14/d15/da7/d119 0 2026-03-10T10:20:22.299 INFO:tasks.workunit.client.1.vm05.stdout:9/828: dwrite d0/d1/d13/de/d93/fbd [0,4194304] 0 2026-03-10T10:20:22.300 INFO:tasks.workunit.client.1.vm05.stdout:2/900: write db/d61/dfc/d9d/fab [682034,121034] 0 2026-03-10T10:20:22.307 INFO:tasks.workunit.client.1.vm05.stdout:0/967: write d1/d2/d9/d31/f36 [2280898,21775] 0 2026-03-10T10:20:22.313 INFO:tasks.workunit.client.1.vm05.stdout:8/888: chown d7/d14/d62/d90/cbd 210 1 2026-03-10T10:20:22.315 INFO:tasks.workunit.client.0.vm02.stdout:3/959: creat d1/d8/d21/d73/d101/f13f x:0 0 0 2026-03-10T10:20:22.315 INFO:tasks.workunit.client.1.vm05.stdout:4/847: dwrite d1/d31/dc/d40/d45/f52 [0,4194304] 
0 2026-03-10T10:20:22.324 INFO:tasks.workunit.client.1.vm05.stdout:5/950: dread da/db/d26/d70/f7c [0,4194304] 0 2026-03-10T10:20:22.326 INFO:tasks.workunit.client.1.vm05.stdout:9/829: fsync d0/d1/d16/fca 0 2026-03-10T10:20:22.332 INFO:tasks.workunit.client.1.vm05.stdout:2/901: rename db/d1c/d40/d62/le6 to db/d28/d4f/d59/da4/d114/l123 0 2026-03-10T10:20:22.332 INFO:tasks.workunit.client.1.vm05.stdout:2/902: write db/d1c/d40/d62/d10c/d120/fef [1221467,79552] 0 2026-03-10T10:20:22.332 INFO:tasks.workunit.client.0.vm02.stdout:7/962: creat d1/dc/d16/d28/d2d/d103/f12d x:0 0 0 2026-03-10T10:20:22.332 INFO:tasks.workunit.client.0.vm02.stdout:7/963: stat d1/dc/d10/d38/ld6 0 2026-03-10T10:20:22.332 INFO:tasks.workunit.client.0.vm02.stdout:8/941: link d1/d1c/d23/c69 d1/d2/dff/c122 0 2026-03-10T10:20:22.333 INFO:tasks.workunit.client.1.vm05.stdout:4/848: dread d1/d3/d65/ddb/fe7 [0,4194304] 0 2026-03-10T10:20:22.335 INFO:tasks.workunit.client.0.vm02.stdout:9/923: dwrite da/d3c/d4c/d56/f77 [0,4194304] 0 2026-03-10T10:20:22.341 INFO:tasks.workunit.client.0.vm02.stdout:3/960: rename d1/d8/d21/d73/f82 to d1/d8/d86/da2/d12b/d13d/f140 0 2026-03-10T10:20:22.348 INFO:tasks.workunit.client.1.vm05.stdout:9/830: dread d0/d1/f9 [0,4194304] 0 2026-03-10T10:20:22.348 INFO:tasks.workunit.client.0.vm02.stdout:8/942: rmdir d1/d1c/d43 39 2026-03-10T10:20:22.351 INFO:tasks.workunit.client.1.vm05.stdout:9/831: dwrite d0/df/d74/d8c/de4/d104/f118 [0,4194304] 0 2026-03-10T10:20:22.355 INFO:tasks.workunit.client.1.vm05.stdout:5/951: mkdir da/db/d28/d8a/de3/d143 0 2026-03-10T10:20:22.361 INFO:tasks.workunit.client.0.vm02.stdout:9/924: rename da/d3c/d4c/d2c/d34/d35/fd2 to da/d3c/d4c/db1/d112/d114/d12a/f12c 0 2026-03-10T10:20:22.365 INFO:tasks.workunit.client.0.vm02.stdout:3/961: truncate d1/d58/dc9/f11e 785367 0 2026-03-10T10:20:22.372 INFO:tasks.workunit.client.1.vm05.stdout:2/903: dread db/d2d/f47 [0,4194304] 0 2026-03-10T10:20:22.373 INFO:tasks.workunit.client.1.vm05.stdout:4/849: chown d1/d31/dc/f69 
562 1 2026-03-10T10:20:22.373 INFO:tasks.workunit.client.0.vm02.stdout:9/925: symlink da/d3c/d4c/d56/l12d 0 2026-03-10T10:20:22.374 INFO:tasks.workunit.client.0.vm02.stdout:9/926: write da/d3c/d4c/f27 [4617707,96562] 0 2026-03-10T10:20:22.378 INFO:tasks.workunit.client.0.vm02.stdout:9/927: read da/d3c/d53/fdc [102485,102319] 0 2026-03-10T10:20:22.378 INFO:tasks.workunit.client.1.vm05.stdout:2/904: dread db/d2d/dc6/ff7 [0,4194304] 0 2026-03-10T10:20:22.379 INFO:tasks.workunit.client.0.vm02.stdout:3/962: creat d1/d8/d44/deb/f141 x:0 0 0 2026-03-10T10:20:22.389 INFO:tasks.workunit.client.0.vm02.stdout:2/985: dwrite d0/d71/d108/f63 [4194304,4194304] 0 2026-03-10T10:20:22.390 INFO:tasks.workunit.client.0.vm02.stdout:6/933: write d0/d8/d29/d2f/d13b/ffb [358476,67504] 0 2026-03-10T10:20:22.391 INFO:tasks.workunit.client.0.vm02.stdout:6/934: readlink d0/d8/d9/d7a/l88 0 2026-03-10T10:20:22.400 INFO:tasks.workunit.client.1.vm05.stdout:5/952: read da/db/f3b [3133748,87247] 0 2026-03-10T10:20:22.400 INFO:tasks.workunit.client.1.vm05.stdout:6/953: dwrite dd/d36/d3f/d12/d44/daa/de4/ff3 [0,4194304] 0 2026-03-10T10:20:22.406 INFO:tasks.workunit.client.1.vm05.stdout:8/889: creat d7/d14/d62/d90/f120 x:0 0 0 2026-03-10T10:20:22.409 INFO:tasks.workunit.client.1.vm05.stdout:0/968: dwrite d1/d2/d39/d6e/fac [0,4194304] 0 2026-03-10T10:20:22.414 INFO:tasks.workunit.client.1.vm05.stdout:2/905: creat db/d28/d4f/d59/dce/f124 x:0 0 0 2026-03-10T10:20:22.416 INFO:tasks.workunit.client.1.vm05.stdout:9/832: mknod d0/d1/d13/de/d21/ddc/c121 0 2026-03-10T10:20:22.425 INFO:tasks.workunit.client.0.vm02.stdout:3/963: unlink d1/cd 0 2026-03-10T10:20:22.428 INFO:tasks.workunit.client.1.vm05.stdout:6/954: creat dd/d36/d3f/d12/d44/d2a/d77/d8b/f138 x:0 0 0 2026-03-10T10:20:22.428 INFO:tasks.workunit.client.0.vm02.stdout:8/943: getdents d1/d1c/d23/d25/df1 0 2026-03-10T10:20:22.437 INFO:tasks.workunit.client.1.vm05.stdout:8/890: rename d7/d2f/fd6 to d7/d14/d62/d90/dac/f121 0 2026-03-10T10:20:22.437 
INFO:tasks.workunit.client.0.vm02.stdout:3/964: creat d1/d8/d44/deb/f142 x:0 0 0
2026-03-10T10:20:22.439 INFO:tasks.workunit.client.1.vm05.stdout:0/969: creat d1/d2/d9/d31/d13/da2/dab/dce/d106/d135/d9f/f141 x:0 0 0
2026-03-10T10:20:22.441 INFO:tasks.workunit.client.1.vm05.stdout:9/833: symlink d0/d1/d13/d55/l122 0
2026-03-10T10:20:22.442 INFO:tasks.workunit.client.0.vm02.stdout:3/965: rmdir d1/d8/d21/d73/d101/d114 39
2026-03-10T10:20:22.448 INFO:tasks.workunit.client.0.vm02.stdout:3/966: write d1/d8/d86/f111 [1367890,2027] 0
2026-03-10T10:20:22.452 INFO:tasks.workunit.client.1.vm05.stdout:6/955: dread dd/d36/d3f/f61 [0,4194304] 0
2026-03-10T10:20:22.459 INFO:tasks.workunit.client.0.vm02.stdout:3/967: dread d1/d6/d8e/f8f [0,4194304] 0
2026-03-10T10:20:22.460 INFO:tasks.workunit.client.1.vm05.stdout:0/970: mknod d1/d2/d9/d50/d99/c142 0
2026-03-10T10:20:22.462 INFO:tasks.workunit.client.0.vm02.stdout:8/944: link d1/f117 d1/d1c/d43/f123 0
2026-03-10T10:20:22.464 INFO:tasks.workunit.client.0.vm02.stdout:3/968: dread - d1/d8/d21/ff0 zero size
2026-03-10T10:20:22.464 INFO:tasks.workunit.client.0.vm02.stdout:3/969: stat d1/d8/c10d 0
2026-03-10T10:20:22.468 INFO:tasks.workunit.client.0.vm02.stdout:9/928: dwrite da/d3c/d4c/fbf [0,4194304] 0
2026-03-10T10:20:22.474 INFO:tasks.workunit.client.0.vm02.stdout:9/929: truncate da/f13 3720874 0
2026-03-10T10:20:22.475 INFO:tasks.workunit.client.1.vm05.stdout:0/971: unlink d1/d2/d9/d31/d13/l9e 0
2026-03-10T10:20:22.479 INFO:tasks.workunit.client.0.vm02.stdout:6/935: dwrite d0/d8/d29/d2f/d50/d98/fb1 [0,4194304] 0
2026-03-10T10:20:22.487 INFO:tasks.workunit.client.0.vm02.stdout:7/964: truncate d1/dc/d16/f6d 3327234 0
2026-03-10T10:20:22.506 INFO:tasks.workunit.client.1.vm05.stdout:2/906: link db/d12/fb2 db/d1c/d40/f125 0
2026-03-10T10:20:22.507 INFO:tasks.workunit.client.0.vm02.stdout:2/986: write d0/dd4/fdd [959474,33404] 0
2026-03-10T10:20:22.510 INFO:tasks.workunit.client.1.vm05.stdout:4/850: dwrite d1/d31/d76/dac/db8/dbf/fff [4194304,4194304] 0
2026-03-10T10:20:22.510 INFO:tasks.workunit.client.0.vm02.stdout:9/930: unlink da/d3c/d4c/d2c/d34/f3a 0
2026-03-10T10:20:22.511 INFO:tasks.workunit.client.1.vm05.stdout:4/851: stat d1/d3/f10 0
2026-03-10T10:20:22.512 INFO:tasks.workunit.client.1.vm05.stdout:9/834: creat d0/df/f123 x:0 0 0
2026-03-10T10:20:22.513 INFO:tasks.workunit.client.1.vm05.stdout:2/907: dwrite db/d1c/d40/d62/d85/fd1 [4194304,4194304] 0
2026-03-10T10:20:22.514 INFO:tasks.workunit.client.0.vm02.stdout:2/987: creat d0/d71/d108/d65/dc4/dfa/dd3/f147 x:0 0 0
2026-03-10T10:20:22.519 INFO:tasks.workunit.client.0.vm02.stdout:2/988: readlink d0/d71/d108/d65/dc4/dfa/dd3/de8/l132 0
2026-03-10T10:20:22.519 INFO:tasks.workunit.client.0.vm02.stdout:6/936: dread d0/d8/d29/d6d/f3d [0,4194304] 0
2026-03-10T10:20:22.519 INFO:tasks.workunit.client.0.vm02.stdout:7/965: link d1/dc/d16/d28/d2d/f128 d1/d1b/d49/f12e 0
2026-03-10T10:20:22.519 INFO:tasks.workunit.client.0.vm02.stdout:7/966: stat d1/dc 0
2026-03-10T10:20:22.520 INFO:tasks.workunit.client.0.vm02.stdout:2/989: rmdir d0/d71/dd8 39
2026-03-10T10:20:22.520 INFO:tasks.workunit.client.0.vm02.stdout:2/990: stat d0/d10/dee/f121 0
2026-03-10T10:20:22.522 INFO:tasks.workunit.client.0.vm02.stdout:6/937: truncate d0/d8/d9/fcb 502691 0
2026-03-10T10:20:22.523 INFO:tasks.workunit.client.1.vm05.stdout:5/953: write da/db/d26/d70/fcc [308633,22760] 0
2026-03-10T10:20:22.523 INFO:tasks.workunit.client.0.vm02.stdout:7/967: mkdir d1/dc/d16/d28/d2d/dae/d12f 0
2026-03-10T10:20:22.525 INFO:tasks.workunit.client.1.vm05.stdout:5/954: chown da/d9a/dc7/c9c 114 1
2026-03-10T10:20:22.529 INFO:tasks.workunit.client.1.vm05.stdout:2/908: readlink db/d12/l46 0
2026-03-10T10:20:22.529 INFO:tasks.workunit.client.1.vm05.stdout:4/852: dwrite d1/d3/d65/de0/f11d [0,4194304] 0
2026-03-10T10:20:22.530 INFO:tasks.workunit.client.1.vm05.stdout:9/835: rename d0/d1/d13/de/d21/f76 to d0/df/d74/d8c/d8f/ddd/f124 0
2026-03-10T10:20:22.532 INFO:tasks.workunit.client.0.vm02.stdout:6/938: mknod d0/d8/d29/d2f/c13f 0
2026-03-10T10:20:22.534 INFO:tasks.workunit.client.1.vm05.stdout:9/836: dread - d0/df/d11/dc6/f115 zero size
2026-03-10T10:20:22.535 INFO:tasks.workunit.client.1.vm05.stdout:9/837: dread - d0/d1/dcc/f116 zero size
2026-03-10T10:20:22.538 INFO:tasks.workunit.client.1.vm05.stdout:9/838: creat d0/df/d74/d90/f125 x:0 0 0
2026-03-10T10:20:22.541 INFO:tasks.workunit.client.0.vm02.stdout:6/939: dread d0/d8/d29/d6d/d96/de4/d102/f26 [0,4194304] 0
2026-03-10T10:20:22.542 INFO:tasks.workunit.client.0.vm02.stdout:6/940: fsync d0/d8/d29/d52/f8b 0
2026-03-10T10:20:22.542 INFO:tasks.workunit.client.1.vm05.stdout:2/909: creat db/f126 x:0 0 0
2026-03-10T10:20:22.543 INFO:tasks.workunit.client.1.vm05.stdout:8/891: write d7/d2f/d57/fed [891000,111315] 0
2026-03-10T10:20:22.544 INFO:tasks.workunit.client.1.vm05.stdout:8/892: readlink d7/l1b 0
2026-03-10T10:20:22.545 INFO:tasks.workunit.client.1.vm05.stdout:8/893: truncate d7/d14/d24/d3f/d6a/fe6 1518060 0
2026-03-10T10:20:22.545 INFO:tasks.workunit.client.0.vm02.stdout:6/941: mknod d0/d8/d29/d2f/d50/d98/c140 0
2026-03-10T10:20:22.545 INFO:tasks.workunit.client.1.vm05.stdout:9/839: fdatasync d0/df/fb1 0
2026-03-10T10:20:22.545 INFO:tasks.workunit.client.1.vm05.stdout:2/910: creat db/d1c/f127 x:0 0 0
2026-03-10T10:20:22.549 INFO:tasks.workunit.client.1.vm05.stdout:9/840: dread - d0/df/d11/dc6/f115 zero size
2026-03-10T10:20:22.549 INFO:tasks.workunit.client.1.vm05.stdout:2/911: dread - db/d1c/d40/d80/fd3 zero size
2026-03-10T10:20:22.550 INFO:tasks.workunit.client.1.vm05.stdout:9/841: chown d0/df/d74/la6 2389 1
2026-03-10T10:20:22.557 INFO:tasks.workunit.client.1.vm05.stdout:2/912: mknod db/d61/dfc/c128 0
2026-03-10T10:20:22.557 INFO:tasks.workunit.client.1.vm05.stdout:8/894: creat d7/d14/d15/f122 x:0 0 0
2026-03-10T10:20:22.557 INFO:tasks.workunit.client.1.vm05.stdout:9/842: dread - d0/d1/d13/de/ddf/feb zero size
2026-03-10T10:20:22.559 INFO:tasks.workunit.client.1.vm05.stdout:9/843: dread - d0/d1/d13/de/ddf/feb zero size
2026-03-10T10:20:22.561 INFO:tasks.workunit.client.1.vm05.stdout:8/895: fsync d7/d14/d3a/f5f 0
2026-03-10T10:20:22.564 INFO:tasks.workunit.client.1.vm05.stdout:8/896: write d7/d14/f23 [325109,61307] 0
2026-03-10T10:20:22.570 INFO:tasks.workunit.client.1.vm05.stdout:9/844: dread d0/d1/d13/d26/f58 [0,4194304] 0
2026-03-10T10:20:22.576 INFO:tasks.workunit.client.0.vm02.stdout:3/970: dwrite d1/d6/f49 [0,4194304] 0
2026-03-10T10:20:22.585 INFO:tasks.workunit.client.0.vm02.stdout:3/971: mkdir d1/d8/d86/da2/d12b/d13d/d143 0
2026-03-10T10:20:22.587 INFO:tasks.workunit.client.0.vm02.stdout:3/972: getdents d1/d20/db2 0
2026-03-10T10:20:22.589 INFO:tasks.workunit.client.0.vm02.stdout:3/973: rmdir d1/d8/d86/da2/d12b/d13d/d143 0
2026-03-10T10:20:22.590 INFO:tasks.workunit.client.0.vm02.stdout:3/974: chown d1/d8/d44 3219 1
2026-03-10T10:20:22.647 INFO:tasks.workunit.client.0.vm02.stdout:8/945: dwrite d1/d2/f36 [0,4194304] 0
2026-03-10T10:20:22.649 INFO:tasks.workunit.client.0.vm02.stdout:8/946: fsync d1/d1c/d43/d6a/da8/d56/db5/f11f 0
2026-03-10T10:20:22.663 INFO:tasks.workunit.client.1.vm05.stdout:6/956: dwrite dd/d36/d3f/d12/d58/db8/f130 [0,4194304] 0
2026-03-10T10:20:22.670 INFO:tasks.workunit.client.1.vm05.stdout:0/972: write d1/d2/d9/d31/daa/fca [798472,4101] 0
2026-03-10T10:20:22.671 INFO:tasks.workunit.client.0.vm02.stdout:9/931: write da/d3c/d4c/f29 [2513369,29834] 0
2026-03-10T10:20:22.674 INFO:tasks.workunit.client.0.vm02.stdout:9/932: rename da/d3c/d4c/d56/fd3 to da/d9d/f12e 0
2026-03-10T10:20:22.675 INFO:tasks.workunit.client.1.vm05.stdout:0/973: mknod d1/d2/d39/d6e/dc0/c143 0
2026-03-10T10:20:22.678 INFO:tasks.workunit.client.1.vm05.stdout:0/974: rename d1/d2/d9/f98 to d1/d2/d39/d6e/f144 0
2026-03-10T10:20:22.716 INFO:tasks.workunit.client.1.vm05.stdout:0/975: dread d1/d2/d9/d31/f109 [0,4194304] 0
2026-03-10T10:20:22.749 INFO:tasks.workunit.client.0.vm02.stdout:2/991: dwrite d0/d10/f4b [0,4194304] 0
2026-03-10T10:20:22.753 INFO:tasks.workunit.client.1.vm05.stdout:0/976: dread d1/d2/d9/d50/d9a/da0/ff7 [0,4194304] 0
2026-03-10T10:20:22.760 INFO:tasks.workunit.client.1.vm05.stdout:0/977: mknod d1/d2/d9/d31/d13/da2/dab/dce/d106/d135/c145 0
2026-03-10T10:20:22.761 INFO:tasks.workunit.client.1.vm05.stdout:0/978: write d1/d2/d9/d31/d13/da2/fd6 [778848,65352] 0
2026-03-10T10:20:22.774 INFO:tasks.workunit.client.1.vm05.stdout:0/979: mknod d1/d2/d39/d6e/dc0/df8/c146 0
2026-03-10T10:20:22.774 INFO:tasks.workunit.client.1.vm05.stdout:5/955: write da/db/de9/fb7 [634621,1323] 0
2026-03-10T10:20:22.774 INFO:tasks.workunit.client.1.vm05.stdout:5/956: dread - da/ff3 zero size
2026-03-10T10:20:22.779 INFO:tasks.workunit.client.1.vm05.stdout:0/980: creat d1/d2/d9/d31/d13/da2/dab/dce/d106/d135/f147 x:0 0 0
2026-03-10T10:20:22.780 INFO:tasks.workunit.client.1.vm05.stdout:0/981: chown d1/d2/d39/f69 6393 1
2026-03-10T10:20:22.784 INFO:tasks.workunit.client.0.vm02.stdout:7/968: truncate d1/d1b/d8f/f105 1226829 0
2026-03-10T10:20:22.786 INFO:tasks.workunit.client.1.vm05.stdout:4/853: write d1/d31/f9d [684547,93831] 0
2026-03-10T10:20:22.790 INFO:tasks.workunit.client.0.vm02.stdout:6/942: write d0/d8/d29/d2f/d50/f108 [325074,3486] 0
2026-03-10T10:20:22.799 INFO:tasks.workunit.client.1.vm05.stdout:2/913: write db/d28/d4f/d8b/ffd [279560,128482] 0
2026-03-10T10:20:22.800 INFO:tasks.workunit.client.1.vm05.stdout:2/914: chown db/d28/d4f/l6b 182672138 1
2026-03-10T10:20:22.806 INFO:tasks.workunit.client.1.vm05.stdout:8/897: write d7/d14/d3a/ff3 [483945,52318] 0
2026-03-10T10:20:22.808 INFO:tasks.workunit.client.0.vm02.stdout:8/947: write d1/d1c/d23/d25/f64 [2281937,62717] 0
2026-03-10T10:20:22.809 INFO:tasks.workunit.client.1.vm05.stdout:8/898: fsync d7/d14/d24/d3f/d6a/d8a/d96/db7/fb9 0
2026-03-10T10:20:22.812 INFO:tasks.workunit.client.1.vm05.stdout:9/845: dwrite d0/df/d74/fa0 [0,4194304] 0
2026-03-10T10:20:22.812 INFO:tasks.workunit.client.1.vm05.stdout:8/899: write d7/d14/d15/f2e [7326045,19716] 0
2026-03-10T10:20:22.813 INFO:tasks.workunit.client.0.vm02.stdout:3/975: dwrite d1/f1c [0,4194304] 0
2026-03-10T10:20:22.813 INFO:tasks.workunit.client.0.vm02.stdout:8/948: truncate d1/d1c/d43/d6a/da8/d56/f10e 707464 0
2026-03-10T10:20:22.813 INFO:tasks.workunit.client.1.vm05.stdout:9/846: stat d0/l85 0
2026-03-10T10:20:22.817 INFO:tasks.workunit.client.1.vm05.stdout:5/957: dread da/db/dee/d38/fe4 [0,4194304] 0
2026-03-10T10:20:22.825 INFO:tasks.workunit.client.1.vm05.stdout:6/957: write dd/d36/d3f/d12/d44/d30/d4a/d6e/ff8 [840060,59290] 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.0.vm02.stdout:9/933: write da/d3c/d4c/d2c/d34/f4d [8066490,84564] 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.0.vm02.stdout:7/969: truncate d1/dc/d16/d28/f108 1879943 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.0.vm02.stdout:8/949: mknod d1/d1c/d24/c124 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.0.vm02.stdout:9/934: symlink da/l12f 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.0.vm02.stdout:7/970: getdents d1/d1b/d8f/dad/d7e/dba/ddf 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.1.vm05.stdout:2/915: mkdir db/d61/dcc/d129 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.1.vm05.stdout:9/847: fsync d0/d1/d13/de/d93/fa1 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.1.vm05.stdout:8/900: truncate d7/d14/d24/d3f/d6a/d8a/d96/db7/ffb 1516802 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.1.vm05.stdout:9/848: symlink d0/d70/d10d/daf/l126 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.1.vm05.stdout:2/916: fsync db/d1c/d40/f125 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.1.vm05.stdout:9/849: truncate d0/d1/f4a 544407 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.1.vm05.stdout:9/850: write d0/df/d74/f8a [3693768,78646] 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.1.vm05.stdout:8/901: truncate d7/d14/d15/ff1 983768 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.1.vm05.stdout:9/851: fsync d0/f73 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.1.vm05.stdout:9/852: stat d0/d1/f9 0
2026-03-10T10:20:22.852 INFO:tasks.workunit.client.1.vm05.stdout:8/902: creat d7/d14/d15/da7/f123 x:0 0 0
2026-03-10T10:20:22.855 INFO:tasks.workunit.client.1.vm05.stdout:8/903: symlink d7/d14/d15/d3b/l124 0
2026-03-10T10:20:22.856 INFO:tasks.workunit.client.1.vm05.stdout:9/853: creat d0/d1/f127 x:0 0 0
2026-03-10T10:20:22.856 INFO:tasks.workunit.client.1.vm05.stdout:8/904: mkdir d7/d14/d24/d3f/d6a/d8a/d125 0
2026-03-10T10:20:22.875 INFO:tasks.workunit.client.1.vm05.stdout:9/854: dread d0/f73 [0,4194304] 0
2026-03-10T10:20:22.898 INFO:tasks.workunit.client.1.vm05.stdout:9/855: truncate d0/d1/d13/d62/fa8 4799093 0
2026-03-10T10:20:23.011 INFO:tasks.workunit.client.0.vm02.stdout:6/943: sync
2026-03-10T10:20:23.016 INFO:tasks.workunit.client.0.vm02.stdout:6/944: dwrite d0/d8/d9/d7a/dc0/f120 [0,4194304] 0
2026-03-10T10:20:23.018 INFO:tasks.workunit.client.0.vm02.stdout:6/945: chown d0/d8/d29/d2f/d50/d121 780519128 1
2026-03-10T10:20:23.031 INFO:tasks.workunit.client.0.vm02.stdout:7/971: truncate d1/d1b/d8f/dad/d7e/dba/dea/f100 1665326 0
2026-03-10T10:20:23.032 INFO:tasks.workunit.client.1.vm05.stdout:0/982: dwrite d1/d2/d9/d31/f122 [0,4194304] 0
2026-03-10T10:20:23.032 INFO:tasks.workunit.client.0.vm02.stdout:6/946: creat d0/d8/d29/d2f/d50/d10f/d12a/f141 x:0 0 0
2026-03-10T10:20:23.035 INFO:tasks.workunit.client.1.vm05.stdout:0/983: rmdir d1/d2/d9 39
2026-03-10T10:20:23.036 INFO:tasks.workunit.client.0.vm02.stdout:6/947: readlink d0/d8/d9/d7a/l11c 0
2026-03-10T10:20:23.039 INFO:tasks.workunit.client.0.vm02.stdout:7/972: chown d1/dc/d16/dfc/f12b 618944 1
2026-03-10T10:20:23.039 INFO:tasks.workunit.client.1.vm05.stdout:0/984: chown d1/d2/d9/d31/d54 0 1
2026-03-10T10:20:23.040 INFO:tasks.workunit.client.1.vm05.stdout:0/985: write d1/d2/d9/d31/daa/fca [1033052,85190] 0
2026-03-10T10:20:23.041 INFO:tasks.workunit.client.0.vm02.stdout:7/973: mknod d1/dc/d10/d38/c130 0
2026-03-10T10:20:23.043 INFO:tasks.workunit.client.0.vm02.stdout:6/948: creat d0/d8/d29/d6d/d96/de4/d102/f142 x:0 0 0
2026-03-10T10:20:23.046 INFO:tasks.workunit.client.0.vm02.stdout:6/949: symlink d0/d8/d29/d2f/d50/d98/df6/l143 0
2026-03-10T10:20:23.064 INFO:tasks.workunit.client.1.vm05.stdout:4/854: write d1/d31/dc/d40/f7d [1126044,68523] 0
2026-03-10T10:20:23.066 INFO:tasks.workunit.client.1.vm05.stdout:4/855: truncate d1/d31/dc/d40/f67 2828417 0
2026-03-10T10:20:23.067 INFO:tasks.workunit.client.1.vm05.stdout:5/958: dread da/db/d26/f101 [0,4194304] 0
2026-03-10T10:20:23.067 INFO:tasks.workunit.client.1.vm05.stdout:5/959: fdatasync da/db/d28/d32/f105 0
2026-03-10T10:20:23.072 INFO:tasks.workunit.client.1.vm05.stdout:5/960: rename da/db/d26/d5c/l98 to da/d9a/daf/l144 0
2026-03-10T10:20:23.076 INFO:tasks.workunit.client.1.vm05.stdout:5/961: dwrite da/d9a/daf/f137 [0,4194304] 0
2026-03-10T10:20:23.080 INFO:tasks.workunit.client.1.vm05.stdout:5/962: chown da/d9a/dc7/l5a 1598257 1
2026-03-10T10:20:23.082 INFO:tasks.workunit.client.1.vm05.stdout:5/963: fdatasync da/db/d26/d5c/fc5 0
2026-03-10T10:20:23.089 INFO:tasks.workunit.client.1.vm05.stdout:0/986: sync
2026-03-10T10:20:23.089 INFO:tasks.workunit.client.1.vm05.stdout:5/964: sync
2026-03-10T10:20:23.090 INFO:tasks.workunit.client.1.vm05.stdout:0/987: chown d1/d2/d9/d31/d13/da2/dab/l13c 475403624 1
2026-03-10T10:20:23.092 INFO:tasks.workunit.client.1.vm05.stdout:5/965: creat da/db/d26/d70/d72/df6/d133/f145 x:0 0 0
2026-03-10T10:20:23.112 INFO:tasks.workunit.client.0.vm02.stdout:2/992: truncate d0/f2c 5303820 0
2026-03-10T10:20:23.113 INFO:tasks.workunit.client.1.vm05.stdout:6/958: write dd/d36/d3f/d12/d44/d63/f78 [1601946,45575] 0
2026-03-10T10:20:23.114 INFO:tasks.workunit.client.0.vm02.stdout:8/950: write d1/d1c/f42 [872980,7783] 0
2026-03-10T10:20:23.114 INFO:tasks.workunit.client.0.vm02.stdout:3/976: write d1/d6/ff1 [215726,108737] 0
2026-03-10T10:20:23.116 INFO:tasks.workunit.client.0.vm02.stdout:8/951: dread - d1/d1c/d24/dad/dbe/fe9 zero size
2026-03-10T10:20:23.119 INFO:tasks.workunit.client.0.vm02.stdout:3/977: unlink d1/d8/d21/f4d 0
2026-03-10T10:20:23.126 INFO:tasks.workunit.client.0.vm02.stdout:2/993: truncate d0/d10/d81/f12d 219006 0
2026-03-10T10:20:23.127 INFO:tasks.workunit.client.0.vm02.stdout:9/935: dwrite da/d3c/d4c/d38/d82/d89/f8a [4194304,4194304] 0
2026-03-10T10:20:23.130 INFO:tasks.workunit.client.1.vm05.stdout:2/917: write db/d28/d4f/d59/da4/fb3 [367069,68] 0
2026-03-10T10:20:23.130 INFO:tasks.workunit.client.1.vm05.stdout:2/918: fsync db/d61/dfc/d9d/fab 0
2026-03-10T10:20:23.135 INFO:tasks.workunit.client.1.vm05.stdout:8/905: write d7/d14/d24/f95 [367175,115534] 0
2026-03-10T10:20:23.137 INFO:tasks.workunit.client.1.vm05.stdout:2/919: mknod db/d2d/c12a 0
2026-03-10T10:20:23.137 INFO:tasks.workunit.client.1.vm05.stdout:8/906: symlink d7/d14/d24/l126 0
2026-03-10T10:20:23.138 INFO:tasks.workunit.client.1.vm05.stdout:8/907: readlink d7/d14/d15/d3b/l9a 0
2026-03-10T10:20:23.138 INFO:tasks.workunit.client.1.vm05.stdout:8/908: readlink d7/d14/d62/l6d 0
2026-03-10T10:20:23.138 INFO:tasks.workunit.client.1.vm05.stdout:2/920: fdatasync db/d12/ff0 0
2026-03-10T10:20:23.158 INFO:tasks.workunit.client.1.vm05.stdout:6/959: dread dd/d36/d3f/d12/d44/d30/f9e [0,4194304] 0
2026-03-10T10:20:23.160 INFO:tasks.workunit.client.1.vm05.stdout:6/960: mkdir dd/d36/d3f/d12/d44/daa/d139 0
2026-03-10T10:20:23.162 INFO:tasks.workunit.client.1.vm05.stdout:6/961: dread dd/d36/d3f/d12/d44/d2a/d3d/d48/fb2 [0,4194304] 0
2026-03-10T10:20:23.164 INFO:tasks.workunit.client.1.vm05.stdout:6/962: fsync dd/f14 0
2026-03-10T10:20:23.168 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:23 vm05.local ceph-mon[59051]: pgmap v10: 65 pgs: 65 active+clean; 3.8 GiB data, 13 GiB used, 107 GiB / 120 GiB avail; 29 MiB/s rd, 71 MiB/s wr, 200 op/s
2026-03-10T10:20:23.171 INFO:tasks.workunit.client.1.vm05.stdout:9/856: truncate d0/df/d74/d8c/de4/d104/f118 490847 0
2026-03-10T10:20:23.185 INFO:tasks.workunit.client.0.vm02.stdout:3/978: dread d1/d8/fb [0,4194304] 0
2026-03-10T10:20:23.186 INFO:tasks.workunit.client.0.vm02.stdout:7/974: dwrite d1/d1b/d8f/dad/d7e/dba/fcc [0,4194304] 0
2026-03-10T10:20:23.190 INFO:tasks.workunit.client.0.vm02.stdout:6/950: write d0/fa3 [907708,50199] 0
2026-03-10T10:20:23.201 INFO:tasks.workunit.client.1.vm05.stdout:5/966: rmdir da/db/d26/d5c 39
2026-03-10T10:20:23.202 INFO:tasks.workunit.client.1.vm05.stdout:0/988: write d1/d2/dc6/f104 [793903,108017] 0
2026-03-10T10:20:23.203 INFO:tasks.workunit.client.0.vm02.stdout:7/975: rmdir d1/dc/d16/d28/d2d/d103 39
2026-03-10T10:20:23.221 INFO:tasks.workunit.client.0.vm02.stdout:6/951: sync
2026-03-10T10:20:23.225 INFO:tasks.workunit.client.0.vm02.stdout:8/952: dwrite d1/d1c/d23/d25/fc2 [0,4194304] 0
2026-03-10T10:20:23.225 INFO:tasks.workunit.client.1.vm05.stdout:5/967: truncate da/db/d28/f107 1564790 0
2026-03-10T10:20:23.238 INFO:tasks.workunit.client.0.vm02.stdout:3/979: rename d1/d8/d44/lbf to d1/d58/dc9/l144 0
2026-03-10T10:20:23.253 INFO:tasks.workunit.client.0.vm02.stdout:2/994: write d0/d71/d108/ff0 [8717,72170] 0
2026-03-10T10:20:23.253 INFO:tasks.workunit.client.0.vm02.stdout:2/995: readlink d0/d1a/l2f 0
2026-03-10T10:20:23.262 INFO:tasks.workunit.client.0.vm02.stdout:9/936: write da/d3c/d4c/d38/d82/da3/fff [505640,93956] 0
2026-03-10T10:20:23.264 INFO:tasks.workunit.client.0.vm02.stdout:9/937: dread da/d3c/fc0 [0,4194304] 0
2026-03-10T10:20:23.285 INFO:tasks.workunit.client.0.vm02.stdout:3/980: symlink d1/d8/d86/da2/d12b/d13d/l145 0
2026-03-10T10:20:23.288 INFO:tasks.workunit.client.1.vm05.stdout:8/909: dwrite d7/d14/d15/da7/faf [0,4194304] 0
2026-03-10T10:20:23.295 INFO:tasks.workunit.client.1.vm05.stdout:2/921: dwrite db/d12/f1d [0,4194304] 0
2026-03-10T10:20:23.299 INFO:tasks.workunit.client.0.vm02.stdout:2/996: dwrite d0/d71/d108/d65/db0/fbb [0,4194304] 0
2026-03-10T10:20:23.300 INFO:tasks.workunit.client.1.vm05.stdout:8/910: sync
2026-03-10T10:20:23.310 INFO:tasks.workunit.client.0.vm02.stdout:6/952: creat d0/d8/d29/d6d/d13e/f144 x:0 0 0
2026-03-10T10:20:23.311 INFO:tasks.workunit.client.0.vm02.stdout:6/953: chown d0/d8/d29/d52/laf 29 1
2026-03-10T10:20:23.319 INFO:tasks.workunit.client.0.vm02.stdout:2/997: dwrite d0/d71/d108/f63 [4194304,4194304] 0
2026-03-10T10:20:23.336 INFO:tasks.workunit.client.0.vm02.stdout:2/998: dwrite d0/d1a/f25 [0,4194304] 0
2026-03-10T10:20:23.338 INFO:tasks.workunit.client.1.vm05.stdout:6/963: write dd/d36/d3f/f137 [13693,98169] 0
2026-03-10T10:20:23.350 INFO:tasks.workunit.client.1.vm05.stdout:6/964: dwrite dd/d36/f71 [0,4194304] 0
2026-03-10T10:20:23.350 INFO:tasks.workunit.client.1.vm05.stdout:8/911: dwrite d7/d2f/f7e [0,4194304] 0
2026-03-10T10:20:23.370 INFO:tasks.workunit.client.0.vm02.stdout:3/981: dread d1/d6/f53 [0,4194304] 0
2026-03-10T10:20:23.371 INFO:tasks.workunit.client.0.vm02.stdout:3/982: chown d1/d6/d8e/fa0 60262 1
2026-03-10T10:20:23.372 INFO:tasks.workunit.client.0.vm02.stdout:3/983: write d1/f1c [207274,34203] 0
2026-03-10T10:20:23.378 INFO:tasks.workunit.client.0.vm02.stdout:3/984: dwrite d1/d8/fd6 [0,4194304] 0
2026-03-10T10:20:23.385 INFO:tasks.workunit.client.1.vm05.stdout:2/922: link db/f25 db/d28/d4f/d59/dce/f12b 0
2026-03-10T10:20:23.390 INFO:tasks.workunit.client.1.vm05.stdout:9/857: write d0/df/f99 [4336429,65193] 0
2026-03-10T10:20:23.402 INFO:tasks.workunit.client.0.vm02.stdout:7/976: getdents d1/dc/d16/d28/d2d 0
2026-03-10T10:20:23.402 INFO:tasks.workunit.client.0.vm02.stdout:7/977: fdatasync d1/d1b/d8e/f121 0
2026-03-10T10:20:23.405 INFO:tasks.workunit.client.0.vm02.stdout:7/978: dread d1/dc/d16/d28/d2d/fb0 [0,4194304] 0
2026-03-10T10:20:23.420 INFO:tasks.workunit.client.1.vm05.stdout:4/856: dwrite d1/d31/dc/d40/f67 [0,4194304] 0
2026-03-10T10:20:23.444 INFO:tasks.workunit.client.0.vm02.stdout:6/954: readlink d0/d8/d29/d6d/d96/de4/d102/l66 0
2026-03-10T10:20:23.449 INFO:tasks.workunit.client.1.vm05.stdout:0/989: write d1/d2/d9/d31/d13/d17/fd3 [941361,70678] 0
2026-03-10T10:20:23.453 INFO:tasks.workunit.client.0.vm02.stdout:9/938: link da/d3c/d4c/d38/d82/d8c/f98 da/d3c/d4c/d38/da6/d118/f130 0
2026-03-10T10:20:23.464 INFO:tasks.workunit.client.0.vm02.stdout:8/953: dwrite d1/d1c/d23/f75 [0,4194304] 0
2026-03-10T10:20:23.465 INFO:tasks.workunit.client.1.vm05.stdout:5/968: dwrite da/db/d26/d70/d72/d10b/f111 [0,4194304] 0
2026-03-10T10:20:23.465 INFO:tasks.workunit.client.1.vm05.stdout:9/858: creat d0/df/d74/d8c/de4/f128 x:0 0 0
2026-03-10T10:20:23.465 INFO:tasks.workunit.client.1.vm05.stdout:4/857: creat d1/dfd/f122 x:0 0 0
2026-03-10T10:20:23.470 INFO:tasks.workunit.client.0.vm02.stdout:8/954: dread d1/d1c/d24/f6b [0,4194304] 0
2026-03-10T10:20:23.472 INFO:tasks.workunit.client.1.vm05.stdout:6/965: dread dd/f29 [0,4194304] 0
2026-03-10T10:20:23.472 INFO:tasks.workunit.client.0.vm02.stdout:2/999: unlink d0/d10/dee/fff 0
2026-03-10T10:20:23.497 INFO:tasks.workunit.client.0.vm02.stdout:3/985: creat d1/d8/d86/da2/f146 x:0 0 0
2026-03-10T10:20:23.498 INFO:tasks.workunit.client.1.vm05.stdout:5/969: mkdir da/db/de9/d146 0
2026-03-10T10:20:23.500 INFO:tasks.workunit.client.1.vm05.stdout:9/859: mknod d0/df/d74/d8c/d8f/ddd/de6/c129 0
2026-03-10T10:20:23.503 INFO:tasks.workunit.client.0.vm02.stdout:9/939: fdatasync da/d3c/d4c/d38/d82/d8c/fa8 0
2026-03-10T10:20:23.503 INFO:tasks.workunit.client.0.vm02.stdout:6/955: fdatasync d0/d8/d29/d6d/d96/dfd/ffc 0
2026-03-10T10:20:23.504 INFO:tasks.workunit.client.0.vm02.stdout:9/940: chown da/f15 1 1
2026-03-10T10:20:23.504 INFO:tasks.workunit.client.0.vm02.stdout:6/956: dread - d0/d8/d29/d6d/d96/dfd/ffc zero size
2026-03-10T10:20:23.506 INFO:tasks.workunit.client.0.vm02.stdout:6/957: write d0/d8/d29/d2f/d50/d98/fb1 [1602312,61364] 0
2026-03-10T10:20:23.507 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:23 vm02.local ceph-mon[50200]: pgmap v10: 65 pgs: 65 active+clean; 3.8 GiB data, 13 GiB used, 107 GiB / 120 GiB avail; 29 MiB/s rd, 71 MiB/s wr, 200 op/s
2026-03-10T10:20:23.509 INFO:tasks.workunit.client.1.vm05.stdout:4/858: creat d1/d31/d76/d109/f123 x:0 0 0
2026-03-10T10:20:23.510 INFO:tasks.workunit.client.0.vm02.stdout:8/955: rename d1/d2/f36 to d1/d1c/d24/dad/f125 0
2026-03-10T10:20:23.510 INFO:tasks.workunit.client.0.vm02.stdout:3/986: mkdir d1/d8/d86/db1/d116/d147 0
2026-03-10T10:20:23.511 INFO:tasks.workunit.client.1.vm05.stdout:8/912: rename d7/d14/d15/d3b/l124 to d7/d14/d24/d3f/d6a/db0/l127 0
2026-03-10T10:20:23.515 INFO:tasks.workunit.client.1.vm05.stdout:5/970: creat da/db/dee/f147 x:0 0 0
2026-03-10T10:20:23.516 INFO:tasks.workunit.client.1.vm05.stdout:5/971: stat da/db/d28/d8a/de3/l102 0
2026-03-10T10:20:23.517 INFO:tasks.workunit.client.1.vm05.stdout:5/972: fdatasync da/db/d28/d97/fb2 0
2026-03-10T10:20:23.518 INFO:tasks.workunit.client.0.vm02.stdout:6/958: symlink d0/d8/d9/l145 0
2026-03-10T10:20:23.518 INFO:tasks.workunit.client.1.vm05.stdout:8/913: fdatasync d7/d14/d24/f42 0
2026-03-10T10:20:23.518 INFO:tasks.workunit.client.0.vm02.stdout:9/941: creat da/d3c/d4c/d38/d4a/d70/d11e/f131 x:0 0 0
2026-03-10T10:20:23.522 INFO:tasks.workunit.client.1.vm05.stdout:4/859: creat d1/d64/f124 x:0 0 0
2026-03-10T10:20:23.523 INFO:tasks.workunit.client.1.vm05.stdout:6/966: rename dd/d36/d3f/dbd/c104 to dd/d36/d3f/dbd/c13a 0
2026-03-10T10:20:23.526 INFO:tasks.workunit.client.1.vm05.stdout:8/914: chown d7/d14/d62/l9e 47120 1
2026-03-10T10:20:23.531 INFO:tasks.workunit.client.0.vm02.stdout:9/942: unlink da/d3c/d4c/d2c/d34/c105 0
2026-03-10T10:20:23.540 INFO:tasks.workunit.client.1.vm05.stdout:9/860: dread d0/d70/d10d/daf/fcd [0,4194304] 0
2026-03-10T10:20:23.541 INFO:tasks.workunit.client.1.vm05.stdout:9/861: dread d0/d70/d10d/daf/fdb [0,4194304] 0
2026-03-10T10:20:23.542 INFO:tasks.workunit.client.0.vm02.stdout:9/943: mkdir da/d3c/d4c/d2c/d34/dc2/d115/d132 0
2026-03-10T10:20:23.542 INFO:tasks.workunit.client.0.vm02.stdout:8/956: dread d1/d1c/d24/d71/fb4 [0,4194304] 0
2026-03-10T10:20:23.543 INFO:tasks.workunit.client.0.vm02.stdout:8/957: readlink d1/d1c/d24/d71/l114 0
2026-03-10T10:20:23.544 INFO:tasks.workunit.client.0.vm02.stdout:9/944: creat da/d3c/d4c/db1/d112/d114/f133 x:0 0 0
2026-03-10T10:20:23.545 INFO:tasks.workunit.client.0.vm02.stdout:9/945: fsync da/d3c/d4c/f2b 0
2026-03-10T10:20:23.546 INFO:tasks.workunit.client.1.vm05.stdout:8/915: unlink d7/d14/d24/d3f/df0/c102 0
2026-03-10T10:20:23.547 INFO:tasks.workunit.client.1.vm05.stdout:8/916: chown d7/d14/d15/d3b/fc5 1684389 1
2026-03-10T10:20:23.550 INFO:tasks.workunit.client.0.vm02.stdout:9/946: creat da/f134 x:0 0 0
2026-03-10T10:20:23.550 INFO:tasks.workunit.client.1.vm05.stdout:9/862: creat d0/df/d11/f12a x:0 0 0
2026-03-10T10:20:23.551 INFO:tasks.workunit.client.1.vm05.stdout:4/860: link d1/d3/le d1/d31/d76/l125 0
2026-03-10T10:20:23.551 INFO:tasks.workunit.client.1.vm05.stdout:9/863: write d0/df/d74/d8c/de4/d104/fff [307727,32535] 0
2026-03-10T10:20:23.558 INFO:tasks.workunit.client.1.vm05.stdout:9/864: sync
2026-03-10T10:20:23.567 INFO:tasks.workunit.client.1.vm05.stdout:9/865: mknod d0/d1/d13/d55/c12b 0
2026-03-10T10:20:23.569 INFO:tasks.workunit.client.1.vm05.stdout:4/861: read d1/d31/d4b/d6d/fbc [424009,89781] 0
2026-03-10T10:20:23.572 INFO:tasks.workunit.client.1.vm05.stdout:9/866: creat d0/dc4/dfe/f12c x:0 0 0
2026-03-10T10:20:23.580 INFO:tasks.workunit.client.1.vm05.stdout:9/867: creat d0/df/d74/d8c/f12d x:0 0 0
2026-03-10T10:20:23.580 INFO:tasks.workunit.client.1.vm05.stdout:9/868: write d0/df/d74/d90/f125 [483268,6194] 0
2026-03-10T10:20:23.581 INFO:tasks.workunit.client.1.vm05.stdout:9/869: chown d0/d1/f6d 21134836 1
2026-03-10T10:20:23.601 INFO:tasks.workunit.client.1.vm05.stdout:2/923: write db/d28/d4f/fbd [621031,64288] 0
2026-03-10T10:20:23.602 INFO:tasks.workunit.client.1.vm05.stdout:0/990: write d1/f100 [694121,108991] 0
2026-03-10T10:20:23.603 INFO:tasks.workunit.client.1.vm05.stdout:0/991: chown d1/d2/d9/d50/f94 15 1
2026-03-10T10:20:23.604 INFO:tasks.workunit.client.1.vm05.stdout:0/992: chown d1/d2/d9/d50/d9a/da0/ff7 97960361 1
2026-03-10T10:20:23.614 INFO:tasks.workunit.client.0.vm02.stdout:7/979: dwrite d1/dc/d60/f53 [4194304,4194304] 0
2026-03-10T10:20:23.626 INFO:tasks.workunit.client.1.vm05.stdout:2/924: truncate db/f36 8742085 0
2026-03-10T10:20:23.632 INFO:tasks.workunit.client.1.vm05.stdout:0/993: creat d1/d2/d9/d31/d13/d15/d114/f148 x:0 0 0
2026-03-10T10:20:23.635 INFO:tasks.workunit.client.1.vm05.stdout:0/994: creat d1/d2/d9/d31/d13/d15/d114/f149 x:0 0 0
2026-03-10T10:20:23.641 INFO:tasks.workunit.client.1.vm05.stdout:5/973: write da/db/d28/d32/f79 [3672381,51742] 0
2026-03-10T10:20:23.641 INFO:tasks.workunit.client.1.vm05.stdout:6/967: write dd/d36/d3f/d12/d44/d30/f9e [2029122,98424] 0
2026-03-10T10:20:23.646 INFO:tasks.workunit.client.0.vm02.stdout:6/959: write d0/d8/d29/d6d/d96/ff5 [156871,65667] 0
2026-03-10T10:20:23.647 INFO:tasks.workunit.client.0.vm02.stdout:3/987: dwrite d1/d8/d21/d73/d78/d84/fb7 [0,4194304] 0
2026-03-10T10:20:23.655 INFO:tasks.workunit.client.1.vm05.stdout:0/995: dwrite d1/d2/d9/d31/d13/da2/fd6 [0,4194304] 0
2026-03-10T10:20:23.661 INFO:tasks.workunit.client.1.vm05.stdout:5/974: chown da/d9a/dc7/f83 504 1
2026-03-10T10:20:23.672 INFO:tasks.workunit.client.0.vm02.stdout:3/988: mknod d1/d8/d86/db1/c148 0
2026-03-10T10:20:23.675 INFO:tasks.workunit.client.0.vm02.stdout:9/947: write da/d3c/d4c/d75/fbb [4734333,24141] 0
2026-03-10T10:20:23.675 INFO:tasks.workunit.client.0.vm02.stdout:8/958: dwrite d1/d1c/d43/d5b/d88/dac/fb7 [0,4194304] 0
2026-03-10T10:20:23.675 INFO:tasks.workunit.client.1.vm05.stdout:5/975: unlink da/d9a/d120/f142 0
2026-03-10T10:20:23.675 INFO:tasks.workunit.client.1.vm05.stdout:0/996: rename d1/d2/d5d/c92 to d1/d2/d9/d31/d13/d124/c14a 0
2026-03-10T10:20:23.675 INFO:tasks.workunit.client.1.vm05.stdout:4/862: write d1/d31/dc/f3d [917113,87534] 0
2026-03-10T10:20:23.675 INFO:tasks.workunit.client.1.vm05.stdout:8/917: write d7/d14/d24/f61 [319433,64350] 0
2026-03-10T10:20:23.675 INFO:tasks.workunit.client.1.vm05.stdout:4/863: write d1/d3/d65/de0/f11d [5128110,95071] 0
2026-03-10T10:20:23.690 INFO:tasks.workunit.client.0.vm02.stdout:7/980: write d1/dc/d10/d38/f9b [1283693,86346] 0
2026-03-10T10:20:23.690 INFO:tasks.workunit.client.1.vm05.stdout:9/870: dwrite d0/df/d74/d8c/de4/d104/f91 [0,4194304] 0
2026-03-10T10:20:23.717 INFO:tasks.workunit.client.0.vm02.stdout:9/948: mkdir da/d3c/d4c/d2c/d34/d35/d135 0
2026-03-10T10:20:23.724 INFO:tasks.workunit.client.0.vm02.stdout:8/959: unlink d1/d1c/d23/f3b 0
2026-03-10T10:20:23.729 INFO:tasks.workunit.client.0.vm02.stdout:3/989: dread d1/d6/d8e/fa6 [0,4194304] 0
2026-03-10T10:20:23.732 INFO:tasks.workunit.client.0.vm02.stdout:3/990: chown d1/d6/d8e/fa0 4671 1
2026-03-10T10:20:23.738 INFO:tasks.workunit.client.1.vm05.stdout:6/968: truncate dd/d36/d3f/d12/d44/daa/de4/ff3 3842040 0
2026-03-10T10:20:23.740 INFO:tasks.workunit.client.0.vm02.stdout:6/960: write d0/d8/d29/d6d/d96/de4/d102/f8d [2474255,64259] 0
2026-03-10T10:20:23.745 INFO:tasks.workunit.client.1.vm05.stdout:2/925: dwrite db/d12/fb2 [0,4194304] 0
2026-03-10T10:20:23.749 INFO:tasks.workunit.client.1.vm05.stdout:8/918: rename d7/d14/d3a/d49/d65/db8/fe1 to d7/d14/d3a/d49/d65/db8/f128 0
2026-03-10T10:20:23.753 INFO:tasks.workunit.client.0.vm02.stdout:9/949: mknod da/d3c/d4c/db1/d112/d114/c136 0
2026-03-10T10:20:23.760 INFO:tasks.workunit.client.0.vm02.stdout:9/950: truncate da/d3c/d4c/fe0 931257 0
2026-03-10T10:20:23.761 INFO:tasks.workunit.client.1.vm05.stdout:6/969: sync
2026-03-10T10:20:23.763 INFO:tasks.workunit.client.1.vm05.stdout:6/970: chown dd/d36/d3f/d12/d44/d30/d4a/d6e/d28/c88 28368 1
2026-03-10T10:20:23.765 INFO:tasks.workunit.client.0.vm02.stdout:9/951: truncate da/d3c/d4c/fe0 1768062 0
2026-03-10T10:20:23.771 INFO:tasks.workunit.client.0.vm02.stdout:3/991: mkdir d1/d8/d21/d73/d78/d84/d149 0
2026-03-10T10:20:23.780 INFO:tasks.workunit.client.1.vm05.stdout:5/976: mknod da/db/de9/d146/c148 0
2026-03-10T10:20:23.780 INFO:tasks.workunit.client.1.vm05.stdout:5/977: dwrite da/d96/fea [0,4194304] 0
2026-03-10T10:20:23.780 INFO:tasks.workunit.client.0.vm02.stdout:7/981: symlink d1/dc/d16/d28/d2d/dae/d12f/l131 0
2026-03-10T10:20:23.780 INFO:tasks.workunit.client.0.vm02.stdout:6/961: mkdir d0/d8/d29/d6d/d96/de4/def/d6f/d10c/d146 0
2026-03-10T10:20:23.780 INFO:tasks.workunit.client.0.vm02.stdout:3/992: fdatasync d1/d8/d21/d73/d78/d84/fb7 0
2026-03-10T10:20:23.780 INFO:tasks.workunit.client.0.vm02.stdout:3/993: chown d1/d8/d86/da2/lfd 7 1
2026-03-10T10:20:23.783 INFO:tasks.workunit.client.1.vm05.stdout:8/919: symlink d7/d14/d24/d3f/d6a/d8a/l129 0
2026-03-10T10:20:23.789 INFO:tasks.workunit.client.1.vm05.stdout:2/926: mknod db/d1c/c12c 0
2026-03-10T10:20:23.790 INFO:tasks.workunit.client.0.vm02.stdout:8/960: dread d1/f65 [0,4194304] 0
2026-03-10T10:20:23.792 INFO:tasks.workunit.client.1.vm05.stdout:6/971: sync
2026-03-10T10:20:23.794 INFO:tasks.workunit.client.1.vm05.stdout:5/978: mkdir da/d63/d149 0
2026-03-10T10:20:23.795 INFO:tasks.workunit.client.1.vm05.stdout:9/871: dread d0/d1/d13/f27 [0,4194304] 0
2026-03-10T10:20:23.800 INFO:tasks.workunit.client.1.vm05.stdout:8/920: creat d7/d14/d15/da7/def/f12a x:0 0 0
2026-03-10T10:20:23.800 INFO:tasks.workunit.client.1.vm05.stdout:2/927: readlink db/d28/d4f/d59/l10b 0
2026-03-10T10:20:23.801 INFO:tasks.workunit.client.1.vm05.stdout:4/864: creat d1/d64/da9/f126 x:0 0 0
2026-03-10T10:20:23.803 INFO:tasks.workunit.client.1.vm05.stdout:4/865: fsync d1/d31/dc/d40/d45/ded/ff5 0
2026-03-10T10:20:23.805 INFO:tasks.workunit.client.1.vm05.stdout:5/979: creat da/d9a/dbe/d11d/f14a x:0 0 0
2026-03-10T10:20:23.806 INFO:tasks.workunit.client.1.vm05.stdout:5/980: write da/db/d28/d97/fb2 [2832240,4292] 0
2026-03-10T10:20:23.811 INFO:tasks.workunit.client.1.vm05.stdout:5/981: chown da/d9a/dc7/l5d 910063 1
2026-03-10T10:20:23.812 INFO:tasks.workunit.client.1.vm05.stdout:5/982: chown da/db/d26/d70/l8b 2 1
2026-03-10T10:20:23.814 INFO:tasks.workunit.client.1.vm05.stdout:4/866: dwrite d1/d3/d65/ddb/fe7 [0,4194304] 0
2026-03-10T10:20:23.871 INFO:tasks.workunit.client.1.vm05.stdout:9/872: mknod d0/df/c12e 0
2026-03-10T10:20:23.872 INFO:tasks.workunit.client.1.vm05.stdout:0/997: link d1/d2/d9/d50/f93 d1/d2/d9/d31/d13/da2/dab/f14b 0
2026-03-10T10:20:23.877 INFO:tasks.workunit.client.0.vm02.stdout:9/952: truncate da/d3c/d4c/d2c/d34/d35/df4/ff8 713341 0
2026-03-10T10:20:23.878 INFO:tasks.workunit.client.0.vm02.stdout:7/982: chown d1/c3c 4007918 1
2026-03-10T10:20:23.879 INFO:tasks.workunit.client.1.vm05.stdout:8/921: symlink d7/d14/d3a/d49/l12b 0
2026-03-10T10:20:23.879 INFO:tasks.workunit.client.1.vm05.stdout:6/972: rename dd/d36/d3f/d12/d44/fa1 to dd/d36/d3f/d12/d44/d2a/d77/f13b 0
2026-03-10T10:20:23.880 INFO:tasks.workunit.client.1.vm05.stdout:5/983: symlink da/db/l14b 0
2026-03-10T10:20:23.881 INFO:tasks.workunit.client.1.vm05.stdout:5/984: dread - da/d96/dd9/f141 zero size
2026-03-10T10:20:23.881 INFO:tasks.workunit.client.1.vm05.stdout:6/973: chown dd/d36/d3f 6021381 1
2026-03-10T10:20:23.884 INFO:tasks.workunit.client.0.vm02.stdout:6/962: read d0/d8/d8c/f75 [471012,126359] 0
2026-03-10T10:20:23.893 INFO:tasks.workunit.client.0.vm02.stdout:7/983: dread d1/dc/d16/d28/f108 [0,4194304] 0
2026-03-10T10:20:23.893 INFO:tasks.workunit.client.1.vm05.stdout:8/922: creat d7/d14/d24/d3f/dc4/f12c x:0 0 0
2026-03-10T10:20:23.895 INFO:tasks.workunit.client.1.vm05.stdout:5/985: creat da/d96/d117/f14c x:0 0 0
2026-03-10T10:20:23.898 INFO:tasks.workunit.client.0.vm02.stdout:8/961: dread d1/d1c/d43/d6a/f87 [0,4194304] 0
2026-03-10T10:20:23.898 INFO:tasks.workunit.client.1.vm05.stdout:9/873: mkdir d0/df/d74/d8c/de4/d104/d107/d12f 0
2026-03-10T10:20:23.899 INFO:tasks.workunit.client.0.vm02.stdout:3/994: mknod d1/d8/d21/d73/d78/c14a 0
2026-03-10T10:20:23.900 INFO:tasks.workunit.client.0.vm02.stdout:6/963: dread - d0/d87/f110 zero size
2026-03-10T10:20:23.901 INFO:tasks.workunit.client.1.vm05.stdout:6/974: dread dd/d1b/fa8 [0,4194304] 0
2026-03-10T10:20:23.902 INFO:tasks.workunit.client.1.vm05.stdout:9/874: fdatasync d0/d1/dcc/f116 0
2026-03-10T10:20:23.909 INFO:tasks.workunit.client.1.vm05.stdout:5/986: creat da/db/d28/d8a/f14d x:0 0 0
2026-03-10T10:20:23.911 INFO:tasks.workunit.client.1.vm05.stdout:6/975: stat dd/d1b/la7 0
2026-03-10T10:20:23.911 INFO:tasks.workunit.client.1.vm05.stdout:8/923: dwrite d7/d2f/f4b [4194304,4194304] 0
2026-03-10T10:20:23.912 INFO:tasks.workunit.client.1.vm05.stdout:0/998: creat d1/d2/d9/d31/d13/da2/f14c x:0 0 0
2026-03-10T10:20:23.919 INFO:tasks.workunit.client.1.vm05.stdout:6/976: readlink dd/d36/d3f/d12/d44/d2a/d3d/d3e/ld7 0
2026-03-10T10:20:23.927 INFO:tasks.workunit.client.1.vm05.stdout:4/867: truncate d1/f19 1607420 0
2026-03-10T10:20:23.931 INFO:tasks.workunit.client.0.vm02.stdout:7/984: dread d1/dc/d16/f1f [0,4194304] 0
2026-03-10T10:20:23.935 INFO:tasks.workunit.client.1.vm05.stdout:4/868: read d1/d31/dc/d40/d45/ded/ff5 [59055,125426] 0
2026-03-10T10:20:23.935 INFO:tasks.workunit.client.1.vm05.stdout:8/924: dread d7/d14/d15/da7/def/f107 [0,4194304] 0
2026-03-10T10:20:23.936 INFO:tasks.workunit.client.0.vm02.stdout:3/995: dread d1/d8/d86/da2/fd2 [0,4194304] 0
2026-03-10T10:20:23.936 INFO:tasks.workunit.client.1.vm05.stdout:2/928: dwrite db/d28/f7d [0,4194304] 0
2026-03-10T10:20:23.936 INFO:tasks.workunit.client.1.vm05.stdout:4/869: readlink d1/d3/l7c 0
2026-03-10T10:20:23.936 INFO:tasks.workunit.client.0.vm02.stdout:6/964: dread d0/d8/f8f [0,4194304] 0
2026-03-10T10:20:23.941 INFO:tasks.workunit.client.0.vm02.stdout:7/985: mkdir d1/dc/d16/d132 0
2026-03-10T10:20:23.946 INFO:tasks.workunit.client.1.vm05.stdout:5/987: unlink da/db/dee/c25 0
2026-03-10T10:20:23.953 INFO:tasks.workunit.client.1.vm05.stdout:0/999: mknod d1/d2/d39/d6e/d95/c14d 0
2026-03-10T10:20:23.960 INFO:tasks.workunit.client.0.vm02.stdout:9/953: truncate da/d3c/d4c/f49 3462691 0
2026-03-10T10:20:23.961 INFO:tasks.workunit.client.1.vm05.stdout:9/875: mknod d0/d1/d13/de/c130 0
2026-03-10T10:20:23.964 INFO:tasks.workunit.client.0.vm02.stdout:9/954: dwrite da/d3c/d4c/d75/fbb [0,4194304] 0
2026-03-10T10:20:23.974 INFO:tasks.workunit.client.0.vm02.stdout:7/986: truncate d1/dc/ff 4886096 0
2026-03-10T10:20:23.975 INFO:tasks.workunit.client.1.vm05.stdout:8/925: rmdir d7/d14/d24 39
2026-03-10T10:20:23.981 INFO:tasks.workunit.client.0.vm02.stdout:8/962: write d1/d1c/d43/d5b/d88/dac/d83/f99 [4591428,18950] 0
2026-03-10T10:20:23.981 INFO:tasks.workunit.client.1.vm05.stdout:4/870: chown d1/f5d 1066052544 1
2026-03-10T10:20:23.989 INFO:tasks.workunit.client.0.vm02.stdout:6/965: mkdir d0/d8/d29/d6d/d96/de4/def/d6f/d10c/d146/d147 0
2026-03-10T10:20:23.991 INFO:tasks.workunit.client.0.vm02.stdout:9/955: mknod da/d3c/d4c/d38/d82/d8c/c137 0
2026-03-10T10:20:23.996 INFO:tasks.workunit.client.1.vm05.stdout:2/929: write db/d2d/f47 [1428752,124620] 0
2026-03-10T10:20:23.998 INFO:tasks.workunit.client.1.vm05.stdout:2/930: read db/d28/d4f/d59/da4/faf [3527751,21136] 0
2026-03-10T10:20:24.000 INFO:tasks.workunit.client.1.vm05.stdout:6/977: dwrite dd/d36/d3f/d12/d44/d2a/d3d/d48/dc6/f118 [0,4194304] 0
2026-03-10T10:20:24.002 INFO:tasks.workunit.client.0.vm02.stdout:6/966: creat d0/d8/d29/d2f/d13b/f148 x:0 0 0
2026-03-10T10:20:24.005 INFO:tasks.workunit.client.1.vm05.stdout:4/871: creat d1/d31/dc/d40/d45/daa/f127 x:0 0 0
2026-03-10T10:20:24.005 INFO:tasks.workunit.client.0.vm02.stdout:9/956: rename da/d3c/d4c/d38/da6/fec to da/de5/f138 0
2026-03-10T10:20:24.005 INFO:tasks.workunit.client.0.vm02.stdout:9/957: readlink da/d3c/d4c/d38/d82/le8 0
2026-03-10T10:20:24.008 INFO:tasks.workunit.client.0.vm02.stdout:7/987: write d1/dc/d10/f7d [261706,33558] 0
2026-03-10T10:20:24.009 INFO:tasks.workunit.client.1.vm05.stdout:8/926: dwrite d7/d14/d24/f34 [0,4194304] 0
2026-03-10T10:20:24.010 INFO:tasks.workunit.client.0.vm02.stdout:3/996: getdents d1/d58/d104 0
2026-03-10T10:20:24.014 INFO:tasks.workunit.client.0.vm02.stdout:8/963: creat d1/d1c/d24/dad/dbe/dda/ded/f126 x:0 0 0
2026-03-10T10:20:24.015 INFO:tasks.workunit.client.1.vm05.stdout:2/931: stat db/l63 0
2026-03-10T10:20:24.016 INFO:tasks.workunit.client.1.vm05.stdout:5/988: dwrite da/db/dee/d38/f65 [0,4194304] 0
2026-03-10T10:20:24.018 INFO:tasks.workunit.client.0.vm02.stdout:9/958: truncate da/d3c/d4c/d2c/d34/f83 3066247 0
2026-03-10T10:20:24.021 INFO:tasks.workunit.client.1.vm05.stdout:6/978: mknod dd/d36/d3f/d12/d44/d2a/d3d/d48/dc6/c13c 0
2026-03-10T10:20:24.028 INFO:tasks.workunit.client.0.vm02.stdout:3/997: dread - d1/d58/f91 zero size
2026-03-10T10:20:24.032 INFO:tasks.workunit.client.1.vm05.stdout:5/989: truncate da/db/d28/d97/f87 2147601 0
2026-03-10T10:20:24.032 INFO:tasks.workunit.client.1.vm05.stdout:2/932: dwrite db/d12/fb5 [0,4194304] 0
2026-03-10T10:20:24.037 INFO:tasks.workunit.client.1.vm05.stdout:6/979: creat dd/d36/d3f/d12/d44/d30/d4a/f13d x:0 0 0
2026-03-10T10:20:24.042 INFO:tasks.workunit.client.1.vm05.stdout:6/980: dwrite dd/d36/d3f/d12/d44/d63/f78 [0,4194304] 0
2026-03-10T10:20:24.056 INFO:tasks.workunit.client.0.vm02.stdout:6/967: dwrite d0/d8/d29/d6d/d96/de4/d102/fd5 [0,4194304] 0
2026-03-10T10:20:24.067 INFO:tasks.workunit.client.0.vm02.stdout:8/964: write d1/d1c/d43/d6a/da8/fbf [777247,92444] 0
2026-03-10T10:20:24.069 INFO:tasks.workunit.client.0.vm02.stdout:7/988: dwrite d1/dc/d55/f85 [0,4194304] 0
2026-03-10T10:20:24.071 INFO:tasks.workunit.client.1.vm05.stdout:5/990: unlink da/db/dee/f7d 0
2026-03-10T10:20:24.073 INFO:tasks.workunit.client.1.vm05.stdout:4/872: creat d1/d31/dc/d40/d45/ded/f128 x:0 0 0
2026-03-10T10:20:24.093 INFO:tasks.workunit.client.0.vm02.stdout:6/968: rename d0/d8/d29/d6d/d96/de4/def to d0/d122/d149 0
2026-03-10T10:20:24.097 INFO:tasks.workunit.client.0.vm02.stdout:9/959: dwrite da/d3c/d4c/d38/d82/d89/fb5 [0,4194304] 0
2026-03-10T10:20:24.101 INFO:tasks.workunit.client.1.vm05.stdout:8/927: link d7/d14/d24/d3f/d4f/cdb d7/d14/d3a/d49/c12d 0
2026-03-10T10:20:24.107 INFO:tasks.workunit.client.1.vm05.stdout:8/928: chown d7/d14/d24/d3f/d4f/ccd 2 1 2026-03-10T10:20:24.107 INFO:tasks.workunit.client.0.vm02.stdout:7/989: fdatasync d1/d1b/d49/fbf 0 2026-03-10T10:20:24.107 INFO:tasks.workunit.client.0.vm02.stdout:7/990: dread d1/dc/d16/dfc/f12b [0,4194304] 0 2026-03-10T10:20:24.112 INFO:tasks.workunit.client.1.vm05.stdout:9/876: dread d0/d1/d16/f36 [0,4194304] 0 2026-03-10T10:20:24.123 INFO:tasks.workunit.client.0.vm02.stdout:7/991: unlink d1/d1b/f43 0 2026-03-10T10:20:24.129 INFO:tasks.workunit.client.0.vm02.stdout:8/965: creat d1/d1c/d43/d6a/da8/d56/db5/f127 x:0 0 0 2026-03-10T10:20:24.144 INFO:tasks.workunit.client.0.vm02.stdout:3/998: dwrite d1/d6/f42 [0,4194304] 0 2026-03-10T10:20:24.146 INFO:tasks.workunit.client.0.vm02.stdout:7/992: dwrite d1/d1b/d8e/f121 [0,4194304] 0 2026-03-10T10:20:24.165 INFO:tasks.workunit.client.0.vm02.stdout:9/960: read da/d3c/d4c/d2c/d34/d35/fcd [2758544,107988] 0 2026-03-10T10:20:24.165 INFO:tasks.workunit.client.0.vm02.stdout:8/966: rename d1/d1c/d43/d5b/d88/fe1 to d1/d1c/d24/d71/f128 0 2026-03-10T10:20:24.167 INFO:tasks.workunit.client.0.vm02.stdout:9/961: write da/d3c/d4c/d2c/d34/f4d [8818200,88270] 0 2026-03-10T10:20:24.173 INFO:tasks.workunit.client.0.vm02.stdout:7/993: creat d1/def/f133 x:0 0 0 2026-03-10T10:20:24.177 INFO:tasks.workunit.client.0.vm02.stdout:3/999: truncate d1/d20/db2/fc0 422327 0 2026-03-10T10:20:24.191 INFO:tasks.workunit.client.1.vm05.stdout:5/991: write da/db/d28/fd7 [3455172,125634] 0 2026-03-10T10:20:24.195 INFO:tasks.workunit.client.1.vm05.stdout:2/933: creat db/d61/f12d x:0 0 0 2026-03-10T10:20:24.199 INFO:tasks.workunit.client.1.vm05.stdout:6/981: rename dd/d36/d3f/d12/l1a to dd/d1b/l13e 0 2026-03-10T10:20:24.200 INFO:tasks.workunit.client.1.vm05.stdout:2/934: truncate db/d28/d4f/d59/f8d 1852915 0 2026-03-10T10:20:24.200 INFO:tasks.workunit.client.1.vm05.stdout:4/873: symlink d1/d3/l129 0 2026-03-10T10:20:24.200 
INFO:tasks.workunit.client.0.vm02.stdout:9/962: rename da/d3c/d4c/d38/d4a/d70/d11e/f131 to da/d3c/f139 0 2026-03-10T10:20:24.209 INFO:tasks.workunit.client.0.vm02.stdout:7/994: link d1/dc/l9 d1/dc/d16/d28/d2d/d114/l134 0 2026-03-10T10:20:24.210 INFO:tasks.workunit.client.1.vm05.stdout:6/982: stat dd/d36/d3f/c1f 0 2026-03-10T10:20:24.215 INFO:tasks.workunit.client.0.vm02.stdout:9/963: rename da/d3c/d4c/d2c/fb8 to da/d3c/d4c/d2c/d34/d35/df4/f13a 0 2026-03-10T10:20:24.217 INFO:tasks.workunit.client.0.vm02.stdout:9/964: read da/d3c/d4c/fbf [4015497,35169] 0 2026-03-10T10:20:24.230 INFO:tasks.workunit.client.1.vm05.stdout:5/992: chown da/db/d26/d5c/c3c 223 1 2026-03-10T10:20:24.230 INFO:tasks.workunit.client.1.vm05.stdout:5/993: chown da/db/d28/d97/l12a 84 1 2026-03-10T10:20:24.230 INFO:tasks.workunit.client.0.vm02.stdout:6/969: dwrite d0/d8/d29/d6d/ff8 [0,4194304] 0 2026-03-10T10:20:24.256 INFO:tasks.workunit.client.1.vm05.stdout:9/877: dread d0/d1/d13/d62/fa8 [0,4194304] 0 2026-03-10T10:20:24.256 INFO:tasks.workunit.client.1.vm05.stdout:9/878: dread - d0/d1/d16/fca zero size 2026-03-10T10:20:24.257 INFO:tasks.workunit.client.0.vm02.stdout:8/967: truncate d1/d1c/d43/d6a/da8/fbf 818286 0 2026-03-10T10:20:24.261 INFO:tasks.workunit.client.0.vm02.stdout:9/965: mknod da/d3c/d4c/d38/d4a/d70/c13b 0 2026-03-10T10:20:24.263 INFO:tasks.workunit.client.0.vm02.stdout:9/966: readlink da/d3c/d4c/d38/d82/da3/l120 0 2026-03-10T10:20:24.274 INFO:tasks.workunit.client.1.vm05.stdout:4/874: truncate d1/d3/f10 48886 0 2026-03-10T10:20:24.274 INFO:tasks.workunit.client.1.vm05.stdout:2/935: symlink db/d2d/l12e 0 2026-03-10T10:20:24.274 INFO:tasks.workunit.client.1.vm05.stdout:6/983: dread - dd/d36/d3f/dbd/dd5/f110 zero size 2026-03-10T10:20:24.275 INFO:tasks.workunit.client.1.vm05.stdout:8/929: creat d7/d14/f12e x:0 0 0 2026-03-10T10:20:24.276 INFO:tasks.workunit.client.0.vm02.stdout:8/968: dwrite d1/d1c/d43/d5b/d88/dac/d83/f99 [0,4194304] 0 2026-03-10T10:20:24.284 
INFO:tasks.workunit.client.0.vm02.stdout:6/970: read d0/d8/d29/d6d/d96/ff5 [86092,60703] 0 2026-03-10T10:20:24.288 INFO:tasks.workunit.client.1.vm05.stdout:6/984: dwrite dd/d36/f71 [0,4194304] 0 2026-03-10T10:20:24.293 INFO:tasks.workunit.client.0.vm02.stdout:8/969: mknod d1/d1c/d43/d5b/d88/dac/d83/c129 0 2026-03-10T10:20:24.309 INFO:tasks.workunit.client.0.vm02.stdout:7/995: write d1/d1b/d8f/d67/fc2 [839477,60507] 0 2026-03-10T10:20:24.316 INFO:tasks.workunit.client.1.vm05.stdout:4/875: dread d1/d31/dc/d40/f7d [0,4194304] 0 2026-03-10T10:20:24.321 INFO:tasks.workunit.client.0.vm02.stdout:8/970: unlink d1/d1c/d23/ce6 0 2026-03-10T10:20:24.326 INFO:tasks.workunit.client.1.vm05.stdout:4/876: write d1/d31/dc/d40/d45/ded/f120 [208955,65392] 0 2026-03-10T10:20:24.326 INFO:tasks.workunit.client.1.vm05.stdout:4/877: chown d1/d31/dc/d40/d45/ded/ff5 186 1 2026-03-10T10:20:24.327 INFO:tasks.workunit.client.0.vm02.stdout:6/971: chown d0/d8/d29/d52/fbc 41971 1 2026-03-10T10:20:24.327 INFO:tasks.workunit.client.0.vm02.stdout:7/996: mknod d1/dc/d55/c135 0 2026-03-10T10:20:24.327 INFO:tasks.workunit.client.0.vm02.stdout:9/967: dwrite da/d3c/d4c/f23 [0,4194304] 0 2026-03-10T10:20:24.329 INFO:tasks.workunit.client.0.vm02.stdout:8/971: rename d1/d1c/d23/f75 to d1/d1c/d43/d5b/d88/dac/d83/f12a 0 2026-03-10T10:20:24.330 INFO:tasks.workunit.client.1.vm05.stdout:8/930: fsync d7/d14/d15/da7/def/f107 0 2026-03-10T10:20:24.354 INFO:tasks.workunit.client.1.vm05.stdout:9/879: creat d0/df/f131 x:0 0 0 2026-03-10T10:20:24.354 INFO:tasks.workunit.client.1.vm05.stdout:9/880: readlink d0/d1/d13/d55/d7d/l89 0 2026-03-10T10:20:24.365 INFO:tasks.workunit.client.1.vm05.stdout:4/878: rename d1/d31/d76/dac/db8 to d1/d64/da9/dae/d12a 0 2026-03-10T10:20:24.372 INFO:tasks.workunit.client.1.vm05.stdout:2/936: creat db/d1c/d40/f12f x:0 0 0 2026-03-10T10:20:24.373 INFO:tasks.workunit.client.0.vm02.stdout:6/972: mknod d0/c14a 0 2026-03-10T10:20:24.382 INFO:tasks.workunit.client.1.vm05.stdout:5/994: dwrite 
da/d131/f136 [0,4194304] 0 2026-03-10T10:20:24.382 INFO:tasks.workunit.client.0.vm02.stdout:7/997: write d1/dc/d55/f8d [222336,77097] 0 2026-03-10T10:20:24.388 INFO:tasks.workunit.client.0.vm02.stdout:8/972: creat d1/d1c/d43/d6a/f12b x:0 0 0 2026-03-10T10:20:24.389 INFO:tasks.workunit.client.0.vm02.stdout:8/973: read d1/d1c/d43/d5b/f63 [291685,69248] 0 2026-03-10T10:20:24.390 INFO:tasks.workunit.client.0.vm02.stdout:8/974: chown d1/f1b 30618 1 2026-03-10T10:20:24.394 INFO:tasks.workunit.client.0.vm02.stdout:9/968: fdatasync da/f13 0 2026-03-10T10:20:24.407 INFO:tasks.workunit.client.1.vm05.stdout:9/881: dwrite d0/df/d11/f8d [0,4194304] 0 2026-03-10T10:20:24.409 INFO:tasks.workunit.client.1.vm05.stdout:4/879: unlink d1/d31/dc/d40/d45/daa/fe4 0 2026-03-10T10:20:24.410 INFO:tasks.workunit.client.1.vm05.stdout:2/937: symlink db/d28/d4f/d59/da4/l130 0 2026-03-10T10:20:24.451 INFO:tasks.workunit.client.0.vm02.stdout:8/975: creat d1/d1c/d43/d5b/dab/d102/f12c x:0 0 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.0.vm02.stdout:9/969: dread da/f14 [0,4194304] 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.0.vm02.stdout:8/976: creat d1/d1c/d43/d6a/d7c/f12d x:0 0 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.0.vm02.stdout:8/977: unlink d1/d1c/d43/d6a/f9c 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.0.vm02.stdout:9/970: read da/d3c/d4c/d38/d4a/d70/d10a/f11d [3567411,5430] 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.0.vm02.stdout:9/971: mknod da/d9d/c13c 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.0.vm02.stdout:7/998: link d1/dc/fbc d1/d1b/d8f/dad/d7e/f136 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.0.vm02.stdout:7/999: fdatasync d1/dc/d10/f7d 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.0.vm02.stdout:8/978: fsync d1/d1c/d23/f9d 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.1.vm05.stdout:5/995: creat da/db/d26/d70/d72/df6/f14e x:0 0 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.1.vm05.stdout:8/931: 
truncate d7/d14/d3a/d49/d65/db8/f128 774246 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.1.vm05.stdout:4/880: write d1/d31/dc/d40/d63/f11a [667543,87613] 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.1.vm05.stdout:9/882: symlink d0/d70/d10d/l132 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.1.vm05.stdout:5/996: creat da/db/d28/d32/f14f x:0 0 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.1.vm05.stdout:8/932: mknod d7/d14/d15/d3b/da0/c12f 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.1.vm05.stdout:6/985: getdents dd/d36/d3f/d12/d44/d63 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.1.vm05.stdout:4/881: creat d1/d64/da9/dae/dfc/d108/f12b x:0 0 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.1.vm05.stdout:2/938: mkdir db/d12/d131 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.1.vm05.stdout:9/883: creat d0/df/d74/d8c/d8f/f133 x:0 0 0 2026-03-10T10:20:24.452 INFO:tasks.workunit.client.1.vm05.stdout:5/997: mkdir da/db/d26/d70/d72/d10b/d150 0 2026-03-10T10:20:24.456 INFO:tasks.workunit.client.0.vm02.stdout:9/972: mkdir da/d3c/d4c/d2c/d34/dc2/d13d 0 2026-03-10T10:20:24.459 INFO:tasks.workunit.client.1.vm05.stdout:2/939: rmdir db/d1c/d40/d62/d10c/d120 39 2026-03-10T10:20:24.463 INFO:tasks.workunit.client.1.vm05.stdout:9/884: mkdir d0/d1/d13/de/d93/d134 0 2026-03-10T10:20:24.464 INFO:tasks.workunit.client.0.vm02.stdout:6/973: sync 2026-03-10T10:20:24.464 INFO:tasks.workunit.client.1.vm05.stdout:5/998: symlink da/d63/df2/d123/l151 0 2026-03-10T10:20:24.467 INFO:tasks.workunit.client.0.vm02.stdout:9/973: fsync da/d3c/d4c/d38/d82/d89/fd6 0 2026-03-10T10:20:24.472 INFO:tasks.workunit.client.0.vm02.stdout:8/979: mknod d1/d1c/d43/d5b/c12e 0 2026-03-10T10:20:24.472 INFO:tasks.workunit.client.1.vm05.stdout:6/986: unlink dd/d36/d3f/d12/d44/d30/d4a/d6e/cc2 0 2026-03-10T10:20:24.474 INFO:tasks.workunit.client.1.vm05.stdout:2/940: mknod db/d61/d67/c132 0 2026-03-10T10:20:24.474 INFO:tasks.workunit.client.1.vm05.stdout:2/941: chown 
db/d2d/c12a 0 1 2026-03-10T10:20:24.475 INFO:tasks.workunit.client.1.vm05.stdout:9/885: chown d0/df/d11/l5f 2896032 1 2026-03-10T10:20:24.476 INFO:tasks.workunit.client.1.vm05.stdout:5/999: rename da/db/dee/fd3 to da/d96/d117/f152 0 2026-03-10T10:20:24.477 INFO:tasks.workunit.client.0.vm02.stdout:6/974: dread d0/d87/f11a [0,4194304] 0 2026-03-10T10:20:24.477 INFO:tasks.workunit.client.1.vm05.stdout:2/942: dread - db/d61/d10a/f112 zero size 2026-03-10T10:20:24.482 INFO:tasks.workunit.client.1.vm05.stdout:9/886: mknod d0/df/d74/d8c/de4/d104/d107/c135 0 2026-03-10T10:20:24.482 INFO:tasks.workunit.client.1.vm05.stdout:9/887: chown d0/f7 7489873 1 2026-03-10T10:20:24.489 INFO:tasks.workunit.client.1.vm05.stdout:6/987: unlink dd/d36/d3f/d12/c23 0 2026-03-10T10:20:24.495 INFO:tasks.workunit.client.1.vm05.stdout:6/988: rename dd/d36/d7d/d102/c106 to dd/d36/d3f/dbd/c13f 0 2026-03-10T10:20:24.498 INFO:tasks.workunit.client.0.vm02.stdout:6/975: dread - d0/d8/f100 zero size 2026-03-10T10:20:24.499 INFO:tasks.workunit.client.1.vm05.stdout:4/882: dread d1/d31/dc/d40/d63/f11a [0,4194304] 0 2026-03-10T10:20:24.502 INFO:tasks.workunit.client.0.vm02.stdout:9/974: getdents da/d3c/d4c/d2c/d34 0 2026-03-10T10:20:24.505 INFO:tasks.workunit.client.0.vm02.stdout:8/980: rename d1/d1c/d43/d6a/f12b to d1/dc7/f12f 0 2026-03-10T10:20:24.514 INFO:tasks.workunit.client.1.vm05.stdout:2/943: write db/d1c/f56 [4147321,12062] 0 2026-03-10T10:20:24.515 INFO:tasks.workunit.client.0.vm02.stdout:6/976: creat d0/d8/d29/d2f/d13b/db2/dbb/de5/f14b x:0 0 0 2026-03-10T10:20:24.515 INFO:tasks.workunit.client.1.vm05.stdout:2/944: chown db/d1c/d40/f50 86042 1 2026-03-10T10:20:24.518 INFO:tasks.workunit.client.0.vm02.stdout:9/975: creat da/d3c/d4c/d2c/d34/dc2/d115/d132/f13e x:0 0 0 2026-03-10T10:20:24.518 INFO:tasks.workunit.client.0.vm02.stdout:9/976: truncate da/d3c/f139 564438 0 2026-03-10T10:20:24.519 INFO:tasks.workunit.client.1.vm05.stdout:6/989: dread dd/d36/d3f/d12/d44/d30/d4a/ff7 [0,4194304] 0 
2026-03-10T10:20:24.519 INFO:tasks.workunit.client.1.vm05.stdout:4/883: creat d1/d64/da9/dae/f12c x:0 0 0 2026-03-10T10:20:24.520 INFO:tasks.workunit.client.1.vm05.stdout:9/888: dwrite d0/df/d11/f52 [0,4194304] 0 2026-03-10T10:20:24.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:24 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:24.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:24 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:24.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:24 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:24.533 INFO:tasks.workunit.client.1.vm05.stdout:8/933: dread d7/f78 [0,4194304] 0 2026-03-10T10:20:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:24 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:24 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:24.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:24 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:24.703 INFO:tasks.workunit.client.0.vm02.stdout:9/977: fsync da/f14 0 2026-03-10T10:20:24.704 INFO:tasks.workunit.client.0.vm02.stdout:9/978: chown da/d3c/d4c/d38/d4a/d70/d11e 2055 1 2026-03-10T10:20:24.705 INFO:tasks.workunit.client.0.vm02.stdout:9/979: chown da/f13 494515148 1 2026-03-10T10:20:24.710 INFO:tasks.workunit.client.1.vm05.stdout:2/945: mkdir db/d2d/dc6/dc7/d133 0 2026-03-10T10:20:24.719 INFO:tasks.workunit.client.0.vm02.stdout:6/977: mknod d0/d122/d149/d6f/da1/da8/c14c 0 2026-03-10T10:20:24.738 INFO:tasks.workunit.client.0.vm02.stdout:9/980: mkdir 
da/de5/d13f 0 2026-03-10T10:20:24.744 INFO:tasks.workunit.client.1.vm05.stdout:6/990: fdatasync dd/d36/d3f/d12/d44/daa/fbc 0 2026-03-10T10:20:24.745 INFO:tasks.workunit.client.0.vm02.stdout:8/981: write d1/d2/f29 [2203053,19887] 0 2026-03-10T10:20:24.750 INFO:tasks.workunit.client.1.vm05.stdout:2/946: unlink db/d12/lb1 0 2026-03-10T10:20:24.764 INFO:tasks.workunit.client.1.vm05.stdout:4/884: dread - d1/d31/d72/d106/ff8 zero size 2026-03-10T10:20:24.764 INFO:tasks.workunit.client.1.vm05.stdout:6/991: truncate dd/d36/d3f/d12/d44/d2a/fa5 3981434 0 2026-03-10T10:20:24.764 INFO:tasks.workunit.client.0.vm02.stdout:8/982: truncate d1/d1c/d23/d25/f76 1668992 0 2026-03-10T10:20:24.789 INFO:tasks.workunit.client.1.vm05.stdout:2/947: creat db/d61/dfc/d9d/f134 x:0 0 0 2026-03-10T10:20:24.798 INFO:tasks.workunit.client.1.vm05.stdout:2/948: creat db/d28/d4f/d59/da4/d6c/f135 x:0 0 0 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.886+0000 7fc737004700 1 -- 192.168.123.102:0/4211499822 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc730075a40 msgr2=0x7fc730077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.886+0000 7fc737004700 1 --2- 192.168.123.102:0/4211499822 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc730075a40 0x7fc730077ed0 secure :-1 s=READY pgs=339 cs=0 l=1 rev1=1 crypto rx=0x7fc72800cd40 tx=0x7fc72800a320 comp rx=0 tx=0).stop 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.891+0000 7fc737004700 1 -- 192.168.123.102:0/4211499822 shutdown_connections 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.891+0000 7fc737004700 1 --2- 192.168.123.102:0/4211499822 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc730075a40 0x7fc730077ed0 unknown :-1 s=CLOSED pgs=339 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.891+0000 7fc737004700 1 --2- 192.168.123.102:0/4211499822 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc730072b50 0x7fc730072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.891+0000 7fc737004700 1 -- 192.168.123.102:0/4211499822 >> 192.168.123.102:0/4211499822 conn(0x7fc73006dae0 msgr2=0x7fc73006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.891+0000 7fc737004700 1 -- 192.168.123.102:0/4211499822 shutdown_connections 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.891+0000 7fc737004700 1 -- 192.168.123.102:0/4211499822 wait complete. 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.891+0000 7fc737004700 1 Processor -- start 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.891+0000 7fc737004700 1 -- start start 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.891+0000 7fc737004700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc730072b50 0x7fc730083090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.891+0000 7fc737004700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7300835d0 0x7fc7301b3120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.891+0000 7fc737004700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc730083ae0 con 0x7fc730072b50 2026-03-10T10:20:24.896 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.891+0000 7fc737004700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc730083c50 con 0x7fc7300835d0 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.892+0000 7fc72ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7300835d0 0x7fc7301b3120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.892+0000 7fc72ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7300835d0 0x7fc7301b3120 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38500/0 (socket says 192.168.123.102:38500) 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.892+0000 7fc72ffff700 1 -- 192.168.123.102:0/892117208 learned_addr learned my addr 192.168.123.102:0/892117208 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.893+0000 7fc734da0700 1 --2- 192.168.123.102:0/892117208 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc730072b50 0x7fc730083090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.893+0000 7fc72ffff700 1 -- 192.168.123.102:0/892117208 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc730072b50 msgr2=0x7fc730083090 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.893+0000 7fc72ffff700 1 --2- 
192.168.123.102:0/892117208 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc730072b50 0x7fc730083090 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.893+0000 7fc72ffff700 1 -- 192.168.123.102:0/892117208 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc72800c9f0 con 0x7fc7300835d0 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.893+0000 7fc72ffff700 1 --2- 192.168.123.102:0/892117208 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7300835d0 0x7fc7301b3120 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7fc728009850 tx=0x7fc728008f40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:24.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.893+0000 7fc72dffb700 1 -- 192.168.123.102:0/892117208 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc72800bda0 con 0x7fc7300835d0 2026-03-10T10:20:24.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.893+0000 7fc737004700 1 -- 192.168.123.102:0/892117208 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc7301b3660 con 0x7fc7300835d0 2026-03-10T10:20:24.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.893+0000 7fc737004700 1 -- 192.168.123.102:0/892117208 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc7301b3bb0 con 0x7fc7300835d0 2026-03-10T10:20:24.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.894+0000 7fc72dffb700 1 -- 192.168.123.102:0/892117208 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc728004900 con 0x7fc7300835d0 2026-03-10T10:20:24.897 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.894+0000 7fc72dffb700 1 -- 192.168.123.102:0/892117208 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc72800c250 con 0x7fc7300835d0 2026-03-10T10:20:24.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.895+0000 7fc7177fe700 1 -- 192.168.123.102:0/892117208 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc71c005320 con 0x7fc7300835d0 2026-03-10T10:20:24.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.900+0000 7fc72dffb700 1 -- 192.168.123.102:0/892117208 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 30) v1 ==== 100066+0+0 (secure 0 0 0) 0x7fc728004420 con 0x7fc7300835d0 2026-03-10T10:20:24.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.901+0000 7fc72dffb700 1 --2- 192.168.123.102:0/892117208 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7fc718077780 0x7fc718079c40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:24.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.901+0000 7fc72dffb700 1 -- 192.168.123.102:0/892117208 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(43..43 src has 1..43) v4 ==== 5878+0+0 (secure 0 0 0) 0x7fc72800de40 con 0x7fc7300835d0 2026-03-10T10:20:24.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.902+0000 7fc734da0700 1 --2- 192.168.123.102:0/892117208 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7fc718077780 0x7fc718079c40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:24.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.902+0000 7fc734da0700 1 --2- 192.168.123.102:0/892117208 >> 
[v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7fc718077780 0x7fc718079c40 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fc720005950 tx=0x7fc7200058e0 comp rx=0 tx=0).ready entity=mgr.14720 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:24.917 INFO:tasks.workunit.client.1.vm05.stdout:2/949: rename db/d28/c78 to db/d28/d4f/d59/dce/c136 0 2026-03-10T10:20:24.921 INFO:tasks.workunit.client.1.vm05.stdout:2/950: dread db/d1c/d40/f4d [0,4194304] 0 2026-03-10T10:20:24.925 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:24.906+0000 7fc72dffb700 1 -- 192.168.123.102:0/892117208 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7fc7280671c0 con 0x7fc7300835d0 2026-03-10T10:20:24.998 INFO:tasks.workunit.client.1.vm05.stdout:9/889: creat d0/df/d74/d8c/d8f/ddd/f136 x:0 0 0 2026-03-10T10:20:25.018 INFO:tasks.workunit.client.1.vm05.stdout:9/890: dread d0/d1/d13/de/d21/ff7 [0,4194304] 0 2026-03-10T10:20:25.026 INFO:tasks.workunit.client.1.vm05.stdout:9/891: mknod d0/d70/d10d/c137 0 2026-03-10T10:20:25.122 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.120+0000 7fc7177fe700 1 -- 192.168.123.102:0/892117208 --> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fc71c000bf0 con 0x7fc718077780 2026-03-10T10:20:25.129 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.125+0000 7fc72dffb700 1 -- 192.168.123.102:0/892117208 <== mgr.14720 v2:192.168.123.102:6800/2642809286 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+328 (secure 0 0 0) 0x7fc71c000bf0 con 0x7fc718077780 2026-03-10T10:20:25.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.129+0000 7fc737004700 1 -- 192.168.123.102:0/892117208 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] 
conn(0x7fc718077780 msgr2=0x7fc718079c40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:25.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.129+0000 7fc737004700 1 --2- 192.168.123.102:0/892117208 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7fc718077780 0x7fc718079c40 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fc720005950 tx=0x7fc7200058e0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.129+0000 7fc737004700 1 -- 192.168.123.102:0/892117208 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7300835d0 msgr2=0x7fc7301b3120 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:25.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.129+0000 7fc737004700 1 --2- 192.168.123.102:0/892117208 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7300835d0 0x7fc7301b3120 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7fc728009850 tx=0x7fc728008f40 comp rx=0 tx=0).stop 2026-03-10T10:20:25.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.129+0000 7fc737004700 1 -- 192.168.123.102:0/892117208 shutdown_connections 2026-03-10T10:20:25.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.129+0000 7fc737004700 1 --2- 192.168.123.102:0/892117208 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7fc718077780 0x7fc718079c40 secure :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fc720005950 tx=0x7fc7200058e0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.130 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.129+0000 7fc737004700 1 --2- 192.168.123.102:0/892117208 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc730072b50 0x7fc730083090 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.131 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.129+0000 7fc737004700 1 --2- 192.168.123.102:0/892117208 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7300835d0 0x7fc7301b3120 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.129+0000 7fc737004700 1 -- 192.168.123.102:0/892117208 >> 192.168.123.102:0/892117208 conn(0x7fc73006dae0 msgr2=0x7fc73006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:25.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.129+0000 7fc737004700 1 -- 192.168.123.102:0/892117208 shutdown_connections 2026-03-10T10:20:25.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.129+0000 7fc737004700 1 -- 192.168.123.102:0/892117208 wait complete. 2026-03-10T10:20:25.160 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:20:25.179 INFO:tasks.workunit.client.0.vm02.stdout:9/981: truncate da/d3c/d53/f6a 3757929 0 2026-03-10T10:20:25.193 INFO:tasks.workunit.client.0.vm02.stdout:9/982: truncate da/d3c/d4c/d2c/d34/d35/fc1 4784462 0 2026-03-10T10:20:25.196 INFO:tasks.workunit.client.0.vm02.stdout:8/983: link d1/d1c/d24/f31 d1/d2/f130 0 2026-03-10T10:20:25.197 INFO:tasks.workunit.client.0.vm02.stdout:9/983: creat da/de5/f140 x:0 0 0 2026-03-10T10:20:25.206 INFO:tasks.workunit.client.1.vm05.stdout:6/992: write dd/f14 [8396822,117137] 0 2026-03-10T10:20:25.212 INFO:tasks.workunit.client.1.vm05.stdout:6/993: fdatasync dd/d36/d3f/d12/d44/d2a/d3d/f99 0 2026-03-10T10:20:25.221 INFO:tasks.workunit.client.1.vm05.stdout:4/885: dwrite d1/d31/f36 [0,4194304] 0 2026-03-10T10:20:25.250 INFO:tasks.workunit.client.1.vm05.stdout:4/886: creat d1/d31/dc/d40/d45/ded/f12d x:0 0 0 2026-03-10T10:20:25.271 INFO:tasks.workunit.client.1.vm05.stdout:4/887: dwrite d1/d31/dc/d40/d63/f11a [0,4194304] 0 2026-03-10T10:20:25.273 INFO:tasks.workunit.client.1.vm05.stdout:4/888: read - 
d1/d64/da9/f126 zero size 2026-03-10T10:20:25.277 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.276+0000 7f03b15aa700 1 -- 192.168.123.102:0/3049332743 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f03ac102e70 msgr2=0x7f03ac103290 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:25.277 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.276+0000 7f03b15aa700 1 --2- 192.168.123.102:0/3049332743 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f03ac102e70 0x7f03ac103290 secure :-1 s=READY pgs=340 cs=0 l=1 rev1=1 crypto rx=0x7f0394009b00 tx=0x7f0394009e10 comp rx=0 tx=0).stop 2026-03-10T10:20:25.277 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.276+0000 7f03b15aa700 1 -- 192.168.123.102:0/3049332743 shutdown_connections 2026-03-10T10:20:25.277 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.276+0000 7f03b15aa700 1 --2- 192.168.123.102:0/3049332743 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03ac104060 0x7f03ac1044e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.277 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.276+0000 7f03b15aa700 1 --2- 192.168.123.102:0/3049332743 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f03ac102e70 0x7f03ac103290 unknown :-1 s=CLOSED pgs=340 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.277 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.276+0000 7f03b15aa700 1 -- 192.168.123.102:0/3049332743 >> 192.168.123.102:0/3049332743 conn(0x7f03ac0fe440 msgr2=0x7f03ac1008a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:25.277 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.277+0000 7f03b15aa700 1 -- 192.168.123.102:0/3049332743 shutdown_connections 2026-03-10T10:20:25.277 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.277+0000 7f03b15aa700 1 -- 
192.168.123.102:0/3049332743 wait complete. 2026-03-10T10:20:25.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.277+0000 7f03b15aa700 1 Processor -- start 2026-03-10T10:20:25.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.277+0000 7f03b15aa700 1 -- start start 2026-03-10T10:20:25.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.277+0000 7f03b15aa700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f03ac102e70 0x7f03ac1986c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:25.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.277+0000 7f03b15aa700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03ac104060 0x7f03ac198c00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:25.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.277+0000 7f03b15aa700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f03ac199220 con 0x7f03ac102e70 2026-03-10T10:20:25.278 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.277+0000 7f03b15aa700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f03ac199360 con 0x7f03ac104060 2026-03-10T10:20:25.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.279+0000 7f03abfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f03ac102e70 0x7f03ac1986c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:25.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.279+0000 7f03abfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f03ac102e70 0x7f03ac1986c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer 
v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:49286/0 (socket says 192.168.123.102:49286) 2026-03-10T10:20:25.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.279+0000 7f03abfff700 1 -- 192.168.123.102:0/660379658 learned_addr learned my addr 192.168.123.102:0/660379658 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:20:25.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.279+0000 7f03abfff700 1 -- 192.168.123.102:0/660379658 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03ac104060 msgr2=0x7f03ac198c00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:25.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.279+0000 7f03abfff700 1 --2- 192.168.123.102:0/660379658 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03ac104060 0x7f03ac198c00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.279 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.279+0000 7f03abfff700 1 -- 192.168.123.102:0/660379658 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f03940097e0 con 0x7f03ac102e70 2026-03-10T10:20:25.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.283+0000 7f03abfff700 1 --2- 192.168.123.102:0/660379658 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f03ac102e70 0x7f03ac1986c0 secure :-1 s=READY pgs=341 cs=0 l=1 rev1=1 crypto rx=0x7f0394005230 tx=0x7f039400bb70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:25.290 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.289+0000 7f03a97fa700 1 -- 192.168.123.102:0/660379658 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f039401d070 con 0x7f03ac102e70 2026-03-10T10:20:25.292 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.290+0000 7f03b15aa700 1 -- 192.168.123.102:0/660379658 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f03ac19ddb0 con 0x7f03ac102e70 2026-03-10T10:20:25.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.290+0000 7f03b15aa700 1 -- 192.168.123.102:0/660379658 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f03ac19e240 con 0x7f03ac102e70 2026-03-10T10:20:25.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.290+0000 7f03a97fa700 1 -- 192.168.123.102:0/660379658 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f039400fb70 con 0x7f03ac102e70 2026-03-10T10:20:25.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.290+0000 7f03a97fa700 1 -- 192.168.123.102:0/660379658 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0394021d20 con 0x7f03ac102e70 2026-03-10T10:20:25.292 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.292+0000 7f03b15aa700 1 -- 192.168.123.102:0/660379658 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f038c005320 con 0x7f03ac102e70 2026-03-10T10:20:25.297 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.296+0000 7f03a97fa700 1 -- 192.168.123.102:0/660379658 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 30) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f039400fce0 con 0x7f03ac102e70 2026-03-10T10:20:25.300 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.297+0000 7f03a97fa700 1 --2- 192.168.123.102:0/660379658 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f0398077790 0x7f0398079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:25.300 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.297+0000 7f03a97fa700 1 -- 192.168.123.102:0/660379658 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(43..43 src has 1..43) v4 ==== 5878+0+0 (secure 0 0 0) 0x7f039409b550 con 0x7f03ac102e70 2026-03-10T10:20:25.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.299+0000 7f03a97fa700 1 -- 192.168.123.102:0/660379658 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7f0394064060 con 0x7f03ac102e70 2026-03-10T10:20:25.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.300+0000 7f03ab7fe700 1 --2- 192.168.123.102:0/660379658 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f0398077790 0x7f0398079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:25.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.302+0000 7f03ab7fe700 1 --2- 192.168.123.102:0/660379658 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f0398077790 0x7f0398079c50 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f03ac102ba0 tx=0x7f039c005ca0 comp rx=0 tx=0).ready entity=mgr.14720 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:25.309 INFO:tasks.workunit.client.1.vm05.stdout:4/889: getdents d1/d31/dc/d40/d63 0 2026-03-10T10:20:25.315 INFO:tasks.workunit.client.1.vm05.stdout:4/890: mkdir d1/d31/d76/dac/d12e 0 2026-03-10T10:20:25.319 INFO:tasks.workunit.client.1.vm05.stdout:4/891: mknod d1/d64/da9/dae/dcc/c12f 0 2026-03-10T10:20:25.320 INFO:tasks.workunit.client.1.vm05.stdout:4/892: chown d1/d64/da9/dae/d12a 311147 1 2026-03-10T10:20:25.332 INFO:tasks.workunit.client.1.vm05.stdout:2/951: dread - db/d28/d4f/d59/da4/fe8 zero size 2026-03-10T10:20:25.334 INFO:tasks.workunit.client.1.vm05.stdout:4/893: dread 
d1/d31/dc/d40/d63/f94 [0,4194304] 0 2026-03-10T10:20:25.338 INFO:tasks.workunit.client.1.vm05.stdout:2/952: fdatasync db/d28/d4f/fb0 0 2026-03-10T10:20:25.339 INFO:tasks.workunit.client.1.vm05.stdout:4/894: symlink d1/d3/d65/l130 0 2026-03-10T10:20:25.345 INFO:tasks.workunit.client.1.vm05.stdout:4/895: rmdir d1/d31/dc/d40/d63 39 2026-03-10T10:20:25.355 INFO:tasks.workunit.client.1.vm05.stdout:2/953: getdents db/d61/d67 0 2026-03-10T10:20:25.356 INFO:tasks.workunit.client.1.vm05.stdout:2/954: chown db/d61/dcc 10 1 2026-03-10T10:20:25.379 INFO:tasks.workunit.client.0.vm02.stdout:8/984: rmdir d1/d1c/d43/d5b/d88/dac/d11e 0 2026-03-10T10:20:25.401 INFO:tasks.workunit.client.0.vm02.stdout:9/984: creat da/f141 x:0 0 0 2026-03-10T10:20:25.439 INFO:tasks.workunit.client.1.vm05.stdout:9/892: rename d0/d1/d13/d62/fa8 to d0/df/d74/d8c/f138 0 2026-03-10T10:20:25.441 INFO:tasks.workunit.client.1.vm05.stdout:9/893: symlink d0/d1/dcc/dd0/l139 0 2026-03-10T10:20:25.448 INFO:tasks.workunit.client.1.vm05.stdout:9/894: dread d0/d1/fa7 [0,4194304] 0 2026-03-10T10:20:25.451 INFO:tasks.workunit.client.0.vm02.stdout:6/978: creat d0/d8/d29/d2f/d13b/db2/dbb/f14d x:0 0 0 2026-03-10T10:20:25.454 INFO:tasks.workunit.client.1.vm05.stdout:9/895: dwrite d0/d1/f11c [0,4194304] 0 2026-03-10T10:20:25.458 INFO:tasks.workunit.client.0.vm02.stdout:8/985: mkdir d1/d131 0 2026-03-10T10:20:25.481 INFO:tasks.workunit.client.1.vm05.stdout:9/896: mkdir d0/df/d74/d8c/de4/d104/d107/d13a 0 2026-03-10T10:20:25.487 INFO:tasks.workunit.client.0.vm02.stdout:8/986: creat d1/d1c/d24/dad/dbe/dda/ded/f132 x:0 0 0 2026-03-10T10:20:25.503 INFO:tasks.workunit.client.0.vm02.stdout:8/987: rename d1/d1c/c3a to d1/d1c/d43/d6a/d7c/da6/c133 0 2026-03-10T10:20:25.504 INFO:tasks.workunit.client.1.vm05.stdout:9/897: mkdir d0/dc4/d13b 0 2026-03-10T10:20:25.504 INFO:tasks.workunit.client.1.vm05.stdout:6/994: write dd/d36/d3f/d12/d44/daa/d133/f136 [911176,85831] 0 2026-03-10T10:20:25.504 
INFO:tasks.workunit.client.1.vm05.stdout:9/898: fdatasync d0/d1/dcc/f119 0 2026-03-10T10:20:25.504 INFO:tasks.workunit.client.1.vm05.stdout:9/899: fdatasync d0/df/d11/f64 0 2026-03-10T10:20:25.504 INFO:tasks.workunit.client.1.vm05.stdout:9/900: chown d0/d1/d13/d62/lf9 345705763 1 2026-03-10T10:20:25.512 INFO:tasks.workunit.client.1.vm05.stdout:9/901: unlink d0/df/d74/d8c/de4/lfa 0 2026-03-10T10:20:25.512 INFO:tasks.workunit.client.0.vm02.stdout:8/988: link d1/d1c/d43/d5b/d88/dac/d83/d9f/fdb d1/d1c/d24/d71/f134 0 2026-03-10T10:20:25.513 INFO:tasks.workunit.client.0.vm02.stdout:8/989: read - d1/d1c/d43/d6a/da8/d56/db5/f127 zero size 2026-03-10T10:20:25.513 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.513+0000 7f03b15aa700 1 -- 192.168.123.102:0/660379658 --> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f038c000bf0 con 0x7f0398077790 2026-03-10T10:20:25.514 INFO:tasks.workunit.client.0.vm02.stdout:8/990: chown d1/d1c/d43/d5b/dab/d102 8367 1 2026-03-10T10:20:25.515 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:25 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:20:25.515 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:25 vm02.local ceph-mon[50200]: pgmap v11: 65 pgs: 65 active+clean; 3.9 GiB data, 13 GiB used, 107 GiB / 120 GiB avail; 35 MiB/s rd, 87 MiB/s wr, 221 op/s 2026-03-10T10:20:25.519 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.519+0000 7f03a97fa700 1 -- 192.168.123.102:0/660379658 <== mgr.14720 v2:192.168.123.102:6800/2642809286 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+368 (secure 0 0 0) 0x7f038c000bf0 con 0x7f0398077790 2026-03-10T10:20:25.524 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.522+0000 7f03a2ffd700 1 -- 192.168.123.102:0/660379658 
>> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f0398077790 msgr2=0x7f0398079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:25.524 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.522+0000 7f03a2ffd700 1 --2- 192.168.123.102:0/660379658 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f0398077790 0x7f0398079c50 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f03ac102ba0 tx=0x7f039c005ca0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.524 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.522+0000 7f03a2ffd700 1 -- 192.168.123.102:0/660379658 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f03ac102e70 msgr2=0x7f03ac1986c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:25.524 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.522+0000 7f03a2ffd700 1 --2- 192.168.123.102:0/660379658 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f03ac102e70 0x7f03ac1986c0 secure :-1 s=READY pgs=341 cs=0 l=1 rev1=1 crypto rx=0x7f0394005230 tx=0x7f039400bb70 comp rx=0 tx=0).stop 2026-03-10T10:20:25.524 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.522+0000 7f03a2ffd700 1 -- 192.168.123.102:0/660379658 shutdown_connections 2026-03-10T10:20:25.524 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.522+0000 7f03a2ffd700 1 --2- 192.168.123.102:0/660379658 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f0398077790 0x7f0398079c50 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.524 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.522+0000 7f03a2ffd700 1 --2- 192.168.123.102:0/660379658 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f03ac102e70 0x7f03ac1986c0 unknown :-1 s=CLOSED pgs=341 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:20:25.524 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.522+0000 7f03a2ffd700 1 --2- 192.168.123.102:0/660379658 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03ac104060 0x7f03ac198c00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.524 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.522+0000 7f03a2ffd700 1 -- 192.168.123.102:0/660379658 >> 192.168.123.102:0/660379658 conn(0x7f03ac0fe440 msgr2=0x7f03ac1007d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:25.524 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.522+0000 7f03a2ffd700 1 -- 192.168.123.102:0/660379658 shutdown_connections 2026-03-10T10:20:25.524 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.522+0000 7f03a2ffd700 1 -- 192.168.123.102:0/660379658 wait complete. 2026-03-10T10:20:25.526 INFO:tasks.workunit.client.1.vm05.stdout:6/995: link dd/d36/d3f/d12/d96/f119 dd/d36/d3f/d12/d44/f140 0 2026-03-10T10:20:25.533 INFO:tasks.workunit.client.1.vm05.stdout:6/996: dwrite dd/d36/d3f/d12/d58/db8/f130 [0,4194304] 0 2026-03-10T10:20:25.534 INFO:tasks.workunit.client.1.vm05.stdout:6/997: stat dd/d1b/l51 0 2026-03-10T10:20:25.537 INFO:tasks.workunit.client.0.vm02.stdout:6/979: sync 2026-03-10T10:20:25.541 INFO:tasks.workunit.client.1.vm05.stdout:8/934: creat d7/f130 x:0 0 0 2026-03-10T10:20:25.544 INFO:tasks.workunit.client.1.vm05.stdout:2/955: write db/d61/d10a/f112 [420075,18139] 0 2026-03-10T10:20:25.546 INFO:tasks.workunit.client.1.vm05.stdout:6/998: symlink dd/d36/d3f/d12/d44/d30/l141 0 2026-03-10T10:20:25.547 INFO:tasks.workunit.client.0.vm02.stdout:6/980: creat d0/d8/d29/d6d/d96/f14e x:0 0 0 2026-03-10T10:20:25.548 INFO:tasks.workunit.client.0.vm02.stdout:9/985: write da/d3c/d4c/fbf [4779299,67682] 0 2026-03-10T10:20:25.550 INFO:tasks.workunit.client.1.vm05.stdout:4/896: dwrite d1/d64/da9/fb9 [0,4194304] 0 2026-03-10T10:20:25.560 
INFO:tasks.workunit.client.1.vm05.stdout:8/935: read d7/d14/d24/f9c [3807082,29536] 0 2026-03-10T10:20:25.578 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:25 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:20:25.578 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:25 vm05.local ceph-mon[59051]: pgmap v11: 65 pgs: 65 active+clean; 3.9 GiB data, 13 GiB used, 107 GiB / 120 GiB avail; 35 MiB/s rd, 87 MiB/s wr, 221 op/s 2026-03-10T10:20:25.598 INFO:tasks.workunit.client.0.vm02.stdout:9/986: rename da/d3c/d4c/d38/d4a/c71 to da/d3c/d4c/d2c/d96/c142 0 2026-03-10T10:20:25.608 INFO:tasks.workunit.client.1.vm05.stdout:4/897: mkdir d1/dfd/d131 0 2026-03-10T10:20:25.613 INFO:tasks.workunit.client.0.vm02.stdout:9/987: truncate da/d3c/d4c/d38/d82/d89/fd6 343999 0 2026-03-10T10:20:25.624 INFO:tasks.workunit.client.0.vm02.stdout:6/981: dread d0/d8/d9/f82 [0,4194304] 0 2026-03-10T10:20:25.624 INFO:tasks.workunit.client.0.vm02.stdout:9/988: dread da/d3c/d4c/d38/f47 [4194304,4194304] 0 2026-03-10T10:20:25.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.660+0000 7f3f4313b700 1 -- 192.168.123.102:0/4256651016 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3f3c075a40 msgr2=0x7f3f3c077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:25.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.660+0000 7f3f4313b700 1 --2- 192.168.123.102:0/4256651016 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3f3c075a40 0x7f3f3c077ed0 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f3f34009230 tx=0x7f3f34009260 comp rx=0 tx=0).stop 2026-03-10T10:20:25.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.660+0000 7f3f4313b700 1 -- 192.168.123.102:0/4256651016 shutdown_connections 2026-03-10T10:20:25.661 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.660+0000 7f3f4313b700 1 --2- 192.168.123.102:0/4256651016 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3f3c075a40 0x7f3f3c077ed0 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.660+0000 7f3f4313b700 1 --2- 192.168.123.102:0/4256651016 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3f3c072b50 0x7f3f3c072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.661 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.660+0000 7f3f4313b700 1 -- 192.168.123.102:0/4256651016 >> 192.168.123.102:0/4256651016 conn(0x7f3f3c06dae0 msgr2=0x7f3f3c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:25.663 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.660+0000 7f3f4313b700 1 -- 192.168.123.102:0/4256651016 shutdown_connections 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.660+0000 7f3f4313b700 1 -- 192.168.123.102:0/4256651016 wait complete. 
2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.660+0000 7f3f4313b700 1 Processor -- start 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.661+0000 7f3f4313b700 1 -- start start 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.661+0000 7f3f4313b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3f3c072b50 0x7f3f3c0815a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.661+0000 7f3f4313b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3f3c081ae0 0x7f3f3c12e0a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.661+0000 7f3f4313b700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3f3c081ff0 con 0x7f3f3c072b50 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.661+0000 7f3f4313b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3f3c082130 con 0x7f3f3c081ae0 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.662+0000 7f3f3bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3f3c081ae0 0x7f3f3c12e0a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.662+0000 7f3f3bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3f3c081ae0 0x7f3f3c12e0a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:38530/0 (socket says 192.168.123.102:38530) 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.662+0000 7f3f3bfff700 1 -- 192.168.123.102:0/176986381 learned_addr learned my addr 192.168.123.102:0/176986381 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.662+0000 7f3f40ed7700 1 --2- 192.168.123.102:0/176986381 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3f3c072b50 0x7f3f3c0815a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.662+0000 7f3f40ed7700 1 -- 192.168.123.102:0/176986381 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3f3c081ae0 msgr2=0x7f3f3c12e0a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.662+0000 7f3f40ed7700 1 --2- 192.168.123.102:0/176986381 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3f3c081ae0 0x7f3f3c12e0a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.662+0000 7f3f40ed7700 1 -- 192.168.123.102:0/176986381 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3f34008ee0 con 0x7f3f3c072b50 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.662+0000 7f3f40ed7700 1 --2- 192.168.123.102:0/176986381 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3f3c072b50 0x7f3f3c0815a0 secure :-1 s=READY pgs=342 cs=0 l=1 rev1=1 crypto rx=0x7f3f2c007ae0 tx=0x7f3f2c007df0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.663+0000 7f3f39ffb700 1 -- 192.168.123.102:0/176986381 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3f2c010040 con 0x7f3f3c072b50 2026-03-10T10:20:25.664 INFO:tasks.workunit.client.0.vm02.stdout:6/982: mknod d0/d8/d29/d2f/d13b/db2/dbb/de5/c14f 0 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.663+0000 7f3f4313b700 1 -- 192.168.123.102:0/176986381 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3f3c12e640 con 0x7f3f3c072b50 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.663+0000 7f3f4313b700 1 -- 192.168.123.102:0/176986381 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3f3c12eb90 con 0x7f3f3c072b50 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.664+0000 7f3f39ffb700 1 -- 192.168.123.102:0/176986381 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3f2c00eeb0 con 0x7f3f3c072b50 2026-03-10T10:20:25.664 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.664+0000 7f3f39ffb700 1 -- 192.168.123.102:0/176986381 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3f2c018ea0 con 0x7f3f3c072b50 2026-03-10T10:20:25.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.664+0000 7f3f4313b700 1 -- 192.168.123.102:0/176986381 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3f28005320 con 0x7f3f3c072b50 2026-03-10T10:20:25.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.666+0000 7f3f39ffb700 1 -- 192.168.123.102:0/176986381 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 30) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f3f2c018800 con 
0x7f3f3c072b50 2026-03-10T10:20:25.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.666+0000 7f3f39ffb700 1 --2- 192.168.123.102:0/176986381 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f3f24077780 0x7f3f24079c40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:25.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.666+0000 7f3f39ffb700 1 -- 192.168.123.102:0/176986381 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(43..43 src has 1..43) v4 ==== 5878+0+0 (secure 0 0 0) 0x7f3f2c099040 con 0x7f3f3c072b50 2026-03-10T10:20:25.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.666+0000 7f3f3bfff700 1 --2- 192.168.123.102:0/176986381 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f3f24077780 0x7f3f24079c40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:25.668 INFO:tasks.workunit.client.0.vm02.stdout:6/983: readlink d0/d8/d29/d6d/ldd 0 2026-03-10T10:20:25.668 INFO:tasks.workunit.client.0.vm02.stdout:9/989: dread da/de5/ff3 [0,4194304] 0 2026-03-10T10:20:25.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.670+0000 7f3f3bfff700 1 --2- 192.168.123.102:0/176986381 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f3f24077780 0x7f3f24079c40 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f3f3400c9a0 tx=0x7f3f3401a040 comp rx=0 tx=0).ready entity=mgr.14720 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:25.673 INFO:tasks.workunit.client.1.vm05.stdout:4/898: unlink d1/d31/dc/c24 0 2026-03-10T10:20:25.673 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.673+0000 7f3f39ffb700 1 -- 192.168.123.102:0/176986381 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 
==== 72+0+191549 (secure 0 0 0) 0x7f3f2c061c00 con 0x7f3f3c072b50 2026-03-10T10:20:25.694 INFO:tasks.workunit.client.1.vm05.stdout:9/902: write d0/d1/d13/f27 [4949396,44244] 0 2026-03-10T10:20:25.696 INFO:tasks.workunit.client.0.vm02.stdout:8/991: dwrite d1/d1c/f1e [0,4194304] 0 2026-03-10T10:20:25.700 INFO:tasks.workunit.client.0.vm02.stdout:8/992: readlink d1/d1c/d43/lde 0 2026-03-10T10:20:25.706 INFO:tasks.workunit.client.0.vm02.stdout:8/993: mknod d1/d1c/d24/dcf/c135 0 2026-03-10T10:20:25.707 INFO:tasks.workunit.client.1.vm05.stdout:9/903: unlink d0/d1/la9 0 2026-03-10T10:20:25.708 INFO:tasks.workunit.client.1.vm05.stdout:9/904: write d0/df/f99 [5133121,99006] 0 2026-03-10T10:20:25.709 INFO:tasks.workunit.client.1.vm05.stdout:6/999: write dd/d36/d3f/dbd/f10b [4422876,45227] 0 2026-03-10T10:20:25.710 INFO:tasks.workunit.client.1.vm05.stdout:9/905: write d0/d1/dcc/f116 [817879,117865] 0 2026-03-10T10:20:25.726 INFO:tasks.workunit.client.1.vm05.stdout:2/956: dwrite db/d28/d4f/d59/da4/d6c/fd0 [0,4194304] 0 2026-03-10T10:20:25.735 INFO:tasks.workunit.client.1.vm05.stdout:4/899: mknod d1/d31/dc/d40/d45/ded/dfb/d105/c132 0 2026-03-10T10:20:25.736 INFO:tasks.workunit.client.1.vm05.stdout:2/957: rmdir db/d2d 39 2026-03-10T10:20:25.736 INFO:tasks.workunit.client.1.vm05.stdout:8/936: write d7/d14/f40 [653502,45543] 0 2026-03-10T10:20:25.736 INFO:tasks.workunit.client.0.vm02.stdout:8/994: truncate d1/d1c/d43/d6a/d7c/ff5 167355 0 2026-03-10T10:20:25.742 INFO:tasks.workunit.client.1.vm05.stdout:4/900: creat d1/d31/dc/d40/d45/daa/f133 x:0 0 0 2026-03-10T10:20:25.743 INFO:tasks.workunit.client.1.vm05.stdout:2/958: mkdir db/d28/d4f/d8b/d137 0 2026-03-10T10:20:25.743 INFO:tasks.workunit.client.1.vm05.stdout:8/937: mkdir d7/d14/d24/d3f/d6a/d131 0 2026-03-10T10:20:25.744 INFO:tasks.workunit.client.0.vm02.stdout:9/990: dread da/d3c/d4c/d2c/d34/f83 [0,4194304] 0 2026-03-10T10:20:25.747 INFO:tasks.workunit.client.1.vm05.stdout:8/938: readlink d7/d2f/le5 0 2026-03-10T10:20:25.751 
INFO:tasks.workunit.client.1.vm05.stdout:2/959: mkdir db/d28/d4f/d8b/de3/d138 0
2026-03-10T10:20:25.754 INFO:tasks.workunit.client.1.vm05.stdout:2/960: dwrite db/d12/fb2 [0,4194304] 0
2026-03-10T10:20:25.790 INFO:tasks.workunit.client.1.vm05.stdout:8/939: mkdir d7/d14/d24/d3f/d4f/d132 0
2026-03-10T10:20:25.791 INFO:tasks.workunit.client.1.vm05.stdout:8/940: readlink d7/d14/d3a/l113 0
2026-03-10T10:20:25.844 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.844+0000 7f3f4313b700 1 -- 192.168.123.102:0/176986381 --> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f3f28000bf0 con 0x7f3f24077780
2026-03-10T10:20:25.851 INFO:tasks.workunit.client.0.vm02.stdout:9/991: read da/d3c/d4c/d2c/d34/f3d [4114018,19145] 0
2026-03-10T10:20:25.860 INFO:tasks.workunit.client.0.vm02.stdout:6/984: dwrite d0/f43 [4194304,4194304] 0
2026-03-10T10:20:25.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.872+0000 7f3f39ffb700 1 -- 192.168.123.102:0/176986381 <== mgr.14720 v2:192.168.123.102:6800/2642809286 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3552 (secure 0 0 0) 0x7f3f28000bf0 con 0x7f3f24077780
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (4m) 12s ago 5m 23.1M - 0.25.0 c8568f914cd2 2b779430dfc4
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (5m) 12s ago 5m 8472k - 18.2.1 5be31c24972a ff5c82740b39
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (5m) 0s ago 5m 11.0M - 18.2.1 5be31c24972a 456b3bd5efb4
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (5m) 12s ago 5m 7415k - 18.2.1 5be31c24972a 51802fb57170
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (5m) 0s ago 5m 7407k - 18.2.1 5be31c24972a f275982dc269
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (4m) 12s ago 5m 86.9M - 9.4.7 954c08fa6188 f310d22468b8
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (3m) 12s ago 3m 14.7M - 18.2.1 5be31c24972a e97c369450c8
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (3m) 12s ago 3m 230M - 18.2.1 5be31c24972a 56b76ae59bcb
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (3m) 0s ago 3m 15.1M - 18.2.1 5be31c24972a 02b882918ab0
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (3m) 0s ago 3m 140M - 18.2.1 5be31c24972a 0127a771956a
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (28s) 12s ago 6m 590M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (3s) 0s ago 5m 46.4M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (6m) 12s ago 6m 57.2M 2048M 18.2.1 5be31c24972a ab92d831cc1d
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (5m) 0s ago 5m 43.0M 2048M 18.2.1 5be31c24972a cea7d23f93a6
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (5m) 12s ago 5m 16.5M - 1.5.0 0da6a335fe13 745b21ae6768
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (5m) 0s ago 5m 15.2M - 1.5.0 0da6a335fe13 2453c8484ba5
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (4m) 12s ago 4m 333M 4096M 18.2.1 5be31c24972a 9d7f135a3f3b
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (4m) 12s ago 4m 349M 4096M 18.2.1 5be31c24972a 1b0a42d8ac01
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (4m) 12s ago 4m 299M 4096M 18.2.1 5be31c24972a 567f579c058e
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (4m) 0s ago 4m 475M 4096M 18.2.1 5be31c24972a 80ac26035893
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (4m) 0s ago 4m 410M 4096M 18.2.1 5be31c24972a c8a0a41b6654
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (3m) 0s ago 3m 395M 4096M 18.2.1 5be31c24972a e9be055e12ba
2026-03-10T10:20:25.873 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 starting - - - -
2026-03-10T10:20:25.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.875+0000 7f3f237fe700 1 -- 192.168.123.102:0/176986381 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f3f24077780 msgr2=0x7f3f24079c40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:20:25.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.875+0000 7f3f237fe700 1 --2- 192.168.123.102:0/176986381 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f3f24077780 0x7f3f24079c40 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f3f3400c9a0 tx=0x7f3f3401a040 comp rx=0 tx=0).stop
2026-03-10T10:20:25.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.875+0000 7f3f237fe700 1 -- 192.168.123.102:0/176986381 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3f3c072b50 msgr2=0x7f3f3c0815a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:25.875
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.875+0000 7f3f237fe700 1 --2- 192.168.123.102:0/176986381 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3f3c072b50 0x7f3f3c0815a0 secure :-1 s=READY pgs=342 cs=0 l=1 rev1=1 crypto rx=0x7f3f2c007ae0 tx=0x7f3f2c007df0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.875+0000 7f3f237fe700 1 -- 192.168.123.102:0/176986381 shutdown_connections 2026-03-10T10:20:25.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.875+0000 7f3f237fe700 1 --2- 192.168.123.102:0/176986381 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f3f24077780 0x7f3f24079c40 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.875+0000 7f3f237fe700 1 --2- 192.168.123.102:0/176986381 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3f3c072b50 0x7f3f3c0815a0 unknown :-1 s=CLOSED pgs=342 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.875 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.875+0000 7f3f237fe700 1 --2- 192.168.123.102:0/176986381 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3f3c081ae0 0x7f3f3c12e0a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:25.876 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.875+0000 7f3f237fe700 1 -- 192.168.123.102:0/176986381 >> 192.168.123.102:0/176986381 conn(0x7f3f3c06dae0 msgr2=0x7f3f3c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:25.876 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.875+0000 7f3f237fe700 1 -- 192.168.123.102:0/176986381 shutdown_connections 2026-03-10T10:20:25.876 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.875+0000 7f3f237fe700 1 -- 
192.168.123.102:0/176986381 wait complete. 2026-03-10T10:20:25.916 INFO:tasks.workunit.client.1.vm05.stdout:4/901: write d1/d31/dc/f33 [1083465,98534] 0 2026-03-10T10:20:25.918 INFO:tasks.workunit.client.1.vm05.stdout:9/906: dwrite d0/f45 [0,4194304] 0 2026-03-10T10:20:25.918 INFO:tasks.workunit.client.1.vm05.stdout:2/961: write db/d1c/fcf [545805,63766] 0 2026-03-10T10:20:25.921 INFO:tasks.workunit.client.1.vm05.stdout:8/941: write d7/d14/d24/d3f/feb [412369,63173] 0 2026-03-10T10:20:25.923 INFO:tasks.workunit.client.1.vm05.stdout:8/942: chown d7/d14/d62 1771 1 2026-03-10T10:20:25.933 INFO:tasks.workunit.client.1.vm05.stdout:4/902: dwrite d1/d3/d65/ddb/fe7 [0,4194304] 0 2026-03-10T10:20:25.937 INFO:tasks.workunit.client.1.vm05.stdout:8/943: creat d7/d14/d24/d3f/d6a/d8a/d96/f133 x:0 0 0 2026-03-10T10:20:25.940 INFO:tasks.workunit.client.1.vm05.stdout:9/907: dwrite d0/df/d74/d8c/de4/d104/fff [0,4194304] 0 2026-03-10T10:20:25.948 INFO:tasks.workunit.client.0.vm02.stdout:8/995: getdents d1/d1c/d23/d25 0 2026-03-10T10:20:25.949 INFO:tasks.workunit.client.0.vm02.stdout:8/996: chown d1/d1c/d43/d5b/d88/f118 230 1 2026-03-10T10:20:25.951 INFO:tasks.workunit.client.1.vm05.stdout:8/944: mknod d7/d14/d62/d90/dd3/c134 0 2026-03-10T10:20:25.951 INFO:tasks.workunit.client.1.vm05.stdout:4/903: stat d1/d31/d76/dac/d12e 0 2026-03-10T10:20:25.955 INFO:tasks.workunit.client.1.vm05.stdout:4/904: chown d1/d31/dc/d40/d45/cb2 13676 1 2026-03-10T10:20:25.964 INFO:tasks.workunit.client.0.vm02.stdout:6/985: creat d0/d8/d29/d94/d9a/dc2/f150 x:0 0 0 2026-03-10T10:20:25.971 INFO:tasks.workunit.client.1.vm05.stdout:8/945: readlink d7/d14/d24/l31 0 2026-03-10T10:20:25.991 INFO:tasks.workunit.client.0.vm02.stdout:6/986: fsync d0/d8/d9/f54 0 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.993+0000 7f77e35a3700 1 -- 192.168.123.102:0/1978999170 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f77dc075a40 msgr2=0x7f77dc077ed0 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.993+0000 7f77e35a3700 1 --2- 192.168.123.102:0/1978999170 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f77dc075a40 0x7f77dc077ed0 secure :-1 s=READY pgs=343 cs=0 l=1 rev1=1 crypto rx=0x7f77d400d3f0 tx=0x7f77d400d700 comp rx=0 tx=0).stop 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.993+0000 7f77e35a3700 1 -- 192.168.123.102:0/1978999170 shutdown_connections 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.993+0000 7f77e35a3700 1 --2- 192.168.123.102:0/1978999170 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f77dc075a40 0x7f77dc077ed0 unknown :-1 s=CLOSED pgs=343 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.993+0000 7f77e35a3700 1 --2- 192.168.123.102:0/1978999170 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f77dc072b50 0x7f77dc072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.993+0000 7f77e35a3700 1 -- 192.168.123.102:0/1978999170 >> 192.168.123.102:0/1978999170 conn(0x7f77dc06dae0 msgr2=0x7f77dc06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.994+0000 7f77e35a3700 1 -- 192.168.123.102:0/1978999170 shutdown_connections 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.994+0000 7f77e35a3700 1 -- 192.168.123.102:0/1978999170 wait complete. 
2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.994+0000 7f77e35a3700 1 Processor -- start 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.994+0000 7f77e35a3700 1 -- start start 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.994+0000 7f77e35a3700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f77dc072b50 0x7f77dc083100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.994+0000 7f77e35a3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f77dc083640 0x7f77dc1b3110 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.994+0000 7f77e35a3700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f77dc083b80 con 0x7f77dc072b50 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.994+0000 7f77e35a3700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f77dc083cf0 con 0x7f77dc083640 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.995+0000 7f77e133f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f77dc072b50 0x7f77dc083100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.995+0000 7f77e133f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f77dc072b50 0x7f77dc083100 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:49336/0 (socket says 192.168.123.102:49336) 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.995+0000 7f77e133f700 1 -- 192.168.123.102:0/2442705758 learned_addr learned my addr 192.168.123.102:0/2442705758 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.995+0000 7f77e133f700 1 -- 192.168.123.102:0/2442705758 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f77dc083640 msgr2=0x7f77dc1b3110 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.995+0000 7f77e133f700 1 --2- 192.168.123.102:0/2442705758 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f77dc083640 0x7f77dc1b3110 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.995+0000 7f77e133f700 1 -- 192.168.123.102:0/2442705758 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f77d4007ed0 con 0x7f77dc072b50 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.995+0000 7f77e133f700 1 --2- 192.168.123.102:0/2442705758 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f77dc072b50 0x7f77dc083100 secure :-1 s=READY pgs=344 cs=0 l=1 rev1=1 crypto rx=0x7f77d800d8d0 tx=0x7f77d800dc90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.997+0000 7f77d27fc700 1 -- 192.168.123.102:0/2442705758 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f77d8009940 con 0x7f77dc072b50 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.997+0000 7f77e35a3700 1 -- 
192.168.123.102:0/2442705758 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f77dc1b3770 con 0x7f77dc072b50 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.997+0000 7f77e35a3700 1 -- 192.168.123.102:0/2442705758 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f77dc1b3c70 con 0x7f77dc072b50 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.999+0000 7f77d27fc700 1 -- 192.168.123.102:0/2442705758 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f77d8010460 con 0x7f77dc072b50 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.999+0000 7f77d27fc700 1 -- 192.168.123.102:0/2442705758 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f77d800f5d0 con 0x7f77dc072b50 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:25.999+0000 7f77d27fc700 1 -- 192.168.123.102:0/2442705758 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 30) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f77d800f790 con 0x7f77dc072b50 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.000+0000 7f77d27fc700 1 --2- 192.168.123.102:0/2442705758 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f77c8077860 0x7f77c8079d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.000+0000 7f77d27fc700 1 -- 192.168.123.102:0/2442705758 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(43..43 src has 1..43) v4 ==== 5878+0+0 (secure 0 0 0) 0x7f77d809a740 con 0x7f77dc072b50 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.001+0000 7f77e0b3e700 1 --2- 192.168.123.102:0/2442705758 >> 
[v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f77c8077860 0x7f77c8079d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:26.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.001+0000 7f77e35a3700 1 -- 192.168.123.102:0/2442705758 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f77c0005320 con 0x7f77dc072b50 2026-03-10T10:20:26.004 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.004+0000 7f77e0b3e700 1 --2- 192.168.123.102:0/2442705758 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f77c8077860 0x7f77c8079d20 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f77d400d3f0 tx=0x7f77d400db00 comp rx=0 tx=0).ready entity=mgr.14720 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:26.009 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.006+0000 7f77d27fc700 1 -- 192.168.123.102:0/2442705758 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7f77d8062b80 con 0x7f77dc072b50 2026-03-10T10:20:26.012 INFO:tasks.workunit.client.1.vm05.stdout:2/962: sync 2026-03-10T10:20:26.023 INFO:tasks.workunit.client.0.vm02.stdout:6/987: truncate d0/d8/d29/d2f/f67 4196401 0 2026-03-10T10:20:26.066 INFO:tasks.workunit.client.0.vm02.stdout:8/997: write d1/d1c/d23/d25/f8c [1519646,29882] 0 2026-03-10T10:20:26.067 INFO:tasks.workunit.client.0.vm02.stdout:8/998: dread - d1/d1c/d43/d5b/dab/d102/f12c zero size 2026-03-10T10:20:26.069 INFO:tasks.workunit.client.0.vm02.stdout:9/992: dwrite da/f6f [0,4194304] 0 2026-03-10T10:20:26.071 INFO:tasks.workunit.client.0.vm02.stdout:8/999: dread d1/d1c/d43/d6a/f87 [0,4194304] 0 2026-03-10T10:20:26.071 INFO:tasks.workunit.client.1.vm05.stdout:9/908: dwrite 
d0/d1/d13/d26/f7c [0,4194304] 0 2026-03-10T10:20:26.085 INFO:tasks.workunit.client.1.vm05.stdout:8/946: write d7/d14/d3a/f68 [2695929,63984] 0 2026-03-10T10:20:26.085 INFO:tasks.workunit.client.1.vm05.stdout:4/905: write d1/d31/dc/d40/ffa [4401858,94848] 0 2026-03-10T10:20:26.086 INFO:tasks.workunit.client.1.vm05.stdout:4/906: stat d1/d31/d72/f107 0 2026-03-10T10:20:26.088 INFO:tasks.workunit.client.0.vm02.stdout:6/988: write d0/d87/fe2 [2128752,62831] 0 2026-03-10T10:20:26.092 INFO:tasks.workunit.client.0.vm02.stdout:9/993: fsync da/f25 0 2026-03-10T10:20:26.102 INFO:tasks.workunit.client.1.vm05.stdout:2/963: link db/d2d/dc6/cfb db/d28/d4f/d8b/de3/d118/c139 0 2026-03-10T10:20:26.125 INFO:tasks.workunit.client.0.vm02.stdout:9/994: dread da/d3c/d4c/f27 [0,4194304] 0 2026-03-10T10:20:26.133 INFO:tasks.workunit.client.0.vm02.stdout:6/989: dread d0/d87/fa7 [0,4194304] 0 2026-03-10T10:20:26.163 INFO:tasks.workunit.client.0.vm02.stdout:6/990: dwrite d0/d8/d29/d94/d9a/f12c [0,4194304] 0 2026-03-10T10:20:26.163 INFO:tasks.workunit.client.0.vm02.stdout:9/995: dwrite da/d3c/d4c/d38/fb2 [0,4194304] 0 2026-03-10T10:20:26.171 INFO:tasks.workunit.client.0.vm02.stdout:6/991: dread d0/d8/f8f [0,4194304] 0 2026-03-10T10:20:26.181 INFO:tasks.workunit.client.0.vm02.stdout:9/996: creat da/d3c/d4c/d75/f143 x:0 0 0 2026-03-10T10:20:26.186 INFO:tasks.workunit.client.0.vm02.stdout:6/992: creat d0/d8/d29/d2f/d50/d10f/d12a/f151 x:0 0 0 2026-03-10T10:20:26.191 INFO:tasks.workunit.client.0.vm02.stdout:9/997: symlink da/d3c/d4c/d2c/d34/dc2/d13d/l144 0 2026-03-10T10:20:26.232 INFO:tasks.workunit.client.0.vm02.stdout:9/998: mknod da/d3c/d4c/d38/da6/d118/c145 0 2026-03-10T10:20:26.244 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.243+0000 7f77e35a3700 1 -- 192.168.123.102:0/2442705758 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f77c0005cc0 con 0x7f77dc072b50 2026-03-10T10:20:26.247 
INFO:tasks.workunit.client.0.vm02.stdout:6/993: creat d0/d8/d29/d6d/d96/de4/d102/f152 x:0 0 0 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 2 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 6 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4 2026-03-10T10:20:26.258 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 12, 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.253+0000 7f77d27fc700 1 
-- 192.168.123.102:0/2442705758 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+679 (secure 0 0 0) 0x7f77d8016070 con 0x7f77dc072b50 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.256+0000 7f77bffff700 1 -- 192.168.123.102:0/2442705758 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f77c8077860 msgr2=0x7f77c8079d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.256+0000 7f77bffff700 1 --2- 192.168.123.102:0/2442705758 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f77c8077860 0x7f77c8079d20 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f77d400d3f0 tx=0x7f77d400db00 comp rx=0 tx=0).stop 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.256+0000 7f77bffff700 1 -- 192.168.123.102:0/2442705758 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f77dc072b50 msgr2=0x7f77dc083100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.256+0000 7f77bffff700 1 --2- 192.168.123.102:0/2442705758 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f77dc072b50 0x7f77dc083100 secure :-1 s=READY pgs=344 cs=0 l=1 rev1=1 crypto rx=0x7f77d800d8d0 tx=0x7f77d800dc90 comp rx=0 tx=0).stop 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.256+0000 7f77bffff700 1 -- 192.168.123.102:0/2442705758 shutdown_connections 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.256+0000 7f77bffff700 1 --2- 192.168.123.102:0/2442705758 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f77c8077860 0x7f77c8079d20 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.256+0000 7f77bffff700 1 --2- 192.168.123.102:0/2442705758 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f77dc072b50 0x7f77dc083100 unknown :-1 s=CLOSED pgs=344 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.256+0000 7f77bffff700 1 --2- 192.168.123.102:0/2442705758 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f77dc083640 0x7f77dc1b3110 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.256+0000 7f77bffff700 1 -- 192.168.123.102:0/2442705758 >> 192.168.123.102:0/2442705758 conn(0x7f77dc06dae0 msgr2=0x7f77dc06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.256+0000 7f77bffff700 1 -- 192.168.123.102:0/2442705758 shutdown_connections 2026-03-10T10:20:26.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.256+0000 7f77bffff700 1 -- 192.168.123.102:0/2442705758 wait complete. 
2026-03-10T10:20:26.259 INFO:tasks.workunit.client.0.vm02.stdout:9/999: unlink da/d3c/d4c/d75/l101 0 2026-03-10T10:20:26.276 INFO:tasks.workunit.client.0.vm02.stdout:6/994: mkdir d0/d8/d29/d6d/d96/de4/d102/d153 0 2026-03-10T10:20:26.305 INFO:tasks.workunit.client.0.vm02.stdout:6/995: dwrite d0/d8/d9/f13 [4194304,4194304] 0 2026-03-10T10:20:26.361 INFO:tasks.workunit.client.0.vm02.stdout:6/996: getdents d0/d8/d29/d2f/d13b/db2 0 2026-03-10T10:20:26.364 INFO:tasks.workunit.client.1.vm05.stdout:9/909: link d0/df/d74/d8c/c100 d0/d1/d13/d62/c13c 0 2026-03-10T10:20:26.369 INFO:tasks.workunit.client.1.vm05.stdout:8/947: link d7/d14/d3a/d49/l64 d7/d14/d24/d3f/d6a/d8a/l135 0 2026-03-10T10:20:26.370 INFO:tasks.workunit.client.1.vm05.stdout:4/907: link d1/d31/dc/d40/d45/ded/f120 d1/d64/da9/f134 0 2026-03-10T10:20:26.371 INFO:tasks.workunit.client.1.vm05.stdout:2/964: creat db/d1c/d40/f13a x:0 0 0 2026-03-10T10:20:26.371 INFO:tasks.workunit.client.1.vm05.stdout:8/948: chown d7/d14/d24/d3f/d6a/d8a/d125 157556986 1 2026-03-10T10:20:26.371 INFO:tasks.workunit.client.0.vm02.stdout:6/997: creat d0/d87/f154 x:0 0 0 2026-03-10T10:20:26.374 INFO:tasks.workunit.client.1.vm05.stdout:8/949: mkdir d7/d14/d3a/d49/d65/d136 0 2026-03-10T10:20:26.374 INFO:tasks.workunit.client.1.vm05.stdout:4/908: truncate d1/d31/dc/fe1 1857850 0 2026-03-10T10:20:26.379 INFO:tasks.workunit.client.1.vm05.stdout:2/965: mkdir db/d28/d4f/d59/dce/d13b 0 2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.384+0000 7f9a7e94f700 1 -- 192.168.123.102:0/1983511449 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9a7810a700 msgr2=0x7f9a7810cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.384+0000 7f9a7e94f700 1 --2- 192.168.123.102:0/1983511449 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9a7810a700 0x7f9a7810cb90 secure :-1 s=READY pgs=123 cs=0 l=1 
rev1=1 crypto rx=0x7f9a68009a60 tx=0x7f9a68009d70 comp rx=0 tx=0).stop 2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.384+0000 7f9a7e94f700 1 -- 192.168.123.102:0/1983511449 shutdown_connections 2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.384+0000 7f9a7e94f700 1 --2- 192.168.123.102:0/1983511449 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9a7810a700 0x7f9a7810cb90 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.384+0000 7f9a7e94f700 1 --2- 192.168.123.102:0/1983511449 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9a78107d90 0x7f9a7810a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.384+0000 7f9a7e94f700 1 -- 192.168.123.102:0/1983511449 >> 192.168.123.102:0/1983511449 conn(0x7f9a7806dda0 msgr2=0x7f9a78070220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.384+0000 7f9a7e94f700 1 -- 192.168.123.102:0/1983511449 shutdown_connections 2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.384+0000 7f9a7e94f700 1 -- 192.168.123.102:0/1983511449 wait complete. 
2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.384+0000 7f9a7e94f700 1 Processor -- start
2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.384+0000 7f9a7e94f700 1 -- start start
2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.385+0000 7f9a7e94f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9a78107d90 0x7f9a78116a50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.385+0000 7f9a7e94f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9a78116f90 0x7f9a781b3330 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.385+0000 7f9a7e94f700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9a781174a0 con 0x7f9a78107d90
2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.385+0000 7f9a7e94f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9a78117610 con 0x7f9a78116f90
2026-03-10T10:20:26.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.385+0000 7f9a777fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9a78116f90 0x7f9a781b3330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:20:26.387 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.385+0000 7f9a777fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9a78116f90 0x7f9a781b3330 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:38558/0 (socket says 192.168.123.102:38558)
2026-03-10T10:20:26.387 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.385+0000 7f9a777fe700 1 -- 192.168.123.102:0/2025354105 learned_addr learned my addr 192.168.123.102:0/2025354105 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:20:26.387 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.385+0000 7f9a777fe700 1 -- 192.168.123.102:0/2025354105 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9a78107d90 msgr2=0x7f9a78116a50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:20:26.387 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.385+0000 7f9a777fe700 1 --2- 192.168.123.102:0/2025354105 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9a78107d90 0x7f9a78116a50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:20:26.387 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.385+0000 7f9a777fe700 1 -- 192.168.123.102:0/2025354105 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9a68009710 con 0x7f9a78116f90
2026-03-10T10:20:26.388 INFO:tasks.workunit.client.1.vm05.stdout:2/966: dwrite db/d28/d4f/d59/da4/d114/f11e [0,4194304] 0
2026-03-10T10:20:26.388 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.386+0000 7f9a777fe700 1 --2- 192.168.123.102:0/2025354105 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9a78116f90 0x7f9a781b3330 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f9a68009f90 tx=0x7f9a68005e40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:20:26.388 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.386+0000 7f9a757fa700 1 -- 192.168.123.102:0/2025354105 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9a6801c070 con 0x7f9a78116f90
2026-03-10T10:20:26.388 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.386+0000 7f9a7e94f700 1 -- 192.168.123.102:0/2025354105 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9a781b3870 con 0x7f9a78116f90
2026-03-10T10:20:26.388 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.386+0000 7f9a7e94f700 1 -- 192.168.123.102:0/2025354105 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9a781b3d60 con 0x7f9a78116f90
2026-03-10T10:20:26.388 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.387+0000 7f9a757fa700 1 -- 192.168.123.102:0/2025354105 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9a68003c80 con 0x7f9a78116f90
2026-03-10T10:20:26.388 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.387+0000 7f9a757fa700 1 -- 192.168.123.102:0/2025354105 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9a68020410 con 0x7f9a78116f90
2026-03-10T10:20:26.388 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.388+0000 7f9a7e94f700 1 -- 192.168.123.102:0/2025354105 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9a64005320 con 0x7f9a78116f90
2026-03-10T10:20:26.390 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.389+0000 7f9a757fa700 1 -- 192.168.123.102:0/2025354105 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 30) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f9a6800f460 con 0x7f9a78116f90
2026-03-10T10:20:26.390 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.389+0000 7f9a757fa700 1 --2- 192.168.123.102:0/2025354105 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f9a600776c0 0x7f9a60079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:20:26.390 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.389+0000 7f9a757fa700 1 -- 192.168.123.102:0/2025354105 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(43..43 src has 1..43) v4 ==== 5878+0+0 (secure 0 0 0) 0x7f9a6809af30 con 0x7f9a78116f90
2026-03-10T10:20:26.390 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.389+0000 7f9a77fff700 1 --2- 192.168.123.102:0/2025354105 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f9a600776c0 0x7f9a60079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:20:26.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.390+0000 7f9a77fff700 1 --2- 192.168.123.102:0/2025354105 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f9a600776c0 0x7f9a60079b80 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f9a70009fd0 tx=0x7f9a7000b040 comp rx=0 tx=0).ready entity=mgr.14720 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:20:26.396 INFO:tasks.workunit.client.1.vm05.stdout:8/950: mkdir d7/d14/d24/d3f/d6a/d131/d137 0
2026-03-10T10:20:26.400 INFO:tasks.workunit.client.1.vm05.stdout:9/910: dread d0/df/d74/d8c/d8f/ddd/f124 [0,4194304] 0
2026-03-10T10:20:26.406 INFO:tasks.workunit.client.1.vm05.stdout:2/967: rmdir db/d28/d4f/d8b/de3/d138 0
2026-03-10T10:20:26.406 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.402+0000 7f9a757fa700 1 -- 192.168.123.102:0/2025354105 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7f9a68063a40 con 0x7f9a78116f90
2026-03-10T10:20:26.407 INFO:tasks.workunit.client.1.vm05.stdout:9/911: mkdir d0/d1/d13/d26/d13d 0
2026-03-10T10:20:26.408 INFO:tasks.workunit.client.0.vm02.stdout:6/998: sync
2026-03-10T10:20:26.408 INFO:tasks.workunit.client.1.vm05.stdout:2/968: dread - db/d61/dcc/f104 zero size
2026-03-10T10:20:26.410 INFO:tasks.workunit.client.1.vm05.stdout:2/969: stat db/d28/d4f/d59/f6f 0
2026-03-10T10:20:26.410 INFO:tasks.workunit.client.0.vm02.stdout:6/999: dread - d0/d87/f10b zero size
2026-03-10T10:20:26.410 INFO:tasks.workunit.client.1.vm05.stdout:8/951: dwrite d7/d14/d15/da7/f123 [0,4194304] 0
2026-03-10T10:20:26.415 INFO:tasks.workunit.client.1.vm05.stdout:8/952: fdatasync d7/d2f/d57/fae 0
2026-03-10T10:20:26.416 INFO:tasks.workunit.client.0.vm02.stderr:+ rm -rf -- ./tmp.mQ3rEgPuFu
2026-03-10T10:20:26.418 INFO:tasks.workunit.client.1.vm05.stdout:9/912: creat d0/df/d74/d8c/f13e x:0 0 0
2026-03-10T10:20:26.418 INFO:tasks.workunit.client.1.vm05.stdout:8/953: stat d7/d2f/d57 0
2026-03-10T10:20:26.418 INFO:tasks.workunit.client.1.vm05.stdout:2/970: link db/d28/d4f/fb0 db/d28/d4f/d59/da4/d81/da7/f13c 0
2026-03-10T10:20:26.418 INFO:tasks.workunit.client.1.vm05.stdout:8/954: chown d7/d2f/f4b 33 1
2026-03-10T10:20:26.426 INFO:tasks.workunit.client.1.vm05.stdout:2/971: creat db/d28/d4f/d8b/d137/f13d x:0 0 0
2026-03-10T10:20:26.427 INFO:tasks.workunit.client.1.vm05.stdout:8/955: creat d7/d14/f138 x:0 0 0
2026-03-10T10:20:26.435 INFO:tasks.workunit.client.1.vm05.stdout:2/972: truncate db/d2d/f65 1711044 0
2026-03-10T10:20:26.437 INFO:tasks.workunit.client.1.vm05.stdout:8/956: read d7/d14/f4c [1793256,23873] 0
2026-03-10T10:20:26.438 INFO:tasks.workunit.client.1.vm05.stdout:8/957: stat d7/fb5 0
2026-03-10T10:20:26.441 INFO:tasks.workunit.client.1.vm05.stdout:8/958: read d7/f44 [150283,19715] 0
2026-03-10T10:20:26.441 INFO:tasks.workunit.client.1.vm05.stdout:2/973: truncate db/d28/d4f/d59/da4/fca 1183140 0
2026-03-10T10:20:26.442 INFO:tasks.workunit.client.1.vm05.stdout:8/959: creat d7/dd5/f139 x:0 0 0
2026-03-10T10:20:26.450 INFO:tasks.workunit.client.1.vm05.stdout:2/974: stat db/d28/d4f/f68 0
2026-03-10T10:20:26.451 INFO:tasks.workunit.client.1.vm05.stdout:8/960: unlink d7/l80 0
2026-03-10T10:20:26.451 INFO:tasks.workunit.client.1.vm05.stdout:2/975: chown db/d28/d4f/fe4 106908764 1
2026-03-10T10:20:26.451 INFO:tasks.workunit.client.1.vm05.stdout:2/976: truncate db/d12/ff0 215286 0
2026-03-10T10:20:26.451 INFO:tasks.workunit.client.1.vm05.stdout:8/961: mkdir d7/d14/d15/d13a 0
2026-03-10T10:20:26.453 INFO:tasks.workunit.client.1.vm05.stdout:8/962: mknod d7/d14/d62/d90/dd3/c13b 0
2026-03-10T10:20:26.454 INFO:tasks.workunit.client.1.vm05.stdout:8/963: chown d7/d14/d15/d3b/fda 1 1
2026-03-10T10:20:26.457 INFO:tasks.workunit.client.1.vm05.stdout:8/964: symlink d7/d14/d15/l13c 0
2026-03-10T10:20:26.463 INFO:tasks.workunit.client.1.vm05.stdout:2/977: sync
2026-03-10T10:20:26.464 INFO:tasks.workunit.client.1.vm05.stdout:2/978: chown db/d28/d4f/d59/da4/l130 205394316 1
2026-03-10T10:20:26.465 INFO:tasks.workunit.client.1.vm05.stdout:2/979: read - db/d28/d4f/d59/da4/fe8 zero size
2026-03-10T10:20:26.471 INFO:tasks.workunit.client.1.vm05.stdout:2/980: fsync db/d28/d4f/da3/f116 0
2026-03-10T10:20:26.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:26 vm02.local ceph-mon[50200]: from='client.24529 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:20:26.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:26 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:26.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:26 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:26.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:26 vm02.local ceph-mon[50200]: from='client.14748 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:20:26.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:26 vm02.local ceph-mon[50200]: from='client.14752 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:20:26.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:26 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:26.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:26 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/2442705758' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:20:26.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:26 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:26.514 INFO:tasks.workunit.client.1.vm05.stdout:2/981: rename db/d2d/dc6/dc7 to db/d61/d13e 0
2026-03-10T10:20:26.556 INFO:tasks.workunit.client.1.vm05.stdout:8/965: truncate d7/d14/d3a/ff3 344564 0
2026-03-10T10:20:26.556 INFO:tasks.workunit.client.1.vm05.stdout:2/982: dread db/d1c/d40/f70 [0,4194304] 0
2026-03-10T10:20:26.559 INFO:tasks.workunit.client.1.vm05.stdout:9/913: dwrite d0/df/d74/fbb [0,4194304] 0
2026-03-10T10:20:26.560 INFO:tasks.workunit.client.1.vm05.stdout:8/966: sync
2026-03-10T10:20:26.560 INFO:tasks.workunit.client.1.vm05.stdout:2/983: sync
2026-03-10T10:20:26.561 INFO:tasks.workunit.client.1.vm05.stdout:8/967: readlink d7/d14/d3a/d49/lea 0
2026-03-10T10:20:26.571 INFO:tasks.workunit.client.1.vm05.stdout:2/984: dwrite db/d12/fb5 [0,4194304] 0
2026-03-10T10:20:26.572 INFO:tasks.workunit.client.1.vm05.stdout:2/985: chown db/d28/d4f/d8b/feb 4482 1
2026-03-10T10:20:26.578 INFO:tasks.workunit.client.1.vm05.stdout:4/909: write d1/d31/dc/fe1 [2027630,55536] 0
2026-03-10T10:20:26.579 INFO:tasks.workunit.client.1.vm05.stdout:4/910: dread - d1/d31/d72/d106/ff8 zero size
2026-03-10T10:20:26.579 INFO:tasks.workunit.client.1.vm05.stdout:4/911: read - d1/dfd/f122 zero size
2026-03-10T10:20:26.585 INFO:tasks.workunit.client.1.vm05.stdout:2/986: rename db/d1c/d40/d62/d85/fd1 to db/d28/d4f/d59/da4/d81/f13f 0
2026-03-10T10:20:26.589 INFO:tasks.workunit.client.1.vm05.stdout:4/912: sync
2026-03-10T10:20:26.600 INFO:tasks.workunit.client.1.vm05.stdout:4/913: getdents d1/dfd/d131 0
2026-03-10T10:20:26.600 INFO:tasks.workunit.client.1.vm05.stdout:4/914: stat d1/d31/dc/d40/d45/fdd 0
2026-03-10T10:20:26.601 INFO:tasks.workunit.client.1.vm05.stdout:2/987: dread db/f19 [0,4194304] 0
2026-03-10T10:20:26.607 INFO:tasks.workunit.client.1.vm05.stdout:4/915: readlink d1/d31/dc/d40/l47 0
2026-03-10T10:20:26.615 INFO:tasks.workunit.client.1.vm05.stdout:2/988: creat db/d12/f140 x:0 0 0
2026-03-10T10:20:26.626 INFO:tasks.workunit.client.1.vm05.stdout:4/916: mknod d1/d64/da9/c135 0
2026-03-10T10:20:26.627 INFO:tasks.workunit.client.1.vm05.stdout:2/989: creat db/d28/d4f/d59/da4/d81/f141 x:0 0 0
2026-03-10T10:20:26.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.623+0000 7f9a7e94f700 1 -- 192.168.123.102:0/2025354105 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f9a64005cc0 con 0x7f9a78116f90
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:e15
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1)
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:root 0
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {}
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:in 0
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464}
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:failed
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:damaged
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:stopped
2026-03-10T10:20:26.631 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3]
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:balancer
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}]
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}]
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons:
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}]
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}]
2026-03-10T10:20:26.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.627+0000 7f9a757fa700 1 -- 192.168.123.102:0/2025354105 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1854 (secure 0 0 0) 0x7f9a68063190 con 0x7f9a78116f90
2026-03-10T10:20:26.634 INFO:tasks.workunit.client.1.vm05.stdout:2/990: mkdir db/d28/d4f/d59/d94/dfe/d142 0
2026-03-10T10:20:26.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.634+0000 7f9a5effd700 1 -- 192.168.123.102:0/2025354105 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f9a600776c0 msgr2=0x7f9a60079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:20:26.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.634+0000 7f9a5effd700 1 --2- 192.168.123.102:0/2025354105 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f9a600776c0 0x7f9a60079b80 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f9a70009fd0 tx=0x7f9a7000b040 comp rx=0 tx=0).stop
2026-03-10T10:20:26.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.634+0000 7f9a5effd700 1 -- 192.168.123.102:0/2025354105 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9a78116f90 msgr2=0x7f9a781b3330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:20:26.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.634+0000 7f9a5effd700 1 --2- 192.168.123.102:0/2025354105 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9a78116f90 0x7f9a781b3330 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f9a68009f90 tx=0x7f9a68005e40 comp rx=0 tx=0).stop
2026-03-10T10:20:26.635 INFO:tasks.workunit.client.1.vm05.stdout:2/991: stat db/d28/d4f/d8b/c111 0
2026-03-10T10:20:26.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.636+0000 7f9a5effd700 1 -- 192.168.123.102:0/2025354105 shutdown_connections
2026-03-10T10:20:26.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.636+0000 7f9a5effd700 1 --2- 192.168.123.102:0/2025354105 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f9a600776c0 0x7f9a60079b80 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:20:26.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.636+0000 7f9a5effd700 1 --2- 192.168.123.102:0/2025354105 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9a78107d90 0x7f9a78116a50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:20:26.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.636+0000 7f9a5effd700 1 --2- 192.168.123.102:0/2025354105 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9a78116f90 0x7f9a781b3330 unknown :-1 s=CLOSED pgs=124 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:20:26.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.636+0000 7f9a5effd700 1 -- 192.168.123.102:0/2025354105 >> 192.168.123.102:0/2025354105 conn(0x7f9a7806dda0 msgr2=0x7f9a781098d0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:20:26.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.637+0000 7f9a5effd700 1 -- 192.168.123.102:0/2025354105 shutdown_connections
2026-03-10T10:20:26.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.637+0000 7f9a5effd700 1 -- 192.168.123.102:0/2025354105 wait complete.
2026-03-10T10:20:26.642 INFO:tasks.workunit.client.1.vm05.stdout:2/992: mkdir db/d61/d143 0
2026-03-10T10:20:26.642 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15
2026-03-10T10:20:26.647 INFO:tasks.workunit.client.1.vm05.stdout:2/993: symlink db/d2d/l144 0
2026-03-10T10:20:26.647 INFO:tasks.workunit.client.1.vm05.stdout:2/994: read - db/d1c/f127 zero size
2026-03-10T10:20:26.659 INFO:tasks.workunit.client.1.vm05.stdout:2/995: chown db/d28/d4f/d8b/f121 435358 1
2026-03-10T10:20:26.665 INFO:tasks.workunit.client.1.vm05.stdout:2/996: symlink db/d28/d4f/d59/d94/l145 0
2026-03-10T10:20:26.667 INFO:tasks.workunit.client.1.vm05.stdout:8/968: write d7/d14/d3a/f8f [270224,10064] 0
2026-03-10T10:20:26.667 INFO:tasks.workunit.client.1.vm05.stdout:9/914: write d0/d1/d13/d55/f10b [559480,73431] 0
2026-03-10T10:20:26.670 INFO:tasks.workunit.client.1.vm05.stdout:2/997: readlink db/d28/lda 0
2026-03-10T10:20:26.670 INFO:tasks.workunit.client.1.vm05.stdout:9/915: dread - d0/df/d74/d8c/d8f/f133 zero size
2026-03-10T10:20:26.671 INFO:tasks.workunit.client.1.vm05.stdout:9/916: chown d0/d1/d13/d26/f4e 6924110 1
2026-03-10T10:20:26.681 INFO:tasks.workunit.client.1.vm05.stdout:4/917: dwrite d1/d31/d76/f95 [0,4194304] 0
2026-03-10T10:20:26.686 INFO:tasks.workunit.client.1.vm05.stdout:8/969: dwrite d7/d14/d62/d90/dac/f121 [4194304,4194304] 0
2026-03-10T10:20:26.694 INFO:tasks.workunit.client.1.vm05.stdout:2/998: dread db/d1c/fcf [0,4194304] 0
2026-03-10T10:20:26.698 INFO:tasks.workunit.client.1.vm05.stdout:9/917: getdents d0/df/d74/d8c/d8f/ddd/de6 0
2026-03-10T10:20:26.699 INFO:tasks.workunit.client.1.vm05.stdout:2/999: chown db/d61/dcc/c10f 54 1
2026-03-10T10:20:26.703 INFO:tasks.workunit.client.1.vm05.stdout:9/918: readlink d0/d1/d13/l6 0
2026-03-10T10:20:26.704 INFO:tasks.workunit.client.1.vm05.stdout:8/970: dread d7/f78 [0,4194304] 0
2026-03-10T10:20:26.704 INFO:tasks.workunit.client.1.vm05.stdout:8/971: dread - d7/d14/d24/d3f/dc4/f12c zero size
2026-03-10T10:20:26.709 INFO:tasks.workunit.client.1.vm05.stdout:9/919: sync
2026-03-10T10:20:26.714 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:26 vm05.local ceph-mon[59051]: from='client.24529 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:20:26.714 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:26 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:26.715 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:26 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:26.715 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:26 vm05.local ceph-mon[59051]: from='client.14748 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:20:26.715 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:26 vm05.local ceph-mon[59051]: from='client.14752 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:20:26.715 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:26 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:26.715 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:26 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/2442705758' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:20:26.715 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:26 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl'
2026-03-10T10:20:26.718 INFO:tasks.workunit.client.1.vm05.stdout:8/972: dread d7/d14/d62/d90/dac/f121 [4194304,4194304] 0
2026-03-10T10:20:26.727 INFO:tasks.workunit.client.1.vm05.stdout:8/973: creat d7/d14/d15/f13d x:0 0 0
2026-03-10T10:20:26.751 INFO:tasks.workunit.client.1.vm05.stdout:4/918: write d1/d31/dc/d40/d45/daa/fd4 [33569,44515] 0
2026-03-10T10:20:26.752 INFO:tasks.workunit.client.1.vm05.stdout:9/920: truncate d0/d1/f11c 1633248 0
2026-03-10T10:20:26.753 INFO:tasks.workunit.client.1.vm05.stdout:8/974: write d7/f1c [2017824,105889] 0
2026-03-10T10:20:26.754 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.751+0000 7f849ff41700 1 -- 192.168.123.102:0/1021041531 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8498072b50 msgr2=0x7f8498072f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:20:26.754 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.751+0000 7f849ff41700 1 --2- 192.168.123.102:0/1021041531 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8498072b50 0x7f8498072f70 secure :-1 s=READY pgs=345 cs=0 l=1 rev1=1 crypto rx=0x7f8494007780 tx=0x7f849400c050 comp rx=0 tx=0).stop
2026-03-10T10:20:26.754 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.753+0000 7f849ff41700 1 -- 192.168.123.102:0/1021041531 shutdown_connections
2026-03-10T10:20:26.754 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.753+0000 7f849ff41700 1 --2- 192.168.123.102:0/1021041531 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8498075a40 0x7f8498077ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:20:26.754 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.753+0000 7f849ff41700 1 --2- 192.168.123.102:0/1021041531 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8498072b50 0x7f8498072f70 unknown :-1 s=CLOSED pgs=345 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:20:26.754 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.753+0000 7f849ff41700 1 -- 192.168.123.102:0/1021041531 >> 192.168.123.102:0/1021041531 conn(0x7f849806dae0 msgr2=0x7f849806ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:20:26.755 INFO:tasks.workunit.client.1.vm05.stdout:4/919: symlink d1/dfd/l136 0
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.754+0000 7f849ff41700 1 -- 192.168.123.102:0/1021041531 shutdown_connections
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.754+0000 7f849ff41700 1 -- 192.168.123.102:0/1021041531 wait complete.
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.754+0000 7f849ff41700 1 Processor -- start
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.755+0000 7f849ff41700 1 -- start start
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.755+0000 7f849ff41700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8498075a40 0x7f8498082f80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.755+0000 7f849ff41700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84980834c0 0x7f8498083940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.755+0000 7f849ff41700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f849812e720 con 0x7f8498075a40
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.755+0000 7f849ff41700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f849812e890 con 0x7f84980834c0
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.755+0000 7f849dcdd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8498075a40 0x7f8498082f80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.755+0000 7f849dcdd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8498075a40 0x7f8498082f80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:47756/0 (socket says 192.168.123.102:47756)
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.755+0000 7f849dcdd700 1 -- 192.168.123.102:0/3113232359 learned_addr learned my addr 192.168.123.102:0/3113232359 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.757+0000 7f849dcdd700 1 -- 192.168.123.102:0/3113232359 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84980834c0 msgr2=0x7f8498083940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.757+0000 7f849dcdd700 1 --2- 192.168.123.102:0/3113232359 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84980834c0 0x7f8498083940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:20:26.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.757+0000 7f849dcdd700 1 -- 192.168.123.102:0/3113232359 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8494007430 con 0x7f8498075a40
2026-03-10T10:20:26.761 INFO:tasks.workunit.client.1.vm05.stdout:9/921: dread d0/d1/d13/d55/f10b [0,4194304] 0
2026-03-10T10:20:26.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.760+0000 7f849dcdd700 1 --2- 192.168.123.102:0/3113232359 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8498075a40 0x7f8498082f80 secure :-1 s=READY pgs=346 cs=0 l=1 rev1=1 crypto rx=0x7f8494000c00 tx=0x7f849400a3b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:20:26.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.760+0000 7f848effd700 1 -- 192.168.123.102:0/3113232359 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f849400f040 con 0x7f8498075a40
2026-03-10T10:20:26.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.761+0000 7f849ff41700 1 -- 192.168.123.102:0/3113232359 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f849812eb10 con 0x7f8498075a40
2026-03-10T10:20:26.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.761+0000 7f849ff41700 1 -- 192.168.123.102:0/3113232359 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f849812f060 con 0x7f8498075a40
2026-03-10T10:20:26.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.761+0000 7f848effd700 1 -- 192.168.123.102:0/3113232359 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8494004530 con 0x7f8498075a40
2026-03-10T10:20:26.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.761+0000 7f848effd700 1 -- 192.168.123.102:0/3113232359 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8494003cd0 con 0x7f8498075a40
2026-03-10T10:20:26.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.763+0000 7f848effd700 1 -- 192.168.123.102:0/3113232359 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 30) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f849400a890 con 0x7f8498075a40
2026-03-10T10:20:26.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.763+0000 7f849ff41700 1 -- 192.168.123.102:0/3113232359 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f847c005320 con 0x7f8498075a40
2026-03-10T10:20:26.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.763+0000 7f848effd700 1 --2- 192.168.123.102:0/3113232359 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f84840776c0 0x7f8484079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:20:26.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.764+0000 7f849d4dc700 1 --2- 192.168.123.102:0/3113232359 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f84840776c0 0x7f8484079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:20:26.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.764+0000 7f849d4dc700 1 --2- 192.168.123.102:0/3113232359 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f84840776c0 0x7f8484079b80 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f849000ab00 tx=0x7f8490009250 comp rx=0 tx=0).ready entity=mgr.14720 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:20:26.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.764+0000 7f848effd700 1 -- 192.168.123.102:0/3113232359 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(43..43 src has 1..43) v4 ==== 5878+0+0 (secure 0 0 0) 0x7f8494099ba0 con 0x7f8498075a40
2026-03-10T10:20:26.772 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.771+0000 7f848effd700 1 -- 192.168.123.102:0/3113232359 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7f84940626e0 con 0x7f8498075a40
2026-03-10T10:20:26.778 INFO:tasks.workunit.client.1.vm05.stdout:9/922: fdatasync d0/d1/f75 0
2026-03-10T10:20:26.779 INFO:tasks.workunit.client.1.vm05.stdout:9/923: read d0/df/d74/d90/f125 [108938,92060] 0
2026-03-10T10:20:26.794 INFO:tasks.workunit.client.1.vm05.stdout:4/920: mkdir d1/d31/dc/d137 0
2026-03-10T10:20:26.797 INFO:tasks.workunit.client.1.vm05.stdout:4/921: read d1/d31/dc/d40/d45/f50 [3174186,81716] 0
2026-03-10T10:20:26.798 INFO:tasks.workunit.client.1.vm05.stdout:9/924: dread d0/d1/d13/de/d21/f53 [0,4194304] 0
2026-03-10T10:20:26.819 INFO:tasks.workunit.client.1.vm05.stdout:8/975: write d7/d14/d15/d3b/f8b [2151282,53689] 0
2026-03-10T10:20:26.867 INFO:tasks.workunit.client.1.vm05.stdout:9/925: creat d0/df/d11/dc6/f13f x:0 0 0
2026-03-10T10:20:26.898 INFO:tasks.workunit.client.1.vm05.stdout:9/926: symlink d0/d70/d10d/l140 0
2026-03-10T10:20:26.902 INFO:tasks.workunit.client.1.vm05.stdout:9/927: sync
2026-03-10T10:20:26.909 INFO:tasks.workunit.client.1.vm05.stdout:4/922: write d1/d31/dc/d40/f9c [48800,53170] 0
2026-03-10T10:20:26.909 INFO:tasks.workunit.client.1.vm05.stdout:9/928: symlink d0/d70/d10d/l141 0
2026-03-10T10:20:26.916 INFO:tasks.workunit.client.1.vm05.stdout:8/976: write d7/d14/d15/da7/fdf [237096,93971] 0
2026-03-10T10:20:26.917 INFO:tasks.workunit.client.1.vm05.stdout:9/929: read - d0/df/d74/d8c/fb5 zero size
2026-03-10T10:20:26.920 INFO:tasks.workunit.client.1.vm05.stdout:9/930: write d0/df/d11/dc6/f13f [823216,124041] 0
2026-03-10T10:20:26.926 INFO:tasks.workunit.client.1.vm05.stdout:9/931: read d0/df/d74/d8c/fed [6474594,81103] 0
2026-03-10T10:20:26.986 INFO:tasks.workunit.client.1.vm05.stdout:8/977: mkdir d7/d14/d24/d3f/d6a/d8a/d125/d13e 0
2026-03-10T10:20:26.986 INFO:tasks.workunit.client.1.vm05.stdout:9/932: mknod d0/d1/d13/d26/d13d/c142 0
2026-03-10T10:20:26.986 INFO:tasks.workunit.client.1.vm05.stdout:9/933: readlink d0/l85 0
2026-03-10T10:20:26.987 INFO:tasks.workunit.client.1.vm05.stdout:9/934: chown d0/df/d74/d8c 824289 1
2026-03-10T10:20:26.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.985+0000 7f849ff41700 1 -- 192.168.123.102:0/3113232359 --> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f847c000bf0 con 0x7f84840776c0
2026-03-10T10:20:26.987 INFO:tasks.workunit.client.1.vm05.stdout:4/923: link d1/d3/c27 d1/d31/d4b/d6d/c138 0
2026-03-10T10:20:26.990 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T10:20:26.990 INFO:teuthology.orchestra.run.vm02.stdout:    "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-10T10:20:26.990 INFO:teuthology.orchestra.run.vm02.stdout:    "in_progress": true,
2026-03-10T10:20:26.990 INFO:teuthology.orchestra.run.vm02.stdout:    "which": "Upgrading all daemon types on all hosts",
2026-03-10T10:20:26.990 INFO:teuthology.orchestra.run.vm02.stdout:    "services_complete": [
2026-03-10T10:20:26.990 INFO:teuthology.orchestra.run.vm02.stdout:        "mgr"
2026-03-10T10:20:26.990 INFO:teuthology.orchestra.run.vm02.stdout:    ],
2026-03-10T10:20:26.990 INFO:teuthology.orchestra.run.vm02.stdout:    "progress": "2/23 daemons upgraded",
2026-03-10T10:20:26.990 INFO:teuthology.orchestra.run.vm02.stdout:    "message": "Currently upgrading mgr daemons",
2026-03-10T10:20:26.990 INFO:teuthology.orchestra.run.vm02.stdout:    "is_paused": false
2026-03-10T10:20:26.990 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T10:20:26.990 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.988+0000
7f848effd700 1 -- 192.168.123.102:0/3113232359 <== mgr.14720 v2:192.168.123.102:6800/2642809286 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+368 (secure 0 0 0) 0x7f847c000bf0 con 0x7f84840776c0 2026-03-10T10:20:26.994 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.992+0000 7f849ff41700 1 -- 192.168.123.102:0/3113232359 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f84840776c0 msgr2=0x7f8484079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:26.994 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.992+0000 7f849ff41700 1 --2- 192.168.123.102:0/3113232359 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f84840776c0 0x7f8484079b80 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f849000ab00 tx=0x7f8490009250 comp rx=0 tx=0).stop 2026-03-10T10:20:26.994 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.992+0000 7f849ff41700 1 -- 192.168.123.102:0/3113232359 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8498075a40 msgr2=0x7f8498082f80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:26.994 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.992+0000 7f849ff41700 1 --2- 192.168.123.102:0/3113232359 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8498075a40 0x7f8498082f80 secure :-1 s=READY pgs=346 cs=0 l=1 rev1=1 crypto rx=0x7f8494000c00 tx=0x7f849400a3b0 comp rx=0 tx=0).stop 2026-03-10T10:20:26.995 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.994+0000 7f849ff41700 1 -- 192.168.123.102:0/3113232359 shutdown_connections 2026-03-10T10:20:26.995 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.994+0000 7f849ff41700 1 --2- 192.168.123.102:0/3113232359 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7f84840776c0 0x7f8484079b80 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T10:20:26.995 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.994+0000 7f849ff41700 1 --2- 192.168.123.102:0/3113232359 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8498075a40 0x7f8498082f80 unknown :-1 s=CLOSED pgs=346 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:26.995 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.994+0000 7f849ff41700 1 --2- 192.168.123.102:0/3113232359 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84980834c0 0x7f8498083940 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:26.995 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.994+0000 7f849ff41700 1 -- 192.168.123.102:0/3113232359 >> 192.168.123.102:0/3113232359 conn(0x7f849806dae0 msgr2=0x7f849806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:26.995 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.995+0000 7f849ff41700 1 -- 192.168.123.102:0/3113232359 shutdown_connections 2026-03-10T10:20:26.995 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:26.995+0000 7f849ff41700 1 -- 192.168.123.102:0/3113232359 wait complete. 
2026-03-10T10:20:27.012 INFO:tasks.workunit.client.1.vm05.stdout:4/924: dread d1/d31/dc/d40/d45/ff7 [0,4194304] 0 2026-03-10T10:20:27.018 INFO:tasks.workunit.client.1.vm05.stdout:8/978: rmdir d7/d14/d15/d13a 0 2026-03-10T10:20:27.020 INFO:tasks.workunit.client.1.vm05.stdout:4/925: fdatasync d1/d64/da9/fc0 0 2026-03-10T10:20:27.022 INFO:tasks.workunit.client.1.vm05.stdout:8/979: symlink d7/d14/d62/d90/dac/df4/l13f 0 2026-03-10T10:20:27.022 INFO:tasks.workunit.client.1.vm05.stdout:9/935: dwrite d0/d1/d13/de/f46 [4194304,4194304] 0 2026-03-10T10:20:27.023 INFO:tasks.workunit.client.1.vm05.stdout:8/980: chown d7/d14/d15/d3b/c41 19909 1 2026-03-10T10:20:27.027 INFO:tasks.workunit.client.1.vm05.stdout:4/926: creat d1/d31/d4b/d6d/f139 x:0 0 0 2026-03-10T10:20:27.030 INFO:tasks.workunit.client.1.vm05.stdout:4/927: sync 2026-03-10T10:20:27.031 INFO:tasks.workunit.client.1.vm05.stdout:9/936: rename d0/d1/fa7 to d0/d1/d13/de/d93/f143 0 2026-03-10T10:20:27.031 INFO:tasks.workunit.client.1.vm05.stdout:4/928: truncate d1/d64/da9/f134 786030 0 2026-03-10T10:20:27.041 INFO:tasks.workunit.client.1.vm05.stdout:9/937: creat d0/df/d74/d8c/d8f/ddd/f144 x:0 0 0 2026-03-10T10:20:27.044 INFO:tasks.workunit.client.1.vm05.stdout:4/929: mkdir d1/d3/d115/d13a 0 2026-03-10T10:20:27.045 INFO:tasks.workunit.client.1.vm05.stdout:4/930: dread d1/d64/da9/fc0 [0,4194304] 0 2026-03-10T10:20:27.046 INFO:tasks.workunit.client.1.vm05.stdout:9/938: rmdir d0/d70 39 2026-03-10T10:20:27.049 INFO:tasks.workunit.client.1.vm05.stdout:4/931: write d1/d31/d4b/d6d/fbc [97620,84096] 0 2026-03-10T10:20:27.058 INFO:tasks.workunit.client.1.vm05.stdout:4/932: chown d1/d31/d72/lbe 3022988 1 2026-03-10T10:20:27.061 INFO:tasks.workunit.client.1.vm05.stdout:4/933: sync 2026-03-10T10:20:27.061 INFO:tasks.workunit.client.1.vm05.stdout:9/939: creat d0/dc4/f145 x:0 0 0 2026-03-10T10:20:27.062 INFO:tasks.workunit.client.1.vm05.stdout:9/940: sync 2026-03-10T10:20:27.062 INFO:tasks.workunit.client.1.vm05.stdout:4/934: chown 
d1/d64/da9/dae/d12a/cd9 104 1 2026-03-10T10:20:27.066 INFO:tasks.workunit.client.1.vm05.stdout:4/935: creat d1/d64/da9/dae/d12a/dbf/f13b x:0 0 0 2026-03-10T10:20:27.068 INFO:tasks.workunit.client.1.vm05.stdout:9/941: mkdir d0/d70/d10d/daf/db7/d146 0 2026-03-10T10:20:27.076 INFO:tasks.workunit.client.1.vm05.stdout:9/942: creat d0/d1/d13/de/d93/f147 x:0 0 0 2026-03-10T10:20:27.078 INFO:tasks.workunit.client.1.vm05.stdout:8/981: write d7/d14/d24/d3f/d6a/d8a/d96/db7/ffb [1963369,71319] 0 2026-03-10T10:20:27.079 INFO:tasks.workunit.client.1.vm05.stdout:4/936: mkdir d1/dfd/d131/d13c 0 2026-03-10T10:20:27.083 INFO:tasks.workunit.client.1.vm05.stdout:4/937: stat d1/d3/c1d 0 2026-03-10T10:20:27.092 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.091+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/1216162163 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa2f0072b50 msgr2=0x7fa2f0072f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:27.092 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.091+0000 7fa2f5c0d700 1 --2- 192.168.123.102:0/1216162163 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa2f0072b50 0x7fa2f0072f70 secure :-1 s=READY pgs=347 cs=0 l=1 rev1=1 crypto rx=0x7fa2e000bc70 tx=0x7fa2e000bf80 comp rx=0 tx=0).stop 2026-03-10T10:20:27.092 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.092+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/1216162163 shutdown_connections 2026-03-10T10:20:27.092 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.092+0000 7fa2f5c0d700 1 --2- 192.168.123.102:0/1216162163 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2f0075a40 0x7fa2f0077ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:27.092 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.092+0000 7fa2f5c0d700 1 --2- 192.168.123.102:0/1216162163 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7fa2f0072b50 0x7fa2f0072f70 unknown :-1 s=CLOSED pgs=347 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:27.092 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.092+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/1216162163 >> 192.168.123.102:0/1216162163 conn(0x7fa2f006dae0 msgr2=0x7fa2f006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.092+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/1216162163 shutdown_connections 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.092+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/1216162163 wait complete. 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.092+0000 7fa2f5c0d700 1 Processor -- start 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.092+0000 7fa2f5c0d700 1 -- start start 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.092+0000 7fa2f5c0d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa2f0075a40 0x7fa2f0083100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.092+0000 7fa2f5c0d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2f0083640 0x7fa2f012e400 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.092+0000 7fa2f5c0d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa2f0083b80 con 0x7fa2f0075a40 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.092+0000 7fa2f5c0d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa2f0083cf0 con 0x7fa2f0083640 
2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.093+0000 7fa2eeffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2f0083640 0x7fa2f012e400 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.093+0000 7fa2eeffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2f0083640 0x7fa2f012e400 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:56532/0 (socket says 192.168.123.102:56532) 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.093+0000 7fa2eeffd700 1 -- 192.168.123.102:0/460797643 learned_addr learned my addr 192.168.123.102:0/460797643 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.093+0000 7fa2eeffd700 1 -- 192.168.123.102:0/460797643 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa2f0075a40 msgr2=0x7fa2f0083100 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.093+0000 7fa2eeffd700 1 --2- 192.168.123.102:0/460797643 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa2f0075a40 0x7fa2f0083100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.093+0000 7fa2eeffd700 1 -- 192.168.123.102:0/460797643 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa2e000b920 con 0x7fa2f0083640 2026-03-10T10:20:27.093 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.093+0000 
7fa2eeffd700 1 --2- 192.168.123.102:0/460797643 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2f0083640 0x7fa2f012e400 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7fa2e800e9d0 tx=0x7fa2e800ed90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:27.094 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.093+0000 7fa2ecff9700 1 -- 192.168.123.102:0/460797643 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa2e800c4f0 con 0x7fa2f0083640 2026-03-10T10:20:27.094 INFO:tasks.workunit.client.1.vm05.stdout:9/943: dwrite d0/d70/d10d/daf/fcd [0,4194304] 0 2026-03-10T10:20:27.095 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.094+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/460797643 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa2f012ea60 con 0x7fa2f0083640 2026-03-10T10:20:27.095 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.094+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/460797643 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa2f012ef60 con 0x7fa2f0083640 2026-03-10T10:20:27.095 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.094+0000 7fa2ecff9700 1 -- 192.168.123.102:0/460797643 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa2e8013070 con 0x7fa2f0083640 2026-03-10T10:20:27.095 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.094+0000 7fa2ecff9700 1 -- 192.168.123.102:0/460797643 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa2e800f9e0 con 0x7fa2f0083640 2026-03-10T10:20:27.095 INFO:tasks.workunit.client.1.vm05.stdout:4/938: truncate d1/d31/d4b/ff0 514003 0 2026-03-10T10:20:27.095 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.095+0000 7fa2f5c0d700 1 -- 
192.168.123.102:0/460797643 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa2dc005320 con 0x7fa2f0083640 2026-03-10T10:20:27.103 INFO:tasks.workunit.client.1.vm05.stdout:8/982: link d7/d14/d15/f2e d7/d14/d15/f140 0 2026-03-10T10:20:27.104 INFO:tasks.workunit.client.1.vm05.stdout:9/944: mknod d0/d1/d13/de/d93/c148 0 2026-03-10T10:20:27.106 INFO:tasks.workunit.client.1.vm05.stdout:4/939: mknod d1/d31/d76/d109/c13d 0 2026-03-10T10:20:27.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.113+0000 7fa2ecff9700 1 -- 192.168.123.102:0/460797643 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 30) v1 ==== 100066+0+0 (secure 0 0 0) 0x7fa2e800bca0 con 0x7fa2f0083640 2026-03-10T10:20:27.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.114+0000 7fa2ecff9700 1 --2- 192.168.123.102:0/460797643 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7fa2d80776d0 0x7fa2d8079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:27.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.114+0000 7fa2ecff9700 1 -- 192.168.123.102:0/460797643 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(43..43 src has 1..43) v4 ==== 5878+0+0 (secure 0 0 0) 0x7fa2e8015070 con 0x7fa2f0083640 2026-03-10T10:20:27.114 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.114+0000 7fa2ef7fe700 1 --2- 192.168.123.102:0/460797643 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7fa2d80776d0 0x7fa2d8079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:27.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.115+0000 7fa2ef7fe700 1 --2- 192.168.123.102:0/460797643 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] 
conn(0x7fa2d80776d0 0x7fa2d8079b90 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fa2e000bc70 tx=0x7fa2e000d350 comp rx=0 tx=0).ready entity=mgr.14720 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:27.131 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.131+0000 7fa2ecff9700 1 -- 192.168.123.102:0/460797643 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191549 (secure 0 0 0) 0x7fa2e8062b90 con 0x7fa2f0083640 2026-03-10T10:20:27.171 INFO:tasks.workunit.client.1.vm05.stdout:8/983: write d7/d14/d24/d3f/dc4/f10f [970582,98588] 0 2026-03-10T10:20:27.175 INFO:tasks.workunit.client.1.vm05.stdout:4/940: creat d1/d31/d72/f13e x:0 0 0 2026-03-10T10:20:27.176 INFO:tasks.workunit.client.1.vm05.stdout:8/984: sync 2026-03-10T10:20:27.187 INFO:tasks.workunit.client.1.vm05.stdout:8/985: rename d7/d14/d24/f61 to d7/d14/d24/d3f/d4f/d132/f141 0 2026-03-10T10:20:27.188 INFO:tasks.workunit.client.1.vm05.stdout:9/945: link d0/d1/d13/de/d93/fee d0/d70/d10d/f149 0 2026-03-10T10:20:27.192 INFO:tasks.workunit.client.1.vm05.stdout:9/946: dwrite d0/df/d11/dc6/f13f [0,4194304] 0 2026-03-10T10:20:27.207 INFO:tasks.workunit.client.1.vm05.stdout:4/941: write d1/d31/d4b/ff0 [153969,30569] 0 2026-03-10T10:20:27.216 INFO:tasks.workunit.client.1.vm05.stdout:4/942: creat d1/d31/d4b/f13f x:0 0 0 2026-03-10T10:20:27.218 INFO:tasks.workunit.client.1.vm05.stdout:4/943: dread - d1/d31/d4b/f97 zero size 2026-03-10T10:20:27.220 INFO:tasks.workunit.client.1.vm05.stdout:4/944: sync 2026-03-10T10:20:27.225 INFO:tasks.workunit.client.1.vm05.stdout:9/947: creat d0/d1/d13/f14a x:0 0 0 2026-03-10T10:20:27.225 INFO:tasks.workunit.client.1.vm05.stdout:4/945: mknod d1/dfd/c140 0 2026-03-10T10:20:27.227 INFO:tasks.workunit.client.1.vm05.stdout:8/986: rename d7/d14/d24/f2c to d7/d14/d15/f142 0 2026-03-10T10:20:27.230 INFO:tasks.workunit.client.1.vm05.stdout:4/946: creat d1/d31/dc/d40/d45/ded/dfb/f141 x:0 0 0 
2026-03-10T10:20:27.230 INFO:tasks.workunit.client.1.vm05.stdout:4/947: dread - d1/d31/dc/d40/d45/ded/dfb/f141 zero size 2026-03-10T10:20:27.231 INFO:tasks.workunit.client.1.vm05.stdout:8/987: mkdir d7/d2f/d57/de3/d143 0 2026-03-10T10:20:27.232 INFO:tasks.workunit.client.1.vm05.stdout:4/948: sync 2026-03-10T10:20:27.241 INFO:tasks.workunit.client.1.vm05.stdout:9/948: write d0/df/d74/f9e [4937767,121595] 0 2026-03-10T10:20:27.241 INFO:tasks.workunit.client.1.vm05.stdout:4/949: symlink d1/d64/da9/dae/d12a/l142 0 2026-03-10T10:20:27.248 INFO:tasks.workunit.client.1.vm05.stdout:8/988: unlink d7/d14/d24/fe4 0 2026-03-10T10:20:27.249 INFO:tasks.workunit.client.1.vm05.stdout:8/989: chown d7/d2f/d57/l8e 6485 1 2026-03-10T10:20:27.249 INFO:tasks.workunit.client.1.vm05.stdout:8/990: stat d7/l1b 0 2026-03-10T10:20:27.249 INFO:tasks.workunit.client.1.vm05.stdout:8/991: chown d7/d14/d3a 18 1 2026-03-10T10:20:27.252 INFO:tasks.workunit.client.1.vm05.stdout:4/950: truncate d1/d64/f8f 1434103 0 2026-03-10T10:20:27.252 INFO:tasks.workunit.client.1.vm05.stdout:9/949: mknod d0/dc4/d13b/c14b 0 2026-03-10T10:20:27.255 INFO:tasks.workunit.client.1.vm05.stdout:8/992: symlink d7/d14/d3a/d49/d65/l144 0 2026-03-10T10:20:27.256 INFO:tasks.workunit.client.1.vm05.stdout:9/950: dwrite d0/df/d74/d8c/de4/d104/fff [0,4194304] 0 2026-03-10T10:20:27.257 INFO:tasks.workunit.client.1.vm05.stdout:9/951: chown d0/d1/d13/de/d21/fab 90 1 2026-03-10T10:20:27.282 INFO:tasks.workunit.client.1.vm05.stdout:4/951: dwrite d1/d31/dc/d40/d45/ded/f120 [0,4194304] 0 2026-03-10T10:20:27.297 INFO:tasks.workunit.client.1.vm05.stdout:9/952: mkdir d0/d1/d13/de/ddf/d14c 0 2026-03-10T10:20:27.297 INFO:tasks.workunit.client.1.vm05.stdout:9/953: write d0/df/d11/dc6/f115 [188179,103175] 0 2026-03-10T10:20:27.298 INFO:tasks.workunit.client.1.vm05.stdout:9/954: dread - d0/df/f123 zero size 2026-03-10T10:20:27.298 INFO:tasks.workunit.client.1.vm05.stdout:9/955: readlink d0/d1/d13/d26/l6a 0 2026-03-10T10:20:27.309 
INFO:tasks.workunit.client.1.vm05.stdout:4/952: readlink d1/d31/l4c 0 2026-03-10T10:20:27.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.309+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/460797643 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fa2dc005190 con 0x7fa2f0083640 2026-03-10T10:20:27.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.310+0000 7fa2ecff9700 1 -- 192.168.123.102:0/460797643 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7fa2e80622e0 con 0x7fa2f0083640 2026-03-10T10:20:27.310 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK 2026-03-10T10:20:27.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.313+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/460797643 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7fa2d80776d0 msgr2=0x7fa2d8079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:27.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.313+0000 7fa2f5c0d700 1 --2- 192.168.123.102:0/460797643 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7fa2d80776d0 0x7fa2d8079b90 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fa2e000bc70 tx=0x7fa2e000d350 comp rx=0 tx=0).stop 2026-03-10T10:20:27.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.313+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/460797643 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2f0083640 msgr2=0x7fa2f012e400 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:27.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.313+0000 7fa2f5c0d700 1 --2- 192.168.123.102:0/460797643 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2f0083640 0x7fa2f012e400 secure :-1 s=READY pgs=125 cs=0 l=1 
rev1=1 crypto rx=0x7fa2e800e9d0 tx=0x7fa2e800ed90 comp rx=0 tx=0).stop 2026-03-10T10:20:27.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.313+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/460797643 shutdown_connections 2026-03-10T10:20:27.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.313+0000 7fa2f5c0d700 1 --2- 192.168.123.102:0/460797643 >> [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286] conn(0x7fa2d80776d0 0x7fa2d8079b90 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:27.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.313+0000 7fa2f5c0d700 1 --2- 192.168.123.102:0/460797643 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa2f0075a40 0x7fa2f0083100 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:27.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.313+0000 7fa2f5c0d700 1 --2- 192.168.123.102:0/460797643 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa2f0083640 0x7fa2f012e400 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:27.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.313+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/460797643 >> 192.168.123.102:0/460797643 conn(0x7fa2f006dae0 msgr2=0x7fa2f006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:27.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.313+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/460797643 shutdown_connections 2026-03-10T10:20:27.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:27.314+0000 7fa2f5c0d700 1 -- 192.168.123.102:0/460797643 wait complete. 
2026-03-10T10:20:27.318 INFO:tasks.workunit.client.1.vm05.stdout:8/993: write d7/d14/d15/f2e [3708892,21501] 0 2026-03-10T10:20:27.323 INFO:tasks.workunit.client.1.vm05.stdout:4/953: dread - d1/d64/da9/dae/d12a/dbf/fad zero size 2026-03-10T10:20:27.337 INFO:tasks.workunit.client.1.vm05.stdout:4/954: write d1/d31/d4b/d6d/f9f [1605729,128578] 0 2026-03-10T10:20:27.339 INFO:tasks.workunit.client.1.vm05.stdout:8/994: chown d7/d14/d24/d3f/d6a/db0/cf7 1290 1 2026-03-10T10:20:27.343 INFO:tasks.workunit.client.1.vm05.stdout:9/956: getdents d0/df/d11/dc6 0 2026-03-10T10:20:27.344 INFO:tasks.workunit.client.1.vm05.stdout:9/957: readlink d0/df/lfb 0 2026-03-10T10:20:27.344 INFO:tasks.workunit.client.1.vm05.stdout:4/955: unlink d1/l88 0 2026-03-10T10:20:27.345 INFO:tasks.workunit.client.1.vm05.stdout:4/956: creat d1/d31/d76/d109/f143 x:0 0 0 2026-03-10T10:20:27.347 INFO:tasks.workunit.client.1.vm05.stdout:9/958: readlink d0/d1/d13/de/d21/l3c 0 2026-03-10T10:20:27.348 INFO:tasks.workunit.client.1.vm05.stdout:4/957: creat d1/d64/da9/dae/dfc/d108/f144 x:0 0 0 2026-03-10T10:20:27.350 INFO:tasks.workunit.client.1.vm05.stdout:9/959: truncate d0/d1/d13/f106 764786 0 2026-03-10T10:20:27.350 INFO:tasks.workunit.client.1.vm05.stdout:8/995: write d7/f9 [9114246,99802] 0 2026-03-10T10:20:27.352 INFO:tasks.workunit.client.1.vm05.stdout:4/958: mknod d1/d3/d65/de0/de9/c145 0 2026-03-10T10:20:27.353 INFO:tasks.workunit.client.1.vm05.stdout:9/960: symlink d0/d1/dcc/l14d 0 2026-03-10T10:20:27.355 INFO:tasks.workunit.client.1.vm05.stdout:4/959: dwrite d1/d31/d4b/d6d/fbc [0,4194304] 0 2026-03-10T10:20:27.356 INFO:tasks.workunit.client.1.vm05.stdout:8/996: dwrite d7/d14/d15/f140 [4194304,4194304] 0 2026-03-10T10:20:27.361 INFO:tasks.workunit.client.1.vm05.stdout:9/961: unlink d0/d1/dcc/l14d 0 2026-03-10T10:20:27.363 INFO:tasks.workunit.client.1.vm05.stdout:9/962: chown d0/f45 1702 1 2026-03-10T10:20:27.366 INFO:tasks.workunit.client.1.vm05.stdout:8/997: rename d7/d14/d62/d90/dac/df4 to 
d7/d14/d3a/d49/d65/d145 0 2026-03-10T10:20:27.367 INFO:tasks.workunit.client.1.vm05.stdout:4/960: stat d1/d31/dc/d40/d63/l87 0 2026-03-10T10:20:27.367 INFO:tasks.workunit.client.1.vm05.stdout:9/963: dread d0/df/fb8 [0,4194304] 0 2026-03-10T10:20:27.368 INFO:tasks.workunit.client.1.vm05.stdout:8/998: fsync d7/d14/d15/da7/f123 0 2026-03-10T10:20:27.368 INFO:tasks.workunit.client.1.vm05.stdout:9/964: readlink d0/df/lfb 0 2026-03-10T10:20:27.371 INFO:tasks.workunit.client.1.vm05.stdout:4/961: dwrite d1/d31/d4b/ff0 [0,4194304] 0 2026-03-10T10:20:27.375 INFO:tasks.workunit.client.1.vm05.stdout:8/999: creat d7/d14/d24/d3f/d6a/d8a/d125/d13e/f146 x:0 0 0 2026-03-10T10:20:27.389 INFO:tasks.workunit.client.1.vm05.stdout:4/962: fsync d1/d31/f7a 0 2026-03-10T10:20:27.404 INFO:tasks.workunit.client.1.vm05.stdout:4/963: getdents d1/d31 0 2026-03-10T10:20:27.405 INFO:tasks.workunit.client.1.vm05.stdout:4/964: read d1/d31/dc/d40/d45/ded/f120 [66796,38588] 0 2026-03-10T10:20:27.444 INFO:tasks.workunit.client.1.vm05.stdout:9/965: write d0/d1/f7b [1615250,14861] 0 2026-03-10T10:20:27.444 INFO:tasks.workunit.client.1.vm05.stdout:9/966: stat d0/d1/d13/d26/f58 0 2026-03-10T10:20:27.445 INFO:tasks.workunit.client.1.vm05.stdout:4/965: truncate d1/d31/d4b/d6d/fbc 280294 0 2026-03-10T10:20:27.449 INFO:tasks.workunit.client.1.vm05.stdout:9/967: rmdir d0/d1/d13/de 39 2026-03-10T10:20:27.456 INFO:tasks.workunit.client.1.vm05.stdout:4/966: link d1/d64/da9/dae/d12a/dbf/ca0 d1/d31/c146 0 2026-03-10T10:20:27.465 INFO:tasks.workunit.client.1.vm05.stdout:9/968: dwrite d0/d1/d13/f106 [0,4194304] 0 2026-03-10T10:20:27.467 INFO:tasks.workunit.client.1.vm05.stdout:4/967: write d1/d64/da9/dae/d12a/fd1 [521198,53652] 0 2026-03-10T10:20:27.468 INFO:tasks.workunit.client.1.vm05.stdout:4/968: write d1/d64/da9/dae/d12a/fd1 [1849176,14967] 0 2026-03-10T10:20:27.476 INFO:tasks.workunit.client.1.vm05.stdout:9/969: creat d0/df/d74/d8c/d8f/ddd/f14e x:0 0 0 2026-03-10T10:20:27.481 
INFO:tasks.workunit.client.1.vm05.stdout:9/970: mkdir d0/df/d74/d8c/d8f/ddd/d14f 0 2026-03-10T10:20:27.484 INFO:tasks.workunit.client.1.vm05.stdout:4/969: rename d1/d3/d65/ddb/fe7 to d1/d31/dc/d137/f147 0 2026-03-10T10:20:27.493 INFO:tasks.workunit.client.1.vm05.stdout:9/971: symlink d0/df/d74/d8c/l150 0 2026-03-10T10:20:27.493 INFO:tasks.workunit.client.1.vm05.stdout:9/972: fdatasync d0/d1/d13/de/d93/fbd 0 2026-03-10T10:20:27.493 INFO:tasks.workunit.client.1.vm05.stdout:9/973: chown d0/df/f131 21 1 2026-03-10T10:20:27.493 INFO:tasks.workunit.client.1.vm05.stdout:9/974: getdents d0/df/d74/d8c/de4/d104/d107/d12f 0 2026-03-10T10:20:27.509 INFO:tasks.workunit.client.1.vm05.stdout:4/970: dwrite d1/d31/d4b/f114 [0,4194304] 0 2026-03-10T10:20:27.513 INFO:tasks.workunit.client.1.vm05.stdout:9/975: dwrite d0/d1/d16/f92 [0,4194304] 0 2026-03-10T10:20:27.518 INFO:tasks.workunit.client.1.vm05.stdout:4/971: creat d1/d31/d72/f148 x:0 0 0 2026-03-10T10:20:27.519 INFO:tasks.workunit.client.1.vm05.stdout:4/972: chown d1/d31/dc/d40/d45/daa/fd4 379778 1 2026-03-10T10:20:27.523 INFO:tasks.workunit.client.1.vm05.stdout:9/976: symlink d0/d70/l151 0 2026-03-10T10:20:27.530 INFO:tasks.workunit.client.1.vm05.stdout:9/977: getdents d0/df/d74/d8c/de4/df3 0 2026-03-10T10:20:27.546 INFO:tasks.workunit.client.1.vm05.stdout:9/978: fsync d0/df/d11/f84 0 2026-03-10T10:20:27.557 INFO:tasks.workunit.client.1.vm05.stdout:4/973: write d1/fb6 [4764319,49247] 0 2026-03-10T10:20:27.560 INFO:tasks.workunit.client.1.vm05.stdout:9/979: write d0/d1/d16/f5c [788767,22842] 0 2026-03-10T10:20:27.564 INFO:tasks.workunit.client.1.vm05.stdout:4/974: getdents d1/d31 0 2026-03-10T10:20:27.564 INFO:tasks.workunit.client.1.vm05.stdout:4/975: readlink d1/d31/l4c 0 2026-03-10T10:20:27.565 INFO:tasks.workunit.client.1.vm05.stdout:9/980: symlink d0/d1/d13/de/l152 0 2026-03-10T10:20:27.567 INFO:tasks.workunit.client.1.vm05.stdout:9/981: mknod d0/d1/d13/de/d93/c153 0 2026-03-10T10:20:27.570 
INFO:tasks.workunit.client.1.vm05.stdout:9/982: truncate d0/df/d74/d90/fa4 3418186 0 2026-03-10T10:20:27.579 INFO:tasks.workunit.client.1.vm05.stdout:4/976: dwrite d1/d3/f10 [0,4194304] 0 2026-03-10T10:20:27.587 INFO:tasks.workunit.client.1.vm05.stdout:9/983: dread d0/d1/d13/f11a [0,4194304] 0 2026-03-10T10:20:27.589 INFO:tasks.workunit.client.1.vm05.stdout:9/984: creat d0/df/d74/d8c/d8f/ddd/de6/f154 x:0 0 0 2026-03-10T10:20:27.589 INFO:tasks.workunit.client.1.vm05.stdout:4/977: mknod d1/d31/d76/dac/d12e/c149 0 2026-03-10T10:20:27.590 INFO:tasks.workunit.client.1.vm05.stdout:9/985: mknod d0/d70/c155 0 2026-03-10T10:20:27.591 INFO:tasks.workunit.client.1.vm05.stdout:9/986: fsync d0/df/d74/f8a 0 2026-03-10T10:20:27.591 INFO:tasks.workunit.client.1.vm05.stdout:4/978: chown d1/d31/c91 10147607 1 2026-03-10T10:20:27.591 INFO:tasks.workunit.client.1.vm05.stdout:4/979: dread - d1/d31/d72/f148 zero size 2026-03-10T10:20:27.594 INFO:tasks.workunit.client.1.vm05.stdout:9/987: rmdir d0/d1/d13/de/ddf/df5 39 2026-03-10T10:20:27.596 INFO:tasks.workunit.client.1.vm05.stdout:9/988: link d0/f28 d0/d70/d10d/f156 0 2026-03-10T10:20:27.600 INFO:tasks.workunit.client.1.vm05.stdout:4/980: dread d1/d31/dc/d40/d45/daa/fd4 [0,4194304] 0 2026-03-10T10:20:27.601 INFO:tasks.workunit.client.1.vm05.stdout:4/981: readlink d1/d31/dc/d40/l98 0 2026-03-10T10:20:27.604 INFO:tasks.workunit.client.1.vm05.stdout:4/982: link d1/d31/dc/d40/d45/ded/dfb/d105/c132 d1/d3/d115/c14a 0 2026-03-10T10:20:27.618 INFO:tasks.workunit.client.1.vm05.stdout:9/989: write d0/df/f97 [1605256,55366] 0 2026-03-10T10:20:27.621 INFO:tasks.workunit.client.1.vm05.stdout:9/990: truncate d0/d1/d13/f6b 2040131 0 2026-03-10T10:20:27.623 INFO:tasks.workunit.client.1.vm05.stdout:9/991: unlink d0/d1/d13/de/cd1 0 2026-03-10T10:20:27.623 INFO:tasks.workunit.client.1.vm05.stdout:9/992: write d0/d1/d13/de/d93/fbd [3593247,73348] 0 2026-03-10T10:20:27.627 INFO:tasks.workunit.client.1.vm05.stdout:9/993: symlink d0/d1/d13/d26/l157 0 
2026-03-10T10:20:27.629 INFO:tasks.workunit.client.1.vm05.stdout:9/994: mknod d0/d1/d13/d26/c158 0 2026-03-10T10:20:27.629 INFO:tasks.workunit.client.1.vm05.stdout:9/995: chown d0/df/d11 855 1 2026-03-10T10:20:27.630 INFO:tasks.workunit.client.1.vm05.stdout:9/996: truncate d0/d70/d10d/f149 4154273 0 2026-03-10T10:20:27.631 INFO:tasks.workunit.client.1.vm05.stdout:9/997: unlink d0/d70/f79 0 2026-03-10T10:20:27.632 INFO:tasks.workunit.client.1.vm05.stdout:9/998: rmdir d0/d70/d10d 39 2026-03-10T10:20:27.634 INFO:tasks.workunit.client.1.vm05.stdout:9/999: mknod d0/df/d74/d8c/de4/c159 0 2026-03-10T10:20:27.636 INFO:tasks.workunit.client.1.vm05.stdout:4/983: write d1/d31/dc/d40/d63/f90 [4310754,112157] 0 2026-03-10T10:20:27.638 INFO:tasks.workunit.client.1.vm05.stdout:4/984: mkdir d1/d31/d76/d109/d14b 0 2026-03-10T10:20:27.643 INFO:tasks.workunit.client.1.vm05.stdout:4/985: dwrite d1/d31/dc/d40/d45/daa/f133 [0,4194304] 0 2026-03-10T10:20:27.650 INFO:tasks.workunit.client.1.vm05.stdout:4/986: link d1/d31/d4b/d6d/f85 d1/d31/d76/dac/d12e/f14c 0 2026-03-10T10:20:27.650 INFO:tasks.workunit.client.1.vm05.stdout:4/987: dread - d1/d31/d4b/d6d/f139 zero size 2026-03-10T10:20:27.651 INFO:tasks.workunit.client.1.vm05.stdout:4/988: fdatasync d1/d31/d4b/d6d/fbc 0 2026-03-10T10:20:27.652 INFO:tasks.workunit.client.1.vm05.stdout:4/989: stat d1/d64/da9/dae/d12a/dbf/fff 0 2026-03-10T10:20:27.653 INFO:tasks.workunit.client.1.vm05.stdout:4/990: mknod d1/d31/c14d 0 2026-03-10T10:20:27.655 INFO:tasks.workunit.client.1.vm05.stdout:4/991: mknod d1/d31/dc/d40/d45/ded/c14e 0 2026-03-10T10:20:27.656 INFO:tasks.workunit.client.1.vm05.stdout:4/992: fsync d1/d31/dc/d40/f7d 0 2026-03-10T10:20:27.676 INFO:tasks.workunit.client.1.vm05.stdout:4/993: sync 2026-03-10T10:20:27.691 INFO:tasks.workunit.client.1.vm05.stdout:4/994: write d1/d64/da9/dae/d12a/dbf/fad [352279,98099] 0 2026-03-10T10:20:27.694 INFO:tasks.workunit.client.1.vm05.stdout:4/995: creat d1/d31/dc/d40/d45/ded/dfb/d105/f14f x:0 0 0 
2026-03-10T10:20:27.694 INFO:tasks.workunit.client.1.vm05.stdout:4/996: chown d1/d31/l4c 362031 1 2026-03-10T10:20:27.697 INFO:tasks.workunit.client.1.vm05.stdout:4/997: dread d1/d31/dc/f69 [0,4194304] 0 2026-03-10T10:20:27.703 INFO:tasks.workunit.client.1.vm05.stdout:4/998: getdents d1/d31/dc/d40/d45/ded 0 2026-03-10T10:20:27.705 INFO:tasks.workunit.client.1.vm05.stdout:4/999: read d1/d31/dc/d40/fc3 [5816,109105] 0 2026-03-10T10:20:27.709 INFO:tasks.workunit.client.1.vm05.stderr:+ rm -rf -- ./tmp.FLwvv0QMyN 2026-03-10T10:20:27.771 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:27 vm05.local ceph-mon[59051]: pgmap v12: 65 pgs: 65 active+clean; 4.0 GiB data, 14 GiB used, 106 GiB / 120 GiB avail; 49 MiB/s rd, 117 MiB/s wr, 322 op/s 2026-03-10T10:20:27.771 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:27 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/2025354105' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:20:27.771 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:27 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:27.771 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:27 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:27.771 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:27 vm05.local ceph-mon[59051]: from='client.? 192.168.123.102:0/460797643' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:20:27.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:27 vm02.local ceph-mon[50200]: pgmap v12: 65 pgs: 65 active+clean; 4.0 GiB data, 14 GiB used, 106 GiB / 120 GiB avail; 49 MiB/s rd, 117 MiB/s wr, 322 op/s 2026-03-10T10:20:27.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:27 vm02.local ceph-mon[50200]: from='client.? 
192.168.123.102:0/2025354105' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:20:27.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:27 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:27.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:27 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:27.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:27 vm02.local ceph-mon[50200]: from='client.? 192.168.123.102:0/460797643' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:20:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='client.14762 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:20:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:20:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:20:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm02.zmavgl"}]: dispatch 2026-03-10T10:20:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm02.zmavgl"}]: dispatch 2026-03-10T10:20:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm02.zmavgl"}]': finished 2026-03-10T10:20:28.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm05.coparq"}]: dispatch 2026-03-10T10:20:28.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm05.coparq"}]: dispatch 2026-03-10T10:20:28.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm05.coparq"}]': finished 2026-03-10T10:20:28.538 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:28 vm05.local ceph-mon[59051]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='client.14762 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' 
2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm02.zmavgl"}]: dispatch 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm02.zmavgl"}]: dispatch 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm02.zmavgl"}]': finished 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm05.coparq"}]: dispatch 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm05.coparq"}]: dispatch 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 ' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm05.coparq"}]': finished 2026-03-10T10:20:28.655 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:28 vm02.local ceph-mon[50200]: from='mgr.14720 192.168.123.102:0/3707671034' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T10:20:29.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:29 vm02.local systemd[1]: Stopping Ceph mon.vm02 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 
2026-03-10T10:20:29.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:29 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02[50196]: 2026-03-10T10:20:29.391+0000 7f9310468700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm02 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T10:20:29.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:29 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02[50196]: 2026-03-10T10:20:29.391+0000 7f9310468700 -1 mon.vm02@0(leader) e2 *** Got Signal Terminated *** 2026-03-10T10:20:29.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:29 vm02.local podman[110005]: 2026-03-10 10:20:29.534154076 +0000 UTC m=+0.161616698 container died ab92d831cc1d0e24669aca88ce7ab5f62bbdd2ea45e7f8c4ada2277bd1fd5ffc (image=quay.io/ceph/ceph:v18.2.1, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, org.label-schema.schema-version=1.0, RELEASE=HEAD, org.label-schema.build-date=20240222, CEPH_POINT_RELEASE=-18.2.1, GIT_CLEAN=True, maintainer=Guillaume Abrioux , org.label-schema.license=GPLv2, GIT_BRANCH=HEAD, io.buildah.version=1.29.1, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.vendor=CentOS, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd) 2026-03-10T10:20:29.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:29 vm02.local podman[110005]: 2026-03-10 10:20:29.591735698 +0000 UTC m=+0.219198330 container remove ab92d831cc1d0e24669aca88ce7ab5f62bbdd2ea45e7f8c4ada2277bd1fd5ffc (image=quay.io/ceph/ceph:v18.2.1, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , 
GIT_CLEAN=True, org.label-schema.name=CentOS Stream 8 Base Image, CEPH_POINT_RELEASE=-18.2.1, GIT_BRANCH=HEAD, ceph=True, org.label-schema.build-date=20240222, RELEASE=HEAD, io.buildah.version=1.29.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd) 2026-03-10T10:20:29.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:29 vm02.local bash[110005]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02 2026-03-10T10:20:29.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:29 vm02.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm02.service: Deactivated successfully. 2026-03-10T10:20:29.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:29 vm02.local systemd[1]: Stopped Ceph mon.vm02 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 2026-03-10T10:20:29.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:29 vm02.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm02.service: Consumed 6.650s CPU time. 2026-03-10T10:20:30.259 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local systemd[1]: Starting Ceph mon.vm02 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 
2026-03-10T10:20:30.259 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local podman[110113]: 2026-03-10 10:20:30.190391632 +0000 UTC m=+0.021697800 container create 1a2a2cb182f42eeba0c45f68e3ffdbc7a7b39ef8f68a7408e1f0975b498a6801 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True) 2026-03-10T10:20:30.259 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local podman[110113]: 2026-03-10 10:20:30.24454821 +0000 UTC m=+0.075854388 container init 1a2a2cb182f42eeba0c45f68e3ffdbc7a7b39ef8f68a7408e1f0975b498a6801 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release 
Team , io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T10:20:30.259 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local podman[110113]: 2026-03-10 10:20:30.250842176 +0000 UTC m=+0.082148344 container start 1a2a2cb182f42eeba0c45f68e3ffdbc7a7b39ef8f68a7408e1f0975b498a6801 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T10:20:30.259 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local bash[110113]: 1a2a2cb182f42eeba0c45f68e3ffdbc7a7b39ef8f68a7408e1f0975b498a6801 2026-03-10T10:20:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local podman[110113]: 2026-03-10 10:20:30.181036024 +0000 UTC m=+0.012342201 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local systemd[1]: Started Ceph mon.vm02 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 
2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: set uid:gid to 167:167 (ceph:ceph) 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: pidfile_write: ignore empty --pid-file 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: load: jerasure load: lrc 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: RocksDB version: 7.9.2 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Git sha 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: DB SUMMARY 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: DB Session ID: FAEL5U5IKNG76PZ7H3WL 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: CURRENT file: CURRENT 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: IDENTITY file: IDENTITY 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: MANIFEST file: MANIFEST-000015 size: 776 Bytes 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm02/store.db 
dir, Total Num: 1, files: 000023.sst 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm02/store.db: 000021.log size: 2052682 ; 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.error_if_exists: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.create_if_missing: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.paranoid_checks: 1 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.env: 0x55b838f02dc0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.fs: PosixFileSystem 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.info_log: 0x55b83b657900 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_file_opening_threads: 16 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.statistics: (nil) 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 
10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.use_fsync: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_log_file_size: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.keep_log_file_num: 1000 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.recycle_log_file_num: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.allow_fallocate: 1 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.allow_mmap_reads: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.allow_mmap_writes: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.use_direct_reads: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.create_missing_column_families: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.db_log_dir: 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local 
ceph-mon[110129]: rocksdb: Options.wal_dir: 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T10:20:30.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.write_buffer_manager: 0x55b83b65b900 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T10:20:30.531 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.enable_thread_tracking: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.unordered_write: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.row_cache: None 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.wal_filter: None 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.avoid_flush_during_recovery: 0 
2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.two_write_queues: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.manual_wal_flush: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.wal_compression: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.atomic_flush: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.log_readahead_size: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.bgerror_resume_retry_interval: 
1000000 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.allow_data_in_errors: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.db_host_id: __hostname__ 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_background_jobs: 2 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_background_compactions: -1 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_subcompactions: 1 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T10:20:30.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_total_wal_size: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local 
ceph-mon[110129]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_open_files: -1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.bytes_per_sync: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compaction_readahead_size: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_background_flushes: -1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Compression algorithms supported: 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: kZSTD supported: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: kXpressCompression supported: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: kBZip2Compression supported: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local 
ceph-mon[110129]: rocksdb: kLZ4Compression supported: 1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: kZlibCompression supported: 1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: kLZ4HCCompression supported: 1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: kSnappyCompression supported: 1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm02/store.db/MANIFEST-000015 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.merge_operator: 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compaction_filter: None 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compaction_filter_factory: None 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: 
Options.sst_partitioner_factory: None 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b83b657580) 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: cache_index_and_filter_blocks: 1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: pin_top_level_index_and_filter: 1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: index_type: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: data_block_index_type: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: index_shortening: 1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: checksum: 4 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: no_block_cache: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_cache: 0x55b83b67a9b0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_cache_name: BinnedLRUCache 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_cache_options: 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: capacity : 536870912 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: num_shard_bits : 4 
2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: strict_capacity_limit : 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: high_pri_pool_ratio: 0.000 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_cache_compressed: (nil) 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: persistent_cache: (nil) 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_size: 4096 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_size_deviation: 10 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_restart_interval: 16 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: index_block_restart_interval: 1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: metadata_block_size: 4096 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: partition_filters: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: use_delta_encoding: 1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: filter_policy: bloomfilter 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: whole_key_filtering: 1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: verify_compression: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: read_amp_bytes_per_bit: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: format_version: 5 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: enable_index_compression: 1 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: block_align: 0 2026-03-10T10:20:30.532 INFO:journalctl@ceph.mon.vm02.vm02.stdout: max_auto_readahead_size: 262144 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout: prepopulate_block_cache: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout: initial_auto_readahead_size: 8192 2026-03-10T10:20:30.533 
INFO:journalctl@ceph.mon.vm02.vm02.stdout: num_file_reads_for_auto_readahead: 2 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.write_buffer_size: 33554432 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_write_buffer_number: 2 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compression: NoCompression 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.bottommost_compression: Disabled 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.prefix_extractor: nullptr 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.num_levels: 7 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: 
Options.bottommost_compression_opts.level: 32767 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 
10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.target_file_size_base: 67108864 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local 
ceph-mon[110129]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 
10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: 
rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.inplace_update_support: 0 2026-03-10T10:20:30.533 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.bloom_locality: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.max_successive_merges: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 
vm02.local ceph-mon[110129]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.ttl: 2592000 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.enable_blob_files: false 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.min_blob_size: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 
2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm02/store.db/MANIFEST-000015 succeeded,manifest_file_number is 15, next_file_number is 25, last_sequence is 8251, log_number is 21,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 21 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 21 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 10a06b0e-254c-477a-90be-fd62a43f94b6 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773138030289459, "job": 1, "event": "recovery_started", "wal_files": [21]} 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #21 mode 2 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 
10:20:30 vm02.local ceph-mon[110129]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773138030297522, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 26, "file_size": 1860002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8252, "largest_seqno": 8864, "table_properties": {"data_size": 1855982, "index_size": 2211, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 7235, "raw_average_key_size": 23, "raw_value_size": 1849062, "raw_average_value_size": 6122, "num_data_blocks": 106, "num_entries": 302, "num_filter_entries": 302, "num_deletions": 2, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773138030, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "10a06b0e-254c-477a-90be-fd62a43f94b6", "db_session_id": "FAEL5U5IKNG76PZ7H3WL", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773138030298604, "job": 1, "event": "recovery_finished"} 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: [db/version_set.cc:5047] Creating manifest 28 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 
10:20:30 vm02.local ceph-mon[110129]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm02/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b83b67ce00 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: DB pointer 0x55b83b68c000 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: ** DB Stats ** 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: ** Compaction Stats [default] ** 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: L0 1/0 1.77 MB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 262.8 0.01 0.00 1 0.007 0 0 0.0 0.0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: L6 1/0 6.48 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Sum 2/0 8.26 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 262.8 0.01 0.00 1 0.007 0 0 0.0 0.0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 262.8 0.01 0.00 1 0.007 0 0 0.0 0.0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: ** Compaction Stats [default] ** 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 262.8 0.01 0.00 1 0.007 0 0 0.0 0.0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T10:20:30.534 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Flush(GB): cumulative 0.002, interval 0.002 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Cumulative compaction: 0.00 GB write, 81.94 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Interval compaction: 0.00 GB write, 81.94 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T10:20:30.535 
INFO:journalctl@ceph.mon.vm02.vm02.stdout: Block cache BinnedLRUCache@0x55b83b67a9b0#2 capacity: 512.00 MB usage: 27.34 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Block cache entry stats(count,size,portion): FilterBlock(2,8.94 KB,0.00170469%) IndexBlock(2,18.41 KB,0.00351071%) Misc(1,0.00 KB,0%) 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: starting mon.vm02 rank 0 at public addrs [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] at bind addrs [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon_data /var/lib/ceph/mon/ceph-vm02 fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: mon.vm02@-1(???) 
e2 preinit fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: mon.vm02@-1(???).mds e15 new map 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: mon.vm02@-1(???).mds e15 print_map 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: e15 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: btime 1970-01-01T00:00:00:000000+0000 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: legacy client fscid: 1 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Filesystem 'cephfs' (1) 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: fs_name cephfs 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: epoch 15 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: tableserver 0 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: root 0 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: session_timeout 60 2026-03-10T10:20:30.535 
INFO:journalctl@ceph.mon.vm02.vm02.stdout: session_autoclose 300 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: max_file_size 1099511627776 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: max_xattr_size 65536 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: required_client_features {} 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: last_failure 0 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: last_failure_osd_epoch 39 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: max_mds 1 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: in 0 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: up {0=14464} 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: failed 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: damaged 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: stopped 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: data_pools [3] 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: metadata_pool 2 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: inline_data disabled 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: balancer 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: bal_rank_mask -1 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: standby_count_wanted 1 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: qdb_cluster leader: 0 members: 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 
[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: [mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: Standby daemons: 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: [mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout: [mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: mon.vm02@-1(???).osd e43 crush map has features 3314933000852226048, adjusting msgr requires 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: mon.vm02@-1(???).osd e43 crush map has features 288514051259236352, adjusting msgr requires 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: mon.vm02@-1(???).osd e43 crush map has features 288514051259236352, adjusting msgr requires 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: mon.vm02@-1(???).osd e43 crush map has features 288514051259236352, adjusting msgr 
requires 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: mon.vm02@-1(???).paxosservice(auth 1..21) refresh upgraded, format 0 -> 3 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: mon.vm02@-1(???).mgr e0 loading version 30 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: mon.vm02@-1(???).mgr e30 active server: [v2:192.168.123.102:6800/2642809286,v1:192.168.123.102:6801/2642809286](14720) 2026-03-10T10:20:30.535 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:30 vm02.local ceph-mon[110129]: mon.vm02@-1(???).mgr e30 mkfs or daemon transitioned to available, loading commands 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.105:0/2045112720' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/key"}]: dispatch 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: from='mgr.? 
192.168.123.105:0/2045112720' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: pgmap v14: 65 pgs: 65 active+clean; 4.0 GiB data, 14 GiB used, 106 GiB / 120 GiB avail; 45 MiB/s rd, 108 MiB/s wr, 303 op/s 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: mon.vm02 calling monitor election 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: mon.vm02 is new leader, mons vm02,vm05 in quorum (ranks 0,1) 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: monmap epoch 2 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: last_changed 2026-03-10T10:15:25.674350+0000 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: created 2026-03-10T10:14:07.630583+0000 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: min_mon_release 18 (reef) 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: election_strategy: 1 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.vm02 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: fsmap 
cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: osdmap e43: 6 total, 6 up, 6 in 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: mgrmap e30: vm02.zmavgl(active, since 22s), standbys: vm05.coparq 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: overall HEALTH_OK 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: from='mgr.14720 ' entity='' 2026-03-10T10:20:31.322 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:31 vm05.local ceph-mon[59051]: mgrmap e31: vm02.zmavgl(active, since 22s), standbys: vm05.coparq 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: from='mgr.? 192.168.123.105:0/2045112720' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/key"}]: dispatch 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: from='mgr.? 
192.168.123.105:0/2045112720' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: pgmap v14: 65 pgs: 65 active+clean; 4.0 GiB data, 14 GiB used, 106 GiB / 120 GiB avail; 45 MiB/s rd, 108 MiB/s wr, 303 op/s 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: mon.vm02 calling monitor election 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: mon.vm02 is new leader, mons vm02,vm05 in quorum (ranks 0,1) 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: monmap epoch 2 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: last_changed 2026-03-10T10:15:25.674350+0000 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: created 2026-03-10T10:14:07.630583+0000 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: min_mon_release 18 (reef) 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: election_strategy: 1 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.vm02 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: 
fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: osdmap e43: 6 total, 6 up, 6 in 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: mgrmap e30: vm02.zmavgl(active, since 22s), standbys: vm05.coparq 2026-03-10T10:20:31.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: overall HEALTH_OK 2026-03-10T10:20:31.364 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: from='mgr.14720 ' entity='' 2026-03-10T10:20:31.364 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:31 vm02.local ceph-mon[110129]: mgrmap e31: vm02.zmavgl(active, since 22s), standbys: vm05.coparq 2026-03-10T10:20:35.843 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:35 vm02.local ceph-mon[110129]: Standby manager daemon vm05.coparq restarted 2026-03-10T10:20:35.843 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:35 vm02.local ceph-mon[110129]: Standby manager daemon vm05.coparq started 2026-03-10T10:20:35.843 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:35 vm02.local ceph-mon[110129]: from='mgr.? 192.168.123.105:0/662651387' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/crt"}]: dispatch 2026-03-10T10:20:35.843 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:35 vm02.local ceph-mon[110129]: from='mgr.? 192.168.123.105:0/662651387' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T10:20:35.843 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:35 vm02.local ceph-mon[110129]: from='mgr.? 
192.168.123.105:0/662651387' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/key"}]: dispatch 2026-03-10T10:20:35.843 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:35 vm02.local ceph-mon[110129]: from='mgr.? 192.168.123.105:0/662651387' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T10:20:36.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:35 vm05.local ceph-mon[59051]: Standby manager daemon vm05.coparq restarted 2026-03-10T10:20:36.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:35 vm05.local ceph-mon[59051]: Standby manager daemon vm05.coparq started 2026-03-10T10:20:36.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:35 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.105:0/662651387' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/crt"}]: dispatch 2026-03-10T10:20:36.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:35 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.105:0/662651387' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T10:20:36.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:35 vm05.local ceph-mon[59051]: from='mgr.? 192.168.123.105:0/662651387' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.coparq/key"}]: dispatch 2026-03-10T10:20:36.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:35 vm05.local ceph-mon[59051]: from='mgr.? 
192.168.123.105:0/662651387' entity='mgr.vm05.coparq' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T10:20:36.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: mgrmap e32: vm02.zmavgl(active, since 27s), standbys: vm05.coparq 2026-03-10T10:20:36.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: Active manager daemon vm02.zmavgl restarted 2026-03-10T10:20:36.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: Activating manager daemon vm02.zmavgl 2026-03-10T10:20:36.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: osdmap e44: 6 total, 6 up, 6 in 2026-03-10T10:20:36.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: mgrmap e33: vm02.zmavgl(active, starting, since 0.0440127s), standbys: vm05.coparq 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.zymcrs"}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 
10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.stcvsz"}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": "vm02.zmavgl", "id": "vm02.zmavgl"}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": "vm05.coparq", "id": "vm05.coparq"}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local 
ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T10:20:36.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:36 vm02.local ceph-mon[110129]: Manager daemon vm02.zmavgl is now available 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: mgrmap e32: vm02.zmavgl(active, since 27s), standbys: vm05.coparq 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: Active manager daemon vm02.zmavgl restarted 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: Activating manager daemon vm02.zmavgl 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: osdmap e44: 6 total, 6 up, 6 in 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: mgrmap e33: vm02.zmavgl(active, starting, since 0.0440127s), 
standbys: vm05.coparq 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.zymcrs"}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.stcvsz"}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": "vm02.zmavgl", "id": "vm02.zmavgl"}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr metadata", "who": 
"vm05.coparq", "id": "vm05.coparq"}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: 
from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T10:20:37.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:36 vm05.local ceph-mon[59051]: Manager daemon vm02.zmavgl is now available 2026-03-10T10:20:37.990 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:37.990 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:20:37.990 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:20:37.990 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/trash_purge_schedule"}]: dispatch 2026-03-10T10:20:37.990 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:37 vm02.local ceph-mon[110129]: mgrmap e34: vm02.zmavgl(active, since 1.05935s), standbys: vm05.coparq 2026-03-10T10:20:38.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:37 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:38.072 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:37 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:20:38.072 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:37 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:20:38.072 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:37 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.zmavgl/trash_purge_schedule"}]: dispatch 2026-03-10T10:20:38.072 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:37 vm05.local ceph-mon[59051]: mgrmap e34: vm02.zmavgl(active, since 1.05935s), standbys: vm05.coparq 2026-03-10T10:20:38.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:38 vm02.local ceph-mon[110129]: pgmap v3: 65 pgs: 65 active+clean; 3.1 GiB data, 11 GiB used, 109 GiB / 120 GiB avail 2026-03-10T10:20:38.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:38 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:39.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:38 vm05.local ceph-mon[59051]: pgmap v3: 65 pgs: 65 active+clean; 3.1 GiB data, 11 GiB used, 109 GiB / 120 GiB avail 2026-03-10T10:20:39.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:38 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:39.747 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:39 vm02.local ceph-mon[110129]: [10/Mar/2026:10:20:38] ENGINE Bus STARTING 2026-03-10T10:20:39.747 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:39 vm02.local ceph-mon[110129]: [10/Mar/2026:10:20:38] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T10:20:39.747 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:39 vm02.local 
ceph-mon[110129]: pgmap v4: 65 pgs: 65 active+clean; 3.1 GiB data, 11 GiB used, 109 GiB / 120 GiB avail 2026-03-10T10:20:39.747 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:39 vm02.local ceph-mon[110129]: [10/Mar/2026:10:20:38] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T10:20:39.747 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:39 vm02.local ceph-mon[110129]: [10/Mar/2026:10:20:38] ENGINE Bus STARTED 2026-03-10T10:20:39.747 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:39 vm02.local ceph-mon[110129]: [10/Mar/2026:10:20:38] ENGINE Client ('192.168.123.102', 36460) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T10:20:39.747 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:39 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:39.747 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:39 vm02.local ceph-mon[110129]: mgrmap e35: vm02.zmavgl(active, since 2s), standbys: vm05.coparq 2026-03-10T10:20:39.747 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:39 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:39.747 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:39 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:39.973 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:39 vm05.local ceph-mon[59051]: [10/Mar/2026:10:20:38] ENGINE Bus STARTING 2026-03-10T10:20:39.973 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:39 vm05.local ceph-mon[59051]: [10/Mar/2026:10:20:38] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T10:20:39.973 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:39 vm05.local ceph-mon[59051]: pgmap v4: 65 pgs: 65 active+clean; 3.1 GiB data, 11 GiB used, 109 GiB / 120 GiB avail 
2026-03-10T10:20:39.973 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:39 vm05.local ceph-mon[59051]: [10/Mar/2026:10:20:38] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T10:20:39.973 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:39 vm05.local ceph-mon[59051]: [10/Mar/2026:10:20:38] ENGINE Bus STARTED 2026-03-10T10:20:39.973 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:39 vm05.local ceph-mon[59051]: [10/Mar/2026:10:20:38] ENGINE Client ('192.168.123.102', 36460) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T10:20:39.973 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:39 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:39.973 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:39 vm05.local ceph-mon[59051]: mgrmap e35: vm02.zmavgl(active, since 2s), standbys: vm05.coparq 2026-03-10T10:20:39.973 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:39 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:39.973 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:39 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: pgmap v5: 65 pgs: 65 active+clean; 3.1 GiB data, 11 GiB used, 109 GiB / 120 GiB avail 2026-03-10T10:20:41.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 
vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:20:41.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:20:41.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:41.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:20:41.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T10:20:41.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T10:20:41.780 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:20:41.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:41 vm02.local ceph-mon[110129]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:20:41.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: pgmap v5: 65 pgs: 65 active+clean; 3.1 GiB data, 11 GiB used, 109 GiB / 120 GiB avail 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 
vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: Updating vm02:/etc/ceph/ceph.conf 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:20:41.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:41 vm05.local ceph-mon[59051]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.conf 2026-03-10T10:20:42.660 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:20:42.660 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:20:42.660 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: mgrmap e36: vm02.zmavgl(active, since 4s), standbys: vm05.coparq 2026-03-10T10:20:42.660 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: Updating 
vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:20:42.660 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:20:42.660 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:42.660 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:42.660 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:42.660 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:42.661 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:42.661 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:42.661 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:20:42.661 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T10:20:42.661 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: 
from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:42.661 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:20:42.661 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:20:42.661 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-mon[59051]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: mgrmap e36: vm02.zmavgl(active, since 4s), standbys: vm05.coparq 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: Updating vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: Updating vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/config/ceph.client.admin.keyring 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local 
ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 
2026-03-10T10:20:42.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:42 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:43.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local systemd[1]: Stopping Ceph mon.vm05 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 2026-03-10T10:20:43.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm05[59047]: 2026-03-10T10:20:42.883+0000 7efdcb906700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm05 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T10:20:43.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:42 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm05[59047]: 2026-03-10T10:20:42.883+0000 7efdcb906700 -1 mon.vm05@1(peon) e2 *** Got Signal Terminated *** 2026-03-10T10:20:43.375 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local podman[103471]: 2026-03-10 10:20:43.115606963 +0000 UTC m=+0.250458655 container died cea7d23f93a6a6b57a931b5a1e273e348649891bccec53821aaf026866d4ac70 (image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm05, CEPH_POINT_RELEASE=-18.2.1, GIT_BRANCH=HEAD, GIT_REPO=https://github.com/ceph/ceph-container.git, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.29.1, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, org.label-schema.build-date=20240222, GIT_CLEAN=True, RELEASE=HEAD, maintainer=Guillaume 
Abrioux , org.label-schema.license=GPLv2) 2026-03-10T10:20:43.375 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local podman[103471]: 2026-03-10 10:20:43.295728082 +0000 UTC m=+0.430579774 container remove cea7d23f93a6a6b57a931b5a1e273e348649891bccec53821aaf026866d4ac70 (image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm05, org.label-schema.schema-version=1.0, CEPH_POINT_RELEASE=-18.2.1, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, io.buildah.version=1.29.1, maintainer=Guillaume Abrioux , org.label-schema.name=CentOS Stream 8 Base Image, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, org.label-schema.vendor=CentOS, GIT_BRANCH=HEAD, GIT_CLEAN=True, org.label-schema.license=GPLv2, RELEASE=HEAD, org.label-schema.build-date=20240222) 2026-03-10T10:20:43.375 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local bash[103471]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm05 2026-03-10T10:20:43.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm05.service: Deactivated successfully. 2026-03-10T10:20:43.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local systemd[1]: Stopped Ceph mon.vm05 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 2026-03-10T10:20:43.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm05.service: Consumed 3.703s CPU time. 2026-03-10T10:20:43.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local systemd[1]: Starting Ceph mon.vm05 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 
2026-03-10T10:20:44.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local podman[103579]: 2026-03-10 10:20:43.794775959 +0000 UTC m=+0.031795865 container create 3fb75dafefb6541647f3a3bf7307caf03068fa86bf7555fdc182851c046aaac9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm05, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local podman[103579]: 2026-03-10 10:20:43.847493049 +0000 UTC m=+0.084512965 container init 3fb75dafefb6541647f3a3bf7307caf03068fa86bf7555fdc182851c046aaac9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm05, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.license=GPLv2, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223) 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local podman[103579]: 2026-03-10 10:20:43.850204856 +0000 UTC m=+0.087224762 container start 3fb75dafefb6541647f3a3bf7307caf03068fa86bf7555fdc182851c046aaac9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm05, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.license=GPLv2) 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local bash[103579]: 3fb75dafefb6541647f3a3bf7307caf03068fa86bf7555fdc182851c046aaac9 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local podman[103579]: 2026-03-10 10:20:43.784763275 +0000 UTC m=+0.021783192 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local systemd[1]: Started Ceph mon.vm05 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 
2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: set uid:gid to 167:167 (ceph:ceph) 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: pidfile_write: ignore empty --pid-file 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: load: jerasure load: lrc 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: RocksDB version: 7.9.2 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Git sha 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: DB SUMMARY 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: DB Session ID: BBR4R5VQUSJR7J8C5KOW 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: CURRENT file: CURRENT 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: IDENTITY file: IDENTITY 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: MANIFEST file: MANIFEST-000010 size: 669 Bytes 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm05/store.db 
dir, Total Num: 1, files: 000018.sst 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm05/store.db: 000016.log size: 6187744 ; 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.error_if_exists: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.create_if_missing: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.paranoid_checks: 1 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.env: 0x559dca4b7dc0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.fs: PosixFileSystem 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.info_log: 0x559dcbbb2d40 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_file_opening_threads: 16 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.statistics: (nil) 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 
10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.use_fsync: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_log_file_size: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.keep_log_file_num: 1000 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.recycle_log_file_num: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.allow_fallocate: 1 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.allow_mmap_reads: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.allow_mmap_writes: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.use_direct_reads: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.create_missing_column_families: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.db_log_dir: 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local 
ceph-mon[103593]: rocksdb: Options.wal_dir: 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T10:20:44.289 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.write_buffer_manager: 0x559dcbbb7900 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T10:20:44.290 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.enable_thread_tracking: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.unordered_write: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.row_cache: None 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.wal_filter: None 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.avoid_flush_during_recovery: 0 
2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.two_write_queues: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.manual_wal_flush: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.wal_compression: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.atomic_flush: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.log_readahead_size: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.bgerror_resume_retry_interval: 
1000000 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.allow_data_in_errors: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.db_host_id: __hostname__ 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_background_jobs: 2 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_background_compactions: -1 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_subcompactions: 1 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_total_wal_size: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local 
ceph-mon[103593]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_open_files: -1 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.bytes_per_sync: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compaction_readahead_size: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_background_flushes: -1 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Compression algorithms supported: 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: kZSTD supported: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: kXpressCompression supported: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: kBZip2Compression supported: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T10:20:44.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local 
ceph-mon[103593]: rocksdb: kLZ4Compression supported: 1 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: kZlibCompression supported: 1 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: kLZ4HCCompression supported: 1 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: kSnappyCompression supported: 1 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm05/store.db/MANIFEST-000010 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.merge_operator: 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compaction_filter: None 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compaction_filter_factory: None 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: 
Options.sst_partitioner_factory: None 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcbbb2660) 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: cache_index_and_filter_blocks: 1 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: pin_top_level_index_and_filter: 1 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: index_type: 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: data_block_index_type: 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: index_shortening: 1 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: checksum: 4 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: no_block_cache: 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_cache: 0x559dcbbd7350 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_cache_name: BinnedLRUCache 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_cache_options: 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: capacity : 536870912 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: num_shard_bits : 4 
2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: strict_capacity_limit : 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: high_pri_pool_ratio: 0.000 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_cache_compressed: (nil) 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: persistent_cache: (nil) 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_size: 4096 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_size_deviation: 10 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_restart_interval: 16 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: index_block_restart_interval: 1 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: metadata_block_size: 4096 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: partition_filters: 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: use_delta_encoding: 1 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: filter_policy: bloomfilter 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: whole_key_filtering: 1 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: verify_compression: 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: read_amp_bytes_per_bit: 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: format_version: 5 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: enable_index_compression: 1 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_align: 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: max_auto_readahead_size: 262144 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: prepopulate_block_cache: 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout: initial_auto_readahead_size: 8192 2026-03-10T10:20:44.291 
INFO:journalctl@ceph.mon.vm05.vm05.stdout: num_file_reads_for_auto_readahead: 2 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.write_buffer_size: 33554432 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_write_buffer_number: 2 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compression: NoCompression 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.bottommost_compression: Disabled 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.prefix_extractor: nullptr 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.num_levels: 7 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T10:20:44.291 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: 
Options.bottommost_compression_opts.level: 32767 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.target_file_size_base: 67108864 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local 
ceph-mon[103593]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: 
rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.inplace_update_support: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.bloom_locality: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.max_successive_merges: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 
vm05.local ceph-mon[103593]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.ttl: 2592000 2026-03-10T10:20:44.292 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.enable_blob_files: false 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.min_blob_size: 0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 
2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm05/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 20, last_sequence is 8205, log_number is 16,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 16 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 16 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b4603bda-8aba-46de-9e72-4f7fe28dcdeb 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773138043910452, "job": 1, "event": "recovery_started", "wal_files": [16]} 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #16 mode 2 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:20:43 vm05.local ceph-mon[103593]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773138043928303, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 21, "file_size": 3699254, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8210, "largest_seqno": 9285, "table_properties": {"data_size": 3693376, "index_size": 3616, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 11483, "raw_average_key_size": 23, "raw_value_size": 3682452, "raw_average_value_size": 7624, "num_data_blocks": 172, "num_entries": 483, "num_filter_entries": 483, "num_deletions": 2, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773138043, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b4603bda-8aba-46de-9e72-4f7fe28dcdeb", "db_session_id": "BBR4R5VQUSJR7J8C5KOW", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773138043928389, "job": 1, "event": "recovery_finished"} 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: [db/version_set.cc:5047] Creating manifest 23 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:20:43 vm05.local ceph-mon[103593]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm05/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559dcbbd8e00 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: DB pointer 0x559dcbce4000 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ** DB Stats ** 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ** Compaction Stats [default] ** 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: L0 1/0 3.53 MB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 314.2 0.01 0.00 1 0.011 0 0 0.0 0.0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: L6 1/0 6.48 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Sum 2/0 10.01 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 314.2 0.01 0.00 1 0.011 0 0 0.0 0.0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 314.2 0.01 0.00 1 0.011 0 0 0.0 0.0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ** Compaction Stats [default] ** 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 314.2 0.01 0.00 1 0.011 0 0 0.0 0.0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Flush(GB): cumulative 0.003, interval 0.003 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Cumulative compaction: 0.00 GB write, 121.64 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Interval compaction: 0.00 GB write, 121.64 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T10:20:44.293 
INFO:journalctl@ceph.mon.vm05.vm05.stdout: Block cache BinnedLRUCache@0x559dcbbd7350#2 capacity: 512.00 MB usage: 5.36 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6e-06 secs_since: 0 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Block cache entry stats(count,size,portion): FilterBlock(1,1.28 KB,0.000244379%) IndexBlock(1,4.08 KB,0.000777841%) Misc(1,0.00 KB,0%) 2026-03-10T10:20:44.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: starting mon.vm05 rank 1 at public addrs [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] at bind addrs [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon_data /var/lib/ceph/mon/ceph-vm05 fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: mon.vm05@-1(???) 
e2 preinit fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: mon.vm05@-1(???).mds e15 new map 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: mon.vm05@-1(???).mds e15 print_map 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: e15 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: btime 1970-01-01T00:00:00:000000+0000 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: legacy client fscid: 1 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Filesystem 'cephfs' (1) 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: fs_name cephfs 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: epoch 15 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: tableserver 0 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: root 0 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: session_timeout 60 2026-03-10T10:20:44.294 
INFO:journalctl@ceph.mon.vm05.vm05.stdout: session_autoclose 300 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: max_file_size 1099511627776 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: max_xattr_size 65536 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: required_client_features {} 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: last_failure 0 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: last_failure_osd_epoch 39 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: max_mds 1 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: in 0 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: up {0=14464} 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: failed 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: damaged 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: stopped 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: data_pools [3] 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: metadata_pool 2 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: inline_data disabled 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: balancer 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: bal_rank_mask -1 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: standby_count_wanted 1 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: qdb_cluster leader: 0 members: 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 
[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: [mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Standby daemons: 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: [mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout: [mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: mon.vm05@-1(???).osd e44 crush map has features 3314933000852226048, adjusting msgr requires 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: mon.vm05@-1(???).osd e44 crush map has features 288514051259236352, adjusting msgr requires 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: mon.vm05@-1(???).osd e44 crush map has features 288514051259236352, adjusting msgr requires 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: mon.vm05@-1(???).osd e44 crush map has features 288514051259236352, adjusting msgr 
requires 2026-03-10T10:20:44.294 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:43 vm05.local ceph-mon[103593]: mon.vm05@-1(???).paxosservice(auth 1..22) refresh upgraded, format 0 -> 3 2026-03-10T10:20:45.533 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: osdmap e44: 6 total, 6 up, 6 in 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: mgrmap e36: vm02.zmavgl(active, since 7s), standbys: vm05.coparq 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: overall HEALTH_OK 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: mon.vm02 calling monitor election 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: mon.vm05 calling monitor election 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: mon.vm02 is new leader, mons vm02,vm05 in quorum (ranks 0,1) 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:20:45.534 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: monmap epoch 3 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: last_changed 2026-03-10T10:20:44.396662+0000 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: created 2026-03-10T10:14:07.630583+0000 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: min_mon_release 19 (squid) 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: election_strategy: 1 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.vm02 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: osdmap e44: 6 total, 6 up, 6 in 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: mgrmap e36: vm02.zmavgl(active, since 7s), standbys: vm05.coparq 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: overall HEALTH_OK 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' 
entity='mgr.vm02.zmavgl' 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:45.534 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:45 vm05.local ceph-mon[103593]: pgmap v7: 65 pgs: 65 active+clean; 2.5 GiB data, 9.6 GiB used, 110 GiB / 120 GiB avail; 642 KiB/s rd, 626 KiB/s wr, 86 op/s 2026-03-10T10:20:45.778 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05 2026-03-10T10:20:45.778 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:20:45.778 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: osdmap e44: 6 total, 6 up, 6 in 2026-03-10T10:20:45.778 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: mgrmap e36: vm02.zmavgl(active, since 7s), standbys: vm05.coparq 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: overall HEALTH_OK 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: mon.vm02 calling monitor election 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: mon.vm05 calling monitor election 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: mon.vm02 is new leader, mons vm02,vm05 in quorum (ranks 0,1) 2026-03-10T10:20:45.779 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: monmap epoch 3 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: last_changed 2026-03-10T10:20:44.396662+0000 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: created 2026-03-10T10:14:07.630583+0000 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: min_mon_release 19 (squid) 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: election_strategy: 1 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.vm02 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: osdmap e44: 6 total, 6 up, 6 in 
2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: mgrmap e36: vm02.zmavgl(active, since 7s), standbys: vm05.coparq 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: overall HEALTH_OK 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:45.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:45 vm02.local ceph-mon[110129]: pgmap v7: 65 pgs: 65 active+clean; 2.5 GiB data, 9.6 GiB used, 110 GiB / 120 GiB avail; 642 KiB/s rd, 626 KiB/s wr, 86 op/s 2026-03-10T10:20:47.186 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:46 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:47.186 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:46 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:47.186 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:46 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:47.186 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:46 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:47.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:46 vm02.local ceph-mon[110129]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:47.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:46 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:47.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:46 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:47.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:46 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:48.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:47 vm02.local ceph-mon[110129]: pgmap v8: 65 pgs: 65 active+clean; 2.5 GiB data, 9.6 GiB used, 110 GiB / 120 GiB avail; 499 KiB/s rd, 487 KiB/s wr, 67 op/s 2026-03-10T10:20:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:47 vm05.local ceph-mon[103593]: pgmap v8: 65 pgs: 65 active+clean; 2.5 GiB data, 9.6 GiB used, 110 GiB / 120 GiB avail; 499 KiB/s rd, 487 KiB/s wr, 67 op/s 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 
vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: Reconfiguring mon.vm02 (monmap changed)... 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: Reconfiguring daemon mon.vm02 on vm02 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm02.zmavgl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' 
entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:20:49.283 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: Reconfiguring mon.vm02 (monmap changed)... 
2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: Reconfiguring daemon mon.vm02 on vm02 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm02.zmavgl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:20:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:49.978 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-10T10:20:49.978 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: Reconfiguring mgr.vm02.zmavgl (monmap changed)... 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: Reconfiguring daemon mgr.vm02.zmavgl on vm02 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: pgmap v9: 65 pgs: 65 active+clean; 1.4 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail; 960 KiB/s rd, 1019 KiB/s wr, 133 op/s 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: Reconfiguring ceph-exporter.vm02 (monmap changed)... 
2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: Unable to update caps for client.ceph-exporter.vm02 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.ceph-exporter.vm02"}]: dispatch 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: Reconfiguring daemon ceph-exporter.vm02 on vm02 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local 
ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm02", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T10:20:50.284 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: Reconfiguring mgr.vm02.zmavgl (monmap changed)... 
2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: Reconfiguring daemon mgr.vm02.zmavgl on vm02 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: pgmap v9: 65 pgs: 65 active+clean; 1.4 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail; 960 KiB/s rd, 1019 KiB/s wr, 133 op/s 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: Reconfiguring ceph-exporter.vm02 (monmap changed)... 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 
2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: Unable to update caps for client.ceph-exporter.vm02 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.ceph-exporter.vm02"}]: dispatch 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: Reconfiguring daemon ceph-exporter.vm02 on vm02 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm02", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T10:20:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: Reconfiguring crash.vm02 (monmap changed)... 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: Reconfiguring daemon crash.vm02 on vm02 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: Reconfiguring osd.0 (monmap changed)... 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: Reconfiguring daemon osd.0 on vm02 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: Reconfiguring osd.1 (monmap changed)... 
2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: Reconfiguring daemon osd.1 on vm02 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: Reconfiguring osd.2 (monmap changed)... 
2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: Reconfiguring daemon osd.2 on vm02 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: pgmap v10: 65 pgs: 65 active+clean; 1.4 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail; 878 KiB/s rd, 933 KiB/s wr, 122 op/s 2026-03-10T10:20:52.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: Reconfiguring crash.vm02 (monmap changed)... 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: Reconfiguring daemon crash.vm02 on vm02 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: Reconfiguring osd.0 (monmap changed)... 
2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: Reconfiguring daemon osd.0 on vm02 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: Reconfiguring osd.1 (monmap changed)... 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: Reconfiguring daemon osd.1 on vm02 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: Reconfiguring osd.2 (monmap changed)... 
2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: Reconfiguring daemon osd.2 on vm02 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: pgmap v10: 65 pgs: 65 active+clean; 1.4 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail; 878 KiB/s rd, 933 KiB/s wr, 122 op/s 2026-03-10T10:20:52.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.596 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.596 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: Reconfiguring mds.cephfs.vm02.zymcrs (monmap changed)... 
2026-03-10T10:20:52.596 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.zymcrs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:20:52.596 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:52.596 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: Reconfiguring daemon mds.cephfs.vm02.zymcrs on vm02 2026-03-10T10:20:52.596 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:20:52.596 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.596 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.596 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: Reconfiguring mds.cephfs.vm02.stcvsz (monmap changed)... 
2026-03-10T10:20:52.596 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.stcvsz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:20:52.597 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:52.597 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: Reconfiguring daemon mds.cephfs.vm02.stcvsz on vm02 2026-03-10T10:20:52.597 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.597 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:52.597 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:20:52.597 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:20:52.597 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' 
entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T10:20:52.597 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.ceph-exporter.vm05"}]: dispatch 2026-03-10T10:20:52.597 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: Reconfiguring mds.cephfs.vm02.zymcrs (monmap changed)... 
2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.zymcrs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: Reconfiguring daemon mds.cephfs.vm02.zymcrs on vm02 2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: Reconfiguring mds.cephfs.vm02.stcvsz (monmap changed)... 
2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.stcvsz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: Reconfiguring daemon mds.cephfs.vm02.stcvsz on vm02 2026-03-10T10:20:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:20:53.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:20:53.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' 
entity='mgr.vm02.zmavgl' cmd='[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T10:20:53.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.ceph-exporter.vm05"}]: dispatch 2026-03-10T10:20:53.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:53.552 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-10T10:20:53.552 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.1/client.1/tmp 2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: Reconfiguring ceph-exporter.vm05 (monmap changed)... 
2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: Unable to update caps for client.ceph-exporter.vm05 2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: Reconfiguring daemon ceph-exporter.vm05 on vm05 2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: pgmap v11: 65 pgs: 65 active+clean; 1.4 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail; 878 KiB/s rd, 933 KiB/s wr, 122 op/s 2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: Reconfiguring crash.vm05 (monmap changed)... 
2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: Reconfiguring daemon crash.vm05 on vm05 2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: Reconfiguring mgr.vm05.coparq (monmap changed)... 
2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.coparq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:20:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:20:53.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:53.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: Reconfiguring daemon mgr.vm05.coparq on vm05 2026-03-10T10:20:53.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:20:53.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:20:53.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:53 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:53.998 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: Reconfiguring ceph-exporter.vm05 (monmap changed)... 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: Unable to update caps for client.ceph-exporter.vm05 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: Reconfiguring daemon ceph-exporter.vm05 on vm05 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: pgmap v11: 65 pgs: 65 active+clean; 1.4 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail; 878 KiB/s rd, 933 KiB/s wr, 122 op/s 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: Reconfiguring crash.vm05 (monmap changed)... 
2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: Reconfiguring daemon crash.vm05 on vm05 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: Reconfiguring mgr.vm05.coparq (monmap changed)... 
2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.coparq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: Reconfiguring daemon mgr.vm05.coparq on vm05 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:20:53.999 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:53 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:55.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: Reconfiguring mon.vm05 (monmap changed)... 2026-03-10T10:20:55.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: Reconfiguring daemon mon.vm05 on vm05 2026-03-10T10:20:55.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:55.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:55.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: Reconfiguring osd.3 (monmap changed)... 2026-03-10T10:20:55.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T10:20:55.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:55.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: Reconfiguring daemon osd.3 on vm05 2026-03-10T10:20:55.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:55.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:55.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: Reconfiguring osd.4 
(monmap changed)... 2026-03-10T10:20:55.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T10:20:55.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:55.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: Reconfiguring daemon osd.4 on vm05 2026-03-10T10:20:55.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:55.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:55.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T10:20:55.030 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:54 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: Reconfiguring mon.vm05 (monmap changed)... 
2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: Reconfiguring daemon mon.vm05 on vm05 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: Reconfiguring osd.3 (monmap changed)... 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: Reconfiguring daemon osd.3 on vm05 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: Reconfiguring osd.4 (monmap changed)... 
2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: Reconfiguring daemon osd.4 on vm05 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T10:20:55.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:54 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:56.227 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: pgmap v12: 65 pgs: 65 active+clean; 668 MiB data, 4.3 GiB used, 116 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 174 op/s 2026-03-10T10:20:56.227 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: Reconfiguring osd.5 (monmap changed)... 
2026-03-10T10:20:56.227 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: Reconfiguring daemon osd.5 on vm05 2026-03-10T10:20:56.227 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.227 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.227 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: Reconfiguring mds.cephfs.vm05.liatdh (monmap changed)... 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.liatdh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: Reconfiguring daemon mds.cephfs.vm05.liatdh on vm05 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth 
get-or-create", "entity": "mds.cephfs.vm05.sudjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.vm02"}]: dispatch 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: 
from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.vm02"}]': finished 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.vm05"}]: dispatch 2026-03-10T10:20:56.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.vm05"}]': finished 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: pgmap v12: 65 pgs: 65 active+clean; 668 MiB data, 4.3 GiB used, 116 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 174 op/s 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: Reconfiguring osd.5 (monmap changed)... 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: Reconfiguring daemon osd.5 on vm05 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: Reconfiguring mds.cephfs.vm05.liatdh (monmap changed)... 
2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.liatdh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: Reconfiguring daemon mds.cephfs.vm05.liatdh on vm05 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.sudjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: 
from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.vm02"}]: dispatch 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.vm02"}]': finished 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.vm05"}]: dispatch 2026-03-10T10:20:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.vm05"}]': 
finished 2026-03-10T10:20:57.367 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:57 vm05.local ceph-mon[103593]: Reconfiguring mds.cephfs.vm05.sudjys (monmap changed)... 2026-03-10T10:20:57.367 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:57 vm05.local ceph-mon[103593]: Reconfiguring daemon mds.cephfs.vm05.sudjys on vm05 2026-03-10T10:20:57.367 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:57 vm05.local ceph-mon[103593]: Upgrade: Setting container_image for all mon 2026-03-10T10:20:57.367 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:57 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:57.367 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:57 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm02", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:20:57.367 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:57 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:57.395 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:57 vm02.local ceph-mon[110129]: Reconfiguring mds.cephfs.vm05.sudjys (monmap changed)... 
2026-03-10T10:20:57.395 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:57 vm02.local ceph-mon[110129]: Reconfiguring daemon mds.cephfs.vm05.sudjys on vm05 2026-03-10T10:20:57.395 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:57 vm02.local ceph-mon[110129]: Upgrade: Setting container_image for all mon 2026-03-10T10:20:57.395 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:57 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:57.395 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:57 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm02", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:20:57.395 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:57 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:57.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.485+0000 7f47b79da700 1 -- 192.168.123.102:0/3467416932 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f47b0075a40 msgr2=0x7f47b0077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:57.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.485+0000 7f47b79da700 1 --2- 192.168.123.102:0/3467416932 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f47b0075a40 0x7f47b0077ed0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f47a8009230 tx=0x7f47a8009260 comp rx=0 tx=0).stop 2026-03-10T10:20:57.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.486+0000 7f47b79da700 1 -- 192.168.123.102:0/3467416932 shutdown_connections 2026-03-10T10:20:57.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.486+0000 7f47b79da700 1 --2- 
192.168.123.102:0/3467416932 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f47b0075a40 0x7f47b0077ed0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:57.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.486+0000 7f47b79da700 1 --2- 192.168.123.102:0/3467416932 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f47b0072b50 0x7f47b0072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:57.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.486+0000 7f47b79da700 1 -- 192.168.123.102:0/3467416932 >> 192.168.123.102:0/3467416932 conn(0x7f47b006dae0 msgr2=0x7f47b006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:57.487 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.486+0000 7f47b79da700 1 -- 192.168.123.102:0/3467416932 shutdown_connections 2026-03-10T10:20:57.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.486+0000 7f47b79da700 1 -- 192.168.123.102:0/3467416932 wait complete. 
2026-03-10T10:20:57.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.486+0000 7f47b79da700 1 Processor -- start 2026-03-10T10:20:57.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.487+0000 7f47b79da700 1 -- start start 2026-03-10T10:20:57.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.487+0000 7f47b79da700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f47b0072b50 0x7f47b00830a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:57.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.487+0000 7f47b79da700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f47b00835e0 0x7f47b012e3f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:57.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.487+0000 7f47b79da700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f47b0083af0 con 0x7f47b00835e0 2026-03-10T10:20:57.488 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.487+0000 7f47b79da700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f47b0083c60 con 0x7f47b0072b50 2026-03-10T10:20:57.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.487+0000 7f47b4f75700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f47b00835e0 0x7f47b012e3f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:57.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.488+0000 7f47b5776700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f47b0072b50 0x7f47b00830a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:20:57.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.488+0000 7f47b5776700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f47b0072b50 0x7f47b00830a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:59088/0 (socket says 192.168.123.102:59088) 2026-03-10T10:20:57.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.488+0000 7f47b5776700 1 -- 192.168.123.102:0/1052582767 learned_addr learned my addr 192.168.123.102:0/1052582767 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:20:57.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.488+0000 7f47b5776700 1 -- 192.168.123.102:0/1052582767 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f47b00835e0 msgr2=0x7f47b012e3f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:57.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.488+0000 7f47b5776700 1 --2- 192.168.123.102:0/1052582767 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f47b00835e0 0x7f47b012e3f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:57.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.488+0000 7f47b5776700 1 -- 192.168.123.102:0/1052582767 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f47a8008ee0 con 0x7f47b0072b50 2026-03-10T10:20:57.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.488+0000 7f47b5776700 1 --2- 192.168.123.102:0/1052582767 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f47b0072b50 0x7f47b00830a0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f47ac00bfd0 tx=0x7f47ac009d70 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:57.489 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.489+0000 7f47a67fc700 1 -- 192.168.123.102:0/1052582767 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f47ac010040 con 0x7f47b0072b50 2026-03-10T10:20:57.490 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.489+0000 7f47b79da700 1 -- 192.168.123.102:0/1052582767 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f47b012e990 con 0x7f47b0072b50 2026-03-10T10:20:57.491 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.489+0000 7f47b79da700 1 -- 192.168.123.102:0/1052582767 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f47b012eee0 con 0x7f47b0072b50 2026-03-10T10:20:57.491 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.491+0000 7f47a67fc700 1 -- 192.168.123.102:0/1052582767 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f47ac00ec20 con 0x7f47b0072b50 2026-03-10T10:20:57.491 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.491+0000 7f47a67fc700 1 -- 192.168.123.102:0/1052582767 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f47ac014e40 con 0x7f47b0072b50 2026-03-10T10:20:57.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.492+0000 7f47b79da700 1 -- 192.168.123.102:0/1052582767 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4794005320 con 0x7f47b0072b50 2026-03-10T10:20:57.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.494+0000 7f47a67fc700 1 -- 192.168.123.102:0/1052582767 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f47ac014590 con 0x7f47b0072b50 2026-03-10T10:20:57.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.494+0000 7f47a67fc700 1 --2- 
192.168.123.102:0/1052582767 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f479c077b00 0x7f479c079fc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:57.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.494+0000 7f47a67fc700 1 -- 192.168.123.102:0/1052582767 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f47ac099e70 con 0x7f47b0072b50 2026-03-10T10:20:57.496 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.496+0000 7f47b4f75700 1 --2- 192.168.123.102:0/1052582767 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f479c077b00 0x7f479c079fc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:57.497 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.497+0000 7f47b4f75700 1 --2- 192.168.123.102:0/1052582767 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f479c077b00 0x7f479c079fc0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f47a80062a0 tx=0x7f47a80061f0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:57.497 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.497+0000 7f47a67fc700 1 -- 192.168.123.102:0/1052582767 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f47ac0624b0 con 0x7f47b0072b50 2026-03-10T10:20:57.696 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.696+0000 7f47b79da700 1 -- 192.168.123.102:0/1052582767 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f4794000bf0 con 0x7f479c077b00 
2026-03-10T10:20:57.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.698+0000 7f47a67fc700 1 -- 192.168.123.102:0/1052582767 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+385 (secure 0 0 0) 0x7f4794000bf0 con 0x7f479c077b00 2026-03-10T10:20:57.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.702+0000 7f479bfff700 1 -- 192.168.123.102:0/1052582767 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f479c077b00 msgr2=0x7f479c079fc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:57.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.702+0000 7f479bfff700 1 --2- 192.168.123.102:0/1052582767 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f479c077b00 0x7f479c079fc0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f47a80062a0 tx=0x7f47a80061f0 comp rx=0 tx=0).stop 2026-03-10T10:20:57.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.702+0000 7f479bfff700 1 -- 192.168.123.102:0/1052582767 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f47b0072b50 msgr2=0x7f47b00830a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:57.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.702+0000 7f479bfff700 1 --2- 192.168.123.102:0/1052582767 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f47b0072b50 0x7f47b00830a0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f47ac00bfd0 tx=0x7f47ac009d70 comp rx=0 tx=0).stop 2026-03-10T10:20:57.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.702+0000 7f479bfff700 1 -- 192.168.123.102:0/1052582767 shutdown_connections 2026-03-10T10:20:57.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.702+0000 7f479bfff700 1 --2- 192.168.123.102:0/1052582767 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] 
conn(0x7f479c077b00 0x7f479c079fc0 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:57.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.702+0000 7f479bfff700 1 --2- 192.168.123.102:0/1052582767 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f47b0072b50 0x7f47b00830a0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:57.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.702+0000 7f479bfff700 1 --2- 192.168.123.102:0/1052582767 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f47b00835e0 0x7f47b012e3f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:57.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.702+0000 7f479bfff700 1 -- 192.168.123.102:0/1052582767 >> 192.168.123.102:0/1052582767 conn(0x7f47b006dae0 msgr2=0x7f47b006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:57.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.703+0000 7f479bfff700 1 -- 192.168.123.102:0/1052582767 shutdown_connections 2026-03-10T10:20:57.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.703+0000 7f479bfff700 1 -- 192.168.123.102:0/1052582767 wait complete. 
2026-03-10T10:20:57.720 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:20:57.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.817+0000 7fdb99505700 1 -- 192.168.123.102:0/3903072953 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdb9410a700 msgr2=0x7fdb9410cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:57.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.817+0000 7fdb99505700 1 --2- 192.168.123.102:0/3903072953 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdb9410a700 0x7fdb9410cb90 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fdb8c00b3a0 tx=0x7fdb8c00b6b0 comp rx=0 tx=0).stop 2026-03-10T10:20:57.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.817+0000 7fdb99505700 1 -- 192.168.123.102:0/3903072953 shutdown_connections 2026-03-10T10:20:57.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.817+0000 7fdb99505700 1 --2- 192.168.123.102:0/3903072953 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdb9410a700 0x7fdb9410cb90 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:57.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.817+0000 7fdb99505700 1 --2- 192.168.123.102:0/3903072953 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb94107d90 0x7fdb9410a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:57.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.817+0000 7fdb99505700 1 -- 192.168.123.102:0/3903072953 >> 192.168.123.102:0/3903072953 conn(0x7fdb9406daa0 msgr2=0x7fdb9406ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:57.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.817+0000 7fdb99505700 1 -- 192.168.123.102:0/3903072953 shutdown_connections 2026-03-10T10:20:57.818 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.817+0000 7fdb99505700 1 -- 192.168.123.102:0/3903072953 wait complete. 2026-03-10T10:20:57.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb99505700 1 Processor -- start 2026-03-10T10:20:57.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb99505700 1 -- start start 2026-03-10T10:20:57.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb99505700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdb94107d90 0x7fdb94116aa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:57.818 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb99505700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb9410a700 0x7fdb94116fe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:57.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb99505700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdb94117600 con 0x7fdb94107d90 2026-03-10T10:20:57.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb99505700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdb941b3110 con 0x7fdb9410a700 2026-03-10T10:20:57.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb93fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdb94107d90 0x7fdb94116aa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:57.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb937fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb9410a700 0x7fdb94116fe0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:57.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb937fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb9410a700 0x7fdb94116fe0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:59116/0 (socket says 192.168.123.102:59116) 2026-03-10T10:20:57.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb937fe700 1 -- 192.168.123.102:0/1912605283 learned_addr learned my addr 192.168.123.102:0/1912605283 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:20:57.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb937fe700 1 -- 192.168.123.102:0/1912605283 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdb94107d90 msgr2=0x7fdb94116aa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:57.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb937fe700 1 --2- 192.168.123.102:0/1912605283 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdb94107d90 0x7fdb94116aa0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:57.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.818+0000 7fdb937fe700 1 -- 192.168.123.102:0/1912605283 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fdb8c00b050 con 0x7fdb9410a700 2026-03-10T10:20:57.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.819+0000 7fdb937fe700 1 --2- 192.168.123.102:0/1912605283 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb9410a700 0x7fdb94116fe0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fdb8c000f80 
tx=0x7fdb8c008f80 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:57.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.819+0000 7fdb917fa700 1 -- 192.168.123.102:0/1912605283 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdb8c00e050 con 0x7fdb9410a700 2026-03-10T10:20:57.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.819+0000 7fdb99505700 1 -- 192.168.123.102:0/1912605283 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fdb941b32b0 con 0x7fdb9410a700 2026-03-10T10:20:57.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.819+0000 7fdb99505700 1 -- 192.168.123.102:0/1912605283 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fdb941b37b0 con 0x7fdb9410a700 2026-03-10T10:20:57.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.820+0000 7fdb917fa700 1 -- 192.168.123.102:0/1912605283 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fdb8c0047d0 con 0x7fdb9410a700 2026-03-10T10:20:57.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.820+0000 7fdb917fa700 1 -- 192.168.123.102:0/1912605283 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdb8c01da60 con 0x7fdb9410a700 2026-03-10T10:20:57.822 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.821+0000 7fdb917fa700 1 -- 192.168.123.102:0/1912605283 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fdb8c019040 con 0x7fdb9410a700 2026-03-10T10:20:57.822 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.822+0000 7fdb917fa700 1 --2- 192.168.123.102:0/1912605283 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fdb7c077b00 0x7fdb7c079fc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 
rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:57.822 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.822+0000 7fdb917fa700 1 -- 192.168.123.102:0/1912605283 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fdb8c09c4a0 con 0x7fdb9410a700 2026-03-10T10:20:57.823 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.822+0000 7fdb99505700 1 -- 192.168.123.102:0/1912605283 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fdb80005320 con 0x7fdb9410a700 2026-03-10T10:20:57.823 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.823+0000 7fdb93fff700 1 --2- 192.168.123.102:0/1912605283 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fdb7c077b00 0x7fdb7c079fc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:57.824 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.823+0000 7fdb93fff700 1 --2- 192.168.123.102:0/1912605283 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fdb7c077b00 0x7fdb7c079fc0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fdb941af080 tx=0x7fdb8400b410 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:57.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.825+0000 7fdb917fa700 1 -- 192.168.123.102:0/1912605283 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fdb8c064ae0 con 0x7fdb9410a700 2026-03-10T10:20:57.994 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.993+0000 7fdb99505700 1 -- 192.168.123.102:0/1912605283 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- 
mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fdb80000bf0 con 0x7fdb7c077b00 2026-03-10T10:20:57.999 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:57.997+0000 7fdb917fa700 1 -- 192.168.123.102:0/1912605283 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+385 (secure 0 0 0) 0x7fdb80000bf0 con 0x7fdb7c077b00 2026-03-10T10:20:58.001 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.001+0000 7fdb99505700 1 -- 192.168.123.102:0/1912605283 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fdb7c077b00 msgr2=0x7fdb7c079fc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.001 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.001+0000 7fdb99505700 1 --2- 192.168.123.102:0/1912605283 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fdb7c077b00 0x7fdb7c079fc0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fdb941af080 tx=0x7fdb8400b410 comp rx=0 tx=0).stop 2026-03-10T10:20:58.001 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.001+0000 7fdb99505700 1 -- 192.168.123.102:0/1912605283 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb9410a700 msgr2=0x7fdb94116fe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.001 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.001+0000 7fdb99505700 1 --2- 192.168.123.102:0/1912605283 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb9410a700 0x7fdb94116fe0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fdb8c000f80 tx=0x7fdb8c008f80 comp rx=0 tx=0).stop 2026-03-10T10:20:58.001 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.001+0000 7fdb99505700 1 -- 192.168.123.102:0/1912605283 shutdown_connections 2026-03-10T10:20:58.001 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.001+0000 7fdb99505700 1 
--2- 192.168.123.102:0/1912605283 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdb94107d90 0x7fdb94116aa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.001 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.001+0000 7fdb99505700 1 --2- 192.168.123.102:0/1912605283 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fdb7c077b00 0x7fdb7c079fc0 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.001 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.001+0000 7fdb99505700 1 --2- 192.168.123.102:0/1912605283 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb9410a700 0x7fdb94116fe0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.001 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.001+0000 7fdb99505700 1 -- 192.168.123.102:0/1912605283 >> 192.168.123.102:0/1912605283 conn(0x7fdb9406daa0 msgr2=0x7fdb9406ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:58.002 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.001+0000 7fdb99505700 1 -- 192.168.123.102:0/1912605283 shutdown_connections 2026-03-10T10:20:58.003 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.002+0000 7fdb99505700 1 -- 192.168.123.102:0/1912605283 wait complete. 
2026-03-10T10:20:58.138 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.138+0000 7feb9a5c9700 1 -- 192.168.123.102:0/3230811007 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feb94107d90 msgr2=0x7feb9410a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.138 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.138+0000 7feb9a5c9700 1 --2- 192.168.123.102:0/3230811007 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feb94107d90 0x7feb9410a1c0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7feb84009b00 tx=0x7feb84009e10 comp rx=0 tx=0).stop 2026-03-10T10:20:58.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.139+0000 7feb9a5c9700 1 -- 192.168.123.102:0/3230811007 shutdown_connections 2026-03-10T10:20:58.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.139+0000 7feb9a5c9700 1 --2- 192.168.123.102:0/3230811007 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7feb9410a700 0x7feb9410cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.139+0000 7feb9a5c9700 1 --2- 192.168.123.102:0/3230811007 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feb94107d90 0x7feb9410a1c0 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.139+0000 7feb9a5c9700 1 -- 192.168.123.102:0/3230811007 >> 192.168.123.102:0/3230811007 conn(0x7feb9406dae0 msgr2=0x7feb9406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:58.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.140+0000 7feb9a5c9700 1 -- 192.168.123.102:0/3230811007 shutdown_connections 2026-03-10T10:20:58.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.140+0000 7feb9a5c9700 1 -- 192.168.123.102:0/3230811007 
wait complete. 2026-03-10T10:20:58.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.140+0000 7feb9a5c9700 1 Processor -- start 2026-03-10T10:20:58.141 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.140+0000 7feb9a5c9700 1 -- start start 2026-03-10T10:20:58.141 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.140+0000 7feb9a5c9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7feb94107d90 0x7feb94116a40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:58.141 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.140+0000 7feb9a5c9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feb9410a700 0x7feb94116f80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:58.141 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.140+0000 7feb9a5c9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7feb941175a0 con 0x7feb9410a700 2026-03-10T10:20:58.141 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.140+0000 7feb9a5c9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7feb941176e0 con 0x7feb94107d90 2026-03-10T10:20:58.143 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.141+0000 7feb937fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feb9410a700 0x7feb94116f80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:58.143 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.141+0000 7feb937fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feb9410a700 0x7feb94116f80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 
says I am v2:192.168.123.102:49278/0 (socket says 192.168.123.102:49278) 2026-03-10T10:20:58.143 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.141+0000 7feb937fe700 1 -- 192.168.123.102:0/2828279443 learned_addr learned my addr 192.168.123.102:0/2828279443 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:20:58.143 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.141+0000 7feb93fff700 1 --2- 192.168.123.102:0/2828279443 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7feb94107d90 0x7feb94116a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:58.143 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.141+0000 7feb937fe700 1 -- 192.168.123.102:0/2828279443 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7feb94107d90 msgr2=0x7feb94116a40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.143 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.141+0000 7feb937fe700 1 --2- 192.168.123.102:0/2828279443 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7feb94107d90 0x7feb94116a40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.143 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.141+0000 7feb937fe700 1 -- 192.168.123.102:0/2828279443 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7feb840097e0 con 0x7feb9410a700 2026-03-10T10:20:58.143 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.141+0000 7feb937fe700 1 --2- 192.168.123.102:0/2828279443 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feb9410a700 0x7feb94116f80 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7feb8800b700 tx=0x7feb8800bac0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:20:58.143 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.141+0000 7feb917fa700 1 -- 192.168.123.102:0/2828279443 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7feb88010840 con 0x7feb9410a700 2026-03-10T10:20:58.143 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.142+0000 7feb917fa700 1 -- 192.168.123.102:0/2828279443 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7feb88010e80 con 0x7feb9410a700 2026-03-10T10:20:58.144 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.142+0000 7feb917fa700 1 -- 192.168.123.102:0/2828279443 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7feb8800d590 con 0x7feb9410a700 2026-03-10T10:20:58.144 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.143+0000 7feb9a5c9700 1 -- 192.168.123.102:0/2828279443 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7feb941b3330 con 0x7feb9410a700 2026-03-10T10:20:58.144 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.143+0000 7feb9a5c9700 1 -- 192.168.123.102:0/2828279443 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7feb941b3800 con 0x7feb9410a700 2026-03-10T10:20:58.144 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.144+0000 7feb9a5c9700 1 -- 192.168.123.102:0/2828279443 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7feb94110c60 con 0x7feb9410a700 2026-03-10T10:20:58.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.146+0000 7feb917fa700 1 -- 192.168.123.102:0/2828279443 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7feb880109a0 con 0x7feb9410a700 2026-03-10T10:20:58.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.146+0000 
7feb917fa700 1 --2- 192.168.123.102:0/2828279443 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7feb7c0777d0 0x7feb7c079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:58.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.146+0000 7feb917fa700 1 -- 192.168.123.102:0/2828279443 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 6222+0+0 (secure 0 0 0) 0x7feb88099ef0 con 0x7feb9410a700 2026-03-10T10:20:58.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.147+0000 7feb93fff700 1 --2- 192.168.123.102:0/2828279443 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7feb7c0777d0 0x7feb7c079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:58.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.148+0000 7feb93fff700 1 --2- 192.168.123.102:0/2828279443 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7feb7c0777d0 0x7feb7c079c90 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7feb8400b5c0 tx=0x7feb84005fb0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:58.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.155+0000 7feb917fa700 1 -- 192.168.123.102:0/2828279443 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7feb880624b0 con 0x7feb9410a700 2026-03-10T10:20:58.247 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:58 vm02.local ceph-mon[110129]: Upgrade: Updating crash.vm02 (1/2) 2026-03-10T10:20:58.247 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:58 vm02.local ceph-mon[110129]: Deploying daemon crash.vm02 on vm02 2026-03-10T10:20:58.248 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:58 vm02.local ceph-mon[110129]: pgmap v13: 65 pgs: 65 active+clean; 668 MiB data, 4.3 GiB used, 116 GiB / 120 GiB avail; 888 KiB/s rd, 936 KiB/s wr, 123 op/s 2026-03-10T10:20:58.248 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:58 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:58.334 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:58 vm05.local ceph-mon[103593]: Upgrade: Updating crash.vm02 (1/2) 2026-03-10T10:20:58.334 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:58 vm05.local ceph-mon[103593]: Deploying daemon crash.vm02 on vm02 2026-03-10T10:20:58.334 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:58 vm05.local ceph-mon[103593]: pgmap v13: 65 pgs: 65 active+clean; 668 MiB data, 4.3 GiB used, 116 GiB / 120 GiB avail; 888 KiB/s rd, 936 KiB/s wr, 123 op/s 2026-03-10T10:20:58.334 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:58 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:58.343 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.338+0000 7feb9a5c9700 1 -- 192.168.123.102:0/2828279443 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7feb940611d0 con 0x7feb7c0777d0 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (5m) 18s ago 6m 23.1M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (6m) 18s ago 6m 8514k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 
running (5m) 12s ago 5m 11.1M - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 starting - - - - 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (5m) 12s ago 5m 7407k - 18.2.1 5be31c24972a f275982dc269 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (5m) 18s ago 5m 88.2M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (4m) 18s ago 4m 15.3M - 18.2.1 5be31c24972a e97c369450c8 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (4m) 18s ago 4m 240M - 18.2.1 5be31c24972a 56b76ae59bcb 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (4m) 12s ago 4m 15.8M - 18.2.1 5be31c24972a 02b882918ab0 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (4m) 12s ago 4m 149M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (60s) 18s ago 6m 586M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (35s) 12s ago 5m 487M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (28s) 18s ago 6m 43.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a2a2cb182f4 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (14s) 12s ago 5m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 3fb75dafefb6 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (6m) 18s ago 6m 16.5M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:20:58.347 
INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (5m) 12s ago 5m 15.4M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (5m) 18s ago 5m 357M 4096M 18.2.1 5be31c24972a 9d7f135a3f3b 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (5m) 18s ago 5m 379M 4096M 18.2.1 5be31c24972a 1b0a42d8ac01 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (4m) 18s ago 4m 313M 4096M 18.2.1 5be31c24972a 567f579c058e 2026-03-10T10:20:58.347 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (4m) 12s ago 4m 430M 4096M 18.2.1 5be31c24972a 80ac26035893 2026-03-10T10:20:58.348 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (4m) 12s ago 4m 411M 4096M 18.2.1 5be31c24972a c8a0a41b6654 2026-03-10T10:20:58.348 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (4m) 12s ago 4m 328M 4096M 18.2.1 5be31c24972a e9be055e12ba 2026-03-10T10:20:58.348 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (38s) 18s ago 5m 46.3M - 2.43.0 a07b618ecd1d 5ebb885bd417 2026-03-10T10:20:58.348 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.344+0000 7feb917fa700 1 -- 192.168.123.102:0/2828279443 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3552 (secure 0 0 0) 0x7feb940611d0 con 0x7feb7c0777d0 2026-03-10T10:20:58.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.349+0000 7feb7affd700 1 -- 192.168.123.102:0/2828279443 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7feb7c0777d0 msgr2=0x7feb7c079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.349+0000 7feb7affd700 1 --2- 192.168.123.102:0/2828279443 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] 
conn(0x7feb7c0777d0 0x7feb7c079c90 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7feb8400b5c0 tx=0x7feb84005fb0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.349+0000 7feb7affd700 1 -- 192.168.123.102:0/2828279443 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feb9410a700 msgr2=0x7feb94116f80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.349+0000 7feb7affd700 1 --2- 192.168.123.102:0/2828279443 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feb9410a700 0x7feb94116f80 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7feb8800b700 tx=0x7feb8800bac0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.350+0000 7feb7affd700 1 -- 192.168.123.102:0/2828279443 shutdown_connections 2026-03-10T10:20:58.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.350+0000 7feb7affd700 1 --2- 192.168.123.102:0/2828279443 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7feb7c0777d0 0x7feb7c079c90 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.350+0000 7feb7affd700 1 --2- 192.168.123.102:0/2828279443 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7feb94107d90 0x7feb94116a40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.350+0000 7feb7affd700 1 --2- 192.168.123.102:0/2828279443 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7feb9410a700 0x7feb94116f80 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.350 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.350+0000 7feb7affd700 1 -- 192.168.123.102:0/2828279443 >> 192.168.123.102:0/2828279443 conn(0x7feb9406dae0 msgr2=0x7feb9406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:58.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.350+0000 7feb7affd700 1 -- 192.168.123.102:0/2828279443 shutdown_connections 2026-03-10T10:20:58.350 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.350+0000 7feb7affd700 1 -- 192.168.123.102:0/2828279443 wait complete. 2026-03-10T10:20:58.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.455+0000 7efe1f165700 1 -- 192.168.123.102:0/31306809 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe18075a40 msgr2=0x7efe18077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.455+0000 7efe1f165700 1 --2- 192.168.123.102:0/31306809 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe18075a40 0x7efe18077ed0 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7efe1000cd40 tx=0x7efe1000a320 comp rx=0 tx=0).stop 2026-03-10T10:20:58.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.455+0000 7efe1f165700 1 -- 192.168.123.102:0/31306809 shutdown_connections 2026-03-10T10:20:58.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.455+0000 7efe1f165700 1 --2- 192.168.123.102:0/31306809 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe18075a40 0x7efe18077ed0 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.455+0000 7efe1f165700 1 --2- 192.168.123.102:0/31306809 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efe18072b50 0x7efe18072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:20:58.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.455+0000 7efe1f165700 1 -- 192.168.123.102:0/31306809 >> 192.168.123.102:0/31306809 conn(0x7efe1806dae0 msgr2=0x7efe1806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:58.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.456+0000 7efe1f165700 1 -- 192.168.123.102:0/31306809 shutdown_connections 2026-03-10T10:20:58.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.456+0000 7efe1f165700 1 -- 192.168.123.102:0/31306809 wait complete. 2026-03-10T10:20:58.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.456+0000 7efe1f165700 1 Processor -- start 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.456+0000 7efe1f165700 1 -- start start 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.456+0000 7efe1f165700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe18072b50 0x7efe18083090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.456+0000 7efe1f165700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efe180835d0 0x7efe181b3120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.456+0000 7efe1f165700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efe18083ae0 con 0x7efe18072b50 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.456+0000 7efe1f165700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efe18083c50 con 0x7efe180835d0 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.456+0000 7efe1cf01700 1 --2- >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe18072b50 0x7efe18083090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.456+0000 7efe1cf01700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe18072b50 0x7efe18083090 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:49310/0 (socket says 192.168.123.102:49310) 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.456+0000 7efe1cf01700 1 -- 192.168.123.102:0/4129329438 learned_addr learned my addr 192.168.123.102:0/4129329438 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.457+0000 7efe17fff700 1 --2- 192.168.123.102:0/4129329438 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efe180835d0 0x7efe181b3120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.457+0000 7efe1cf01700 1 -- 192.168.123.102:0/4129329438 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efe180835d0 msgr2=0x7efe181b3120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.457+0000 7efe1cf01700 1 --2- 192.168.123.102:0/4129329438 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efe180835d0 0x7efe181b3120 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.457+0000 7efe1cf01700 1 -- 
192.168.123.102:0/4129329438 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7efe1000c9f0 con 0x7efe18072b50 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.457+0000 7efe1cf01700 1 --2- 192.168.123.102:0/4129329438 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe18072b50 0x7efe18083090 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7efe0800bf20 tx=0x7efe0800dbb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:58.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.458+0000 7efe15ffb700 1 -- 192.168.123.102:0/4129329438 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efe08021630 con 0x7efe18072b50 2026-03-10T10:20:58.460 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.458+0000 7efe1f165700 1 -- 192.168.123.102:0/4129329438 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7efe181b36c0 con 0x7efe18072b50 2026-03-10T10:20:58.460 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.458+0000 7efe1f165700 1 -- 192.168.123.102:0/4129329438 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7efe181b3c10 con 0x7efe18072b50 2026-03-10T10:20:58.460 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.460+0000 7efe15ffb700 1 -- 192.168.123.102:0/4129329438 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7efe0801a070 con 0x7efe18072b50 2026-03-10T10:20:58.460 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.460+0000 7efe15ffb700 1 -- 192.168.123.102:0/4129329438 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efe08009bb0 con 0x7efe18072b50 2026-03-10T10:20:58.461 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.461+0000 7efe15ffb700 1 -- 192.168.123.102:0/4129329438 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7efe08009d10 con 0x7efe18072b50 2026-03-10T10:20:58.462 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.462+0000 7efe15ffb700 1 --2- 192.168.123.102:0/4129329438 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7efe00077b00 0x7efe00079fc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:58.463 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.462+0000 7efe15ffb700 1 -- 192.168.123.102:0/4129329438 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 6222+0+0 (secure 0 0 0) 0x7efe0801c070 con 0x7efe18072b50 2026-03-10T10:20:58.463 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.463+0000 7efe17fff700 1 --2- 192.168.123.102:0/4129329438 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7efe00077b00 0x7efe00079fc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:58.463 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.463+0000 7efe17fff700 1 --2- 192.168.123.102:0/4129329438 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7efe00077b00 0x7efe00079fc0 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7efe1000a7a0 tx=0x7efe10006210 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:58.463 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.463+0000 7efe1f165700 1 -- 192.168.123.102:0/4129329438 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7efe04005320 con 0x7efe18072b50 
2026-03-10T10:20:58.469 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.466+0000 7efe15ffb700 1 -- 192.168.123.102:0/4129329438 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7efe0806b9c0 con 0x7efe18072b50 2026-03-10T10:20:58.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.670+0000 7efe1f165700 1 -- 192.168.123.102:0/4129329438 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7efe04005cc0 con 0x7efe18072b50 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.671+0000 7efe15ffb700 1 -- 192.168.123.102:0/4129329438 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+694 (secure 0 0 0) 0x7efe0806b110 con 0x7efe18072b50 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 6 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:20:58.671 
INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 10, 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:20:58.671 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:20:58.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.674+0000 7efdff7fe700 1 -- 192.168.123.102:0/4129329438 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7efe00077b00 msgr2=0x7efe00079fc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.674+0000 7efdff7fe700 1 --2- 192.168.123.102:0/4129329438 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7efe00077b00 0x7efe00079fc0 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7efe1000a7a0 tx=0x7efe10006210 comp rx=0 tx=0).stop 2026-03-10T10:20:58.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.674+0000 7efdff7fe700 1 -- 192.168.123.102:0/4129329438 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe18072b50 msgr2=0x7efe18083090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.674+0000 7efdff7fe700 1 --2- 192.168.123.102:0/4129329438 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe18072b50 0x7efe18083090 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto 
rx=0x7efe0800bf20 tx=0x7efe0800dbb0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.674+0000 7efdff7fe700 1 -- 192.168.123.102:0/4129329438 shutdown_connections 2026-03-10T10:20:58.675 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.674+0000 7efdff7fe700 1 --2- 192.168.123.102:0/4129329438 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7efe18072b50 0x7efe18083090 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.675 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.674+0000 7efdff7fe700 1 --2- 192.168.123.102:0/4129329438 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7efe00077b00 0x7efe00079fc0 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.675 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.674+0000 7efdff7fe700 1 --2- 192.168.123.102:0/4129329438 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efe180835d0 0x7efe181b3120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.675 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.674+0000 7efdff7fe700 1 -- 192.168.123.102:0/4129329438 >> 192.168.123.102:0/4129329438 conn(0x7efe1806dae0 msgr2=0x7efe1806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:58.675 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.674+0000 7efdff7fe700 1 -- 192.168.123.102:0/4129329438 shutdown_connections 2026-03-10T10:20:58.675 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.675+0000 7efdff7fe700 1 -- 192.168.123.102:0/4129329438 wait complete. 
2026-03-10T10:20:58.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.758+0000 7f2061931700 1 -- 192.168.123.102:0/1255679797 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f205c072b50 msgr2=0x7f205c072f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.758+0000 7f2061931700 1 --2- 192.168.123.102:0/1255679797 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f205c072b50 0x7f205c072f70 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f204c008790 tx=0x7f204c008aa0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.758+0000 7f2061931700 1 -- 192.168.123.102:0/1255679797 shutdown_connections 2026-03-10T10:20:58.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.758+0000 7f2061931700 1 --2- 192.168.123.102:0/1255679797 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f205c075a40 0x7f205c077ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.758+0000 7f2061931700 1 --2- 192.168.123.102:0/1255679797 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f205c072b50 0x7f205c072f70 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.758+0000 7f2061931700 1 -- 192.168.123.102:0/1255679797 >> 192.168.123.102:0/1255679797 conn(0x7f205c06dae0 msgr2=0x7f205c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:58.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.758+0000 7f2061931700 1 -- 192.168.123.102:0/1255679797 shutdown_connections 2026-03-10T10:20:58.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.759+0000 7f2061931700 1 -- 192.168.123.102:0/1255679797 
wait complete. 2026-03-10T10:20:58.760 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.759+0000 7f2061931700 1 Processor -- start 2026-03-10T10:20:58.760 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.759+0000 7f2061931700 1 -- start start 2026-03-10T10:20:58.760 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.759+0000 7f2061931700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f205c075a40 0x7f205c0830e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:58.760 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.759+0000 7f2061931700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f205c083620 0x7f205c1b3170 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:58.760 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.759+0000 7f2061931700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f205c083b30 con 0x7f205c075a40 2026-03-10T10:20:58.760 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.759+0000 7f2061931700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f205c083ca0 con 0x7f205c083620 2026-03-10T10:20:58.760 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.759+0000 7f205a7fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f205c083620 0x7f205c1b3170 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:58.760 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.759+0000 7f205a7fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f205c083620 0x7f205c1b3170 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.102:59178/0 (socket says 192.168.123.102:59178) 2026-03-10T10:20:58.761 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.759+0000 7f205a7fc700 1 -- 192.168.123.102:0/2163599252 learned_addr learned my addr 192.168.123.102:0/2163599252 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:20:58.761 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.759+0000 7f205affd700 1 --2- 192.168.123.102:0/2163599252 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f205c075a40 0x7f205c0830e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:58.761 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.760+0000 7f205a7fc700 1 -- 192.168.123.102:0/2163599252 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f205c075a40 msgr2=0x7f205c0830e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.760+0000 7f205a7fc700 1 --2- 192.168.123.102:0/2163599252 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f205c075a40 0x7f205c0830e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.760+0000 7f205a7fc700 1 -- 192.168.123.102:0/2163599252 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f204c008440 con 0x7f205c083620 2026-03-10T10:20:58.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.760+0000 7f205a7fc700 1 --2- 192.168.123.102:0/2163599252 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f205c083620 0x7f205c1b3170 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f205400f4d0 tx=0x7f205400f7e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:20:58.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.760+0000 7f206092f700 1 -- 192.168.123.102:0/2163599252 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2054010040 con 0x7f205c083620 2026-03-10T10:20:58.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.760+0000 7f206092f700 1 -- 192.168.123.102:0/2163599252 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f2054009bf0 con 0x7f205c083620 2026-03-10T10:20:58.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.760+0000 7f206092f700 1 -- 192.168.123.102:0/2163599252 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f20540158b0 con 0x7f205c083620 2026-03-10T10:20:58.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.760+0000 7f2061931700 1 -- 192.168.123.102:0/2163599252 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f205c1b36b0 con 0x7f205c083620 2026-03-10T10:20:58.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.760+0000 7f2061931700 1 -- 192.168.123.102:0/2163599252 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f205c1b3c00 con 0x7f205c083620 2026-03-10T10:20:58.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.762+0000 7f206092f700 1 -- 192.168.123.102:0/2163599252 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f205400b3e0 con 0x7f205c083620 2026-03-10T10:20:58.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.763+0000 7f2061931700 1 -- 192.168.123.102:0/2163599252 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f205c07b1c0 con 0x7f205c083620 2026-03-10T10:20:58.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.763+0000 
7f206092f700 1 --2- 192.168.123.102:0/2163599252 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2044077a30 0x7f2044079ef0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:58.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.763+0000 7f206092f700 1 -- 192.168.123.102:0/2163599252 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f2054099ce0 con 0x7f205c083620 2026-03-10T10:20:58.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.763+0000 7f205affd700 1 --2- 192.168.123.102:0/2163599252 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2044077a30 0x7f2044079ef0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:58.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.764+0000 7f205affd700 1 --2- 192.168.123.102:0/2163599252 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2044077a30 0x7f2044079ef0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f204c00f7b0 tx=0x7f204c019040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:58.766 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.766+0000 7f206092f700 1 -- 192.168.123.102:0/2163599252 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f2054062320 con 0x7f205c083620 2026-03-10T10:20:58.945 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.945+0000 7f2061931700 1 -- 192.168.123.102:0/2163599252 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f205c07b380 con 0x7f205c083620 2026-03-10T10:20:58.948 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.948+0000 7f206092f700 1 -- 192.168.123.102:0/2163599252 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1945 (secure 0 0 0) 0x7f2054061a70 con 0x7f205c083620 2026-03-10T10:20:58.948 INFO:teuthology.orchestra.run.vm02.stdout:e15 2026-03-10T10:20:58.948 INFO:teuthology.orchestra.run.vm02.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-10T10:20:58.948 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:20:58.948 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:20:58.948 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:20:58.948 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:20:58.948 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:20:58.948 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:20:58.948 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15 2026-03-10T10:20:58.948 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 
2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:max_xattr_size 65536 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464} 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:qdb_cluster leader: 0 members: 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr 
[v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:20:58.949 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:20:58.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.953+0000 7f20427fc700 1 -- 192.168.123.102:0/2163599252 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2044077a30 msgr2=0x7f2044079ef0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.953+0000 7f20427fc700 1 --2- 192.168.123.102:0/2163599252 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2044077a30 0x7f2044079ef0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f204c00f7b0 tx=0x7f204c019040 comp rx=0 tx=0).stop 2026-03-10T10:20:58.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.953+0000 7f20427fc700 1 -- 192.168.123.102:0/2163599252 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7f205c083620 msgr2=0x7f205c1b3170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:58.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.953+0000 7f20427fc700 1 --2- 192.168.123.102:0/2163599252 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f205c083620 0x7f205c1b3170 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f205400f4d0 tx=0x7f205400f7e0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.953+0000 7f20427fc700 1 -- 192.168.123.102:0/2163599252 shutdown_connections 2026-03-10T10:20:58.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.954+0000 7f20427fc700 1 --2- 192.168.123.102:0/2163599252 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f205c075a40 0x7f205c0830e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.954+0000 7f20427fc700 1 --2- 192.168.123.102:0/2163599252 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2044077a30 0x7f2044079ef0 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.954+0000 7f20427fc700 1 --2- 192.168.123.102:0/2163599252 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f205c083620 0x7f205c1b3170 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:58.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.954+0000 7f20427fc700 1 -- 192.168.123.102:0/2163599252 >> 192.168.123.102:0/2163599252 conn(0x7f205c06dae0 msgr2=0x7f205c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:58.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.954+0000 7f20427fc700 1 -- 192.168.123.102:0/2163599252 shutdown_connections 
2026-03-10T10:20:58.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:58.954+0000 7f20427fc700 1 -- 192.168.123.102:0/2163599252 wait complete. 2026-03-10T10:20:58.957 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15 2026-03-10T10:20:59.048 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.047+0000 7f85f810b700 1 -- 192.168.123.102:0/2172233699 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f85f010a700 msgr2=0x7f85f010cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:59.048 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.047+0000 7f85f810b700 1 --2- 192.168.123.102:0/2172233699 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f85f010a700 0x7f85f010cb90 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f85e800b3a0 tx=0x7f85e800b6b0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.049 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.048+0000 7f85f810b700 1 -- 192.168.123.102:0/2172233699 shutdown_connections 2026-03-10T10:20:59.049 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.048+0000 7f85f810b700 1 --2- 192.168.123.102:0/2172233699 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f85f010a700 0x7f85f010cb90 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.049 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.048+0000 7f85f810b700 1 --2- 192.168.123.102:0/2172233699 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85f0107d90 0x7f85f010a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.049 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.048+0000 7f85f810b700 1 -- 192.168.123.102:0/2172233699 >> 192.168.123.102:0/2172233699 conn(0x7f85f006dda0 msgr2=0x7f85f0070220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:59.049 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.049+0000 7f85f810b700 1 -- 192.168.123.102:0/2172233699 shutdown_connections 2026-03-10T10:20:59.051 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.049+0000 7f85f810b700 1 -- 192.168.123.102:0/2172233699 wait complete. 2026-03-10T10:20:59.051 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.051+0000 7f85f810b700 1 Processor -- start 2026-03-10T10:20:59.051 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.051+0000 7f85f810b700 1 -- start start 2026-03-10T10:20:59.051 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.051+0000 7f85f810b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85f0107d90 0x7f85f01a58a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:59.051 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.051+0000 7f85f810b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f85f010a700 0x7f85f01a5de0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:59.052 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.051+0000 7f85f810b700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f85f01a6400 con 0x7f85f010a700 2026-03-10T10:20:59.052 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.051+0000 7f85f810b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f85f01a6540 con 0x7f85f0107d90 2026-03-10T10:20:59.052 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.052+0000 7f85f5ea7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85f0107d90 0x7f85f01a58a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:59.052 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.052+0000 7f85f5ea7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85f0107d90 0x7f85f01a58a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:59204/0 (socket says 192.168.123.102:59204) 2026-03-10T10:20:59.052 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.052+0000 7f85f5ea7700 1 -- 192.168.123.102:0/796944526 learned_addr learned my addr 192.168.123.102:0/796944526 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:20:59.052 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.052+0000 7f85f5ea7700 1 -- 192.168.123.102:0/796944526 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f85f010a700 msgr2=0x7f85f01a5de0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:59.052 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.052+0000 7f85f5ea7700 1 --2- 192.168.123.102:0/796944526 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f85f010a700 0x7f85f01a5de0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.052 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.052+0000 7f85f5ea7700 1 -- 192.168.123.102:0/796944526 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f85e800b050 con 0x7f85f0107d90 2026-03-10T10:20:59.052 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.052+0000 7f85f5ea7700 1 --2- 192.168.123.102:0/796944526 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85f0107d90 0x7f85f01a58a0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f85ec00d8d0 tx=0x7f85ec00dc90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:59.053 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.052+0000 7f85e6ffd700 1 -- 192.168.123.102:0/796944526 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f85ec009940 con 0x7f85f0107d90 2026-03-10T10:20:59.053 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.053+0000 7f85f810b700 1 -- 192.168.123.102:0/796944526 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f85f01aaff0 con 0x7f85f0107d90 2026-03-10T10:20:59.053 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.053+0000 7f85f810b700 1 -- 192.168.123.102:0/796944526 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f85f01ab4e0 con 0x7f85f0107d90 2026-03-10T10:20:59.054 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.053+0000 7f85e6ffd700 1 -- 192.168.123.102:0/796944526 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f85ec010460 con 0x7f85f0107d90 2026-03-10T10:20:59.054 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.053+0000 7f85e6ffd700 1 -- 192.168.123.102:0/796944526 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f85ec00f5d0 con 0x7f85f0107d90 2026-03-10T10:20:59.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.055+0000 7f85e6ffd700 1 -- 192.168.123.102:0/796944526 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f85ec009aa0 con 0x7f85f0107d90 2026-03-10T10:20:59.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.055+0000 7f85e6ffd700 1 --2- 192.168.123.102:0/796944526 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f85dc077a40 0x7f85dc079f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:59.058 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.055+0000 7f85e6ffd700 1 -- 192.168.123.102:0/796944526 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f85ec099c70 con 0x7f85f0107d90 2026-03-10T10:20:59.058 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.055+0000 7f85f810b700 1 -- 192.168.123.102:0/796944526 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f85f019f7c0 con 0x7f85f0107d90 2026-03-10T10:20:59.059 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.058+0000 7f85e6ffd700 1 -- 192.168.123.102:0/796944526 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f85ec061b60 con 0x7f85f0107d90 2026-03-10T10:20:59.061 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.061+0000 7f85f56a6700 1 --2- 192.168.123.102:0/796944526 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f85dc077a40 0x7f85dc079f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:59.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.068+0000 7f85f56a6700 1 --2- 192.168.123.102:0/796944526 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f85dc077a40 0x7f85dc079f00 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f85e800bb30 tx=0x7f85e800bf90 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:59.153 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:59 vm05.local ceph-mon[103593]: from='client.44101 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:20:59.153 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:20:59 vm05.local ceph-mon[103593]: from='client.44105 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:20:59.153 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:59.153 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:59.153 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:20:59.153 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:59.153 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:59 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/4129329438' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:20:59.153 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:20:59 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/2163599252' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:20:59.238 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:20:59.238 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T10:20:59.238 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:20:59.238 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:20:59.238 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [ 2026-03-10T10:20:59.238 INFO:teuthology.orchestra.run.vm02.stdout: "mgr", 2026-03-10T10:20:59.238 INFO:teuthology.orchestra.run.vm02.stdout: "mon" 2026-03-10T10:20:59.238 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T10:20:59.238 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "4/23 daemons upgraded", 2026-03-10T10:20:59.238 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading crash daemons", 2026-03-10T10:20:59.238 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:20:59.238 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:20:59.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.233+0000 7f85f810b700 1 -- 192.168.123.102:0/796944526 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f85f002d050 con 0x7f85dc077a40 2026-03-10T10:20:59.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.235+0000 7f85e6ffd700 1 -- 192.168.123.102:0/796944526 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+385 (secure 0 0 0) 0x7f85f002d050 con 0x7f85dc077a40 2026-03-10T10:20:59.239 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:59 vm02.local ceph-mon[110129]: from='client.44101 -' 
entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:20:59.239 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:59 vm02.local ceph-mon[110129]: from='client.44105 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:20:59.239 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:59.239 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:20:59.239 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T10:20:59.239 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:20:59.239 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:59 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/4129329438' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:20:59.239 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:20:59 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/2163599252' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:20:59.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.239+0000 7f85e4ff9700 1 -- 192.168.123.102:0/796944526 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f85dc077a40 msgr2=0x7f85dc079f00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:59.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.239+0000 7f85e4ff9700 1 --2- 192.168.123.102:0/796944526 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f85dc077a40 0x7f85dc079f00 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f85e800bb30 tx=0x7f85e800bf90 comp rx=0 tx=0).stop 2026-03-10T10:20:59.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.239+0000 7f85e4ff9700 1 -- 192.168.123.102:0/796944526 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85f0107d90 msgr2=0x7f85f01a58a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:59.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.239+0000 7f85e4ff9700 1 --2- 192.168.123.102:0/796944526 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85f0107d90 0x7f85f01a58a0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f85ec00d8d0 tx=0x7f85ec00dc90 comp rx=0 tx=0).stop 2026-03-10T10:20:59.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.240+0000 7f85e4ff9700 1 -- 192.168.123.102:0/796944526 shutdown_connections 2026-03-10T10:20:59.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.240+0000 7f85e4ff9700 1 --2- 192.168.123.102:0/796944526 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f85dc077a40 0x7f85dc079f00 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.240+0000 
7f85e4ff9700 1 --2- 192.168.123.102:0/796944526 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f85f0107d90 0x7f85f01a58a0 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.240+0000 7f85e4ff9700 1 --2- 192.168.123.102:0/796944526 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f85f010a700 0x7f85f01a5de0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.240+0000 7f85e4ff9700 1 -- 192.168.123.102:0/796944526 >> 192.168.123.102:0/796944526 conn(0x7f85f006dda0 msgr2=0x7f85f010c2c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:59.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.241+0000 7f85e4ff9700 1 -- 192.168.123.102:0/796944526 shutdown_connections 2026-03-10T10:20:59.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.242+0000 7f85e4ff9700 1 -- 192.168.123.102:0/796944526 wait complete. 
2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.325+0000 7fb43712e700 1 -- 192.168.123.102:0/1495320969 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb430075a40 msgr2=0x7fb430077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.325+0000 7fb43712e700 1 --2- 192.168.123.102:0/1495320969 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb430075a40 0x7fb430077ed0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7fb42800d3f0 tx=0x7fb42800d700 comp rx=0 tx=0).stop 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.325+0000 7fb43712e700 1 -- 192.168.123.102:0/1495320969 shutdown_connections 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.325+0000 7fb43712e700 1 --2- 192.168.123.102:0/1495320969 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb430075a40 0x7fb430077ed0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.325+0000 7fb43712e700 1 --2- 192.168.123.102:0/1495320969 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb430072b50 0x7fb430072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.325+0000 7fb43712e700 1 -- 192.168.123.102:0/1495320969 >> 192.168.123.102:0/1495320969 conn(0x7fb43006dae0 msgr2=0x7fb43006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.326+0000 7fb43712e700 1 -- 192.168.123.102:0/1495320969 shutdown_connections 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.326+0000 7fb43712e700 1 -- 192.168.123.102:0/1495320969 
wait complete. 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.326+0000 7fb43712e700 1 Processor -- start 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.326+0000 7fb43712e700 1 -- start start 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.326+0000 7fb43712e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb430072b50 0x7fb430083170 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.326+0000 7fb43712e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4300836b0 0x7fb43012e4f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.326+0000 7fb43712e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb430083bc0 con 0x7fb4300836b0 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.326+0000 7fb43712e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb430083d30 con 0x7fb430072b50 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.327+0000 7fb434eca700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb430072b50 0x7fb430083170 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.327+0000 7fb434eca700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb430072b50 0x7fb430083170 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.102:59216/0 (socket says 192.168.123.102:59216) 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.327+0000 7fb434eca700 1 -- 192.168.123.102:0/498547919 learned_addr learned my addr 192.168.123.102:0/498547919 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:20:59.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.327+0000 7fb434eca700 1 -- 192.168.123.102:0/498547919 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4300836b0 msgr2=0x7fb43012e4f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:59.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.327+0000 7fb434eca700 1 --2- 192.168.123.102:0/498547919 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4300836b0 0x7fb43012e4f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.327+0000 7fb434eca700 1 -- 192.168.123.102:0/498547919 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb428007ed0 con 0x7fb430072b50 2026-03-10T10:20:59.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.327+0000 7fb434eca700 1 --2- 192.168.123.102:0/498547919 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb430072b50 0x7fb430083170 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fb42000b770 tx=0x7fb42000bb30 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:59.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.328+0000 7fb42dffb700 1 -- 192.168.123.102:0/498547919 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb42000f820 con 0x7fb430072b50 2026-03-10T10:20:59.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.328+0000 7fb43712e700 1 
-- 192.168.123.102:0/498547919 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb43012ea90 con 0x7fb430072b50 2026-03-10T10:20:59.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.328+0000 7fb43712e700 1 -- 192.168.123.102:0/498547919 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb43012efe0 con 0x7fb430072b50 2026-03-10T10:20:59.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.328+0000 7fb42dffb700 1 -- 192.168.123.102:0/498547919 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb42000fe60 con 0x7fb430072b50 2026-03-10T10:20:59.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.328+0000 7fb42dffb700 1 -- 192.168.123.102:0/498547919 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb42000d610 con 0x7fb430072b50 2026-03-10T10:20:59.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.330+0000 7fb42dffb700 1 -- 192.168.123.102:0/498547919 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fb42000f980 con 0x7fb430072b50 2026-03-10T10:20:59.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.330+0000 7fb42dffb700 1 --2- 192.168.123.102:0/498547919 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb418077a50 0x7fb418079f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:20:59.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.330+0000 7fb42dffb700 1 -- 192.168.123.102:0/498547919 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(44..44 src has 1..44) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fb420099930 con 0x7fb430072b50 2026-03-10T10:20:59.331 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.330+0000 7fb43712e700 1 -- 192.168.123.102:0/498547919 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb41c005320 con 0x7fb430072b50 2026-03-10T10:20:59.335 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.335+0000 7fb42ffff700 1 --2- 192.168.123.102:0/498547919 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb418077a50 0x7fb418079f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:20:59.335 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.335+0000 7fb42ffff700 1 --2- 192.168.123.102:0/498547919 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb418077a50 0x7fb418079f10 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7fb42800d3f0 tx=0x7fb42800db00 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:20:59.341 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.337+0000 7fb42dffb700 1 -- 192.168.123.102:0/498547919 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fb420061ef0 con 0x7fb430072b50 2026-03-10T10:20:59.532 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.531+0000 7fb43712e700 1 -- 192.168.123.102:0/498547919 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fb41c005190 con 0x7fb430072b50 2026-03-10T10:20:59.532 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.532+0000 7fb42dffb700 1 -- 192.168.123.102:0/498547919 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7fb420061640 con 0x7fb430072b50 2026-03-10T10:20:59.532 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK 
2026-03-10T10:20:59.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.534+0000 7fb4177fe700 1 -- 192.168.123.102:0/498547919 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb418077a50 msgr2=0x7fb418079f10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:59.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.534+0000 7fb4177fe700 1 --2- 192.168.123.102:0/498547919 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb418077a50 0x7fb418079f10 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7fb42800d3f0 tx=0x7fb42800db00 comp rx=0 tx=0).stop 2026-03-10T10:20:59.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.534+0000 7fb4177fe700 1 -- 192.168.123.102:0/498547919 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb430072b50 msgr2=0x7fb430083170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:20:59.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.534+0000 7fb4177fe700 1 --2- 192.168.123.102:0/498547919 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb430072b50 0x7fb430083170 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fb42000b770 tx=0x7fb42000bb30 comp rx=0 tx=0).stop 2026-03-10T10:20:59.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.535+0000 7fb4177fe700 1 -- 192.168.123.102:0/498547919 shutdown_connections 2026-03-10T10:20:59.536 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.536+0000 7fb4177fe700 1 --2- 192.168.123.102:0/498547919 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb418077a50 0x7fb418079f10 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.536 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.536+0000 7fb4177fe700 1 --2- 192.168.123.102:0/498547919 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb430072b50 0x7fb430083170 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.536 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.536+0000 7fb4177fe700 1 --2- 192.168.123.102:0/498547919 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4300836b0 0x7fb43012e4f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:20:59.536 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.536+0000 7fb4177fe700 1 -- 192.168.123.102:0/498547919 >> 192.168.123.102:0/498547919 conn(0x7fb43006dae0 msgr2=0x7fb43006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:20:59.536 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.536+0000 7fb4177fe700 1 -- 192.168.123.102:0/498547919 shutdown_connections 2026-03-10T10:20:59.536 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:20:59.536+0000 7fb4177fe700 1 -- 192.168.123.102:0/498547919 wait complete. 2026-03-10T10:21:00.238 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:00 vm05.local ceph-mon[103593]: from='client.34140 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:21:00.238 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:00 vm05.local ceph-mon[103593]: pgmap v14: 65 pgs: 65 active+clean; 287 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 241 op/s 2026-03-10T10:21:00.238 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:00 vm05.local ceph-mon[103593]: Upgrade: Updating crash.vm05 (2/2) 2026-03-10T10:21:00.238 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:00 vm05.local ceph-mon[103593]: Deploying daemon crash.vm05 on vm05 2026-03-10T10:21:00.238 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:00 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/498547919' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:21:00.238 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:00 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:00.238 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:00 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:00.238 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:00 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:00.473 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:00 vm02.local ceph-mon[110129]: from='client.34140 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:21:00.473 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:00 vm02.local ceph-mon[110129]: pgmap v14: 65 pgs: 65 active+clean; 287 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 241 op/s 2026-03-10T10:21:00.473 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:00 vm02.local ceph-mon[110129]: Upgrade: Updating crash.vm05 (2/2) 2026-03-10T10:21:00.473 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:00 vm02.local ceph-mon[110129]: Deploying daemon crash.vm05 on vm05 2026-03-10T10:21:00.473 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:00 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/498547919' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:21:00.473 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:00 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:00.473 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:00 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:00.473 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:00 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:01.104 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:01 vm05.local ceph-mon[103593]: from='client.44119 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:21:01.105 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:01 vm02.local ceph-mon[110129]: from='client.44119 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:21:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:02 vm02.local ceph-mon[110129]: pgmap v15: 65 pgs: 65 active+clean; 287 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 846 KiB/s rd, 840 KiB/s wr, 169 op/s 2026-03-10T10:21:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:02.280 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:02.331 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:02 vm05.local ceph-mon[103593]: pgmap v15: 65 pgs: 65 active+clean; 287 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 846 KiB/s rd, 840 KiB/s wr, 169 op/s 2026-03-10T10:21:02.331 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:02.331 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:02.331 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:02.331 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:04.520 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:04 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:04.520 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:04 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:04.520 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:04 vm02.local ceph-mon[110129]: pgmap v16: 65 pgs: 65 active+clean; 287 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 846 KiB/s rd, 840 KiB/s wr, 169 op/s 2026-03-10T10:21:04.520 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:04 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:04.520 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:04 vm02.local 
ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:04.520 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:04 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:04.520 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:04 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:04.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:04 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:04.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:04 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:04.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:04 vm05.local ceph-mon[103593]: pgmap v16: 65 pgs: 65 active+clean; 287 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 846 KiB/s rd, 840 KiB/s wr, 169 op/s 2026-03-10T10:21:04.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:04 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:04.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:04 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:04.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:04 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:04.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:04 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:05.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:05 vm02.local ceph-mon[110129]: pgmap v17: 65 pgs: 65 active+clean; 288 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s 
wr, 251 op/s 2026-03-10T10:21:06.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:05 vm05.local ceph-mon[103593]: pgmap v17: 65 pgs: 65 active+clean; 288 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 251 op/s 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:21:06.870 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: Upgrade: Setting container_image for all crash 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm02"}]: dispatch 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm02"}]': finished 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm05"}]: dispatch 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm05"}]': finished 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local 
ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: Upgrade: osd.0 is safe to restart 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:21:06.870 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: Upgrade: Setting container_image for all crash 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm02"}]: dispatch 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm02"}]': 
finished 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm05"}]: dispatch 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm05"}]': finished 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: Upgrade: osd.0 is safe to restart 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T10:21:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:21:07.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' 
entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:07.608 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:07 vm02.local systemd[1]: Stopping Ceph osd.0 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 2026-03-10T10:21:08.029 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:07 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[67686]: 2026-03-10T10:21:07.607+0000 7f7196513700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T10:21:08.030 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:07 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[67686]: 2026-03-10T10:21:07.607+0000 7f7196513700 -1 osd.0 44 *** Got signal Terminated *** 2026-03-10T10:21:08.030 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:07 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[67686]: 2026-03-10T10:21:07.607+0000 7f7196513700 -1 osd.0 44 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T10:21:08.272 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:08 vm05.local ceph-mon[103593]: Upgrade: Updating osd.0 2026-03-10T10:21:08.272 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:08 vm05.local ceph-mon[103593]: Deploying daemon osd.0 on vm02 2026-03-10T10:21:08.272 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:08 vm05.local ceph-mon[103593]: pgmap v18: 65 pgs: 65 active+clean; 288 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 771 KiB/s rd, 785 KiB/s wr, 199 op/s 2026-03-10T10:21:08.272 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:08 vm05.local ceph-mon[103593]: osd.0 marked itself down and dead 2026-03-10T10:21:08.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:08 vm02.local ceph-mon[110129]: Upgrade: Updating osd.0 2026-03-10T10:21:08.493 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:08 vm02.local ceph-mon[110129]: Deploying daemon osd.0 on vm02 2026-03-10T10:21:08.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:08 vm02.local ceph-mon[110129]: pgmap v18: 65 pgs: 65 active+clean; 288 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 771 KiB/s rd, 785 KiB/s wr, 199 op/s 2026-03-10T10:21:08.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:08 vm02.local ceph-mon[110129]: osd.0 marked itself down and dead 2026-03-10T10:21:08.779 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:08 vm02.local podman[117403]: 2026-03-10 10:21:08.492856036 +0000 UTC m=+0.941671721 container died 9d7f135a3f3b1b528b83eeff088e69395fd4504fc8f6b9ac83c4a6e8c9344348 (image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0, maintainer=Guillaume Abrioux , org.label-schema.license=GPLv2, RELEASE=HEAD, io.buildah.version=1.29.1, GIT_BRANCH=HEAD, ceph=True, org.label-schema.build-date=20240222, org.label-schema.name=CentOS Stream 8 Base Image, CEPH_POINT_RELEASE=-18.2.1, GIT_CLEAN=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, GIT_REPO=https://github.com/ceph/ceph-container.git) 2026-03-10T10:21:08.779 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:08 vm02.local podman[117403]: 2026-03-10 10:21:08.671382077 +0000 UTC m=+1.120197772 container remove 9d7f135a3f3b1b528b83eeff088e69395fd4504fc8f6b9ac83c4a6e8c9344348 (image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0, GIT_CLEAN=True, org.label-schema.build-date=20240222, RELEASE=HEAD, ceph=True, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.schema-version=1.0, maintainer=Guillaume Abrioux , org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
GIT_BRANCH=HEAD, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.29.1, CEPH_POINT_RELEASE=-18.2.1) 2026-03-10T10:21:08.779 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:08 vm02.local bash[117403]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0 2026-03-10T10:21:09.119 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:08 vm02.local podman[117468]: 2026-03-10 10:21:08.868497636 +0000 UTC m=+0.020403928 container create e503c4d42fa118d4c0d2803ae00b93ec6c3229248ba2b168c15b82fe003b9630 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-deactivate, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T10:21:09.119 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:08 vm02.local podman[117468]: 2026-03-10 10:21:08.908514036 +0000 UTC m=+0.060420338 container init e503c4d42fa118d4c0d2803ae00b93ec6c3229248ba2b168c15b82fe003b9630 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team , 
CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0) 2026-03-10T10:21:09.119 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:08 vm02.local podman[117468]: 2026-03-10 10:21:08.911942175 +0000 UTC m=+0.063848477 container start e503c4d42fa118d4c0d2803ae00b93ec6c3229248ba2b168c15b82fe003b9630 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-deactivate, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T10:21:09.119 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:08 vm02.local podman[117468]: 2026-03-10 10:21:08.916467619 +0000 UTC m=+0.068373921 container attach e503c4d42fa118d4c0d2803ae00b93ec6c3229248ba2b168c15b82fe003b9630 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-deactivate, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20260223, io.buildah.version=1.41.3) 2026-03-10T10:21:09.119 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:08 vm02.local podman[117468]: 2026-03-10 10:21:08.859073739 +0000 UTC m=+0.010980041 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:21:09.119 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local conmon[117480]: conmon e503c4d42fa118d4c0d2 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e503c4d42fa118d4c0d2803ae00b93ec6c3229248ba2b168c15b82fe003b9630.scope/container/memory.events 2026-03-10T10:21:09.119 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local podman[117468]: 2026-03-10 10:21:09.061968851 +0000 UTC m=+0.213875153 container died e503c4d42fa118d4c0d2803ae00b93ec6c3229248ba2b168c15b82fe003b9630 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-deactivate, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T10:21:09.119 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local podman[117468]: 2026-03-10 10:21:09.088115098 +0000 UTC m=+0.240021400 container remove e503c4d42fa118d4c0d2803ae00b93ec6c3229248ba2b168c15b82fe003b9630 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-deactivate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T10:21:09.119 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.0.service: Deactivated successfully. 2026-03-10T10:21:09.119 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.0.service: Unit process 117480 (conmon) remains running after unit stopped. 2026-03-10T10:21:09.119 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local systemd[1]: Stopped Ceph osd.0 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 
2026-03-10T10:21:09.119 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.0.service: Consumed 30.822s CPU time, 528.6M memory peak. 2026-03-10T10:21:09.517 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local systemd[1]: Starting Ceph osd.0 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 2026-03-10T10:21:09.517 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:09 vm02.local ceph-mon[110129]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T10:21:09.517 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:09 vm02.local ceph-mon[110129]: osdmap e45: 6 total, 5 up, 6 in 2026-03-10T10:21:09.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:09 vm05.local ceph-mon[103593]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T10:21:09.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:09 vm05.local ceph-mon[103593]: osdmap e45: 6 total, 5 up, 6 in 2026-03-10T10:21:09.783 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local podman[117569]: 2026-03-10 10:21:09.516981582 +0000 UTC m=+0.050291673 container create 75351a651fe9d9ecc874196d677dffc3a195d160e25e7eabe94088ca55a11219 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , 
org.label-schema.schema-version=1.0, ceph=True) 2026-03-10T10:21:09.783 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local podman[117569]: 2026-03-10 10:21:09.489206693 +0000 UTC m=+0.022516804 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:21:09.783 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local podman[117569]: 2026-03-10 10:21:09.648338388 +0000 UTC m=+0.181648489 container init 75351a651fe9d9ecc874196d677dffc3a195d160e25e7eabe94088ca55a11219 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate, ceph=True, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T10:21:09.783 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local podman[117569]: 2026-03-10 10:21:09.675073118 +0000 UTC m=+0.208383219 container start 75351a651fe9d9ecc874196d677dffc3a195d160e25e7eabe94088ca55a11219 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , ceph=True, OSD_FLAVOR=default, 
org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T10:21:09.783 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local podman[117569]: 2026-03-10 10:21:09.683200758 +0000 UTC m=+0.216510859 container attach 75351a651fe9d9ecc874196d677dffc3a195d160e25e7eabe94088ca55a11219 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T10:21:10.252 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:10 vm02.local ceph-mon[110129]: pgmap v20: 65 pgs: 9 stale+active+clean, 56 active+clean; 290 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 779 KiB/s rd, 757 KiB/s wr, 209 op/s 2026-03-10T10:21:10.252 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:10 vm02.local ceph-mon[110129]: osdmap e46: 6 total, 5 up, 6 in 2026-03-10T10:21:10.252 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 
10:21:09 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate[117582]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:21:10.252 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local bash[117569]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:21:10.252 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate[117582]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:21:10.252 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:09 vm02.local bash[117569]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:21:10.509 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate[117582]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T10:21:10.509 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local bash[117569]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T10:21:10.509 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local bash[117569]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:21:10.509 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate[117582]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:21:10.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:10 vm05.local ceph-mon[103593]: pgmap v20: 65 pgs: 9 stale+active+clean, 56 active+clean; 290 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 779 KiB/s rd, 757 KiB/s wr, 209 op/s 2026-03-10T10:21:10.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:10 vm05.local ceph-mon[103593]: osdmap e46: 6 total, 5 up, 6 in 2026-03-10T10:21:10.779 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local 
ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate[117582]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:21:10.779 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local bash[117569]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:21:10.779 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate[117582]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-10T10:21:10.779 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local bash[117569]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-10T10:21:10.779 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate[117582]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-062f0611-3165-4e82-81fa-187e6d8e5366/osd-block-f90b5cc0-11ce-4915-a46a-c23fb52a4ba2 --path /var/lib/ceph/osd/ceph-0 --no-mon-config 2026-03-10T10:21:10.779 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local bash[117569]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-062f0611-3165-4e82-81fa-187e6d8e5366/osd-block-f90b5cc0-11ce-4915-a46a-c23fb52a4ba2 --path /var/lib/ceph/osd/ceph-0 --no-mon-config 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate[117582]: Running command: /usr/bin/ln -snf /dev/ceph-062f0611-3165-4e82-81fa-187e6d8e5366/osd-block-f90b5cc0-11ce-4915-a46a-c23fb52a4ba2 /var/lib/ceph/osd/ceph-0/block 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local bash[117569]: Running command: /usr/bin/ln -snf /dev/ceph-062f0611-3165-4e82-81fa-187e6d8e5366/osd-block-f90b5cc0-11ce-4915-a46a-c23fb52a4ba2 /var/lib/ceph/osd/ceph-0/block 2026-03-10T10:21:11.108 
INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate[117582]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local bash[117569]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate[117582]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local bash[117569]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate[117582]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local bash[117569]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate[117582]: --> ceph-volume lvm activate successful for osd ID: 0 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local bash[117569]: --> ceph-volume lvm activate successful for osd ID: 0 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local podman[117569]: 2026-03-10 10:21:10.890969199 +0000 UTC m=+1.424279300 container died 75351a651fe9d9ecc874196d677dffc3a195d160e25e7eabe94088ca55a11219 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team , 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:10 vm02.local podman[117569]: 2026-03-10 10:21:10.926542515 +0000 UTC m=+1.459852616 container remove 75351a651fe9d9ecc874196d677dffc3a195d160e25e7eabe94088ca55a11219 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3) 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:11 vm02.local podman[117835]: 2026-03-10 10:21:11.042826013 +0000 UTC m=+0.028601084 container create 319155aac7184bb690d71b68b867764b10891e041fd1b21825b0f1bab5557a1d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:11 vm02.local podman[117835]: 2026-03-10 10:21:11.08740432 +0000 UTC m=+0.073179391 container init 319155aac7184bb690d71b68b867764b10891e041fd1b21825b0f1bab5557a1d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:11 vm02.local podman[117835]: 2026-03-10 10:21:11.090764782 +0000 UTC m=+0.076539853 container start 319155aac7184bb690d71b68b867764b10891e041fd1b21825b0f1bab5557a1d 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:11 vm02.local bash[117835]: 319155aac7184bb690d71b68b867764b10891e041fd1b21825b0f1bab5557a1d 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:11 vm02.local podman[117835]: 2026-03-10 10:21:11.031060403 +0000 UTC m=+0.016835483 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:21:11.108 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:11 vm02.local systemd[1]: Started Ceph osd.0 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 
2026-03-10T10:21:11.948 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:11 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[117845]: 2026-03-10T10:21:11.776+0000 7fdc30517740 -1 Falling back to public interface 2026-03-10T10:21:12.259 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:12 vm02.local ceph-mon[110129]: pgmap v22: 65 pgs: 9 stale+active+clean, 56 active+clean; 290 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 974 KiB/s rd, 947 KiB/s wr, 261 op/s 2026-03-10T10:21:12.259 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:12 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:12.259 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:12 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:12.259 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:12 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:12.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:12 vm05.local ceph-mon[103593]: pgmap v22: 65 pgs: 9 stale+active+clean, 56 active+clean; 290 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 974 KiB/s rd, 947 KiB/s wr, 261 op/s 2026-03-10T10:21:12.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:12 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:12.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:12 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:12.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:12 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:14.531 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:14 vm02.local ceph-mon[110129]: pgmap v23: 65 pgs: 9 stale+active+clean, 56 active+clean; 290 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 512 KiB/s rd, 479 KiB/s wr, 138 op/s 2026-03-10T10:21:14.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:14 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:14.531 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:14 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:14.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:14 vm05.local ceph-mon[103593]: pgmap v23: 65 pgs: 9 stale+active+clean, 56 active+clean; 290 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 512 KiB/s rd, 479 KiB/s wr, 138 op/s 2026-03-10T10:21:14.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:14 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:14.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:14 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:16.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:15 vm02.local ceph-mon[110129]: pgmap v24: 65 pgs: 1 active+undersized, 33 active+undersized+degraded, 31 active+clean; 293 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 430 KiB/s rd, 1.0 MiB/s wr, 255 op/s; 7978/53826 objects degraded (14.822%) 2026-03-10T10:21:16.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:15 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:16.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:15 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:16.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:15 
vm02.local ceph-mon[110129]: Health check failed: Degraded data redundancy: 7978/53826 objects degraded (14.822%), 33 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:16.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:15 vm05.local ceph-mon[103593]: pgmap v24: 65 pgs: 1 active+undersized, 33 active+undersized+degraded, 31 active+clean; 293 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 430 KiB/s rd, 1.0 MiB/s wr, 255 op/s; 7978/53826 objects degraded (14.822%) 2026-03-10T10:21:16.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:15 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:16.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:15 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:16.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:15 vm05.local ceph-mon[103593]: Health check failed: Degraded data redundancy: 7978/53826 objects degraded (14.822%), 33 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:17.339 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[117845]: 2026-03-10T10:21:17.078+0000 7fdc30517740 -1 osd.0 0 read_superblock omap replica is missing. 
2026-03-10T10:21:17.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:17.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:17.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:21:17.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:21:17.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:17.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: pgmap v25: 65 pgs: 1 active+undersized, 33 active+undersized+degraded, 31 active+clean; 293 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 551 KiB/s wr, 115 op/s; 7978/53826 objects degraded (14.822%) 2026-03-10T10:21:17.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:17.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:21:17.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 
cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:21:17.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:21:17.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:21:17.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:21:17.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:17 vm02.local ceph-mon[110129]: Upgrade: unsafe to stop osd(s) at this time (15 PGs are or would become offline) 2026-03-10T10:21:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:21:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:21:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:17.787 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: pgmap v25: 65 pgs: 1 active+undersized, 33 active+undersized+degraded, 31 active+clean; 293 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 551 KiB/s wr, 115 op/s; 7978/53826 objects degraded (14.822%) 2026-03-10T10:21:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:21:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:21:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:21:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:21:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:21:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:17 vm05.local ceph-mon[103593]: Upgrade: unsafe to stop osd(s) at this time (15 PGs are or would become offline) 2026-03-10T10:21:18.529 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:18 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[117845]: 2026-03-10T10:21:18.128+0000 7fdc30517740 -1 osd.0 44 log_to_monitors true 2026-03-10T10:21:19.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:18 vm02.local ceph-mon[110129]: from='osd.0 [v2:192.168.123.102:6802/3585029795,v1:192.168.123.102:6803/3585029795]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T10:21:19.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:18 vm02.local ceph-mon[110129]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T10:21:19.029 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:21:18 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[117845]: 2026-03-10T10:21:18.628+0000 7fdc282b1640 -1 osd.0 44 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T10:21:19.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:18 vm05.local ceph-mon[103593]: from='osd.0 [v2:192.168.123.102:6802/3585029795,v1:192.168.123.102:6803/3585029795]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T10:21:19.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:18 vm05.local ceph-mon[103593]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T10:21:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:20 vm02.local ceph-mon[110129]: pgmap v26: 65 pgs: 1 active+undersized, 33 
active+undersized+degraded, 31 active+clean; 295 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 380 KiB/s rd, 871 KiB/s wr, 233 op/s; 7436/50139 objects degraded (14.831%) 2026-03-10T10:21:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:20 vm02.local ceph-mon[110129]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T10:21:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:20 vm02.local ceph-mon[110129]: osdmap e47: 6 total, 5 up, 6 in 2026-03-10T10:21:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:20 vm02.local ceph-mon[110129]: from='osd.0 [v2:192.168.123.102:6802/3585029795,v1:192.168.123.102:6803/3585029795]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T10:21:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:20 vm02.local ceph-mon[110129]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T10:21:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:20 vm05.local ceph-mon[103593]: pgmap v26: 65 pgs: 1 active+undersized, 33 active+undersized+degraded, 31 active+clean; 295 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 380 KiB/s rd, 871 KiB/s wr, 233 op/s; 7436/50139 objects degraded (14.831%) 2026-03-10T10:21:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:20 vm05.local ceph-mon[103593]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T10:21:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:20 vm05.local ceph-mon[103593]: osdmap e47: 6 total, 5 up, 6 in 2026-03-10T10:21:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:20 vm05.local ceph-mon[103593]: from='osd.0 
[v2:192.168.123.102:6802/3585029795,v1:192.168.123.102:6803/3585029795]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T10:21:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:20 vm05.local ceph-mon[103593]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T10:21:22.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:21 vm02.local ceph-mon[110129]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T10:21:22.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:21 vm02.local ceph-mon[110129]: osd.0 [v2:192.168.123.102:6802/3585029795,v1:192.168.123.102:6803/3585029795] boot 2026-03-10T10:21:22.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:21 vm02.local ceph-mon[110129]: osdmap e48: 6 total, 6 up, 6 in 2026-03-10T10:21:22.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:21:22.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:21 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 7436/50139 objects degraded (14.831%), 33 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:22.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:21 vm02.local ceph-mon[110129]: osdmap e49: 6 total, 6 up, 6 in 2026-03-10T10:21:22.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:21 vm05.local ceph-mon[103593]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T10:21:22.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:21 vm05.local ceph-mon[103593]: osd.0 [v2:192.168.123.102:6802/3585029795,v1:192.168.123.102:6803/3585029795] boot 2026-03-10T10:21:22.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:21:21 vm05.local ceph-mon[103593]: osdmap e48: 6 total, 6 up, 6 in 2026-03-10T10:21:22.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T10:21:22.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:21 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 7436/50139 objects degraded (14.831%), 33 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:22.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:21 vm05.local ceph-mon[103593]: osdmap e49: 6 total, 6 up, 6 in 2026-03-10T10:21:23.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:22 vm02.local ceph-mon[110129]: pgmap v30: 65 pgs: 1 active+undersized, 33 active+undersized+degraded, 31 active+clean; 295 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 742 KiB/s rd, 702 KiB/s wr, 233 op/s; 7436/50139 objects degraded (14.831%) 2026-03-10T10:21:23.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:22 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:23.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:22 vm02.local ceph-mon[110129]: osdmap e50: 6 total, 6 up, 6 in 2026-03-10T10:21:23.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:22 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:23.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:22 vm05.local ceph-mon[103593]: pgmap v30: 65 pgs: 1 active+undersized, 33 active+undersized+degraded, 31 active+clean; 295 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 742 KiB/s rd, 702 KiB/s wr, 233 op/s; 7436/50139 objects degraded (14.831%) 2026-03-10T10:21:23.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:22 vm05.local 
ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:21:23.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:22 vm05.local ceph-mon[103593]: osdmap e50: 6 total, 6 up, 6 in 2026-03-10T10:21:23.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:22 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:24.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:23 vm02.local ceph-mon[110129]: pgmap v32: 65 pgs: 3 active+recovery_wait+degraded, 1 active+undersized, 29 active+undersized+degraded, 32 active+clean; 294 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 172 KiB/s rd, 52 KiB/s wr, 93 op/s; 6074/49056 objects degraded (12.382%) 2026-03-10T10:21:24.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:23 vm02.local ceph-mon[110129]: osdmap e51: 6 total, 6 up, 6 in 2026-03-10T10:21:24.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:23 vm05.local ceph-mon[103593]: pgmap v32: 65 pgs: 3 active+recovery_wait+degraded, 1 active+undersized, 29 active+undersized+degraded, 32 active+clean; 294 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 172 KiB/s rd, 52 KiB/s wr, 93 op/s; 6074/49056 objects degraded (12.382%) 2026-03-10T10:21:24.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:23 vm05.local ceph-mon[103593]: osdmap e51: 6 total, 6 up, 6 in 2026-03-10T10:21:25.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:24 vm02.local ceph-mon[110129]: osdmap e52: 6 total, 6 up, 6 in 2026-03-10T10:21:25.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:24 vm05.local ceph-mon[103593]: osdmap e52: 6 total, 6 up, 6 in 2026-03-10T10:21:26.255 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:25 vm02.local ceph-mon[110129]: pgmap v35: 65 pgs: 1 active+undersized+remapped, 13 active+recovery_wait+degraded, 1 active+recovering, 50 active+clean; 
298 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 436 op/s; 980/45396 objects degraded (2.159%); 1.1 MiB/s, 4 objects/s recovering 2026-03-10T10:21:26.255 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:25 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 980/45396 objects degraded (2.159%), 13 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:26.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:25 vm05.local ceph-mon[103593]: pgmap v35: 65 pgs: 1 active+undersized+remapped, 13 active+recovery_wait+degraded, 1 active+recovering, 50 active+clean; 298 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 436 op/s; 980/45396 objects degraded (2.159%); 1.1 MiB/s, 4 objects/s recovering 2026-03-10T10:21:26.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:25 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 980/45396 objects degraded (2.159%), 13 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:28.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:28 vm02.local ceph-mon[110129]: pgmap v36: 65 pgs: 1 active+undersized+remapped, 13 active+recovery_wait+degraded, 1 active+recovering, 50 active+clean; 298 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 308 op/s; 980/45396 objects degraded (2.159%); 767 KiB/s, 3 objects/s recovering 2026-03-10T10:21:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:28 vm05.local ceph-mon[103593]: pgmap v36: 65 pgs: 1 active+undersized+remapped, 13 active+recovery_wait+degraded, 1 active+recovering, 50 active+clean; 298 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 308 op/s; 980/45396 objects degraded (2.159%); 767 KiB/s, 3 objects/s recovering 2026-03-10T10:21:29.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.630+0000 7f7c49abb700 1 -- 192.168.123.102:0/959678474 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7f7c44075a40 msgr2=0x7f7c44077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:29.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.630+0000 7f7c49abb700 1 --2- 192.168.123.102:0/959678474 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7c44075a40 0x7f7c44077ed0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f7c3c009230 tx=0x7f7c3c009260 comp rx=0 tx=0).stop 2026-03-10T10:21:29.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.631+0000 7f7c49abb700 1 -- 192.168.123.102:0/959678474 shutdown_connections 2026-03-10T10:21:29.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.631+0000 7f7c49abb700 1 --2- 192.168.123.102:0/959678474 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7c44075a40 0x7f7c44077ed0 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:29.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.631+0000 7f7c49abb700 1 --2- 192.168.123.102:0/959678474 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7c44072b50 0x7f7c44072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:29.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.631+0000 7f7c49abb700 1 -- 192.168.123.102:0/959678474 >> 192.168.123.102:0/959678474 conn(0x7f7c4406dae0 msgr2=0x7f7c4406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:21:29.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.632+0000 7f7c49abb700 1 -- 192.168.123.102:0/959678474 shutdown_connections 2026-03-10T10:21:29.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.632+0000 7f7c49abb700 1 -- 192.168.123.102:0/959678474 wait complete. 
2026-03-10T10:21:29.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.632+0000 7f7c49abb700 1 Processor -- start 2026-03-10T10:21:29.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.632+0000 7f7c49abb700 1 -- start start 2026-03-10T10:21:29.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.632+0000 7f7c49abb700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7c44072b50 0x7f7c44081640 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:29.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.632+0000 7f7c49abb700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7c44081b80 0x7f7c4412e0a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:29.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.632+0000 7f7c49abb700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7c44082000 con 0x7f7c44072b50 2026-03-10T10:21:29.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.632+0000 7f7c49abb700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7c44082170 con 0x7f7c44081b80 2026-03-10T10:21:29.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.633+0000 7f7c437fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7c44072b50 0x7f7c44081640 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:29.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.633+0000 7f7c437fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7c44072b50 0x7f7c44081640 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:41762/0 (socket says 192.168.123.102:41762) 2026-03-10T10:21:29.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.633+0000 7f7c437fe700 1 -- 192.168.123.102:0/1011319167 learned_addr learned my addr 192.168.123.102:0/1011319167 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:21:29.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.633+0000 7f7c42ffd700 1 --2- 192.168.123.102:0/1011319167 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7c44081b80 0x7f7c4412e0a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:29.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.634+0000 7f7c42ffd700 1 -- 192.168.123.102:0/1011319167 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7c44072b50 msgr2=0x7f7c44081640 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:29.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.634+0000 7f7c42ffd700 1 --2- 192.168.123.102:0/1011319167 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7c44072b50 0x7f7c44081640 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:29.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.634+0000 7f7c42ffd700 1 -- 192.168.123.102:0/1011319167 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7c3c008ee0 con 0x7f7c44081b80 2026-03-10T10:21:29.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.634+0000 7f7c42ffd700 1 --2- 192.168.123.102:0/1011319167 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7c44081b80 0x7f7c4412e0a0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f7c3c009200 tx=0x7f7c3c00e9f0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:21:29.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.634+0000 7f7c40ff9700 1 -- 192.168.123.102:0/1011319167 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7c3c00eea0 con 0x7f7c44081b80 2026-03-10T10:21:29.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.635+0000 7f7c49abb700 1 -- 192.168.123.102:0/1011319167 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7c4412e5e0 con 0x7f7c44081b80 2026-03-10T10:21:29.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.635+0000 7f7c49abb700 1 -- 192.168.123.102:0/1011319167 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7c4412eaa0 con 0x7f7c44081b80 2026-03-10T10:21:29.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.635+0000 7f7c40ff9700 1 -- 192.168.123.102:0/1011319167 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f7c3c005550 con 0x7f7c44081b80 2026-03-10T10:21:29.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.635+0000 7f7c49abb700 1 -- 192.168.123.102:0/1011319167 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7c4407b700 con 0x7f7c44081b80 2026-03-10T10:21:29.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.636+0000 7f7c40ff9700 1 -- 192.168.123.102:0/1011319167 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7c3c016a60 con 0x7f7c44081b80 2026-03-10T10:21:29.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.637+0000 7f7c40ff9700 1 -- 192.168.123.102:0/1011319167 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f7c3c016bc0 con 0x7f7c44081b80 2026-03-10T10:21:29.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.637+0000 
7f7c40ff9700 1 --2- 192.168.123.102:0/1011319167 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7c2c077a40 0x7f7c2c079f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:29.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.638+0000 7f7c437fe700 1 --2- 192.168.123.102:0/1011319167 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7c2c077a40 0x7f7c2c079f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:29.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.638+0000 7f7c40ff9700 1 -- 192.168.123.102:0/1011319167 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f7c3c02f080 con 0x7f7c44081b80 2026-03-10T10:21:29.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.638+0000 7f7c437fe700 1 --2- 192.168.123.102:0/1011319167 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7c2c077a40 0x7f7c2c079f00 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f7c441c0100 tx=0x7f7c3400f040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:21:29.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.640+0000 7f7c40ff9700 1 -- 192.168.123.102:0/1011319167 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f7c3c064eb0 con 0x7f7c44081b80 2026-03-10T10:21:29.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:29 vm02.local ceph-mon[110129]: pgmap v37: 65 pgs: 13 active+recovery_wait+degraded, 1 active+recovering, 51 active+clean; 294 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 923 op/s; 980/42789 objects degraded (2.290%); 671 
KiB/s, 3 objects/s recovering 2026-03-10T10:21:29.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:29 vm05.local ceph-mon[103593]: pgmap v37: 65 pgs: 13 active+recovery_wait+degraded, 1 active+recovering, 51 active+clean; 294 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 923 op/s; 980/42789 objects degraded (2.290%); 671 KiB/s, 3 objects/s recovering 2026-03-10T10:21:29.797 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.796+0000 7f7c49abb700 1 -- 192.168.123.102:0/1011319167 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f7c440611d0 con 0x7f7c2c077a40 2026-03-10T10:21:29.803 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.800+0000 7f7c40ff9700 1 -- 192.168.123.102:0/1011319167 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f7c440611d0 con 0x7f7c2c077a40 2026-03-10T10:21:29.804 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.804+0000 7f7c2a7fc700 1 -- 192.168.123.102:0/1011319167 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7c2c077a40 msgr2=0x7f7c2c079f00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:29.804 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.804+0000 7f7c2a7fc700 1 --2- 192.168.123.102:0/1011319167 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7c2c077a40 0x7f7c2c079f00 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f7c441c0100 tx=0x7f7c3400f040 comp rx=0 tx=0).stop 2026-03-10T10:21:29.804 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.804+0000 7f7c2a7fc700 1 -- 192.168.123.102:0/1011319167 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7c44081b80 msgr2=0x7f7c4412e0a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-10T10:21:29.804 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.804+0000 7f7c2a7fc700 1 --2- 192.168.123.102:0/1011319167 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7c44081b80 0x7f7c4412e0a0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f7c3c009200 tx=0x7f7c3c00e9f0 comp rx=0 tx=0).stop 2026-03-10T10:21:29.804 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.804+0000 7f7c2a7fc700 1 -- 192.168.123.102:0/1011319167 shutdown_connections 2026-03-10T10:21:29.804 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.804+0000 7f7c2a7fc700 1 --2- 192.168.123.102:0/1011319167 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7c44072b50 0x7f7c44081640 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:29.804 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.804+0000 7f7c2a7fc700 1 --2- 192.168.123.102:0/1011319167 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7c2c077a40 0x7f7c2c079f00 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:29.804 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.804+0000 7f7c2a7fc700 1 --2- 192.168.123.102:0/1011319167 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7c44081b80 0x7f7c4412e0a0 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:29.804 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.804+0000 7f7c2a7fc700 1 -- 192.168.123.102:0/1011319167 >> 192.168.123.102:0/1011319167 conn(0x7f7c4406dae0 msgr2=0x7f7c4406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:21:29.804 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.804+0000 7f7c2a7fc700 1 -- 192.168.123.102:0/1011319167 shutdown_connections 2026-03-10T10:21:29.804 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.804+0000 
7f7c2a7fc700 1 -- 192.168.123.102:0/1011319167 wait complete. 2026-03-10T10:21:29.817 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:21:29.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.906+0000 7f63b85b6700 1 -- 192.168.123.102:0/1405459155 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b0075a40 msgr2=0x7f63b0077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:29.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.906+0000 7f63b85b6700 1 --2- 192.168.123.102:0/1405459155 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b0075a40 0x7f63b0077ed0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f63a800d3f0 tx=0x7f63a800d700 comp rx=0 tx=0).stop 2026-03-10T10:21:29.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.906+0000 7f63b85b6700 1 -- 192.168.123.102:0/1405459155 shutdown_connections 2026-03-10T10:21:29.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.906+0000 7f63b85b6700 1 --2- 192.168.123.102:0/1405459155 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b0075a40 0x7f63b0077ed0 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:29.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.906+0000 7f63b85b6700 1 --2- 192.168.123.102:0/1405459155 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b0072b50 0x7f63b0072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:29.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.906+0000 7f63b85b6700 1 -- 192.168.123.102:0/1405459155 >> 192.168.123.102:0/1405459155 conn(0x7f63b006dae0 msgr2=0x7f63b006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:21:29.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.906+0000 7f63b85b6700 1 -- 192.168.123.102:0/1405459155 shutdown_connections 
2026-03-10T10:21:29.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.906+0000 7f63b85b6700 1 -- 192.168.123.102:0/1405459155 wait complete. 2026-03-10T10:21:29.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.907+0000 7f63b85b6700 1 Processor -- start 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.907+0000 7f63b85b6700 1 -- start start 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.907+0000 7f63b85b6700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b0072b50 0x7f63b0082f60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.907+0000 7f63b85b6700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b00834a0 0x7f63b0083920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.907+0000 7f63b85b6700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f63b012e700 con 0x7f63b0072b50 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.907+0000 7f63b85b6700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f63b012e870 con 0x7f63b00834a0 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.907+0000 7f63b5b51700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b00834a0 0x7f63b0083920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.907+0000 7f63b5b51700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b00834a0 
0x7f63b0083920 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:57934/0 (socket says 192.168.123.102:57934) 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.907+0000 7f63b5b51700 1 -- 192.168.123.102:0/2388221324 learned_addr learned my addr 192.168.123.102:0/2388221324 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.907+0000 7f63b5b51700 1 -- 192.168.123.102:0/2388221324 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b0072b50 msgr2=0x7f63b0082f60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.907+0000 7f63b5b51700 1 --2- 192.168.123.102:0/2388221324 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b0072b50 0x7f63b0082f60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.907+0000 7f63b5b51700 1 -- 192.168.123.102:0/2388221324 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f63a8007ed0 con 0x7f63b00834a0 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.908+0000 7f63b5b51700 1 --2- 192.168.123.102:0/2388221324 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b00834a0 0x7f63b0083920 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f63a8003c60 tx=0x7f63a8003d40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:21:29.908 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.908+0000 7f63a77fe700 1 -- 192.168.123.102:0/2388221324 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 
0 0 0) 0x7f63a801c070 con 0x7f63b00834a0 2026-03-10T10:21:29.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.908+0000 7f63b85b6700 1 -- 192.168.123.102:0/2388221324 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f63b012eaf0 con 0x7f63b00834a0 2026-03-10T10:21:29.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.908+0000 7f63b85b6700 1 -- 192.168.123.102:0/2388221324 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f63b012efe0 con 0x7f63b00834a0 2026-03-10T10:21:29.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.909+0000 7f63a77fe700 1 -- 192.168.123.102:0/2388221324 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f63a800fb40 con 0x7f63b00834a0 2026-03-10T10:21:29.909 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.909+0000 7f63a77fe700 1 -- 192.168.123.102:0/2388221324 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f63a8021bd0 con 0x7f63b00834a0 2026-03-10T10:21:29.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.910+0000 7f63a77fe700 1 -- 192.168.123.102:0/2388221324 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f63a800fcb0 con 0x7f63b00834a0 2026-03-10T10:21:29.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.911+0000 7f63a77fe700 1 --2- 192.168.123.102:0/2388221324 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f639c077a40 0x7f639c079f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:29.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.911+0000 7f63a77fe700 1 -- 192.168.123.102:0/2388221324 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f63a8013070 con 0x7f63b00834a0 
2026-03-10T10:21:29.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.911+0000 7f63b6352700 1 --2- 192.168.123.102:0/2388221324 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f639c077a40 0x7f639c079f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:29.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.912+0000 7f63b85b6700 1 -- 192.168.123.102:0/2388221324 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6394005320 con 0x7f63b00834a0 2026-03-10T10:21:29.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.913+0000 7f63b6352700 1 --2- 192.168.123.102:0/2388221324 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f639c077a40 0x7f639c079f00 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f63ac005950 tx=0x7f63ac0058e0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:21:29.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:29.915+0000 7f63a77fe700 1 -- 192.168.123.102:0/2388221324 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f63a8063ea0 con 0x7f63b00834a0 2026-03-10T10:21:30.085 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.085+0000 7f63b85b6700 1 -- 192.168.123.102:0/2388221324 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f6394000bf0 con 0x7f639c077a40 2026-03-10T10:21:30.088 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.086+0000 7f63a77fe700 1 -- 192.168.123.102:0/2388221324 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 
0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f6394000bf0 con 0x7f639c077a40 2026-03-10T10:21:30.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.089+0000 7f63a57fa700 1 -- 192.168.123.102:0/2388221324 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f639c077a40 msgr2=0x7f639c079f00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:30.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.089+0000 7f63a57fa700 1 --2- 192.168.123.102:0/2388221324 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f639c077a40 0x7f639c079f00 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f63ac005950 tx=0x7f63ac0058e0 comp rx=0 tx=0).stop 2026-03-10T10:21:30.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.090+0000 7f63a57fa700 1 -- 192.168.123.102:0/2388221324 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b00834a0 msgr2=0x7f63b0083920 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:30.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.090+0000 7f63a57fa700 1 --2- 192.168.123.102:0/2388221324 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b00834a0 0x7f63b0083920 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f63a8003c60 tx=0x7f63a8003d40 comp rx=0 tx=0).stop 2026-03-10T10:21:30.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.090+0000 7f63a57fa700 1 -- 192.168.123.102:0/2388221324 shutdown_connections 2026-03-10T10:21:30.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.090+0000 7f63a57fa700 1 --2- 192.168.123.102:0/2388221324 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f63b0072b50 0x7f63b0082f60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:30.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.090+0000 7f63a57fa700 1 --2- 
192.168.123.102:0/2388221324 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f639c077a40 0x7f639c079f00 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:30.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.090+0000 7f63a57fa700 1 --2- 192.168.123.102:0/2388221324 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f63b00834a0 0x7f63b0083920 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:30.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.090+0000 7f63a57fa700 1 -- 192.168.123.102:0/2388221324 >> 192.168.123.102:0/2388221324 conn(0x7f63b006dae0 msgr2=0x7f63b006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:21:30.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.090+0000 7f63a57fa700 1 -- 192.168.123.102:0/2388221324 shutdown_connections 2026-03-10T10:21:30.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.090+0000 7f63a57fa700 1 -- 192.168.123.102:0/2388221324 wait complete. 
2026-03-10T10:21:30.187 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.186+0000 7f926dea8700 1 -- 192.168.123.102:0/3244838751 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9268075a40 msgr2=0x7f9268077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:30.187 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.186+0000 7f926dea8700 1 --2- 192.168.123.102:0/3244838751 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9268075a40 0x7f9268077ed0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f926000d3f0 tx=0x7f926000d700 comp rx=0 tx=0).stop 2026-03-10T10:21:30.187 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.187+0000 7f926dea8700 1 -- 192.168.123.102:0/3244838751 shutdown_connections 2026-03-10T10:21:30.187 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.187+0000 7f926dea8700 1 --2- 192.168.123.102:0/3244838751 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9268075a40 0x7f9268077ed0 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:30.187 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.187+0000 7f926dea8700 1 --2- 192.168.123.102:0/3244838751 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9268072b50 0x7f9268072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:30.187 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.187+0000 7f926dea8700 1 -- 192.168.123.102:0/3244838751 >> 192.168.123.102:0/3244838751 conn(0x7f926806dae0 msgr2=0x7f926806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:21:30.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.187+0000 7f926dea8700 1 -- 192.168.123.102:0/3244838751 shutdown_connections 2026-03-10T10:21:30.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.187+0000 7f926dea8700 1 -- 192.168.123.102:0/3244838751 
wait complete. 2026-03-10T10:21:30.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.187+0000 7f926dea8700 1 Processor -- start 2026-03-10T10:21:30.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.187+0000 7f926dea8700 1 -- start start 2026-03-10T10:21:30.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.187+0000 7f926dea8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9268072b50 0x7f9268083100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:30.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.187+0000 7f926dea8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9268083640 0x7f92681b30f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:30.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.187+0000 7f926dea8700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9268083b50 con 0x7f9268083640 2026-03-10T10:21:30.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.187+0000 7f926dea8700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9268083cc0 con 0x7f9268072b50 2026-03-10T10:21:30.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.188+0000 7f9266ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9268083640 0x7f92681b30f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:30.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.188+0000 7f92677fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9268072b50 0x7f9268083100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-10T10:21:30.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.188+0000 7f92677fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9268072b50 0x7f9268083100 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:57938/0 (socket says 192.168.123.102:57938) 2026-03-10T10:21:30.189 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.188+0000 7f92677fe700 1 -- 192.168.123.102:0/4027480213 learned_addr learned my addr 192.168.123.102:0/4027480213 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:21:30.189 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.188+0000 7f92677fe700 1 -- 192.168.123.102:0/4027480213 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9268083640 msgr2=0x7f92681b30f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:30.189 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.188+0000 7f92677fe700 1 --2- 192.168.123.102:0/4027480213 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9268083640 0x7f92681b30f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:30.189 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.188+0000 7f92677fe700 1 -- 192.168.123.102:0/4027480213 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9260007ed0 con 0x7f9268072b50 2026-03-10T10:21:30.189 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.188+0000 7f92677fe700 1 --2- 192.168.123.102:0/4027480213 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9268072b50 0x7f9268083100 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f925800dde0 tx=0x7f9258009520 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:21:30.189 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.189+0000 7f9264ff9700 1 -- 192.168.123.102:0/4027480213 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9258009ad0 con 0x7f9268072b50 2026-03-10T10:21:30.190 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.189+0000 7f926dea8700 1 -- 192.168.123.102:0/4027480213 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f92681b36f0 con 0x7f9268072b50 2026-03-10T10:21:30.190 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.189+0000 7f926dea8700 1 -- 192.168.123.102:0/4027480213 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f92681b3c40 con 0x7f9268072b50 2026-03-10T10:21:30.190 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.190+0000 7f9264ff9700 1 -- 192.168.123.102:0/4027480213 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9258009c30 con 0x7f9268072b50 2026-03-10T10:21:30.190 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.190+0000 7f9264ff9700 1 -- 192.168.123.102:0/4027480213 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f925800c650 con 0x7f9268072b50 2026-03-10T10:21:30.191 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.191+0000 7f9264ff9700 1 -- 192.168.123.102:0/4027480213 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f925800c8f0 con 0x7f9268072b50 2026-03-10T10:21:30.192 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.191+0000 7f9264ff9700 1 --2- 192.168.123.102:0/4027480213 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9250077b00 0x7f9250079fc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:30.192 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.192+0000 7f9264ff9700 1 -- 192.168.123.102:0/4027480213 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f92580a1820 con 0x7f9268072b50 2026-03-10T10:21:30.192 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.192+0000 7f9266ffd700 1 --2- 192.168.123.102:0/4027480213 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9250077b00 0x7f9250079fc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:30.193 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.193+0000 7f926dea8700 1 -- 192.168.123.102:0/4027480213 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9254005320 con 0x7f9268072b50 2026-03-10T10:21:30.193 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.193+0000 7f9266ffd700 1 --2- 192.168.123.102:0/4027480213 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9250077b00 0x7f9250079fc0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f9260000f80 tx=0x7f926000db00 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:21:30.199 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.196+0000 7f9264ff9700 1 -- 192.168.123.102:0/4027480213 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9258069f10 con 0x7f9268072b50 2026-03-10T10:21:30.344 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.344+0000 7f926dea8700 1 -- 192.168.123.102:0/4027480213 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 
0x7f9254000bf0 con 0x7f9250077b00 2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (5m) 16s ago 6m 23.2M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (6m) 16s ago 6m 8820k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (6m) 29s ago 6m 11.1M - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (32s) 16s ago 6m 7838k - 19.2.3-678-ge911bdeb 654f31e6858e c494730ab019 2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (30s) 29s ago 6m 7852k - 19.2.3-678-ge911bdeb 654f31e6858e 1dc17b49fee4 2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (5m) 16s ago 6m 88.8M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (4m) 16s ago 4m 15.9M - 18.2.1 5be31c24972a e97c369450c8 2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (4m) 16s ago 4m 237M - 18.2.1 5be31c24972a 56b76ae59bcb 2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (4m) 29s ago 4m 15.9M - 18.2.1 5be31c24972a 02b882918ab0 2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (4m) 29s ago 4m 146M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (92s) 16s ago 7m 614M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7 2026-03-10T10:21:30.356 
INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (67s) 29s ago 6m 487M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66
2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (60s) 16s ago 7m 56.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a2a2cb182f4
2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (46s) 29s ago 6m 49.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 3fb75dafefb6
2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (6m) 16s ago 6m 16.3M - 1.5.0 0da6a335fe13 745b21ae6768
2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (6m) 29s ago 6m 15.4M - 1.5.0 0da6a335fe13 2453c8484ba5
2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (19s) 16s ago 5m 30.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 319155aac718
2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (5m) 16s ago 5m 363M 4096M 18.2.1 5be31c24972a 1b0a42d8ac01
2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (5m) 16s ago 5m 305M 4096M 18.2.1 5be31c24972a 567f579c058e
2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (5m) 29s ago 5m 426M 4096M 18.2.1 5be31c24972a 80ac26035893
2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (5m) 29s ago 5m 407M 4096M 18.2.1 5be31c24972a c8a0a41b6654
2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (5m) 29s ago 5m 319M 4096M 18.2.1 5be31c24972a e9be055e12ba
2026-03-10T10:21:30.356 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (70s) 16s ago 6m 47.7M - 2.43.0 a07b618ecd1d 5ebb885bd417
2026-03-10T10:21:30.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.352+0000 7f9264ff9700 1 -- 192.168.123.102:0/4027480213 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3552 (secure 0 0 0) 0x7f9254000bf0 con 0x7f9250077b00
2026-03-10T10:21:30.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.356+0000 7f924e7fc700 1 -- 192.168.123.102:0/4027480213 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9250077b00 msgr2=0x7f9250079fc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:30.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.356+0000 7f924e7fc700 1 --2- 192.168.123.102:0/4027480213 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9250077b00 0x7f9250079fc0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f9260000f80 tx=0x7f926000db00 comp rx=0 tx=0).stop
2026-03-10T10:21:30.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.356+0000 7f924e7fc700 1 -- 192.168.123.102:0/4027480213 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9268072b50 msgr2=0x7f9268083100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:30.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.356+0000 7f924e7fc700 1 --2- 192.168.123.102:0/4027480213 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9268072b50 0x7f9268083100 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f925800dde0 tx=0x7f9258009520 comp rx=0 tx=0).stop
2026-03-10T10:21:30.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.356+0000 7f924e7fc700 1 -- 192.168.123.102:0/4027480213 shutdown_connections
2026-03-10T10:21:30.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.356+0000 7f924e7fc700 1 --2- 192.168.123.102:0/4027480213 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9250077b00 0x7f9250079fc0 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.356+0000 7f924e7fc700 1 --2- 192.168.123.102:0/4027480213 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9268072b50 0x7f9268083100 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.356+0000 7f924e7fc700 1 --2- 192.168.123.102:0/4027480213 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9268083640 0x7f92681b30f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.356+0000 7f924e7fc700 1 -- 192.168.123.102:0/4027480213 >> 192.168.123.102:0/4027480213 conn(0x7f926806dae0 msgr2=0x7f926806ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:21:30.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.356+0000 7f924e7fc700 1 -- 192.168.123.102:0/4027480213 shutdown_connections
2026-03-10T10:21:30.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.356+0000 7f924e7fc700 1 -- 192.168.123.102:0/4027480213 wait complete.
2026-03-10T10:21:30.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.455+0000 7fb02f0d1700 1 -- 192.168.123.102:0/1346694963 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb028075a40 msgr2=0x7fb028077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:30.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.455+0000 7fb02f0d1700 1 --2- 192.168.123.102:0/1346694963 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb028075a40 0x7fb028077ed0 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fb02000d3f0 tx=0x7fb02000d700 comp rx=0 tx=0).stop
2026-03-10T10:21:30.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.456+0000 7fb02f0d1700 1 -- 192.168.123.102:0/1346694963 shutdown_connections
2026-03-10T10:21:30.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.456+0000 7fb02f0d1700 1 --2- 192.168.123.102:0/1346694963 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb028075a40 0x7fb028077ed0 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.456+0000 7fb02f0d1700 1 --2- 192.168.123.102:0/1346694963 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb028072b50 0x7fb028072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.456+0000 7fb02f0d1700 1 -- 192.168.123.102:0/1346694963 >> 192.168.123.102:0/1346694963 conn(0x7fb02806dae0 msgr2=0x7fb02806ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:21:30.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.456+0000 7fb02f0d1700 1 -- 192.168.123.102:0/1346694963 shutdown_connections
2026-03-10T10:21:30.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.456+0000 7fb02f0d1700 1 -- 192.168.123.102:0/1346694963 wait complete.
2026-03-10T10:21:30.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.456+0000 7fb02f0d1700 1 Processor -- start
2026-03-10T10:21:30.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.456+0000 7fb02f0d1700 1 -- start start
2026-03-10T10:21:30.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.456+0000 7fb02f0d1700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb028072b50 0x7fb028083100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:21:30.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.456+0000 7fb02f0d1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb028083640 0x7fb02812e400 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:21:30.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.456+0000 7fb02f0d1700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb028083b80 con 0x7fb028072b50
2026-03-10T10:21:30.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.456+0000 7fb02f0d1700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb028083cf0 con 0x7fb028083640
2026-03-10T10:21:30.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.457+0000 7fb02ce6d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb028072b50 0x7fb028083100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:21:30.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.457+0000 7fb02ce6d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb028072b50 0x7fb028083100 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:41800/0 (socket says 192.168.123.102:41800)
2026-03-10T10:21:30.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.457+0000 7fb02ce6d700 1 -- 192.168.123.102:0/19024505 learned_addr learned my addr 192.168.123.102:0/19024505 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:21:30.457 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.457+0000 7fb027fff700 1 --2- 192.168.123.102:0/19024505 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb028083640 0x7fb02812e400 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:21:30.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.457+0000 7fb027fff700 1 -- 192.168.123.102:0/19024505 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb028072b50 msgr2=0x7fb028083100 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:30.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.457+0000 7fb027fff700 1 --2- 192.168.123.102:0/19024505 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb028072b50 0x7fb028083100 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.458+0000 7fb027fff700 1 -- 192.168.123.102:0/19024505 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb020007ed0 con 0x7fb028083640
2026-03-10T10:21:30.458 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.458+0000 7fb027fff700 1 --2- 192.168.123.102:0/19024505 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb028083640 0x7fb02812e400 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fb020000f80 tx=0x7fb020003c30 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:21:30.460 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.458+0000 7fb025ffb700 1 -- 192.168.123.102:0/19024505 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb02001c070 con 0x7fb028083640
2026-03-10T10:21:30.460 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.458+0000 7fb02f0d1700 1 -- 192.168.123.102:0/19024505 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb02812ea00 con 0x7fb028083640
2026-03-10T10:21:30.460 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.458+0000 7fb02f0d1700 1 -- 192.168.123.102:0/19024505 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb02812eea0 con 0x7fb028083640
2026-03-10T10:21:30.460 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.459+0000 7fb025ffb700 1 -- 192.168.123.102:0/19024505 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb0200042e0 con 0x7fb028083640
2026-03-10T10:21:30.460 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.459+0000 7fb025ffb700 1 -- 192.168.123.102:0/19024505 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb020021880 con 0x7fb028083640
2026-03-10T10:21:30.460 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.460+0000 7fb02f0d1700 1 -- 192.168.123.102:0/19024505 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb014005320 con 0x7fb028083640
2026-03-10T10:21:30.461 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.461+0000 7fb025ffb700 1 -- 192.168.123.102:0/19024505 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fb02000f660 con 0x7fb028083640
2026-03-10T10:21:30.461 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.461+0000 7fb025ffb700 1 --2- 192.168.123.102:0/19024505 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb010077a40 0x7fb010079f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:21:30.461 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.461+0000 7fb025ffb700 1 -- 192.168.123.102:0/19024505 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fb020013070 con 0x7fb028083640
2026-03-10T10:21:30.461 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.461+0000 7fb02ce6d700 1 --2- 192.168.123.102:0/19024505 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb010077a40 0x7fb010079f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:21:30.462 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.462+0000 7fb02ce6d700 1 --2- 192.168.123.102:0/19024505 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb010077a40 0x7fb010079f00 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7fb018009d30 tx=0x7fb0180094b0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:21:30.468 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.464+0000 7fb025ffb700 1 -- 192.168.123.102:0/19024505 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fb020064060 con 0x7fb028083640
2026-03-10T10:21:30.663 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.662+0000 7fb02f0d1700 1 -- 192.168.123.102:0/19024505 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fb014005cc0 con 0x7fb028083640
2026-03-10T10:21:30.663 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:30 vm02.local ceph-mon[110129]: from='client.44129 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:21:30.663 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:30 vm02.local ceph-mon[110129]: from='client.44133 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:21:30.663 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:30 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 980/42789 objects degraded (2.290%), 13 pgs degraded (PG_DEGRADED)
2026-03-10T10:21:30.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.667+0000 7fb025ffb700 1 -- 192.168.123.102:0/19024505 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+799 (secure 0 0 0) 0x7fb020007480 con 0x7fb028083640
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: "mon": {
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": {
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: "osd": {
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 5,
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: "mds": {
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: "overall": {
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 9,
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout: }
2026-03-10T10:21:30.668 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T10:21:30.671 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.670+0000 7fb00f7fe700 1 -- 192.168.123.102:0/19024505 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb010077a40 msgr2=0x7fb010079f00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:30.673 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.673+0000 7fb00f7fe700 1 --2- 192.168.123.102:0/19024505 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb010077a40 0x7fb010079f00 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7fb018009d30 tx=0x7fb0180094b0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.673 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.673+0000 7fb00f7fe700 1 -- 192.168.123.102:0/19024505 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb028083640 msgr2=0x7fb02812e400 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:30.673 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.673+0000 7fb00f7fe700 1 --2- 192.168.123.102:0/19024505 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb028083640 0x7fb02812e400 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fb020000f80 tx=0x7fb020003c30 comp rx=0 tx=0).stop
2026-03-10T10:21:30.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.673+0000 7fb00f7fe700 1 -- 192.168.123.102:0/19024505 shutdown_connections
2026-03-10T10:21:30.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.673+0000 7fb00f7fe700 1 --2- 192.168.123.102:0/19024505 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb028072b50 0x7fb028083100 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.673+0000 7fb00f7fe700 1 --2- 192.168.123.102:0/19024505 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb010077a40 0x7fb010079f00 unknown :-1 s=CLOSED pgs=31 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.673+0000 7fb00f7fe700 1 --2- 192.168.123.102:0/19024505 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb028083640 0x7fb02812e400 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.673+0000 7fb00f7fe700 1 -- 192.168.123.102:0/19024505 >> 192.168.123.102:0/19024505 conn(0x7fb02806dae0 msgr2=0x7fb02806ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:21:30.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.673+0000 7fb00f7fe700 1 -- 192.168.123.102:0/19024505 shutdown_connections
2026-03-10T10:21:30.674 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.673+0000 7fb00f7fe700 1 -- 192.168.123.102:0/19024505 wait complete.
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 -- 192.168.123.102:0/11524287 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d14075a40 msgr2=0x7f9d14077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 --2- 192.168.123.102:0/11524287 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d14075a40 0x7f9d14077ed0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f9d0c00d3f0 tx=0x7f9d0c00d700 comp rx=0 tx=0).stop
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 -- 192.168.123.102:0/11524287 shutdown_connections
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 --2- 192.168.123.102:0/11524287 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d14075a40 0x7f9d14077ed0 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 --2- 192.168.123.102:0/11524287 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d14072b50 0x7f9d14072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 -- 192.168.123.102:0/11524287 >> 192.168.123.102:0/11524287 conn(0x7f9d1406dae0 msgr2=0x7f9d1406ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 -- 192.168.123.102:0/11524287 shutdown_connections
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 -- 192.168.123.102:0/11524287 wait complete.
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 Processor -- start
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 -- start start
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d14072b50 0x7f9d14082f60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d140834a0 0x7f9d14083920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9d1412e700 con 0x7f9d14072b50
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.767+0000 7f9d1c469700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9d1412e870 con 0x7f9d140834a0
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.768+0000 7f9d1a205700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d14072b50 0x7f9d14082f60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.768+0000 7f9d19a04700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d140834a0 0x7f9d14083920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.768+0000 7f9d19a04700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d140834a0 0x7f9d14083920 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:57990/0 (socket says 192.168.123.102:57990)
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.768+0000 7f9d19a04700 1 -- 192.168.123.102:0/76557391 learned_addr learned my addr 192.168.123.102:0/76557391 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.768+0000 7f9d19a04700 1 -- 192.168.123.102:0/76557391 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d14072b50 msgr2=0x7f9d14082f60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:30.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.768+0000 7f9d19a04700 1 --2- 192.168.123.102:0/76557391 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d14072b50 0x7f9d14082f60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:30.770 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.768+0000 7f9d19a04700 1 -- 192.168.123.102:0/76557391 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9d0c007ed0 con 0x7f9d140834a0
2026-03-10T10:21:30.770 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.768+0000 7f9d19a04700 1 --2- 192.168.123.102:0/76557391 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d140834a0 0x7f9d14083920 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f9d0c003c60 tx=0x7f9d0c003d40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:21:30.770 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.768+0000 7f9d0b7fe700 1 -- 192.168.123.102:0/76557391 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9d0c01c070 con 0x7f9d140834a0
2026-03-10T10:21:30.770 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.769+0000 7f9d1c469700 1 -- 192.168.123.102:0/76557391 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9d1412eaf0 con 0x7f9d140834a0
2026-03-10T10:21:30.770 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.769+0000 7f9d1c469700 1 -- 192.168.123.102:0/76557391 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9d1412efe0 con 0x7f9d140834a0
2026-03-10T10:21:30.770 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.769+0000 7f9d0b7fe700 1 -- 192.168.123.102:0/76557391 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9d0c00fb40 con 0x7f9d140834a0
2026-03-10T10:21:30.770 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.769+0000 7f9d0b7fe700 1 -- 192.168.123.102:0/76557391 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9d0c004580 con 0x7f9d140834a0
2026-03-10T10:21:30.771 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.771+0000 7f9d0b7fe700 1 -- 192.168.123.102:0/76557391 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f9d0c021a00 con 0x7f9d140834a0
2026-03-10T10:21:30.773 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.773+0000 7f9d0b7fe700 1 --2- 192.168.123.102:0/76557391 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9d00077a40 0x7f9d00079f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:21:30.774 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.773+0000 7f9d1a205700 1 --2- 192.168.123.102:0/76557391 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9d00077a40 0x7f9d00079f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:21:30.774 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.773+0000 7f9d0b7fe700 1 -- 192.168.123.102:0/76557391 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f9d0c013070 con 0x7f9d140834a0
2026-03-10T10:21:30.774 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.774+0000 7f9d1a205700 1 --2- 192.168.123.102:0/76557391 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9d00077a40 0x7f9d00079f00 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f9d10005950 tx=0x7f9d100058e0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:21:30.774 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.774+0000 7f9d1c469700 1 -- 192.168.123.102:0/76557391 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9cf8005320 con 0x7f9d140834a0
2026-03-10T10:21:30.779 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.777+0000 7f9d0b7fe700 1 -- 192.168.123.102:0/76557391 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9d0c064d50 con 0x7f9d140834a0
2026-03-10T10:21:30.997 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:30.996+0000 7f9d1c469700 1 -- 192.168.123.102:0/76557391 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f9cf8005cc0 con 0x7f9d140834a0
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.000+0000 7f9d0b7fe700 1 -- 192.168.123.102:0/76557391 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1945 (secure 0 0 0) 0x7f9d0c0267b0 con 0x7f9d140834a0
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:e15
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:btime 1970-01-01T00:00:00:000000+0000
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1)
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:root 0
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776
2026-03-10T10:21:31.001 INFO:teuthology.orchestra.run.vm02.stdout:max_xattr_size 65536
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {}
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:in 0
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464}
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:failed
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:damaged
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:stopped
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3]
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:balancer
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:qdb_cluster leader: 0 members:
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}]
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}]
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons:
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}]
2026-03-10T10:21:31.002 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}]
2026-03-10T10:21:31.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.006+0000 7f9d097fa700 1 -- 192.168.123.102:0/76557391 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9d00077a40 msgr2=0x7f9d00079f00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:31.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.006+0000 7f9d097fa700 1 --2- 192.168.123.102:0/76557391 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9d00077a40 0x7f9d00079f00 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f9d10005950 tx=0x7f9d100058e0 comp rx=0 tx=0).stop
2026-03-10T10:21:31.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.006+0000 7f9d097fa700 1 -- 192.168.123.102:0/76557391 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d140834a0 msgr2=0x7f9d14083920 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:31.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.006+0000 7f9d097fa700 1 --2- 192.168.123.102:0/76557391 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d140834a0 0x7f9d14083920 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f9d0c003c60 tx=0x7f9d0c003d40 comp rx=0 tx=0).stop
2026-03-10T10:21:31.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.007+0000 7f9d097fa700 1 -- 192.168.123.102:0/76557391 shutdown_connections
2026-03-10T10:21:31.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.007+0000 7f9d097fa700 1 --2- 192.168.123.102:0/76557391 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d14072b50 0x7f9d14082f60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:31.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.007+0000 7f9d097fa700 1 --2- 192.168.123.102:0/76557391 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9d00077a40 0x7f9d00079f00 unknown :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:31.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.007+0000 7f9d097fa700 1 --2- 192.168.123.102:0/76557391 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d140834a0 0x7f9d14083920 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:31.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.007+0000 7f9d097fa700 1 -- 192.168.123.102:0/76557391 >> 192.168.123.102:0/76557391 conn(0x7f9d1406dae0 msgr2=0x7f9d1406ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:21:31.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.007+0000 7f9d097fa700 1 -- 192.168.123.102:0/76557391 shutdown_connections
2026-03-10T10:21:31.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.007+0000 7f9d097fa700 1 -- 192.168.123.102:0/76557391 wait complete.
2026-03-10T10:21:31.010 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15
2026-03-10T10:21:31.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:30 vm05.local ceph-mon[103593]: from='client.44129 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:21:31.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:30 vm05.local ceph-mon[103593]: from='client.44133 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:21:31.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:30 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 980/42789 objects degraded (2.290%), 13 pgs degraded (PG_DEGRADED)
2026-03-10T10:21:31.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.114+0000 7f8c5bd52700 1 -- 192.168.123.102:0/141539761 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c54075a40 msgr2=0x7f8c54077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:31.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.114+0000 7f8c5bd52700 1 --2- 192.168.123.102:0/141539761 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c54075a40 0x7f8c54077ed0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f8c4c009230 tx=0x7f8c4c009260 comp rx=0 tx=0).stop
2026-03-10T10:21:31.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.115+0000 7f8c5bd52700 1 -- 192.168.123.102:0/141539761 shutdown_connections
2026-03-10T10:21:31.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.115+0000 7f8c5bd52700 1 --2- 192.168.123.102:0/141539761 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c54075a40 0x7f8c54077ed0 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:31.115
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.115+0000 7f8c5bd52700 1 --2- 192.168.123.102:0/141539761 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c54072b50 0x7f8c54072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:31.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.115+0000 7f8c5bd52700 1 -- 192.168.123.102:0/141539761 >> 192.168.123.102:0/141539761 conn(0x7f8c5406dae0 msgr2=0x7f8c5406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:21:31.115 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.115+0000 7f8c5bd52700 1 -- 192.168.123.102:0/141539761 shutdown_connections 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.115+0000 7f8c5bd52700 1 -- 192.168.123.102:0/141539761 wait complete. 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.115+0000 7f8c5bd52700 1 Processor -- start 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.115+0000 7f8c5bd52700 1 -- start start 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.115+0000 7f8c5bd52700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c54072b50 0x7f8c54083120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.115+0000 7f8c5bd52700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c54083660 0x7f8c5412e470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.115+0000 7f8c5bd52700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8c54083b70 con 0x7f8c54083660 2026-03-10T10:21:31.116 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.115+0000 7f8c5bd52700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8c54083ce0 con 0x7f8c54072b50 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.116+0000 7f8c59aee700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c54072b50 0x7f8c54083120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.116+0000 7f8c59aee700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c54072b50 0x7f8c54083120 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:58016/0 (socket says 192.168.123.102:58016) 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.116+0000 7f8c59aee700 1 -- 192.168.123.102:0/386935905 learned_addr learned my addr 192.168.123.102:0/386935905 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.116+0000 7f8c592ed700 1 --2- 192.168.123.102:0/386935905 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c54083660 0x7f8c5412e470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.116+0000 7f8c59aee700 1 -- 192.168.123.102:0/386935905 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c54083660 msgr2=0x7f8c5412e470 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.116+0000 7f8c59aee700 1 --2- 
192.168.123.102:0/386935905 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c54083660 0x7f8c5412e470 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.116+0000 7f8c59aee700 1 -- 192.168.123.102:0/386935905 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8c4c008ee0 con 0x7f8c54072b50 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.116+0000 7f8c59aee700 1 --2- 192.168.123.102:0/386935905 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c54072b50 0x7f8c54083120 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f8c5000bfd0 tx=0x7f8c50009d70 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:21:31.116 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.116+0000 7f8c4affd700 1 -- 192.168.123.102:0/386935905 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8c50010040 con 0x7f8c54072b50 2026-03-10T10:21:31.117 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.116+0000 7f8c5bd52700 1 -- 192.168.123.102:0/386935905 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8c5412ea10 con 0x7f8c54072b50 2026-03-10T10:21:31.117 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.116+0000 7f8c5bd52700 1 -- 192.168.123.102:0/386935905 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8c5412ef60 con 0x7f8c54072b50 2026-03-10T10:21:31.117 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.117+0000 7f8c4affd700 1 -- 192.168.123.102:0/386935905 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8c5000ec20 con 0x7f8c54072b50 2026-03-10T10:21:31.117 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.117+0000 7f8c4affd700 1 -- 192.168.123.102:0/386935905 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8c50014e40 con 0x7f8c54072b50 2026-03-10T10:21:31.118 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.118+0000 7f8c5bd52700 1 -- 192.168.123.102:0/386935905 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8c38005320 con 0x7f8c54072b50 2026-03-10T10:21:31.119 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.119+0000 7f8c4affd700 1 -- 192.168.123.102:0/386935905 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f8c50014590 con 0x7f8c54072b50 2026-03-10T10:21:31.120 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.119+0000 7f8c4affd700 1 --2- 192.168.123.102:0/386935905 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8c40077a40 0x7f8c40079f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:31.120 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.120+0000 7f8c4affd700 1 -- 192.168.123.102:0/386935905 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f8c50099ea0 con 0x7f8c54072b50 2026-03-10T10:21:31.120 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.120+0000 7f8c592ed700 1 --2- 192.168.123.102:0/386935905 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8c40077a40 0x7f8c40079f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:31.121 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.121+0000 7f8c592ed700 1 --2- 192.168.123.102:0/386935905 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8c40077a40 0x7f8c40079f00 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f8c4c009200 tx=0x7f8c4c00c920 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:21:31.125 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.121+0000 7f8c4affd700 1 -- 192.168.123.102:0/386935905 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f8c500624e0 con 0x7f8c54072b50
2026-03-10T10:21:31.287 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.287+0000 7f8c5bd52700 1 -- 192.168.123.102:0/386935905 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f8c38000bf0 con 0x7f8c40077a40
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:    "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:    "in_progress": true,
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:    "which": "Upgrading all daemon types on all hosts",
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:    "services_complete": [
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:        "mgr",
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:        "mon",
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:        "crash"
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:    ],
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:    "progress": "7/23 daemons upgraded",
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:    "message": "Currently upgrading osd daemons",
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:    "is_paused": false
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T10:21:31.289 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.288+0000 7f8c4affd700 1 -- 192.168.123.102:0/386935905 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f8c38000bf0 con 0x7f8c40077a40
2026-03-10T10:21:31.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.290+0000 7f8c5bd52700 1 -- 192.168.123.102:0/386935905 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8c40077a40 msgr2=0x7f8c40079f00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:31.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.290+0000 7f8c5bd52700 1 --2- 192.168.123.102:0/386935905 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8c40077a40 0x7f8c40079f00 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f8c4c009200 tx=0x7f8c4c00c920 comp rx=0 tx=0).stop
2026-03-10T10:21:31.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.291+0000 7f8c5bd52700 1 -- 192.168.123.102:0/386935905 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c54072b50 msgr2=0x7f8c54083120 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:21:31.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.291+0000 7f8c5bd52700 1 --2- 192.168.123.102:0/386935905 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c54072b50 0x7f8c54083120 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f8c5000bfd0 tx=0x7f8c50009d70 comp rx=0 tx=0).stop
2026-03-10T10:21:31.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.291+0000 7f8c5bd52700 1 -- 192.168.123.102:0/386935905 shutdown_connections 2026-03-10T10:21:31.291 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.291+0000 7f8c5bd52700 1 --2- 192.168.123.102:0/386935905 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8c40077a40 0x7f8c40079f00 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:31.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.291+0000 7f8c5bd52700 1 --2- 192.168.123.102:0/386935905 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c54072b50 0x7f8c54083120 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:31.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.291+0000 7f8c5bd52700 1 --2- 192.168.123.102:0/386935905 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c54083660 0x7f8c5412e470 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:31.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.291+0000 7f8c5bd52700 1 -- 192.168.123.102:0/386935905 >> 192.168.123.102:0/386935905 conn(0x7f8c5406dae0 msgr2=0x7f8c5406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:21:31.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.291+0000 7f8c5bd52700 1 -- 192.168.123.102:0/386935905 shutdown_connections 2026-03-10T10:21:31.291 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.291+0000 7f8c5bd52700 1 -- 192.168.123.102:0/386935905 wait complete. 
2026-03-10T10:21:31.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 -- 192.168.123.102:0/1301346345 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7fc0a4800 msgr2=0x7ff7fc0a4c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:31.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 --2- 192.168.123.102:0/1301346345 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7fc0a4800 0x7ff7fc0a4c60 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7ff804066a30 tx=0x7ff804067220 comp rx=0 tx=0).stop 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 -- 192.168.123.102:0/1301346345 shutdown_connections 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 --2- 192.168.123.102:0/1301346345 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7fc0a4800 0x7ff7fc0a4c60 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 --2- 192.168.123.102:0/1301346345 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7fc0a61c0 0x7ff7fc0a65e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 -- 192.168.123.102:0/1301346345 >> 192.168.123.102:0/1301346345 conn(0x7ff7fc0a0160 msgr2=0x7ff7fc0a25c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 -- 192.168.123.102:0/1301346345 shutdown_connections 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 -- 192.168.123.102:0/1301346345 
wait complete. 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 Processor -- start 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 -- start start 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7fc0a61c0 0x7ff7fc0d0650 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7fc0d0b90 0x7ff7fc010f80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff7fc0d10a0 con 0x7ff7fc0d0b90 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.407+0000 7ff80ac4d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff7fc0d11e0 con 0x7ff7fc0a61c0 2026-03-10T10:21:31.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.408+0000 7ff809c4b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7fc0a61c0 0x7ff7fc0d0650 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:31.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.408+0000 7ff809c4b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7fc0a61c0 0x7ff7fc0d0650 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.102:58040/0 (socket says 192.168.123.102:58040) 2026-03-10T10:21:31.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.408+0000 7ff809c4b700 1 -- 192.168.123.102:0/2221008989 learned_addr learned my addr 192.168.123.102:0/2221008989 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:21:31.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.408+0000 7ff80944a700 1 --2- 192.168.123.102:0/2221008989 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7fc0d0b90 0x7ff7fc010f80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:31.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.408+0000 7ff809c4b700 1 -- 192.168.123.102:0/2221008989 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7fc0d0b90 msgr2=0x7ff7fc010f80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:31.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.408+0000 7ff809c4b700 1 --2- 192.168.123.102:0/2221008989 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7fc0d0b90 0x7ff7fc010f80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:21:31.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.408+0000 7ff809c4b700 1 -- 192.168.123.102:0/2221008989 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff804067090 con 0x7ff7fc0a61c0 2026-03-10T10:21:31.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.408+0000 7ff809c4b700 1 --2- 192.168.123.102:0/2221008989 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7fc0a61c0 0x7ff7fc0d0650 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7ff80000b6e0 tx=0x7ff80000baa0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:21:31.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.409+0000 7ff7faffd700 1 -- 192.168.123.102:0/2221008989 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff800005cc0 con 0x7ff7fc0a61c0 2026-03-10T10:21:31.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.409+0000 7ff80ac4d700 1 -- 192.168.123.102:0/2221008989 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff7fc011520 con 0x7ff7fc0a61c0 2026-03-10T10:21:31.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.409+0000 7ff80ac4d700 1 -- 192.168.123.102:0/2221008989 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff7fc011af0 con 0x7ff7fc0a61c0 2026-03-10T10:21:31.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.409+0000 7ff7faffd700 1 -- 192.168.123.102:0/2221008989 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff800005e20 con 0x7ff7fc0a61c0 2026-03-10T10:21:31.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.409+0000 7ff7faffd700 1 -- 192.168.123.102:0/2221008989 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff800006820 con 0x7ff7fc0a61c0 2026-03-10T10:21:31.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.410+0000 7ff7faffd700 1 -- 192.168.123.102:0/2221008989 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7ff80000d3e0 con 0x7ff7fc0a61c0 2026-03-10T10:21:31.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.411+0000 7ff7faffd700 1 --2- 192.168.123.102:0/2221008989 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff7f0077b10 0x7ff7f0079fd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:21:31.411 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.411+0000 7ff80944a700 1 --2- 192.168.123.102:0/2221008989 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff7f0077b10 0x7ff7f0079fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:21:31.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.411+0000 7ff7faffd700 1 -- 192.168.123.102:0/2221008989 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7ff80009a350 con 0x7ff7fc0a61c0 2026-03-10T10:21:31.412 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.412+0000 7ff80944a700 1 --2- 192.168.123.102:0/2221008989 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff7f0077b10 0x7ff7f0079fd0 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7ff80404f8e0 tx=0x7ff80406e040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:21:31.414 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.413+0000 7ff80ac4d700 1 -- 192.168.123.102:0/2221008989 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff7e8005320 con 0x7ff7fc0a61c0 2026-03-10T10:21:31.423 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.417+0000 7ff7faffd700 1 -- 192.168.123.102:0/2221008989 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ff800062990 con 0x7ff7fc0a61c0 2026-03-10T10:21:31.641 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.641+0000 7ff80ac4d700 1 -- 192.168.123.102:0/2221008989 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7ff7e8005cc0 con 0x7ff7fc0a61c0 
2026-03-10T10:21:31.642 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.642+0000 7ff7faffd700 1 -- 192.168.123.102:0/2221008989 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+976 (secure 0 0 0) 0x7ff80001c090 con 0x7ff7fc0a61c0
2026-03-10T10:21:31.642 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_WARN Degraded data redundancy: 980/42180 objects degraded (2.323%), 13 pgs degraded
2026-03-10T10:21:31.642 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 980/42180 objects degraded (2.323%), 13 pgs degraded
2026-03-10T10:21:31.642 INFO:teuthology.orchestra.run.vm02.stdout:    pg 3.1 is active+recovery_wait+degraded, acting [0,4,2]
2026-03-10T10:21:31.642 INFO:teuthology.orchestra.run.vm02.stdout:    pg 3.3 is active+recovery_wait+degraded, acting [4,0,3]
2026-03-10T10:21:31.642 INFO:teuthology.orchestra.run.vm02.stdout:    pg 3.6 is active+recovery_wait+degraded, acting [0,1,4]
2026-03-10T10:21:31.642 INFO:teuthology.orchestra.run.vm02.stdout:    pg 3.b is active+recovery_wait+degraded, acting [1,0,4]
2026-03-10T10:21:31.642 INFO:teuthology.orchestra.run.vm02.stdout:    pg 3.c is active+recovery_wait+degraded, acting [5,0,3]
2026-03-10T10:21:31.642 INFO:teuthology.orchestra.run.vm02.stdout:    pg 3.f is active+recovery_wait+degraded, acting [5,3,0]
2026-03-10T10:21:31.642 INFO:teuthology.orchestra.run.vm02.stdout:    pg 3.10 is active+recovery_wait+degraded, acting [5,0,1]
2026-03-10T10:21:31.642 INFO:teuthology.orchestra.run.vm02.stdout:    pg 3.11 is active+recovery_wait+degraded, acting [3,4,0]
2026-03-10T10:21:31.643 INFO:teuthology.orchestra.run.vm02.stdout:    pg 3.12 is active+recovery_wait+degraded, acting [0,3,1]
2026-03-10T10:21:31.643 INFO:teuthology.orchestra.run.vm02.stdout:    pg 3.15 is active+recovery_wait+degraded, acting [3,0,4]
2026-03-10T10:21:31.643 INFO:teuthology.orchestra.run.vm02.stdout:    pg 3.17 is active+recovery_wait+degraded, acting [0,5,2]
2026-03-10T10:21:31.643 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.18 is active+recovery_wait+degraded, acting [2,0,1] 2026-03-10T10:21:31.643 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.1f is active+recovery_wait+degraded, acting [0,3,2] 2026-03-10T10:21:31.645 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.645+0000 7ff7f8ff9700 1 -- 192.168.123.102:0/2221008989 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff7f0077b10 msgr2=0x7ff7f0079fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:31.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.646+0000 7ff7f8ff9700 1 --2- 192.168.123.102:0/2221008989 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff7f0077b10 0x7ff7f0079fd0 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7ff80404f8e0 tx=0x7ff80406e040 comp rx=0 tx=0).stop 2026-03-10T10:21:31.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.646+0000 7ff7f8ff9700 1 -- 192.168.123.102:0/2221008989 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7fc0a61c0 msgr2=0x7ff7fc0d0650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:21:31.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.646+0000 7ff7f8ff9700 1 --2- 192.168.123.102:0/2221008989 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7fc0a61c0 0x7ff7fc0d0650 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7ff80000b6e0 tx=0x7ff80000baa0 comp rx=0 tx=0).stop 2026-03-10T10:21:31.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.646+0000 7ff7f8ff9700 1 -- 192.168.123.102:0/2221008989 shutdown_connections 2026-03-10T10:21:31.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.646+0000 7ff7f8ff9700 1 --2- 192.168.123.102:0/2221008989 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff7f0077b10 0x7ff7f0079fd0 unknown :-1 s=CLOSED 
pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:31.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.646+0000 7ff7f8ff9700 1 --2- 192.168.123.102:0/2221008989 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7fc0a61c0 0x7ff7fc0d0650 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:31.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.646+0000 7ff7f8ff9700 1 --2- 192.168.123.102:0/2221008989 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7fc0d0b90 0x7ff7fc010f80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:21:31.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.646+0000 7ff7f8ff9700 1 -- 192.168.123.102:0/2221008989 >> 192.168.123.102:0/2221008989 conn(0x7ff7fc0a0160 msgr2=0x7ff7fc0a24a0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:21:31.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.646+0000 7ff7f8ff9700 1 -- 192.168.123.102:0/2221008989 shutdown_connections
2026-03-10T10:21:31.647 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:21:31.646+0000 7ff7f8ff9700 1 -- 192.168.123.102:0/2221008989 wait complete.
2026-03-10T10:21:32.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:31 vm02.local ceph-mon[110129]: from='client.44137 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:21:32.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:31 vm02.local ceph-mon[110129]: pgmap v38: 65 pgs: 13 active+recovery_wait+degraded, 1 active+recovering, 51 active+clean; 293 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 846 op/s; 980/42180 objects degraded (2.323%); 575 KiB/s, 7 objects/s recovering
2026-03-10T10:21:32.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:31 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/19024505' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:21:32.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:31 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/76557391' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-10T10:21:32.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:31 vm05.local ceph-mon[103593]: from='client.44137 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:21:32.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:31 vm05.local ceph-mon[103593]: pgmap v38: 65 pgs: 13 active+recovery_wait+degraded, 1 active+recovering, 51 active+clean; 293 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 846 op/s; 980/42180 objects degraded (2.323%); 575 KiB/s, 7 objects/s recovering
2026-03-10T10:21:32.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:31 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/19024505' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:21:32.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:31 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/76557391' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-10T10:21:33.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:32 vm02.local ceph-mon[110129]: from='client.44147 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:21:33.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:32 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/2221008989' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T10:21:33.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:32 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T10:21:33.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:32 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T10:21:33.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:32 vm02.local ceph-mon[110129]: Upgrade: unsafe to stop osd(s) at this time (5 PGs are or would become offline)
2026-03-10T10:21:33.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:32 vm05.local ceph-mon[103593]: from='client.44147 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:21:33.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:32 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/2221008989' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T10:21:33.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:32 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T10:21:33.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:32 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.'
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:21:33.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:32 vm05.local ceph-mon[103593]: Upgrade: unsafe to stop osd(s) at this time (5 PGs are or would become offline) 2026-03-10T10:21:34.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:33 vm02.local ceph-mon[110129]: pgmap v39: 65 pgs: 12 active+recovery_wait+degraded, 2 active+recovering, 51 active+clean; 294 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1016 KiB/s rd, 1.1 MiB/s wr, 746 op/s; 889/41805 objects degraded (2.127%); 0 B/s, 5 objects/s recovering 2026-03-10T10:21:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:33 vm05.local ceph-mon[103593]: pgmap v39: 65 pgs: 12 active+recovery_wait+degraded, 2 active+recovering, 51 active+clean; 294 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1016 KiB/s rd, 1.1 MiB/s wr, 746 op/s; 889/41805 objects degraded (2.127%); 0 B/s, 5 objects/s recovering 2026-03-10T10:21:36.407 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:36 vm02.local ceph-mon[110129]: pgmap v40: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 289 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.3 MiB/s wr, 980 op/s; 889/41031 objects degraded (2.167%); 0 B/s, 5 objects/s recovering 2026-03-10T10:21:36.407 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:36 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 889/41031 objects degraded (2.167%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:36.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:36 vm05.local ceph-mon[103593]: pgmap v40: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 289 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.3 MiB/s wr, 980 op/s; 889/41031 objects degraded (2.167%); 0 B/s, 5 objects/s recovering 2026-03-10T10:21:36.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 
10 10:21:36 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 889/41031 objects degraded (2.167%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:37.462 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:37.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:38.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:38 vm02.local ceph-mon[110129]: pgmap v41: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 289 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 937 KiB/s rd, 1.2 MiB/s wr, 876 op/s; 889/41031 objects degraded (2.167%); 0 B/s, 4 objects/s recovering 2026-03-10T10:21:38.543 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:38 vm05.local ceph-mon[103593]: pgmap v41: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 289 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 937 KiB/s rd, 1.2 MiB/s wr, 876 op/s; 889/41031 objects degraded (2.167%); 0 B/s, 4 objects/s recovering 2026-03-10T10:21:39.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:39 vm02.local ceph-mon[110129]: pgmap v42: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 279 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.6 MiB/s wr, 1.08k op/s; 889/36972 objects degraded (2.405%); 0 B/s, 7 objects/s recovering 2026-03-10T10:21:39.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:39 vm05.local ceph-mon[103593]: pgmap v42: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 279 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 
1.6 MiB/s wr, 1.08k op/s; 889/36972 objects degraded (2.405%); 0 B/s, 7 objects/s recovering 2026-03-10T10:21:40.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:40 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 889/36972 objects degraded (2.405%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:40.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:40 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 889/36972 objects degraded (2.405%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:41.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:41 vm02.local ceph-mon[110129]: pgmap v43: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 277 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 995 KiB/s rd, 1.0 MiB/s wr, 748 op/s; 889/36066 objects degraded (2.465%); 0 B/s, 7 objects/s recovering 2026-03-10T10:21:42.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:41 vm05.local ceph-mon[103593]: pgmap v43: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 277 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 995 KiB/s rd, 1.0 MiB/s wr, 748 op/s; 889/36066 objects degraded (2.465%); 0 B/s, 7 objects/s recovering 2026-03-10T10:21:44.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:43 vm02.local ceph-mon[110129]: pgmap v44: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 274 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 747 op/s; 889/34155 objects degraded (2.603%); 0 B/s, 8 objects/s recovering 2026-03-10T10:21:44.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:43 vm05.local ceph-mon[103593]: pgmap v44: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 274 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 747 op/s; 889/34155 objects degraded (2.603%); 0 B/s, 8 objects/s recovering 
2026-03-10T10:21:47.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:46 vm02.local ceph-mon[110129]: pgmap v45: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 269 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 742 op/s; 889/30678 objects degraded (2.898%); 0 B/s, 7 objects/s recovering 2026-03-10T10:21:47.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:46 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 889/30678 objects degraded (2.898%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:47.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:46 vm05.local ceph-mon[103593]: pgmap v45: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 269 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 742 op/s; 889/30678 objects degraded (2.898%); 0 B/s, 7 objects/s recovering 2026-03-10T10:21:47.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:46 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 889/30678 objects degraded (2.898%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:48.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:47 vm02.local ceph-mon[110129]: pgmap v46: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 269 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 996 KiB/s rd, 1.0 MiB/s wr, 450 op/s; 889/30678 objects degraded (2.898%); 0 B/s, 6 objects/s recovering 2026-03-10T10:21:48.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:47 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:21:48.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:47 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:21:48.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:47 vm02.local ceph-mon[110129]: Upgrade: unsafe to stop osd(s) at this time (5 PGs are or would become offline) 2026-03-10T10:21:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:47 vm05.local ceph-mon[103593]: pgmap v46: 65 pgs: 12 active+recovery_wait+degraded, 1 active+recovering, 52 active+clean; 269 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 996 KiB/s rd, 1.0 MiB/s wr, 450 op/s; 889/30678 objects degraded (2.898%); 0 B/s, 6 objects/s recovering 2026-03-10T10:21:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:47 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:21:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:47 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:21:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:47 vm05.local ceph-mon[103593]: Upgrade: unsafe to stop osd(s) at this time (5 PGs are or would become offline) 2026-03-10T10:21:50.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:49 vm02.local ceph-mon[110129]: pgmap v47: 65 pgs: 11 active+recovery_wait+degraded, 1 active+recovering, 53 active+clean; 269 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 594 op/s; 815/26505 objects degraded (3.075%); 0 B/s, 10 objects/s recovering 2026-03-10T10:21:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:49 vm05.local ceph-mon[103593]: pgmap v47: 65 pgs: 11 active+recovery_wait+degraded, 1 active+recovering, 53 active+clean; 269 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 594 op/s; 815/26505 objects degraded (3.075%); 0 B/s, 10 objects/s recovering 2026-03-10T10:21:51.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:51 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 815/26505 objects degraded (3.075%), 11 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:51.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:51 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 815/26505 objects degraded (3.075%), 11 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:52.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:52 vm02.local ceph-mon[110129]: pgmap v48: 65 pgs: 11 active+recovery_wait+degraded, 1 active+recovering, 53 active+clean; 264 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 411 op/s; 815/25674 objects degraded (3.174%); 0 B/s, 6 objects/s recovering 2026-03-10T10:21:52.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:52.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:52 vm05.local ceph-mon[103593]: pgmap v48: 65 pgs: 11 active+recovery_wait+degraded, 1 active+recovering, 53 active+clean; 264 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 411 op/s; 815/25674 objects degraded (3.174%); 0 B/s, 6 objects/s recovering 2026-03-10T10:21:52.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:53 vm02.local ceph-mon[110129]: pgmap v49: 65 pgs: 11 active+recovery_wait+degraded, 54 active+clean; 262 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 417 op/s; 815/24006 objects degraded (3.395%); 0 B/s, 10 objects/s recovering 2026-03-10T10:21:53.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:53 vm05.local ceph-mon[103593]: pgmap v49: 65 pgs: 11 active+recovery_wait+degraded, 54 active+clean; 262 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 417 op/s; 815/24006 objects degraded (3.395%); 0 B/s, 10 objects/s recovering 2026-03-10T10:21:56.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:56 vm02.local ceph-mon[110129]: pgmap v50: 65 pgs: 9 active+recovery_wait+degraded, 2 active+recovering, 54 active+clean; 256 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 477 op/s; 649/20700 objects degraded (3.135%); 0 B/s, 7 objects/s recovering 2026-03-10T10:21:56.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:56 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 649/20700 objects degraded (3.135%), 9 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:56 
vm05.local ceph-mon[103593]: pgmap v50: 65 pgs: 9 active+recovery_wait+degraded, 2 active+recovering, 54 active+clean; 256 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 477 op/s; 649/20700 objects degraded (3.135%); 0 B/s, 7 objects/s recovering 2026-03-10T10:21:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:56 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 649/20700 objects degraded (3.135%), 9 pgs degraded (PG_DEGRADED) 2026-03-10T10:21:58.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:57 vm02.local ceph-mon[110129]: pgmap v51: 65 pgs: 9 active+recovery_wait+degraded, 2 active+recovering, 54 active+clean; 256 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1003 KiB/s wr, 340 op/s; 649/20700 objects degraded (3.135%); 0 B/s, 7 objects/s recovering 2026-03-10T10:21:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:57 vm05.local ceph-mon[103593]: pgmap v51: 65 pgs: 9 active+recovery_wait+degraded, 2 active+recovering, 54 active+clean; 256 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1003 KiB/s wr, 340 op/s; 649/20700 objects degraded (3.135%); 0 B/s, 7 objects/s recovering 2026-03-10T10:22:00.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:21:59 vm02.local ceph-mon[110129]: pgmap v52: 65 pgs: 9 active+recovery_wait+degraded, 2 active+recovering, 54 active+clean; 251 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 450 op/s; 649/17898 objects degraded (3.626%); 0 B/s, 11 objects/s recovering 2026-03-10T10:22:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:21:59 vm05.local ceph-mon[103593]: pgmap v52: 65 pgs: 9 active+recovery_wait+degraded, 2 active+recovering, 54 active+clean; 251 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 450 op/s; 649/17898 objects degraded (3.626%); 0 B/s, 11 objects/s recovering 2026-03-10T10:22:01.529 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:01 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 649/17898 objects degraded (3.626%), 9 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:01.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:01 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 649/17898 objects degraded (3.626%), 9 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:01.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.741+0000 7f8c24e9d700 1 -- 192.168.123.102:0/2292422003 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c20107d90 msgr2=0x7f8c2010a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:01.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.741+0000 7f8c24e9d700 1 --2- 192.168.123.102:0/2292422003 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c20107d90 0x7f8c2010a1c0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f8c10009a60 tx=0x7f8c10009d70 comp rx=0 tx=0).stop 2026-03-10T10:22:01.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.742+0000 7f8c24e9d700 1 -- 192.168.123.102:0/2292422003 shutdown_connections 2026-03-10T10:22:01.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.742+0000 7f8c24e9d700 1 --2- 192.168.123.102:0/2292422003 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c2010a700 0x7f8c2010cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:01.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.742+0000 7f8c24e9d700 1 --2- 192.168.123.102:0/2292422003 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c20107d90 0x7f8c2010a1c0 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:01.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.742+0000 7f8c24e9d700 1 -- 
192.168.123.102:0/2292422003 >> 192.168.123.102:0/2292422003 conn(0x7f8c2006dae0 msgr2=0x7f8c2006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:01.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.742+0000 7f8c24e9d700 1 -- 192.168.123.102:0/2292422003 shutdown_connections 2026-03-10T10:22:01.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.742+0000 7f8c24e9d700 1 -- 192.168.123.102:0/2292422003 wait complete. 2026-03-10T10:22:01.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.742+0000 7f8c24e9d700 1 Processor -- start 2026-03-10T10:22:01.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.743+0000 7f8c24e9d700 1 -- start start 2026-03-10T10:22:01.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.743+0000 7f8c24e9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c20107d90 0x7f8c201169e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:01.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.743+0000 7f8c24e9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c2010a700 0x7f8c20116f40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:01.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.743+0000 7f8c24e9d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8c20117580 con 0x7f8c20107d90 2026-03-10T10:22:01.743 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.743+0000 7f8c24e9d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8c20076fe0 con 0x7f8c2010a700 2026-03-10T10:22:01.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.744+0000 7f8c1e59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c20107d90 0x7f8c201169e0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:01.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.744+0000 7f8c1e59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c20107d90 0x7f8c201169e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:50312/0 (socket says 192.168.123.102:50312) 2026-03-10T10:22:01.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.744+0000 7f8c1e59c700 1 -- 192.168.123.102:0/3543705563 learned_addr learned my addr 192.168.123.102:0/3543705563 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:22:01.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.744+0000 7f8c1e59c700 1 -- 192.168.123.102:0/3543705563 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c2010a700 msgr2=0x7f8c20116f40 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:22:01.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.744+0000 7f8c1e59c700 1 --2- 192.168.123.102:0/3543705563 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c2010a700 0x7f8c20116f40 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:01.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.744+0000 7f8c1e59c700 1 -- 192.168.123.102:0/3543705563 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8c10009710 con 0x7f8c20107d90 2026-03-10T10:22:01.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.744+0000 7f8c1e59c700 1 --2- 192.168.123.102:0/3543705563 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c20107d90 0x7f8c201169e0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f8c100096a0 tx=0x7f8c1000f880 comp 
rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:01.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.744+0000 7f8c0f7fe700 1 -- 192.168.123.102:0/3543705563 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8c1001d070 con 0x7f8c20107d90 2026-03-10T10:22:01.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.744+0000 7f8c0f7fe700 1 -- 192.168.123.102:0/3543705563 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8c1000fe60 con 0x7f8c20107d90 2026-03-10T10:22:01.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.745+0000 7f8c0f7fe700 1 -- 192.168.123.102:0/3543705563 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8c100178a0 con 0x7f8c20107d90 2026-03-10T10:22:01.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.745+0000 7f8c24e9d700 1 -- 192.168.123.102:0/3543705563 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8c20077260 con 0x7f8c20107d90 2026-03-10T10:22:01.745 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.745+0000 7f8c24e9d700 1 -- 192.168.123.102:0/3543705563 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8c200776d0 con 0x7f8c20107d90 2026-03-10T10:22:01.747 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.746+0000 7f8c24e9d700 1 -- 192.168.123.102:0/3543705563 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8c20110c60 con 0x7f8c20107d90 2026-03-10T10:22:01.748 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.747+0000 7f8c0f7fe700 1 -- 192.168.123.102:0/3543705563 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f8c10017a00 con 0x7f8c20107d90 
2026-03-10T10:22:01.748 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.747+0000 7f8c0f7fe700 1 --2- 192.168.123.102:0/3543705563 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8c08077820 0x7f8c08079ce0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:01.748 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.747+0000 7f8c0f7fe700 1 -- 192.168.123.102:0/3543705563 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f8c1009b770 con 0x7f8c20107d90 2026-03-10T10:22:01.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.750+0000 7f8c1dd9b700 1 --2- 192.168.123.102:0/3543705563 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8c08077820 0x7f8c08079ce0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:01.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.750+0000 7f8c0f7fe700 1 -- 192.168.123.102:0/3543705563 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f8c10063db0 con 0x7f8c20107d90 2026-03-10T10:22:01.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.750+0000 7f8c1dd9b700 1 --2- 192.168.123.102:0/3543705563 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8c08077820 0x7f8c08079ce0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f8c20117b10 tx=0x7f8c14009500 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:01.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.911+0000 7f8c24e9d700 1 -- 192.168.123.102:0/3543705563 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: 
{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f8c200611d0 con 0x7f8c08077820 2026-03-10T10:22:01.914 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.914+0000 7f8c0f7fe700 1 -- 192.168.123.102:0/3543705563 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f8c200611d0 con 0x7f8c08077820 2026-03-10T10:22:01.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.917+0000 7f8c0d7fa700 1 -- 192.168.123.102:0/3543705563 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8c08077820 msgr2=0x7f8c08079ce0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:01.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.917+0000 7f8c0d7fa700 1 --2- 192.168.123.102:0/3543705563 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8c08077820 0x7f8c08079ce0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f8c20117b10 tx=0x7f8c14009500 comp rx=0 tx=0).stop 2026-03-10T10:22:01.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.917+0000 7f8c0d7fa700 1 -- 192.168.123.102:0/3543705563 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c20107d90 msgr2=0x7f8c201169e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:01.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.917+0000 7f8c0d7fa700 1 --2- 192.168.123.102:0/3543705563 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c20107d90 0x7f8c201169e0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f8c100096a0 tx=0x7f8c1000f880 comp rx=0 tx=0).stop 2026-03-10T10:22:01.918 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.917+0000 7f8c0d7fa700 1 -- 192.168.123.102:0/3543705563 shutdown_connections 2026-03-10T10:22:01.918 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.918+0000 7f8c0d7fa700 1 --2- 
192.168.123.102:0/3543705563 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8c20107d90 0x7f8c201169e0 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:01.918 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.918+0000 7f8c0d7fa700 1 --2- 192.168.123.102:0/3543705563 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8c08077820 0x7f8c08079ce0 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:01.918 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.918+0000 7f8c0d7fa700 1 --2- 192.168.123.102:0/3543705563 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8c2010a700 0x7f8c20116f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:01.918 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.918+0000 7f8c0d7fa700 1 -- 192.168.123.102:0/3543705563 >> 192.168.123.102:0/3543705563 conn(0x7f8c2006dae0 msgr2=0x7f8c2006ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:22:01.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.918+0000 7f8c0d7fa700 1 -- 192.168.123.102:0/3543705563 shutdown_connections
2026-03-10T10:22:01.919 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:01.919+0000 7f8c0d7fa700 1 -- 192.168.123.102:0/3543705563 wait complete.
2026-03-10T10:22:01.932 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T10:22:02.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.006+0000 7f52309b0700 1 -- 192.168.123.102:0/444075247 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c0ffb80 msgr2=0x7f522c0fffa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:22:02.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.006+0000 7f52309b0700 1 --2- 192.168.123.102:0/444075247 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c0ffb80 0x7f522c0fffa0 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f521c009b00 tx=0x7f521c009e10 comp rx=0 tx=0).stop
2026-03-10T10:22:02.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.007+0000 7f52309b0700 1 -- 192.168.123.102:0/444075247 shutdown_connections
2026-03-10T10:22:02.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.007+0000 7f52309b0700 1 --2- 192.168.123.102:0/444075247 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f522c100d80 0x7f522c1011e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.007+0000 7f52309b0700 1 --2- 192.168.123.102:0/444075247 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c0ffb80 0x7f522c0fffa0 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.007+0000 7f52309b0700 1 -- 192.168.123.102:0/444075247 >> 192.168.123.102:0/444075247 conn(0x7f522c0fb100 msgr2=0x7f522c0fd560 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:22:02.009 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.008+0000 7f52309b0700 1 -- 192.168.123.102:0/444075247 shutdown_connections
2026-03-10T10:22:02.009 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.009+0000 7f52309b0700 1 -- 192.168.123.102:0/444075247 wait complete.
2026-03-10T10:22:02.009 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.009+0000 7f52309b0700 1 Processor -- start
2026-03-10T10:22:02.009 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.009+0000 7f52309b0700 1 -- start start
2026-03-10T10:22:02.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.009+0000 7f52309b0700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c100d80 0x7f522c06cba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:22:02.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.009+0000 7f52309b0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f522c06d0e0 0x7f522c072150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:22:02.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.009+0000 7f52309b0700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f522c06d5f0 con 0x7f522c100d80
2026-03-10T10:22:02.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.009+0000 7f52309b0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f522c06d760 con 0x7f522c06d0e0
2026-03-10T10:22:02.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.010+0000 7f522b7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c100d80 0x7f522c06cba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:22:02.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.010+0000 7f522b7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c100d80 0x7f522c06cba0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:50334/0 (socket says 192.168.123.102:50334)
2026-03-10T10:22:02.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.010+0000 7f522b7fe700 1 -- 192.168.123.102:0/618101765 learned_addr learned my addr 192.168.123.102:0/618101765 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:22:02.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.010+0000 7f522affd700 1 --2- 192.168.123.102:0/618101765 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f522c06d0e0 0x7f522c072150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:22:02.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.011+0000 7f522b7fe700 1 -- 192.168.123.102:0/618101765 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f522c06d0e0 msgr2=0x7f522c072150 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:22:02.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.011+0000 7f522b7fe700 1 --2- 192.168.123.102:0/618101765 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f522c06d0e0 0x7f522c072150 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.012 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.011+0000 7f522b7fe700 1 -- 192.168.123.102:0/618101765 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f521c0097e0 con 0x7f522c100d80
2026-03-10T10:22:02.012 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.011+0000 7f522affd700 1 --2- 192.168.123.102:0/618101765 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f522c06d0e0 0x7f522c072150 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T10:22:02.012 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.012+0000 7f522b7fe700 1 --2- 192.168.123.102:0/618101765 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c100d80 0x7f522c06cba0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f521c009fd0 tx=0x7f521c00faf0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:22:02.012 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.012+0000 7f5228ff9700 1 -- 192.168.123.102:0/618101765 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f521c01c070 con 0x7f522c100d80
2026-03-10T10:22:02.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.012+0000 7f52309b0700 1 -- 192.168.123.102:0/618101765 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f522c072690 con 0x7f522c100d80
2026-03-10T10:22:02.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.012+0000 7f52309b0700 1 -- 192.168.123.102:0/618101765 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f522c072b80 con 0x7f522c100d80
2026-03-10T10:22:02.015 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.013+0000 7f5228ff9700 1 -- 192.168.123.102:0/618101765 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f521c00bea0 con 0x7f522c100d80
2026-03-10T10:22:02.015 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.014+0000 7f52309b0700 1 -- 192.168.123.102:0/618101765 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f522c04f2e0 con 0x7f522c100d80
2026-03-10T10:22:02.016 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.015+0000 7f5228ff9700 1 -- 192.168.123.102:0/618101765 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f521c0177d0 con 0x7f522c100d80
2026-03-10T10:22:02.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.017+0000 7f5228ff9700 1 -- 192.168.123.102:0/618101765 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f521c0179b0 con 0x7f522c100d80
2026-03-10T10:22:02.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.018+0000 7f5228ff9700 1 --2- 192.168.123.102:0/618101765 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5214077be0 0x7f521407a0a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:22:02.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.018+0000 7f5228ff9700 1 -- 192.168.123.102:0/618101765 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f521c09c130 con 0x7f522c100d80
2026-03-10T10:22:02.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.020+0000 7f522affd700 1 --2- 192.168.123.102:0/618101765 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5214077be0 0x7f521407a0a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:22:02.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.020+0000 7f522affd700 1 --2- 192.168.123.102:0/618101765 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5214077be0 0x7f521407a0a0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f52200099b0 tx=0x7f5220008040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:22:02.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.020+0000 7f5228ff9700 1 -- 192.168.123.102:0/618101765 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f521c064770 con 0x7f522c100d80
2026-03-10T10:22:02.193 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.190+0000 7f52309b0700 1 -- 192.168.123.102:0/618101765 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f522c0fc3d0 con 0x7f5214077be0
2026-03-10T10:22:02.194 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:02 vm02.local ceph-mon[110129]: pgmap v53: 65 pgs: 9 active+recovery_wait+degraded, 2 active+recovering, 54 active+clean; 250 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1011 KiB/s rd, 1.0 MiB/s wr, 329 op/s; 649/17058 objects degraded (3.805%); 0 B/s, 11 objects/s recovering
2026-03-10T10:22:02.194 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T10:22:02.199 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.197+0000 7f5228ff9700 1 -- 192.168.123.102:0/618101765 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f522c0fc3d0 con 0x7f5214077be0
2026-03-10T10:22:02.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.199+0000 7f52309b0700 1 -- 192.168.123.102:0/618101765 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5214077be0 msgr2=0x7f521407a0a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:22:02.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.199+0000 7f52309b0700 1 --2- 192.168.123.102:0/618101765 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5214077be0 0x7f521407a0a0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f52200099b0 tx=0x7f5220008040 comp rx=0 tx=0).stop
2026-03-10T10:22:02.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.199+0000 7f52309b0700 1 -- 192.168.123.102:0/618101765 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c100d80 msgr2=0x7f522c06cba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:22:02.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.199+0000 7f52309b0700 1 --2- 192.168.123.102:0/618101765 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c100d80 0x7f522c06cba0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f521c009fd0 tx=0x7f521c00faf0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.200+0000 7f52309b0700 1 -- 192.168.123.102:0/618101765 shutdown_connections
2026-03-10T10:22:02.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.200+0000 7f52309b0700 1 --2- 192.168.123.102:0/618101765 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f522c100d80 0x7f522c06cba0 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.200+0000 7f52309b0700 1 --2- 192.168.123.102:0/618101765 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5214077be0 0x7f521407a0a0 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.200+0000 7f52309b0700 1 --2- 192.168.123.102:0/618101765 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f522c06d0e0 0x7f522c072150 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.200+0000 7f52309b0700 1 -- 192.168.123.102:0/618101765 >> 192.168.123.102:0/618101765 conn(0x7f522c0fb100 msgr2=0x7f522c0fbcc0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:22:02.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.200+0000 7f52309b0700 1 -- 192.168.123.102:0/618101765 shutdown_connections
2026-03-10T10:22:02.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.200+0000 7f52309b0700 1 -- 192.168.123.102:0/618101765 wait complete.
2026-03-10T10:22:02.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.305+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3689421524 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb034075a10 msgr2=0x7fb034077ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:22:02.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.305+0000 7fb03ab8a700 1 --2- 192.168.123.102:0/3689421524 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb034075a10 0x7fb034077ea0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7fb02c00b3a0 tx=0x7fb02c00b6b0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.305+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3689421524 shutdown_connections
2026-03-10T10:22:02.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.305+0000 7fb03ab8a700 1 --2- 192.168.123.102:0/3689421524 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb034075a10 0x7fb034077ea0 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.305+0000 7fb03ab8a700 1 --2- 192.168.123.102:0/3689421524 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb034072b20 0x7fb034072f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.305+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3689421524 >> 192.168.123.102:0/3689421524 conn(0x7fb03406daa0 msgr2=0x7fb03406ff00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.305+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3689421524 shutdown_connections
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.305+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3689421524 wait complete.
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.305+0000 7fb03ab8a700 1 Processor -- start
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.305+0000 7fb03ab8a700 1 -- start start
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.306+0000 7fb03ab8a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb034072b20 0x7fb034083150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.306+0000 7fb03ab8a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb034083690 0x7fb03412e4a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.306+0000 7fb03ab8a700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb034083ba0 con 0x7fb034072b20
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.306+0000 7fb03ab8a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb034083d10 con 0x7fb034083690
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.306+0000 7fb039b88700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb034072b20 0x7fb034083150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.306+0000 7fb039b88700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb034072b20 0x7fb034083150 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:50356/0 (socket says 192.168.123.102:50356)
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.306+0000 7fb039b88700 1 -- 192.168.123.102:0/3711576058 learned_addr learned my addr 192.168.123.102:0/3711576058 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:22:02.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.306+0000 7fb039387700 1 --2- 192.168.123.102:0/3711576058 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb034083690 0x7fb03412e4a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:22:02.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.307+0000 7fb039387700 1 -- 192.168.123.102:0/3711576058 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb034072b20 msgr2=0x7fb034083150 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:22:02.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.307+0000 7fb039387700 1 --2- 192.168.123.102:0/3711576058 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb034072b20 0x7fb034083150 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.307+0000 7fb039387700 1 -- 192.168.123.102:0/3711576058 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb02c00b050 con 0x7fb034083690
2026-03-10T10:22:02.307 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.307+0000 7fb039387700 1 --2- 192.168.123.102:0/3711576058 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb034083690 0x7fb03412e4a0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fb02c00bb30 tx=0x7fb02c0095c0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:22:02.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.307+0000 7fb02affd700 1 -- 192.168.123.102:0/3711576058 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb02c00e040 con 0x7fb034083690
2026-03-10T10:22:02.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.307+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3711576058 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb03412e9e0 con 0x7fb034083690
2026-03-10T10:22:02.308 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.308+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3711576058 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb03412ef30 con 0x7fb034083690
2026-03-10T10:22:02.309 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.308+0000 7fb02affd700 1 -- 192.168.123.102:0/3711576058 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb02c004050 con 0x7fb034083690
2026-03-10T10:22:02.309 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.308+0000 7fb02affd700 1 -- 192.168.123.102:0/3711576058 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb02c012620 con 0x7fb034083690
2026-03-10T10:22:02.309 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.309+0000 7fb028ff9700 1 -- 192.168.123.102:0/3711576058 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb03407a7d0 con 0x7fb034083690
2026-03-10T10:22:02.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.310+0000 7fb02affd700 1 -- 192.168.123.102:0/3711576058 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fb02c019070 con 0x7fb034083690
2026-03-10T10:22:02.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.310+0000 7fb02affd700 1 --2- 192.168.123.102:0/3711576058 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb0200779e0 0x7fb020079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:22:02.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.310+0000 7fb02affd700 1 -- 192.168.123.102:0/3711576058 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fb02c09aff0 con 0x7fb034083690
2026-03-10T10:22:02.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.310+0000 7fb039b88700 1 --2- 192.168.123.102:0/3711576058 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb0200779e0 0x7fb020079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:22:02.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.311+0000 7fb039b88700 1 --2- 192.168.123.102:0/3711576058 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb0200779e0 0x7fb020079ea0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fb03412a410 tx=0x7fb03000b500 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:22:02.312 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.312+0000 7fb02affd700 1 -- 192.168.123.102:0/3711576058 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fb02c063670 con 0x7fb034083690
2026-03-10T10:22:02.517 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.514+0000 7fb028ff9700 1 -- 192.168.123.102:0/3711576058 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fb03412eb70 con 0x7fb0200779e0
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (6m) 48s ago 7m 23.2M - 0.25.0 c8568f914cd2 2b779430dfc4
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (7m) 48s ago 7m 8820k - 18.2.1 5be31c24972a ff5c82740b39
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (6m) 61s ago 6m 11.1M - 18.2.1 5be31c24972a 456b3bd5efb4
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (64s) 48s ago 7m 7838k - 19.2.3-678-ge911bdeb 654f31e6858e c494730ab019
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (62s) 61s ago 6m 7852k - 19.2.3-678-ge911bdeb 654f31e6858e 1dc17b49fee4
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (6m) 48s ago 6m 88.8M - 9.4.7 954c08fa6188 f310d22468b8
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (5m) 48s ago 5m 15.9M - 18.2.1 5be31c24972a e97c369450c8
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (5m) 48s ago 5m 237M - 18.2.1 5be31c24972a 56b76ae59bcb
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (5m) 61s ago 5m 15.9M - 18.2.1 5be31c24972a 02b882918ab0
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (5m) 61s ago 5m 146M - 18.2.1 5be31c24972a 0127a771956a
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (2m) 48s ago 7m 614M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (99s) 61s ago 6m 487M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (92s) 48s ago 7m 56.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a2a2cb182f4
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (78s) 61s ago 6m 49.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 3fb75dafefb6
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (7m) 48s ago 7m 16.3M - 1.5.0 0da6a335fe13 745b21ae6768
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (6m) 61s ago 6m 15.4M - 1.5.0 0da6a335fe13 2453c8484ba5
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (51s) 48s ago 6m 30.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 319155aac718
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (6m) 48s ago 6m 363M 4096M 18.2.1 5be31c24972a 1b0a42d8ac01
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (6m) 48s ago 6m 305M 4096M 18.2.1 5be31c24972a 567f579c058e
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (5m) 61s ago 5m 426M 4096M 18.2.1 5be31c24972a 80ac26035893
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (5m) 61s ago 5m 407M 4096M 18.2.1 5be31c24972a c8a0a41b6654
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (5m) 61s ago 5m 319M 4096M 18.2.1 5be31c24972a e9be055e12ba
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (102s) 48s ago 6m 47.7M - 2.43.0 a07b618ecd1d 5ebb885bd417
2026-03-10T10:22:02.527 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.523+0000 7fb02affd700 1 -- 192.168.123.102:0/3711576058 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7fb03412eb70 con 0x7fb0200779e0
2026-03-10T10:22:02.530 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.528+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3711576058 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb0200779e0 msgr2=0x7fb020079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:22:02.530 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.528+0000 7fb03ab8a700 1 --2- 192.168.123.102:0/3711576058 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb0200779e0 0x7fb020079ea0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fb03412a410 tx=0x7fb03000b500 comp rx=0 tx=0).stop
2026-03-10T10:22:02.530 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.528+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3711576058 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb034083690 msgr2=0x7fb03412e4a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:22:02.530 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.528+0000 7fb03ab8a700 1 --2- 192.168.123.102:0/3711576058 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb034083690 0x7fb03412e4a0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fb02c00bb30 tx=0x7fb02c0095c0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.530 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.529+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3711576058 shutdown_connections
2026-03-10T10:22:02.530 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.529+0000 7fb03ab8a700 1 --2- 192.168.123.102:0/3711576058 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb034072b20 0x7fb034083150 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.530 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.529+0000 7fb03ab8a700 1 --2- 192.168.123.102:0/3711576058 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb0200779e0 0x7fb020079ea0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.530 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.529+0000 7fb03ab8a700 1 --2- 192.168.123.102:0/3711576058 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb034083690 0x7fb03412e4a0 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.530 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.529+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3711576058 >> 192.168.123.102:0/3711576058 conn(0x7fb03406daa0 msgr2=0x7fb03406ff00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:22:02.531 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.529+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3711576058 shutdown_connections
2026-03-10T10:22:02.531 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.530+0000 7fb03ab8a700 1 -- 192.168.123.102:0/3711576058 wait complete.
2026-03-10T10:22:02.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:02 vm05.local ceph-mon[103593]: pgmap v53: 65 pgs: 9 active+recovery_wait+degraded, 2 active+recovering, 54 active+clean; 250 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1011 KiB/s rd, 1.0 MiB/s wr, 329 op/s; 649/17058 objects degraded (3.805%); 0 B/s, 11 objects/s recovering
2026-03-10T10:22:02.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T10:22:02.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.638+0000 7f78ede9b700 1 -- 192.168.123.102:0/2010014536 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f78e8075a10 msgr2=0x7f78e8077ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:22:02.638 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.638+0000 7f78ede9b700 1 --2- 192.168.123.102:0/2010014536 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f78e8075a10 0x7f78e8077ea0 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f78d8008790 tx=0x7f78d800ae50 comp rx=0 tx=0).stop
2026-03-10T10:22:02.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.638+0000 7f78ede9b700 1 -- 192.168.123.102:0/2010014536 shutdown_connections
2026-03-10T10:22:02.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.638+0000 7f78ede9b700 1 --2- 192.168.123.102:0/2010014536 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f78e8075a10 0x7f78e8077ea0 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.638+0000 7f78ede9b700 1 --2- 192.168.123.102:0/2010014536 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f78e8072b20 0x7f78e8072f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.638+0000 7f78ede9b700 1 -- 192.168.123.102:0/2010014536 >> 192.168.123.102:0/2010014536 conn(0x7f78e806daa0 msgr2=0x7f78e806ff00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:22:02.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.638+0000 7f78ede9b700 1 -- 192.168.123.102:0/2010014536 shutdown_connections
2026-03-10T10:22:02.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.639+0000 7f78ede9b700 1 -- 192.168.123.102:0/2010014536 wait complete.
2026-03-10T10:22:02.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.639+0000 7f78ede9b700 1 Processor -- start
2026-03-10T10:22:02.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.639+0000 7f78ede9b700 1 -- start start
2026-03-10T10:22:02.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.639+0000 7f78ede9b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f78e8072b20 0x7f78e8082f60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:22:02.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.639+0000 7f78ede9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f78e80834a0 0x7f78e81b3030 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:22:02.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.639+0000 7f78ede9b700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f78e80839b0 con 0x7f78e8072b20
2026-03-10T10:22:02.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.639+0000 7f78ede9b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f78e8083b20 con 0x7f78e80834a0
2026-03-10T10:22:02.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.639+0000 7f78ece99700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f78e8072b20 0x7f78e8082f60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:22:02.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.639+0000 7f78ece99700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f78e8072b20 0x7f78e8082f60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:50380/0 (socket says 192.168.123.102:50380)
2026-03-10T10:22:02.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.639+0000 7f78ece99700 1 -- 192.168.123.102:0/1721729901 learned_addr learned my addr 192.168.123.102:0/1721729901 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:22:02.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.639+0000 7f78ece99700 1 -- 192.168.123.102:0/1721729901 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f78e80834a0 msgr2=0x7f78e81b3030 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:22:02.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.640+0000 7f78ece99700 1 --2- 192.168.123.102:0/1721729901 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f78e80834a0 0x7f78e81b3030 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:22:02.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.640+0000 7f78ece99700 1 -- 192.168.123.102:0/1721729901 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f78d8008440 con 0x7f78e8072b20
2026-03-10T10:22:02.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.640+0000 7f78ece99700 1 --2- 192.168.123.102:0/1721729901 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f78e8072b20 0x7f78e8082f60 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f78e00060b0 tx=0x7f78e00077a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:22:02.641 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.641+0000 7f78e5ffb700 1 -- 192.168.123.102:0/1721729901 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f78e0010040 con 0x7f78e8072b20
2026-03-10T10:22:02.642 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.641+0000 7f78ede9b700 1 -- 192.168.123.102:0/1721729901 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f78e81b35d0 con 0x7f78e8072b20
2026-03-10T10:22:02.642 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.641+0000 7f78ede9b700 1 -- 192.168.123.102:0/1721729901 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f78e81b3b20 con 0x7f78e8072b20
2026-03-10T10:22:02.642 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.642+0000 7f78e5ffb700 1 -- 192.168.123.102:0/1721729901 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f78e0009240 con 0x7f78e8072b20
2026-03-10T10:22:02.642 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.642+0000 7f78e5ffb700 1 -- 192.168.123.102:0/1721729901 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f78e0016600 con 0x7f78e8072b20
2026-03-10T10:22:02.642 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.642+0000 7f78ede9b700 1 -- 192.168.123.102:0/1721729901 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f78e804ea90 con 0x7f78e8072b20
2026-03-10T10:22:02.644
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.644+0000 7f78e5ffb700 1 -- 192.168.123.102:0/1721729901 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f78e0004ad0 con 0x7f78e8072b20 2026-03-10T10:22:02.645 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.645+0000 7f78e5ffb700 1 --2- 192.168.123.102:0/1721729901 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f78d0077b10 0x7f78d0079fd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:02.645 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.645+0000 7f78e5ffb700 1 -- 192.168.123.102:0/1721729901 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f78e009ac00 con 0x7f78e8072b20 2026-03-10T10:22:02.645 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.645+0000 7f78e7fff700 1 --2- 192.168.123.102:0/1721729901 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f78d0077b10 0x7f78d0079fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:02.647 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.647+0000 7f78e7fff700 1 --2- 192.168.123.102:0/1721729901 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f78d0077b10 0x7f78d0079fd0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f78d8005b40 tx=0x7f78d8005ad0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:02.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.648+0000 7f78e5ffb700 1 -- 192.168.123.102:0/1721729901 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f78e0063240 con 0x7f78e8072b20 
2026-03-10T10:22:02.870 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.869+0000 7f78ede9b700 1 -- 192.168.123.102:0/1721729901 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f78e81b40b0 con 0x7f78e8072b20 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.870+0000 7f78e5ffb700 1 -- 192.168.123.102:0/1721729901 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+799 (secure 0 0 0) 0x7f78e0062990 con 0x7f78e8072b20 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 5, 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4 2026-03-10T10:22:02.871 
INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 9, 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:22:02.871 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:22:02.877 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.877+0000 7f78ede9b700 1 -- 192.168.123.102:0/1721729901 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f78d0077b10 msgr2=0x7f78d0079fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:02.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.877+0000 7f78ede9b700 1 --2- 192.168.123.102:0/1721729901 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f78d0077b10 0x7f78d0079fd0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f78d8005b40 tx=0x7f78d8005ad0 comp rx=0 tx=0).stop 2026-03-10T10:22:02.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.877+0000 7f78ede9b700 1 -- 192.168.123.102:0/1721729901 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f78e8072b20 msgr2=0x7f78e8082f60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:02.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.877+0000 7f78ede9b700 1 --2- 192.168.123.102:0/1721729901 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f78e8072b20 0x7f78e8082f60 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f78e00060b0 tx=0x7f78e00077a0 comp rx=0 tx=0).stop 2026-03-10T10:22:02.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.878+0000 7f78ede9b700 
1 -- 192.168.123.102:0/1721729901 shutdown_connections 2026-03-10T10:22:02.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.878+0000 7f78ede9b700 1 --2- 192.168.123.102:0/1721729901 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f78e8072b20 0x7f78e8082f60 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:02.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.878+0000 7f78ede9b700 1 --2- 192.168.123.102:0/1721729901 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f78d0077b10 0x7f78d0079fd0 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:02.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.878+0000 7f78ede9b700 1 --2- 192.168.123.102:0/1721729901 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f78e80834a0 0x7f78e81b3030 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:02.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.878+0000 7f78ede9b700 1 -- 192.168.123.102:0/1721729901 >> 192.168.123.102:0/1721729901 conn(0x7f78e806daa0 msgr2=0x7f78e806ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:02.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.878+0000 7f78ede9b700 1 -- 192.168.123.102:0/1721729901 shutdown_connections 2026-03-10T10:22:02.878 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.878+0000 7f78ede9b700 1 -- 192.168.123.102:0/1721729901 wait complete. 
2026-03-10T10:22:02.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.962+0000 7fe2ace5e700 1 -- 192.168.123.102:0/38953384 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2a810a700 msgr2=0x7fe2a810cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:02.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.962+0000 7fe2ace5e700 1 --2- 192.168.123.102:0/38953384 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2a810a700 0x7fe2a810cb90 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7fe2a000b3a0 tx=0x7fe2a000b6b0 comp rx=0 tx=0).stop 2026-03-10T10:22:02.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.963+0000 7fe2ace5e700 1 -- 192.168.123.102:0/38953384 shutdown_connections 2026-03-10T10:22:02.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.963+0000 7fe2ace5e700 1 --2- 192.168.123.102:0/38953384 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2a810a700 0x7fe2a810cb90 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:02.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.963+0000 7fe2ace5e700 1 --2- 192.168.123.102:0/38953384 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a8107d90 0x7fe2a810a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:02.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.963+0000 7fe2ace5e700 1 -- 192.168.123.102:0/38953384 >> 192.168.123.102:0/38953384 conn(0x7fe2a806daa0 msgr2=0x7fe2a806ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:02.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.964+0000 7fe2ace5e700 1 -- 192.168.123.102:0/38953384 shutdown_connections 2026-03-10T10:22:02.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.964+0000 7fe2ace5e700 1 -- 192.168.123.102:0/38953384 wait complete. 
2026-03-10T10:22:02.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.964+0000 7fe2ace5e700 1 Processor -- start 2026-03-10T10:22:02.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.965+0000 7fe2ace5e700 1 -- start start 2026-03-10T10:22:02.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.965+0000 7fe2ace5e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a8107d90 0x7fe2a8116ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:02.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.965+0000 7fe2ace5e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2a8117020 0x7fe2a81b3180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:02.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.965+0000 7fe2ace5e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe2a8117530 con 0x7fe2a8117020 2026-03-10T10:22:02.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.965+0000 7fe2ace5e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe2a81176a0 con 0x7fe2a8107d90 2026-03-10T10:22:02.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.965+0000 7fe2a6ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2a8117020 0x7fe2a81b3180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:02.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.965+0000 7fe2a6ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2a8117020 0x7fe2a81b3180 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:50392/0 (socket says 192.168.123.102:50392) 2026-03-10T10:22:02.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.965+0000 7fe2a6ffd700 1 -- 192.168.123.102:0/3772476395 learned_addr learned my addr 192.168.123.102:0/3772476395 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:22:02.966 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.965+0000 7fe2a6ffd700 1 -- 192.168.123.102:0/3772476395 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a8107d90 msgr2=0x7fe2a8116ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:02.966 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.965+0000 7fe2a6ffd700 1 --2- 192.168.123.102:0/3772476395 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a8107d90 0x7fe2a8116ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:02.966 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.965+0000 7fe2a6ffd700 1 -- 192.168.123.102:0/3772476395 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe2a000b050 con 0x7fe2a8117020 2026-03-10T10:22:02.966 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.966+0000 7fe2a6ffd700 1 --2- 192.168.123.102:0/3772476395 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2a8117020 0x7fe2a81b3180 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7fe2a0007ab0 tx=0x7fe2a00095a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:02.966 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.966+0000 7fe2a4ff9700 1 -- 192.168.123.102:0/3772476395 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe2a000e050 con 0x7fe2a8117020 2026-03-10T10:22:02.966 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.966+0000 7fe2ace5e700 1 -- 
192.168.123.102:0/3772476395 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe2a81b36c0 con 0x7fe2a8117020 2026-03-10T10:22:02.966 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.966+0000 7fe2ace5e700 1 -- 192.168.123.102:0/3772476395 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe2a81b3c10 con 0x7fe2a8117020 2026-03-10T10:22:02.967 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.967+0000 7fe2a4ff9700 1 -- 192.168.123.102:0/3772476395 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fe2a0003dc0 con 0x7fe2a8117020 2026-03-10T10:22:02.968 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.968+0000 7fe2a4ff9700 1 -- 192.168.123.102:0/3772476395 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe2a001ba40 con 0x7fe2a8117020 2026-03-10T10:22:02.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.968+0000 7fe2a4ff9700 1 -- 192.168.123.102:0/3772476395 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fe2a0019040 con 0x7fe2a8117020 2026-03-10T10:22:02.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.969+0000 7fe2a4ff9700 1 --2- 192.168.123.102:0/3772476395 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe290077870 0x7fe290079d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:02.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.969+0000 7fe2a4ff9700 1 -- 192.168.123.102:0/3772476395 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fe2a009bc40 con 0x7fe2a8117020 2026-03-10T10:22:02.969 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.969+0000 7fe2a77fe700 1 --2- 192.168.123.102:0/3772476395 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe290077870 0x7fe290079d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:02.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.970+0000 7fe2a77fe700 1 --2- 192.168.123.102:0/3772476395 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe290077870 0x7fe290079d30 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fe298005950 tx=0x7fe2980058e0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:02.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.970+0000 7fe2ace5e700 1 -- 192.168.123.102:0/3772476395 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe294005320 con 0x7fe2a8117020 2026-03-10T10:22:02.973 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:02.973+0000 7fe2a4ff9700 1 -- 192.168.123.102:0/3772476395 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe2a0064280 con 0x7fe2a8117020 2026-03-10T10:22:03.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.148+0000 7fe2ace5e700 1 -- 192.168.123.102:0/3772476395 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fe294005cc0 con 0x7fe2a8117020 2026-03-10T10:22:03.154 INFO:teuthology.orchestra.run.vm02.stdout:e15 2026-03-10T10:22:03.154 INFO:teuthology.orchestra.run.vm02.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-10T10:22:03.154 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:22:03.154 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client 
writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:22:03.154 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:22:03.154 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:22:03.154 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:max_xattr_size 65536 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline 
data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464} 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:qdb_cluster leader: 0 members: 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 
join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:22:03.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.152+0000 7fe2a4ff9700 1 -- 192.168.123.102:0/3772476395 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1945 (secure 0 0 0) 0x7fe2a00177b0 con 0x7fe2a8117020 2026-03-10T10:22:03.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.157+0000 7fe28e7fc700 1 -- 192.168.123.102:0/3772476395 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe290077870 msgr2=0x7fe290079d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:03.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.157+0000 7fe28e7fc700 1 --2- 192.168.123.102:0/3772476395 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe290077870 0x7fe290079d30 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fe298005950 tx=0x7fe2980058e0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.157+0000 7fe28e7fc700 1 -- 192.168.123.102:0/3772476395 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2a8117020 msgr2=0x7fe2a81b3180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:03.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.157+0000 7fe28e7fc700 1 --2- 192.168.123.102:0/3772476395 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2a8117020 0x7fe2a81b3180 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7fe2a0007ab0 tx=0x7fe2a00095a0 comp rx=0 tx=0).stop 
2026-03-10T10:22:03.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.157+0000 7fe28e7fc700 1 -- 192.168.123.102:0/3772476395 shutdown_connections 2026-03-10T10:22:03.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.157+0000 7fe28e7fc700 1 --2- 192.168.123.102:0/3772476395 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe290077870 0x7fe290079d30 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.157+0000 7fe28e7fc700 1 --2- 192.168.123.102:0/3772476395 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a8107d90 0x7fe2a8116ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.157+0000 7fe28e7fc700 1 --2- 192.168.123.102:0/3772476395 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2a8117020 0x7fe2a81b3180 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.157+0000 7fe28e7fc700 1 -- 192.168.123.102:0/3772476395 >> 192.168.123.102:0/3772476395 conn(0x7fe2a806daa0 msgr2=0x7fe2a806e780 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:03.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.157+0000 7fe28e7fc700 1 -- 192.168.123.102:0/3772476395 shutdown_connections 2026-03-10T10:22:03.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.157+0000 7fe28e7fc700 1 -- 192.168.123.102:0/3772476395 wait complete. 
2026-03-10T10:22:03.166 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.254+0000 7fdadf59e700 1 -- 192.168.123.102:0/3704676171 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdad80a47c0 msgr2=0x7fdad80a4c20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.254+0000 7fdadf59e700 1 --2- 192.168.123.102:0/3704676171 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdad80a47c0 0x7fdad80a4c20 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fdae0066a30 tx=0x7fdae0067220 comp rx=0 tx=0).stop 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.254+0000 7fdadf59e700 1 -- 192.168.123.102:0/3704676171 shutdown_connections 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.254+0000 7fdadf59e700 1 --2- 192.168.123.102:0/3704676171 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdad80a47c0 0x7fdad80a4c20 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.254+0000 7fdadf59e700 1 --2- 192.168.123.102:0/3704676171 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdad80a6180 0x7fdad80a65a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.254+0000 7fdadf59e700 1 -- 192.168.123.102:0/3704676171 >> 192.168.123.102:0/3704676171 conn(0x7fdad80a0120 msgr2=0x7fdad80a2580 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.254+0000 7fdadf59e700 1 -- 192.168.123.102:0/3704676171 shutdown_connections 2026-03-10T10:22:03.255 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.254+0000 7fdadf59e700 1 -- 192.168.123.102:0/3704676171 wait complete. 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.254+0000 7fdadf59e700 1 Processor -- start 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.254+0000 7fdadf59e700 1 -- start start 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.255+0000 7fdadf59e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdad80a6180 0x7fdad80d0490 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.255+0000 7fdadf59e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdad80d09d0 0x7fdad8010e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.255+0000 7fdadf59e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdad80d0ee0 con 0x7fdad80d09d0 2026-03-10T10:22:03.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.255+0000 7fdadf59e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdad80d1050 con 0x7fdad80a6180 2026-03-10T10:22:03.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.255+0000 7fdade59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdad80a6180 0x7fdad80d0490 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:03.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.255+0000 7fdade59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdad80a6180 0x7fdad80d0490 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:58708/0 (socket says 192.168.123.102:58708) 2026-03-10T10:22:03.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.255+0000 7fdade59c700 1 -- 192.168.123.102:0/1223115950 learned_addr learned my addr 192.168.123.102:0/1223115950 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:22:03.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.255+0000 7fdade59c700 1 -- 192.168.123.102:0/1223115950 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdad80d09d0 msgr2=0x7fdad8010e70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:03.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.255+0000 7fdade59c700 1 --2- 192.168.123.102:0/1223115950 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdad80d09d0 0x7fdad8010e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.255+0000 7fdade59c700 1 -- 192.168.123.102:0/1223115950 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fdae0067090 con 0x7fdad80a6180 2026-03-10T10:22:03.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.255+0000 7fdade59c700 1 --2- 192.168.123.102:0/1223115950 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdad80a6180 0x7fdad80d0490 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7fdad000b6e0 tx=0x7fdad000baa0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:03.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.256+0000 7fdacf7fe700 1 -- 192.168.123.102:0/1223115950 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdad0005cc0 con 
0x7fdad80a6180 2026-03-10T10:22:03.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.256+0000 7fdadf59e700 1 -- 192.168.123.102:0/1223115950 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fdad8011410 con 0x7fdad80a6180 2026-03-10T10:22:03.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.256+0000 7fdadf59e700 1 -- 192.168.123.102:0/1223115950 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fdad8011960 con 0x7fdad80a6180 2026-03-10T10:22:03.257 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.256+0000 7fdacf7fe700 1 -- 192.168.123.102:0/1223115950 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fdad0005e20 con 0x7fdad80a6180 2026-03-10T10:22:03.257 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.256+0000 7fdacf7fe700 1 -- 192.168.123.102:0/1223115950 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdad0006820 con 0x7fdad80a6180 2026-03-10T10:22:03.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.257+0000 7fdadf59e700 1 -- 192.168.123.102:0/1223115950 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fdad8004f10 con 0x7fdad80a6180 2026-03-10T10:22:03.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.259+0000 7fdacf7fe700 1 -- 192.168.123.102:0/1223115950 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fdad0038b50 con 0x7fdad80a6180 2026-03-10T10:22:03.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.259+0000 7fdacf7fe700 1 --2- 192.168.123.102:0/1223115950 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fdac8077bd0 0x7fdac807a090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 
2026-03-10T10:22:03.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.259+0000 7fdacf7fe700 1 -- 192.168.123.102:0/1223115950 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fdad0099240 con 0x7fdad80a6180 2026-03-10T10:22:03.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.260+0000 7fdaddd9b700 1 --2- 192.168.123.102:0/1223115950 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fdac8077bd0 0x7fdac807a090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:03.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.261+0000 7fdaddd9b700 1 --2- 192.168.123.102:0/1223115950 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fdac8077bd0 0x7fdac807a090 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fdae004ed60 tx=0x7fdae0070040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:03.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.261+0000 7fdacf7fe700 1 -- 192.168.123.102:0/1223115950 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fdad0061880 con 0x7fdad80a6180 2026-03-10T10:22:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:03 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:03 vm05.local ceph-mon[103593]: Upgrade: unsafe to stop osd(s) at this time (4 PGs are or would become offline) 2026-03-10T10:22:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:03 vm05.local ceph-mon[103593]: from='client.34182 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:03 vm05.local ceph-mon[103593]: from='client.34186 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:03 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1721729901' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:22:03.463 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.460+0000 7fdadf59e700 1 -- 192.168.123.102:0/1223115950 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fdad80115d0 con 0x7fdac8077bd0 2026-03-10T10:22:03.463 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:03 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:03.463 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:03 vm02.local ceph-mon[110129]: Upgrade: unsafe to stop osd(s) at this time (4 PGs are or would become offline) 2026-03-10T10:22:03.463 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:03 vm02.local ceph-mon[110129]: from='client.34182 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:03.463 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:03 vm02.local ceph-mon[110129]: from='client.34186 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:03.463 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:03 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1721729901' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [ 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout: "mgr", 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout: "mon", 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout: "crash" 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "7/23 daemons upgraded", 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading osd 
daemons", 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:22:03.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.469+0000 7fdacf7fe700 1 -- 192.168.123.102:0/1223115950 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7fdad80115d0 con 0x7fdac8077bd0 2026-03-10T10:22:03.472 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.472+0000 7fdadf59e700 1 -- 192.168.123.102:0/1223115950 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fdac8077bd0 msgr2=0x7fdac807a090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:03.472 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.472+0000 7fdadf59e700 1 --2- 192.168.123.102:0/1223115950 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fdac8077bd0 0x7fdac807a090 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fdae004ed60 tx=0x7fdae0070040 comp rx=0 tx=0).stop 2026-03-10T10:22:03.472 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.472+0000 7fdadf59e700 1 -- 192.168.123.102:0/1223115950 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdad80a6180 msgr2=0x7fdad80d0490 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:03.472 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.472+0000 7fdadf59e700 1 --2- 192.168.123.102:0/1223115950 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdad80a6180 0x7fdad80d0490 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7fdad000b6e0 tx=0x7fdad000baa0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.473 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.472+0000 7fdadf59e700 1 -- 192.168.123.102:0/1223115950 shutdown_connections 2026-03-10T10:22:03.473 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.472+0000 7fdadf59e700 1 --2- 192.168.123.102:0/1223115950 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fdac8077bd0 0x7fdac807a090 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.473 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.472+0000 7fdadf59e700 1 --2- 192.168.123.102:0/1223115950 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdad80a6180 0x7fdad80d0490 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.473 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.473+0000 7fdadf59e700 1 --2- 192.168.123.102:0/1223115950 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fdad80d09d0 0x7fdad8010e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.473 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.473+0000 7fdadf59e700 1 -- 192.168.123.102:0/1223115950 >> 192.168.123.102:0/1223115950 conn(0x7fdad80a0120 msgr2=0x7fdad80a22b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:03.473 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.473+0000 7fdadf59e700 1 -- 192.168.123.102:0/1223115950 shutdown_connections 2026-03-10T10:22:03.473 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.473+0000 7fdadf59e700 1 -- 192.168.123.102:0/1223115950 wait complete. 
2026-03-10T10:22:03.547 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 -- 192.168.123.102:0/3904929863 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fceb0075a40 msgr2=0x7fceb0077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:03.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 --2- 192.168.123.102:0/3904929863 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fceb0075a40 0x7fceb0077ed0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7fcea8009230 tx=0x7fcea8009260 comp rx=0 tx=0).stop 2026-03-10T10:22:03.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 -- 192.168.123.102:0/3904929863 shutdown_connections 2026-03-10T10:22:03.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 --2- 192.168.123.102:0/3904929863 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fceb0075a40 0x7fceb0077ed0 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 --2- 192.168.123.102:0/3904929863 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0072b50 0x7fceb0072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.548 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 -- 192.168.123.102:0/3904929863 >> 192.168.123.102:0/3904929863 conn(0x7fceb006dae0 msgr2=0x7fceb006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 -- 192.168.123.102:0/3904929863 shutdown_connections 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 -- 192.168.123.102:0/3904929863 
wait complete. 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 Processor -- start 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 -- start start 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0072b50 0x7fceb00830f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fceb0083630 0x7fceb01b3180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fceb0083b40 con 0x7fceb0083630 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.547+0000 7fceb6c3c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fceb0083cb0 con 0x7fceb0072b50 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.548+0000 7fceaffff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fceb0083630 0x7fceb01b3180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.548+0000 7fceaffff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fceb0083630 0x7fceb01b3180 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 
says I am v2:192.168.123.102:50448/0 (socket says 192.168.123.102:50448) 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.548+0000 7fceaffff700 1 -- 192.168.123.102:0/30721894 learned_addr learned my addr 192.168.123.102:0/30721894 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.548+0000 7fceaffff700 1 -- 192.168.123.102:0/30721894 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0072b50 msgr2=0x7fceb00830f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.548+0000 7fceaffff700 1 --2- 192.168.123.102:0/30721894 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0072b50 0x7fceb00830f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.548+0000 7fceaffff700 1 -- 192.168.123.102:0/30721894 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcea8008ee0 con 0x7fceb0083630 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.548+0000 7fceaffff700 1 --2- 192.168.123.102:0/30721894 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fceb0083630 0x7fceb01b3180 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7fcea80076d0 tx=0x7fcea8003fc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:03.549 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.549+0000 7fceadffb700 1 -- 192.168.123.102:0/30721894 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcea801f800 con 0x7fceb0083630 2026-03-10T10:22:03.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.549+0000 7fceb6c3c700 1 -- 
192.168.123.102:0/30721894 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fceb01b36c0 con 0x7fceb0083630 2026-03-10T10:22:03.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.549+0000 7fceb6c3c700 1 -- 192.168.123.102:0/30721894 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fceb01b3bb0 con 0x7fceb0083630 2026-03-10T10:22:03.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.550+0000 7fceadffb700 1 -- 192.168.123.102:0/30721894 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fcea801fe40 con 0x7fceb0083630 2026-03-10T10:22:03.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.550+0000 7fceadffb700 1 -- 192.168.123.102:0/30721894 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcea8004bd0 con 0x7fceb0083630 2026-03-10T10:22:03.550 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.550+0000 7fceb6c3c700 1 -- 192.168.123.102:0/30721894 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fce9c005320 con 0x7fceb0083630 2026-03-10T10:22:03.551 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.551+0000 7fceadffb700 1 -- 192.168.123.102:0/30721894 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 36) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fcea801f960 con 0x7fceb0083630 2026-03-10T10:22:03.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.551+0000 7fceadffb700 1 --2- 192.168.123.102:0/30721894 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fce98077a40 0x7fce98079f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:03.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.552+0000 7fceadffb700 1 -- 192.168.123.102:0/30721894 <== mon.0 
v2:192.168.123.102:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fcea809ffe0 con 0x7fceb0083630 2026-03-10T10:22:03.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.552+0000 7fceb49d8700 1 --2- 192.168.123.102:0/30721894 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fce98077a40 0x7fce98079f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:03.554 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.554+0000 7fceb49d8700 1 --2- 192.168.123.102:0/30721894 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fce98077a40 0x7fce98079f00 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fcea000a990 tx=0x7fcea0005c80 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:03.554 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.554+0000 7fceadffb700 1 -- 192.168.123.102:0/30721894 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fcea8068620 con 0x7fceb0083630 2026-03-10T10:22:03.724 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.724+0000 7fceb6c3c700 1 -- 192.168.123.102:0/30721894 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fce9c005190 con 0x7fceb0083630 2026-03-10T10:22:03.727 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.727+0000 7fceadffb700 1 -- 192.168.123.102:0/30721894 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+732 (secure 0 0 0) 0x7fcea8067d70 con 0x7fceb0083630 2026-03-10T10:22:03.727 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_WARN Degraded data redundancy: 649/15732 objects 
degraded (4.125%), 9 pgs degraded 2026-03-10T10:22:03.727 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 649/15732 objects degraded (4.125%), 9 pgs degraded 2026-03-10T10:22:03.727 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.1 is active+recovery_wait+degraded, acting [0,4,2] 2026-03-10T10:22:03.727 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.6 is active+recovery_wait+degraded, acting [0,1,4] 2026-03-10T10:22:03.727 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.b is active+recovery_wait+degraded, acting [1,0,4] 2026-03-10T10:22:03.727 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.c is active+recovery_wait+degraded, acting [5,0,3] 2026-03-10T10:22:03.727 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.10 is active+recovery_wait+degraded, acting [5,0,1] 2026-03-10T10:22:03.727 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.11 is active+recovery_wait+degraded, acting [3,4,0] 2026-03-10T10:22:03.728 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.17 is active+recovery_wait+degraded, acting [0,5,2] 2026-03-10T10:22:03.728 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.18 is active+recovery_wait+degraded, acting [2,0,1] 2026-03-10T10:22:03.728 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.1f is active+recovery_wait+degraded, acting [0,3,2] 2026-03-10T10:22:03.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.730+0000 7fce977fe700 1 -- 192.168.123.102:0/30721894 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fce98077a40 msgr2=0x7fce98079f00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:03.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.730+0000 7fce977fe700 1 --2- 192.168.123.102:0/30721894 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fce98077a40 0x7fce98079f00 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fcea000a990 tx=0x7fcea0005c80 comp rx=0 tx=0).stop 
2026-03-10T10:22:03.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.730+0000 7fce977fe700 1 -- 192.168.123.102:0/30721894 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fceb0083630 msgr2=0x7fceb01b3180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:03.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.730+0000 7fce977fe700 1 --2- 192.168.123.102:0/30721894 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fceb0083630 0x7fceb01b3180 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7fcea80076d0 tx=0x7fcea8003fc0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.730+0000 7fce977fe700 1 -- 192.168.123.102:0/30721894 shutdown_connections 2026-03-10T10:22:03.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.730+0000 7fce977fe700 1 --2- 192.168.123.102:0/30721894 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fce98077a40 0x7fce98079f00 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.730+0000 7fce977fe700 1 --2- 192.168.123.102:0/30721894 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0072b50 0x7fceb00830f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.730+0000 7fce977fe700 1 --2- 192.168.123.102:0/30721894 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fceb0083630 0x7fceb01b3180 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:03.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.730+0000 7fce977fe700 1 -- 192.168.123.102:0/30721894 >> 192.168.123.102:0/30721894 conn(0x7fceb006dae0 msgr2=0x7fceb006ff40 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:22:03.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.730+0000 7fce977fe700 1 -- 192.168.123.102:0/30721894 shutdown_connections 2026-03-10T10:22:03.731 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:03.730+0000 7fce977fe700 1 -- 192.168.123.102:0/30721894 wait complete. 2026-03-10T10:22:04.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:04 vm02.local ceph-mon[110129]: from='client.44159 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:04.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:04 vm02.local ceph-mon[110129]: pgmap v54: 65 pgs: 9 active+recovery_wait+degraded, 2 active+recovering, 54 active+clean; 247 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1011 KiB/s rd, 1.0 MiB/s wr, 342 op/s; 649/15732 objects degraded (4.125%); 0 B/s, 11 objects/s recovering 2026-03-10T10:22:04.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:04 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/3772476395' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:22:04.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:04 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/30721894' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:22:04.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:04 vm05.local ceph-mon[103593]: from='client.44159 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:04.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:04 vm05.local ceph-mon[103593]: pgmap v54: 65 pgs: 9 active+recovery_wait+degraded, 2 active+recovering, 54 active+clean; 247 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1011 KiB/s rd, 1.0 MiB/s wr, 342 op/s; 649/15732 objects degraded (4.125%); 0 B/s, 11 objects/s recovering 2026-03-10T10:22:04.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:04 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/3772476395' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:22:04.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:04 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/30721894' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:22:05.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:05 vm02.local ceph-mon[110129]: from='client.44167 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:05.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:05 vm05.local ceph-mon[103593]: from='client.44167 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:07.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:06 vm02.local ceph-mon[110129]: pgmap v55: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 247 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 397 op/s; 580/13209 objects degraded (4.391%); 0 B/s, 14 objects/s recovering 2026-03-10T10:22:07.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:06 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 580/13209 objects degraded (4.391%), 8 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:07.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:06 vm05.local ceph-mon[103593]: pgmap v55: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 247 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 397 op/s; 580/13209 objects degraded (4.391%); 0 B/s, 14 objects/s recovering 2026-03-10T10:22:07.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:06 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 580/13209 objects degraded (4.391%), 8 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:08.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:07 vm02.local ceph-mon[110129]: pgmap v56: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 247 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 965 
KiB/s rd, 1000 KiB/s wr, 271 op/s; 580/13209 objects degraded (4.391%); 0 B/s, 13 objects/s recovering 2026-03-10T10:22:08.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:08.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:07 vm05.local ceph-mon[103593]: pgmap v56: 65 pgs: 8 active+recovery_wait+degraded, 1 active+recovering, 56 active+clean; 247 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 965 KiB/s rd, 1000 KiB/s wr, 271 op/s; 580/13209 objects degraded (4.391%); 0 B/s, 13 objects/s recovering 2026-03-10T10:22:08.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:10.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:09 vm02.local ceph-mon[110129]: pgmap v57: 65 pgs: 7 active+recovery_wait+degraded, 2 active+recovering, 56 active+clean; 242 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 377 op/s; 517/10584 objects degraded (4.885%); 0 B/s, 16 objects/s recovering 2026-03-10T10:22:10.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:09 vm05.local ceph-mon[103593]: pgmap v57: 65 pgs: 7 active+recovery_wait+degraded, 2 active+recovering, 56 active+clean; 242 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 377 op/s; 517/10584 objects degraded (4.885%); 0 B/s, 16 objects/s recovering 2026-03-10T10:22:11.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:10 vm02.local ceph-mon[110129]: mgrmap e37: vm02.zmavgl(active, since 93s), standbys: vm05.coparq 2026-03-10T10:22:11.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:10 vm02.local ceph-mon[110129]: Health check update: Degraded data 
redundancy: 517/10584 objects degraded (4.885%), 7 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:11.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:10 vm05.local ceph-mon[103593]: mgrmap e37: vm02.zmavgl(active, since 93s), standbys: vm05.coparq 2026-03-10T10:22:11.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:10 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 517/10584 objects degraded (4.885%), 7 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:12.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:11 vm02.local ceph-mon[110129]: pgmap v58: 65 pgs: 7 active+recovery_wait+degraded, 2 active+recovering, 56 active+clean; 240 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 875 KiB/s rd, 917 KiB/s wr, 285 op/s; 517/9948 objects degraded (5.197%); 0 B/s, 16 objects/s recovering 2026-03-10T10:22:12.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:11 vm05.local ceph-mon[103593]: pgmap v58: 65 pgs: 7 active+recovery_wait+degraded, 2 active+recovering, 56 active+clean; 240 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 875 KiB/s rd, 917 KiB/s wr, 285 op/s; 517/9948 objects degraded (5.197%); 0 B/s, 16 objects/s recovering 2026-03-10T10:22:13.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:13 vm02.local ceph-mon[110129]: pgmap v59: 65 pgs: 7 active+recovery_wait+degraded, 2 active+recovering, 56 active+clean; 233 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 875 KiB/s rd, 917 KiB/s wr, 292 op/s; 517/8835 objects degraded (5.852%); 0 B/s, 12 objects/s recovering 2026-03-10T10:22:14.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:13 vm05.local ceph-mon[103593]: pgmap v59: 65 pgs: 7 active+recovery_wait+degraded, 2 active+recovering, 56 active+clean; 233 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 875 KiB/s rd, 917 KiB/s wr, 292 op/s; 517/8835 objects degraded (5.852%); 0 B/s, 12 objects/s recovering 2026-03-10T10:22:15.941 INFO:tasks.workunit:Stopping ['suites/fsstress.sh'] on 
client.0... 2026-03-10T10:22:15.941 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-10T10:22:16.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:15 vm02.local ceph-mon[110129]: pgmap v60: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 226 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 345 op/s; 448/6954 objects degraded (6.442%); 0 B/s, 18 objects/s recovering 2026-03-10T10:22:16.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:15 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 448/6954 objects degraded (6.442%), 6 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:16.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:15 vm05.local ceph-mon[103593]: pgmap v60: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 226 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 345 op/s; 448/6954 objects degraded (6.442%); 0 B/s, 18 objects/s recovering 2026-03-10T10:22:16.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:15 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 448/6954 objects degraded (6.442%), 6 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:16.415 DEBUG:teuthology.parallel:result is None 2026-03-10T10:22:17.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:16 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:22:17.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:16 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:22:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:17 vm02.local ceph-mon[110129]: pgmap 
v61: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 226 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 873 KiB/s rd, 865 KiB/s wr, 243 op/s; 448/6954 objects degraded (6.442%); 0 B/s, 11 objects/s recovering 2026-03-10T10:22:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:22:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:22:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:22:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:22:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:22:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:22:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:22:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:17 vm05.local ceph-mon[103593]: pgmap v61: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 226 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 873 KiB/s rd, 865 KiB/s wr, 243 op/s; 448/6954 objects degraded (6.442%); 0 B/s, 11 objects/s recovering 2026-03-10T10:22:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:22:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:22:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:22:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:22:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:22:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:22:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-10T10:22:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:19.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:18 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:19.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:18 vm02.local ceph-mon[110129]: Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline) 2026-03-10T10:22:19.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:18 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:19.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:18 vm05.local ceph-mon[103593]: Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline) 2026-03-10T10:22:20.077 INFO:tasks.workunit:Stopping ['suites/fsstress.sh'] on client.1... 
2026-03-10T10:22:20.077 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.1 /home/ubuntu/cephtest/clone.client.1 2026-03-10T10:22:20.473 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:20 vm05.local ceph-mon[103593]: pgmap v62: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 223 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 357 op/s; 448/3975 objects degraded (11.270%); 0 B/s, 15 objects/s recovering 2026-03-10T10:22:20.473 DEBUG:teuthology.parallel:result is None 2026-03-10T10:22:20.473 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T10:22:20.503 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:20 vm02.local ceph-mon[110129]: pgmap v62: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 223 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 357 op/s; 448/3975 objects degraded (11.270%); 0 B/s, 15 objects/s recovering 2026-03-10T10:22:20.515 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T10:22:20.515 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.1/client.1 2026-03-10T10:22:20.572 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.1/client.1 2026-03-10T10:22:20.572 DEBUG:teuthology.parallel:result is None 2026-03-10T10:22:21.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:21 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 448/3975 objects degraded (11.270%), 6 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:21.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:21 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 448/3975 objects degraded (11.270%), 6 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:22.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:22 vm02.local ceph-mon[110129]: pgmap v63: 65 
pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 218 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 830 KiB/s rd, 873 KiB/s wr, 274 op/s; 448/3165 objects degraded (14.155%); 0 B/s, 12 objects/s recovering 2026-03-10T10:22:22.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:22 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:22.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:22 vm05.local ceph-mon[103593]: pgmap v63: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 218 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 830 KiB/s rd, 873 KiB/s wr, 274 op/s; 448/3165 objects degraded (14.155%); 0 B/s, 12 objects/s recovering 2026-03-10T10:22:22.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:22 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:23.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:23 vm02.local ceph-mon[110129]: pgmap v64: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 213 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 950 KiB/s rd, 996 KiB/s wr, 291 op/s; 448/2091 objects degraded (21.425%); 0 B/s, 8 objects/s recovering 2026-03-10T10:22:23.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:23 vm05.local ceph-mon[103593]: pgmap v64: 65 pgs: 6 active+recovery_wait+degraded, 1 active+recovering, 58 active+clean; 213 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 950 KiB/s rd, 996 KiB/s wr, 291 op/s; 448/2091 objects degraded (21.425%); 0 B/s, 8 objects/s recovering 2026-03-10T10:22:26.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:25 vm02.local ceph-mon[110129]: pgmap v65: 65 pgs: 5 active+recovery_wait+degraded, 1 active+recovering, 59 
active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 328 op/s; 384/231 objects degraded (166.234%); 0 B/s, 13 objects/s recovering 2026-03-10T10:22:26.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:25 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 384/231 objects degraded (166.234%), 5 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:26.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:25 vm05.local ceph-mon[103593]: pgmap v65: 65 pgs: 5 active+recovery_wait+degraded, 1 active+recovering, 59 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 328 op/s; 384/231 objects degraded (166.234%); 0 B/s, 13 objects/s recovering 2026-03-10T10:22:26.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:25 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 384/231 objects degraded (166.234%), 5 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:28.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:28 vm02.local ceph-mon[110129]: pgmap v66: 65 pgs: 5 active+recovery_wait+degraded, 1 active+recovering, 59 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 729 KiB/s rd, 736 KiB/s wr, 239 op/s; 384/231 objects degraded (166.234%); 0 B/s, 7 objects/s recovering 2026-03-10T10:22:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:28 vm05.local ceph-mon[103593]: pgmap v66: 65 pgs: 5 active+recovery_wait+degraded, 1 active+recovering, 59 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 729 KiB/s rd, 736 KiB/s wr, 239 op/s; 384/231 objects degraded (166.234%); 0 B/s, 7 objects/s recovering 2026-03-10T10:22:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:30 vm02.local ceph-mon[110129]: pgmap v67: 65 pgs: 4 active+recovery_wait+degraded, 2 active+recovering, 59 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 746 KiB/s rd, 753 KiB/s wr, 241 op/s; 299/228 
objects degraded (131.140%); 0 B/s, 8 objects/s recovering 2026-03-10T10:22:30.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:30 vm05.local ceph-mon[103593]: pgmap v67: 65 pgs: 4 active+recovery_wait+degraded, 2 active+recovering, 59 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 746 KiB/s rd, 753 KiB/s wr, 241 op/s; 299/228 objects degraded (131.140%); 0 B/s, 8 objects/s recovering 2026-03-10T10:22:31.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:31 vm02.local ceph-mon[110129]: pgmap v68: 65 pgs: 4 active+recovery_wait+degraded, 1 active+recovering, 60 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 335 KiB/s rd, 308 KiB/s wr, 127 op/s; 299/228 objects degraded (131.140%); 0 B/s, 7 objects/s recovering 2026-03-10T10:22:31.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:31 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 299/228 objects degraded (131.140%), 4 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:31.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:31 vm05.local ceph-mon[103593]: pgmap v68: 65 pgs: 4 active+recovery_wait+degraded, 1 active+recovering, 60 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 335 KiB/s rd, 308 KiB/s wr, 127 op/s; 299/228 objects degraded (131.140%); 0 B/s, 7 objects/s recovering 2026-03-10T10:22:31.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:31 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 299/228 objects degraded (131.140%), 4 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:32.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:32 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:32.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:32 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' 
entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:33 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:33 vm05.local ceph-mon[103593]: Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline) 2026-03-10T10:22:33.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:33 vm05.local ceph-mon[103593]: pgmap v69: 65 pgs: 4 active+recovery_wait+degraded, 1 active+recovering, 60 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 335 KiB/s rd, 308 KiB/s wr, 105 op/s; 299/228 objects degraded (131.140%); 0 B/s, 7 objects/s recovering 2026-03-10T10:22:33.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:33 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:33.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:33 vm02.local ceph-mon[110129]: Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline) 2026-03-10T10:22:33.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:33 vm02.local ceph-mon[110129]: pgmap v69: 65 pgs: 4 active+recovery_wait+degraded, 1 active+recovering, 60 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 335 KiB/s rd, 308 KiB/s wr, 105 op/s; 299/228 objects degraded (131.140%); 0 B/s, 7 objects/s recovering 2026-03-10T10:22:33.806 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.805+0000 7f8397653700 1 -- 192.168.123.102:0/4276674525 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8390104340 msgr2=0x7f83901047a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:33.806 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.805+0000 7f8397653700 1 --2- 192.168.123.102:0/4276674525 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8390104340 0x7f83901047a0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f8380009b50 tx=0x7f8380009e60 comp rx=0 tx=0).stop 2026-03-10T10:22:33.806 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.806+0000 7f8397653700 1 -- 192.168.123.102:0/4276674525 shutdown_connections 2026-03-10T10:22:33.806 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.806+0000 7f8397653700 1 --2- 192.168.123.102:0/4276674525 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8390104340 0x7f83901047a0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:33.806 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.806+0000 7f8397653700 1 --2- 192.168.123.102:0/4276674525 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8390103140 0x7f8390103560 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:33.806 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.806+0000 7f8397653700 1 -- 192.168.123.102:0/4276674525 >> 192.168.123.102:0/4276674525 conn(0x7f83900fe6c0 msgr2=0x7f8390100b20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:33.807 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.806+0000 7f8397653700 1 -- 192.168.123.102:0/4276674525 shutdown_connections 2026-03-10T10:22:33.807 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.806+0000 7f8397653700 1 -- 192.168.123.102:0/4276674525 wait complete. 
2026-03-10T10:22:33.807 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.807+0000 7f8397653700 1 Processor -- start 2026-03-10T10:22:33.807 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.807+0000 7f8397653700 1 -- start start 2026-03-10T10:22:33.808 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.807+0000 7f8397653700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8390103140 0x7f8390198a40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:33.808 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.808+0000 7f8397653700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8390104340 0x7f8390198f80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:33.808 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.808+0000 7f8397653700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f83901995a0 con 0x7f8390104340 2026-03-10T10:22:33.808 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.808+0000 7f8397653700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f83901996e0 con 0x7f8390103140 2026-03-10T10:22:33.808 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.808+0000 7f8394bee700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8390104340 0x7f8390198f80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:33.808 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.808+0000 7f8394bee700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8390104340 0x7f8390198f80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:36178/0 (socket says 192.168.123.102:36178) 2026-03-10T10:22:33.808 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.808+0000 7f8394bee700 1 -- 192.168.123.102:0/780598279 learned_addr learned my addr 192.168.123.102:0/780598279 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:22:33.808 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.808+0000 7f8394bee700 1 -- 192.168.123.102:0/780598279 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8390103140 msgr2=0x7f8390198a40 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:22:33.808 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.808+0000 7f83953ef700 1 --2- 192.168.123.102:0/780598279 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8390103140 0x7f8390198a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:33.809 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.808+0000 7f8394bee700 1 --2- 192.168.123.102:0/780598279 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8390103140 0x7f8390198a40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:33.809 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.808+0000 7f8394bee700 1 -- 192.168.123.102:0/780598279 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f83800097e0 con 0x7f8390104340 2026-03-10T10:22:33.809 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.809+0000 7f8394bee700 1 --2- 192.168.123.102:0/780598279 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8390104340 0x7f8390198f80 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f8380004ce0 tx=0x7f8380005ee0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:33.809 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.809+0000 7f83867fc700 1 -- 192.168.123.102:0/780598279 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f838001d070 con 0x7f8390104340 2026-03-10T10:22:33.810 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.809+0000 7f83867fc700 1 -- 192.168.123.102:0/780598279 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f838000bc30 con 0x7f8390104340 2026-03-10T10:22:33.810 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.809+0000 7f83867fc700 1 -- 192.168.123.102:0/780598279 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f838000f720 con 0x7f8390104340 2026-03-10T10:22:33.810 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.809+0000 7f8397653700 1 -- 192.168.123.102:0/780598279 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f839019e130 con 0x7f8390104340 2026-03-10T10:22:33.810 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.809+0000 7f8397653700 1 -- 192.168.123.102:0/780598279 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f839019e620 con 0x7f8390104340 2026-03-10T10:22:33.810 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.809+0000 7f83953ef700 1 --2- 192.168.123.102:0/780598279 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8390103140 0x7f8390198a40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:22:33.811 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.810+0000 7f8397653700 1 -- 192.168.123.102:0/780598279 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8390066e80 con 0x7f8390104340 2026-03-10T10:22:33.814 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.811+0000 7f83867fc700 1 -- 192.168.123.102:0/780598279 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f8380022ae0 con 0x7f8390104340 2026-03-10T10:22:33.814 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.811+0000 7f83867fc700 1 --2- 192.168.123.102:0/780598279 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f837c0778c0 0x7f837c079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:33.814 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.811+0000 7f83867fc700 1 -- 192.168.123.102:0/780598279 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f838009b300 con 0x7f8390104340 2026-03-10T10:22:33.814 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.814+0000 7f83867fc700 1 -- 192.168.123.102:0/780598279 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f8380063ae0 con 0x7f8390104340 2026-03-10T10:22:33.815 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.814+0000 7f83953ef700 1 --2- 192.168.123.102:0/780598279 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f837c0778c0 0x7f837c079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:33.815 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.815+0000 7f83953ef700 1 --2- 
192.168.123.102:0/780598279 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f837c0778c0 0x7f837c079d80 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f83901041a0 tx=0x7f838c009450 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:33.948 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.948+0000 7f8397653700 1 -- 192.168.123.102:0/780598279 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f8390108c90 con 0x7f837c0778c0 2026-03-10T10:22:33.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.950+0000 7f83867fc700 1 -- 192.168.123.102:0/780598279 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f8390108c90 con 0x7f837c0778c0 2026-03-10T10:22:33.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.953+0000 7f8397653700 1 -- 192.168.123.102:0/780598279 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f837c0778c0 msgr2=0x7f837c079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:33.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.953+0000 7f8397653700 1 --2- 192.168.123.102:0/780598279 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f837c0778c0 0x7f837c079d80 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f83901041a0 tx=0x7f838c009450 comp rx=0 tx=0).stop 2026-03-10T10:22:33.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.953+0000 7f8397653700 1 -- 192.168.123.102:0/780598279 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8390104340 msgr2=0x7f8390198f80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:33.953 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.953+0000 7f8397653700 1 --2- 192.168.123.102:0/780598279 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8390104340 0x7f8390198f80 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f8380004ce0 tx=0x7f8380005ee0 comp rx=0 tx=0).stop 2026-03-10T10:22:33.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.953+0000 7f8397653700 1 -- 192.168.123.102:0/780598279 shutdown_connections 2026-03-10T10:22:33.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.953+0000 7f8397653700 1 --2- 192.168.123.102:0/780598279 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f837c0778c0 0x7f837c079d80 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:33.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.953+0000 7f8397653700 1 --2- 192.168.123.102:0/780598279 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8390103140 0x7f8390198a40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:33.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.953+0000 7f8397653700 1 --2- 192.168.123.102:0/780598279 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8390104340 0x7f8390198f80 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:33.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.953+0000 7f8397653700 1 -- 192.168.123.102:0/780598279 >> 192.168.123.102:0/780598279 conn(0x7f83900fe6c0 msgr2=0x7f8390107570 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:33.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.953+0000 7f8397653700 1 -- 192.168.123.102:0/780598279 shutdown_connections 2026-03-10T10:22:33.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:33.954+0000 7f8397653700 1 -- 
192.168.123.102:0/780598279 wait complete. 2026-03-10T10:22:33.963 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:22:34.030 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.029+0000 7fed829d7700 1 -- 192.168.123.102:0/1700350054 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fed7c101ab0 msgr2=0x7fed7c103ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.030 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.029+0000 7fed829d7700 1 --2- 192.168.123.102:0/1700350054 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fed7c101ab0 0x7fed7c103ea0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7fed64009b00 tx=0x7fed64009e10 comp rx=0 tx=0).stop 2026-03-10T10:22:34.030 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.030+0000 7fed829d7700 1 -- 192.168.123.102:0/1700350054 shutdown_connections 2026-03-10T10:22:34.030 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.030+0000 7fed829d7700 1 --2- 192.168.123.102:0/1700350054 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fed7c1043e0 0x7fed7c1067d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.030 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.030+0000 7fed829d7700 1 --2- 192.168.123.102:0/1700350054 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fed7c101ab0 0x7fed7c103ea0 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.030 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.030+0000 7fed829d7700 1 -- 192.168.123.102:0/1700350054 >> 192.168.123.102:0/1700350054 conn(0x7fed7c0fb3c0 msgr2=0x7fed7c0fd840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:34.030 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.030+0000 7fed829d7700 1 -- 192.168.123.102:0/1700350054 shutdown_connections 2026-03-10T10:22:34.030 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.030+0000 7fed829d7700 1 -- 192.168.123.102:0/1700350054 wait complete. 2026-03-10T10:22:34.031 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.031+0000 7fed829d7700 1 Processor -- start 2026-03-10T10:22:34.031 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.031+0000 7fed829d7700 1 -- start start 2026-03-10T10:22:34.031 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.031+0000 7fed829d7700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fed7c101ab0 0x7fed7c1989c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.031 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.031+0000 7fed829d7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fed7c1043e0 0x7fed7c198f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.031 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.031+0000 7fed829d7700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fed7c199520 con 0x7fed7c101ab0 2026-03-10T10:22:34.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.031+0000 7fed829d7700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fed7c199660 con 0x7fed7c1043e0 2026-03-10T10:22:34.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.031+0000 7fed7b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fed7c1043e0 0x7fed7c198f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.031+0000 7fed7b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fed7c1043e0 0x7fed7c198f00 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:51516/0 (socket says 192.168.123.102:51516) 2026-03-10T10:22:34.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.031+0000 7fed7b7fe700 1 -- 192.168.123.102:0/1902244247 learned_addr learned my addr 192.168.123.102:0/1902244247 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:22:34.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.032+0000 7fed7bfff700 1 --2- 192.168.123.102:0/1902244247 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fed7c101ab0 0x7fed7c1989c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.032+0000 7fed7bfff700 1 -- 192.168.123.102:0/1902244247 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fed7c1043e0 msgr2=0x7fed7c198f00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.032+0000 7fed7bfff700 1 --2- 192.168.123.102:0/1902244247 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fed7c1043e0 0x7fed7c198f00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.032+0000 7fed7bfff700 1 -- 192.168.123.102:0/1902244247 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fed640097e0 con 0x7fed7c101ab0 2026-03-10T10:22:34.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.032+0000 7fed7b7fe700 1 --2- 192.168.123.102:0/1902244247 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fed7c1043e0 0x7fed7c198f00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-10T10:22:34.033 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.032+0000 7fed7bfff700 1 --2- 192.168.123.102:0/1902244247 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fed7c101ab0 0x7fed7c1989c0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7fed64000c00 tx=0x7fed64004a40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:34.034 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.033+0000 7fed797fa700 1 -- 192.168.123.102:0/1902244247 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fed6401d070 con 0x7fed7c101ab0 2026-03-10T10:22:34.034 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.033+0000 7fed797fa700 1 -- 192.168.123.102:0/1902244247 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fed6400bc50 con 0x7fed7c101ab0 2026-03-10T10:22:34.034 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.033+0000 7fed797fa700 1 -- 192.168.123.102:0/1902244247 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fed6400f830 con 0x7fed7c101ab0 2026-03-10T10:22:34.034 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.033+0000 7fed829d7700 1 -- 192.168.123.102:0/1902244247 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fed7c19e0b0 con 0x7fed7c101ab0 2026-03-10T10:22:34.034 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.033+0000 7fed829d7700 1 -- 192.168.123.102:0/1902244247 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fed7c19e5a0 con 0x7fed7c101ab0 2026-03-10T10:22:34.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.034+0000 7fed797fa700 1 -- 192.168.123.102:0/1902244247 <== mon.0 v2:192.168.123.102:3300/0 4 ==== 
mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fed6400f990 con 0x7fed7c101ab0 2026-03-10T10:22:34.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.034+0000 7fed829d7700 1 -- 192.168.123.102:0/1902244247 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fed7c0fcfd0 con 0x7fed7c101ab0 2026-03-10T10:22:34.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.035+0000 7fed797fa700 1 --2- 192.168.123.102:0/1902244247 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fed68077870 0x7fed68079d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.035+0000 7fed797fa700 1 -- 192.168.123.102:0/1902244247 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fed6409b190 con 0x7fed7c101ab0 2026-03-10T10:22:34.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.037+0000 7fed7b7fe700 1 --2- 192.168.123.102:0/1902244247 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fed68077870 0x7fed68079d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.037+0000 7fed7b7fe700 1 --2- 192.168.123.102:0/1902244247 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fed68077870 0x7fed68079d30 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fed6c005fd0 tx=0x7fed6c005e70 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:34.038 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.038+0000 7fed797fa700 1 -- 192.168.123.102:0/1902244247 <== mon.0 v2:192.168.123.102:3300/0 6 
==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fed64063a20 con 0x7fed7c101ab0 2026-03-10T10:22:34.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.173+0000 7fed829d7700 1 -- 192.168.123.102:0/1902244247 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fed7c0611d0 con 0x7fed68077870 2026-03-10T10:22:34.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.179+0000 7fed797fa700 1 -- 192.168.123.102:0/1902244247 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7fed7c0611d0 con 0x7fed68077870 2026-03-10T10:22:34.182 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.182+0000 7fed829d7700 1 -- 192.168.123.102:0/1902244247 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fed68077870 msgr2=0x7fed68079d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.182 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.182+0000 7fed829d7700 1 --2- 192.168.123.102:0/1902244247 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fed68077870 0x7fed68079d30 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fed6c005fd0 tx=0x7fed6c005e70 comp rx=0 tx=0).stop 2026-03-10T10:22:34.182 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.182+0000 7fed829d7700 1 -- 192.168.123.102:0/1902244247 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fed7c101ab0 msgr2=0x7fed7c1989c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.182 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.182+0000 7fed829d7700 1 --2- 192.168.123.102:0/1902244247 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fed7c101ab0 0x7fed7c1989c0 secure :-1 s=READY 
pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7fed64000c00 tx=0x7fed64004a40 comp rx=0 tx=0).stop 2026-03-10T10:22:34.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.182+0000 7fed829d7700 1 -- 192.168.123.102:0/1902244247 shutdown_connections 2026-03-10T10:22:34.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.182+0000 7fed829d7700 1 --2- 192.168.123.102:0/1902244247 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fed7c101ab0 0x7fed7c1989c0 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.182+0000 7fed829d7700 1 --2- 192.168.123.102:0/1902244247 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fed68077870 0x7fed68079d30 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.182+0000 7fed829d7700 1 --2- 192.168.123.102:0/1902244247 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fed7c1043e0 0x7fed7c198f00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.182+0000 7fed829d7700 1 -- 192.168.123.102:0/1902244247 >> 192.168.123.102:0/1902244247 conn(0x7fed7c0fb3c0 msgr2=0x7fed7c105050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:34.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.182+0000 7fed829d7700 1 -- 192.168.123.102:0/1902244247 shutdown_connections 2026-03-10T10:22:34.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.183+0000 7fed829d7700 1 -- 192.168.123.102:0/1902244247 wait complete. 
2026-03-10T10:22:34.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.255+0000 7f5a23ca9700 1 -- 192.168.123.102:0/1785059559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a1c100f60 msgr2=0x7f5a1c1013e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.255+0000 7f5a23ca9700 1 --2- 192.168.123.102:0/1785059559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a1c100f60 0x7f5a1c1013e0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f5a18009b50 tx=0x7f5a18009e60 comp rx=0 tx=0).stop 2026-03-10T10:22:34.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.255+0000 7f5a23ca9700 1 -- 192.168.123.102:0/1785059559 shutdown_connections 2026-03-10T10:22:34.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.255+0000 7f5a23ca9700 1 --2- 192.168.123.102:0/1785059559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a1c100f60 0x7f5a1c1013e0 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.255+0000 7f5a23ca9700 1 --2- 192.168.123.102:0/1785059559 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a1c0ffe00 0x7f5a1c100220 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.255+0000 7f5a23ca9700 1 -- 192.168.123.102:0/1785059559 >> 192.168.123.102:0/1785059559 conn(0x7f5a1c0fb360 msgr2=0x7f5a1c0fd7e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:34.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.255+0000 7f5a23ca9700 1 -- 192.168.123.102:0/1785059559 shutdown_connections 2026-03-10T10:22:34.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.255+0000 7f5a23ca9700 1 -- 192.168.123.102:0/1785059559 
wait complete. 2026-03-10T10:22:34.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.256+0000 7f5a23ca9700 1 Processor -- start 2026-03-10T10:22:34.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.256+0000 7f5a23ca9700 1 -- start start 2026-03-10T10:22:34.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.256+0000 7f5a23ca9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a1c0ffe00 0x7f5a1c1945d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.256+0000 7f5a23ca9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a1c100f60 0x7f5a1c194b10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.256+0000 7f5a23ca9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5a1c195130 con 0x7f5a1c0ffe00 2026-03-10T10:22:34.257 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.256+0000 7f5a21244700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a1c100f60 0x7f5a1c194b10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.257 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.256+0000 7f5a21244700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a1c100f60 0x7f5a1c194b10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:51528/0 (socket says 192.168.123.102:51528) 2026-03-10T10:22:34.257 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.256+0000 7f5a21244700 1 -- 192.168.123.102:0/3515526111 learned_addr learned 
my addr 192.168.123.102:0/3515526111 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:22:34.257 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.256+0000 7f5a23ca9700 1 -- 192.168.123.102:0/3515526111 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5a1c195270 con 0x7f5a1c100f60 2026-03-10T10:22:34.257 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.257+0000 7f5a21a45700 1 --2- 192.168.123.102:0/3515526111 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a1c0ffe00 0x7f5a1c1945d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.257 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.257+0000 7f5a21a45700 1 -- 192.168.123.102:0/3515526111 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a1c100f60 msgr2=0x7f5a1c194b10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.257 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.257+0000 7f5a21a45700 1 --2- 192.168.123.102:0/3515526111 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a1c100f60 0x7f5a1c194b10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.257 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.257+0000 7f5a21a45700 1 -- 192.168.123.102:0/3515526111 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5a180097e0 con 0x7f5a1c0ffe00 2026-03-10T10:22:34.257 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.257+0000 7f5a21244700 1 --2- 192.168.123.102:0/3515526111 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a1c100f60 0x7f5a1c194b10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-10T10:22:34.258 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.257+0000 7f5a21a45700 1 --2- 192.168.123.102:0/3515526111 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a1c0ffe00 0x7f5a1c1945d0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f5a0c00eab0 tx=0x7f5a0c00edc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:34.258 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.257+0000 7f5a12ffd700 1 -- 192.168.123.102:0/3515526111 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5a0c00cb20 con 0x7f5a1c0ffe00 2026-03-10T10:22:34.258 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.257+0000 7f5a12ffd700 1 -- 192.168.123.102:0/3515526111 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5a0c00cc80 con 0x7f5a1c0ffe00 2026-03-10T10:22:34.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.257+0000 7f5a23ca9700 1 -- 192.168.123.102:0/3515526111 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5a1c199d20 con 0x7f5a1c0ffe00 2026-03-10T10:22:34.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.257+0000 7f5a12ffd700 1 -- 192.168.123.102:0/3515526111 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5a0c018860 con 0x7f5a1c0ffe00 2026-03-10T10:22:34.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.258+0000 7f5a23ca9700 1 -- 192.168.123.102:0/3515526111 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5a1c19a210 con 0x7f5a1c0ffe00 2026-03-10T10:22:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.259+0000 7f5a12ffd700 1 -- 192.168.123.102:0/3515526111 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f5a0c0189c0 con 
0x7f5a1c0ffe00 2026-03-10T10:22:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.259+0000 7f5a23ca9700 1 -- 192.168.123.102:0/3515526111 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5a1c066e80 con 0x7f5a1c0ffe00 2026-03-10T10:22:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.260+0000 7f5a12ffd700 1 --2- 192.168.123.102:0/3515526111 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5a0807bcf0 0x7f5a0807e1b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.260+0000 7f5a12ffd700 1 -- 192.168.123.102:0/3515526111 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f5a0c014070 con 0x7f5a1c0ffe00 2026-03-10T10:22:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.260+0000 7f5a21244700 1 --2- 192.168.123.102:0/3515526111 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5a0807bcf0 0x7f5a0807e1b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.262+0000 7f5a21244700 1 --2- 192.168.123.102:0/3515526111 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5a0807bcf0 0x7f5a0807e1b0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f5a180053b0 tx=0x7f5a180058e0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:34.263 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.263+0000 7f5a12ffd700 1 -- 192.168.123.102:0/3515526111 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) 
v1 ==== 72+0+195034 (secure 0 0 0) 0x7f5a0c062a50 con 0x7f5a1c0ffe00 2026-03-10T10:22:34.386 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.386+0000 7f5a23ca9700 1 -- 192.168.123.102:0/3515526111 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f5a1c105940 con 0x7f5a0807bcf0 2026-03-10T10:22:34.391 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.391+0000 7f5a12ffd700 1 -- 192.168.123.102:0/3515526111 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f5a1c105940 con 0x7f5a0807bcf0 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (6m) 80s ago 7m 23.2M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (7m) 80s ago 7m 8820k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (7m) 93s ago 7m 11.1M - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (96s) 80s ago 7m 7838k - 19.2.3-678-ge911bdeb 654f31e6858e c494730ab019 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (94s) 93s ago 7m 7852k - 19.2.3-678-ge911bdeb 654f31e6858e 1dc17b49fee4 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (6m) 80s ago 7m 88.8M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (5m) 80s ago 5m 15.9M - 18.2.1 5be31c24972a e97c369450c8 2026-03-10T10:22:34.392 
INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (5m) 80s ago 5m 237M - 18.2.1 5be31c24972a 56b76ae59bcb 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (5m) 93s ago 5m 15.9M - 18.2.1 5be31c24972a 02b882918ab0 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (5m) 93s ago 5m 146M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (2m) 80s ago 8m 614M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (2m) 93s ago 7m 487M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (2m) 80s ago 8m 56.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a2a2cb182f4 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (110s) 93s ago 7m 49.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 3fb75dafefb6 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (7m) 80s ago 7m 16.3M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (7m) 93s ago 7m 15.4M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (83s) 80s ago 6m 30.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 319155aac718 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (6m) 80s ago 6m 363M 4096M 18.2.1 5be31c24972a 1b0a42d8ac01 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (6m) 80s ago 6m 305M 4096M 18.2.1 5be31c24972a 567f579c058e 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 
running (6m) 93s ago 6m 426M 4096M 18.2.1 5be31c24972a 80ac26035893 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (6m) 93s ago 6m 407M 4096M 18.2.1 5be31c24972a c8a0a41b6654 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (6m) 93s ago 6m 319M 4096M 18.2.1 5be31c24972a e9be055e12ba 2026-03-10T10:22:34.392 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (2m) 80s ago 7m 47.7M - 2.43.0 a07b618ecd1d 5ebb885bd417 2026-03-10T10:22:34.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.394+0000 7f5a23ca9700 1 -- 192.168.123.102:0/3515526111 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5a0807bcf0 msgr2=0x7f5a0807e1b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.394+0000 7f5a23ca9700 1 --2- 192.168.123.102:0/3515526111 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5a0807bcf0 0x7f5a0807e1b0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f5a180053b0 tx=0x7f5a180058e0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.394+0000 7f5a23ca9700 1 -- 192.168.123.102:0/3515526111 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a1c0ffe00 msgr2=0x7f5a1c1945d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.395 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.394+0000 7f5a23ca9700 1 --2- 192.168.123.102:0/3515526111 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a1c0ffe00 0x7f5a1c1945d0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f5a0c00eab0 tx=0x7f5a0c00edc0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.395 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.394+0000 7f5a23ca9700 1 -- 192.168.123.102:0/3515526111 shutdown_connections 
2026-03-10T10:22:34.395 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.394+0000 7f5a23ca9700 1 --2- 192.168.123.102:0/3515526111 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5a1c0ffe00 0x7f5a1c1945d0 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.395 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.394+0000 7f5a23ca9700 1 --2- 192.168.123.102:0/3515526111 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5a0807bcf0 0x7f5a0807e1b0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.395 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.394+0000 7f5a23ca9700 1 --2- 192.168.123.102:0/3515526111 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5a1c100f60 0x7f5a1c194b10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.395 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.394+0000 7f5a23ca9700 1 -- 192.168.123.102:0/3515526111 >> 192.168.123.102:0/3515526111 conn(0x7f5a1c0fb360 msgr2=0x7f5a1c104220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:34.395 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.394+0000 7f5a23ca9700 1 -- 192.168.123.102:0/3515526111 shutdown_connections 2026-03-10T10:22:34.395 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.394+0000 7f5a23ca9700 1 -- 192.168.123.102:0/3515526111 wait complete. 
2026-03-10T10:22:34.470 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.469+0000 7f80a2eee700 1 -- 192.168.123.102:0/3023931720 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f809c10a700 msgr2=0x7f809c10cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.470 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.469+0000 7f80a2eee700 1 --2- 192.168.123.102:0/3023931720 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f809c10a700 0x7f809c10cb90 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f808c009b00 tx=0x7f808c009e10 comp rx=0 tx=0).stop 2026-03-10T10:22:34.470 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.469+0000 7f80a2eee700 1 -- 192.168.123.102:0/3023931720 shutdown_connections 2026-03-10T10:22:34.470 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.469+0000 7f80a2eee700 1 --2- 192.168.123.102:0/3023931720 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f809c10a700 0x7f809c10cb90 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.470 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.469+0000 7f80a2eee700 1 --2- 192.168.123.102:0/3023931720 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f809c107d90 0x7f809c10a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.470 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.469+0000 7f80a2eee700 1 -- 192.168.123.102:0/3023931720 >> 192.168.123.102:0/3023931720 conn(0x7f809c06dda0 msgr2=0x7f809c070220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:34.470 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.469+0000 7f80a2eee700 1 -- 192.168.123.102:0/3023931720 shutdown_connections 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.469+0000 7f80a2eee700 1 -- 192.168.123.102:0/3023931720 
wait complete. 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.470+0000 7f80a2eee700 1 Processor -- start 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.470+0000 7f80a2eee700 1 -- start start 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.470+0000 7f80a2eee700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f809c107d90 0x7f809c116a70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.470+0000 7f80a2eee700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f809c10a700 0x7f809c116fb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.470+0000 7f80a2eee700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f809c1175d0 con 0x7f809c107d90 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.470+0000 7f80a2eee700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f809c117710 con 0x7f809c10a700 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.470+0000 7f80a0c8a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f809c107d90 0x7f809c116a70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.470+0000 7f80a0c8a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f809c107d90 0x7f809c116a70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 
says I am v2:192.168.123.102:36238/0 (socket says 192.168.123.102:36238) 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.470+0000 7f80a0c8a700 1 -- 192.168.123.102:0/3770476112 learned_addr learned my addr 192.168.123.102:0/3770476112 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.470+0000 7f809bfff700 1 --2- 192.168.123.102:0/3770476112 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f809c10a700 0x7f809c116fb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.471+0000 7f80a0c8a700 1 -- 192.168.123.102:0/3770476112 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f809c10a700 msgr2=0x7f809c116fb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.471+0000 7f80a0c8a700 1 --2- 192.168.123.102:0/3770476112 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f809c10a700 0x7f809c116fb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.471+0000 7f80a0c8a700 1 -- 192.168.123.102:0/3770476112 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8090009710 con 0x7f809c107d90 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.471+0000 7f80a0c8a700 1 --2- 192.168.123.102:0/3770476112 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f809c107d90 0x7f809c116a70 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f809000eee0 tx=0x7f809000c5b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.471+0000 7f809bfff700 1 --2- 192.168.123.102:0/3770476112 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f809c10a700 0x7f809c116fb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.471+0000 7f8099ffb700 1 -- 192.168.123.102:0/3770476112 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f809000ce10 con 0x7f809c107d90 2026-03-10T10:22:34.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.471+0000 7f8099ffb700 1 -- 192.168.123.102:0/3770476112 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8090004500 con 0x7f809c107d90 2026-03-10T10:22:34.472 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.471+0000 7f8099ffb700 1 -- 192.168.123.102:0/3770476112 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8090005490 con 0x7f809c107d90 2026-03-10T10:22:34.472 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.471+0000 7f80a2eee700 1 -- 192.168.123.102:0/3770476112 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f808c0097e0 con 0x7f809c107d90 2026-03-10T10:22:34.472 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.472+0000 7f80a2eee700 1 -- 192.168.123.102:0/3770476112 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f809c1b38e0 con 0x7f809c107d90 2026-03-10T10:22:34.474 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.474+0000 7f80a2eee700 1 -- 192.168.123.102:0/3770476112 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f809c110c60 con 0x7f809c107d90 
2026-03-10T10:22:34.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.474+0000 7f8099ffb700 1 -- 192.168.123.102:0/3770476112 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f809001e030 con 0x7f809c107d90 2026-03-10T10:22:34.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.474+0000 7f8099ffb700 1 --2- 192.168.123.102:0/3770476112 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8084077990 0x7f8084079e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.474+0000 7f8099ffb700 1 -- 192.168.123.102:0/3770476112 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f8090014070 con 0x7f809c107d90 2026-03-10T10:22:34.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.477+0000 7f809bfff700 1 --2- 192.168.123.102:0/3770476112 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8084077990 0x7f8084079e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.477+0000 7f8099ffb700 1 -- 192.168.123.102:0/3770476112 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f80900627c0 con 0x7f809c107d90 2026-03-10T10:22:34.478 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.477+0000 7f809bfff700 1 --2- 192.168.123.102:0/3770476112 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8084077990 0x7f8084079e50 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f808c005190 tx=0x7f808c005fd0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 
server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:34.642 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.642+0000 7f80a2eee700 1 -- 192.168.123.102:0/3770476112 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f809c02cc70 con 0x7f809c107d90 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.642+0000 7f8099ffb700 1 -- 192.168.123.102:0/3770476112 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+799 (secure 0 0 0) 0x7f8090061f10 con 0x7f809c107d90 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 5, 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4 
2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 9, 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:22:34.643 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:22:34.645 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.645+0000 7f80a2eee700 1 -- 192.168.123.102:0/3770476112 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8084077990 msgr2=0x7f8084079e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.645 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.645+0000 7f80a2eee700 1 --2- 192.168.123.102:0/3770476112 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8084077990 0x7f8084079e50 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f808c005190 tx=0x7f808c005fd0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.645 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.645+0000 7f80a2eee700 1 -- 192.168.123.102:0/3770476112 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f809c107d90 msgr2=0x7f809c116a70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.645 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.645+0000 7f80a2eee700 1 --2- 192.168.123.102:0/3770476112 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f809c107d90 0x7f809c116a70 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f809000eee0 tx=0x7f809000c5b0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.646 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.645+0000 7f80a2eee700 1 -- 192.168.123.102:0/3770476112 shutdown_connections 2026-03-10T10:22:34.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.645+0000 7f80a2eee700 1 --2- 192.168.123.102:0/3770476112 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f809c107d90 0x7f809c116a70 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.645+0000 7f80a2eee700 1 --2- 192.168.123.102:0/3770476112 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8084077990 0x7f8084079e50 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.645+0000 7f80a2eee700 1 --2- 192.168.123.102:0/3770476112 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f809c10a700 0x7f809c116fb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.645+0000 7f80a2eee700 1 -- 192.168.123.102:0/3770476112 >> 192.168.123.102:0/3770476112 conn(0x7f809c06dda0 msgr2=0x7f809c10c170 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:34.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.646+0000 7f80a2eee700 1 -- 192.168.123.102:0/3770476112 shutdown_connections 2026-03-10T10:22:34.646 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.646+0000 7f80a2eee700 1 -- 192.168.123.102:0/3770476112 wait complete. 
2026-03-10T10:22:34.717 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:34 vm02.local ceph-mon[110129]: from='client.34206 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:34.717 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:34 vm02.local ceph-mon[110129]: from='client.34210 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:34.717 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.715+0000 7f0886544700 1 -- 192.168.123.102:0/3064943335 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0880074dc0 msgr2=0x7f0880073220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.717 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.715+0000 7f0886544700 1 --2- 192.168.123.102:0/3064943335 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0880074dc0 0x7f0880073220 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f0870009b00 tx=0x7f0870009e10 comp rx=0 tx=0).stop 2026-03-10T10:22:34.718 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.718+0000 7f0886544700 1 -- 192.168.123.102:0/3064943335 shutdown_connections 2026-03-10T10:22:34.718 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.718+0000 7f0886544700 1 --2- 192.168.123.102:0/3064943335 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f08800737f0 0x7f0880073c70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.718 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.718+0000 7f0886544700 1 --2- 192.168.123.102:0/3064943335 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0880074dc0 0x7f0880073220 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.718 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.718+0000 
7f0886544700 1 -- 192.168.123.102:0/3064943335 >> 192.168.123.102:0/3064943335 conn(0x7f08800fc210 msgr2=0x7f08800fe670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:34.718 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.718+0000 7f0886544700 1 -- 192.168.123.102:0/3064943335 shutdown_connections 2026-03-10T10:22:34.718 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.718+0000 7f0886544700 1 -- 192.168.123.102:0/3064943335 wait complete. 2026-03-10T10:22:34.719 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.719+0000 7f0886544700 1 Processor -- start 2026-03-10T10:22:34.719 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.719+0000 7f0886544700 1 -- start start 2026-03-10T10:22:34.719 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.719+0000 7f0886544700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f08800737f0 0x7f0880198770 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.719 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.719+0000 7f0886544700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0880074dc0 0x7f0880198cb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.719 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.719+0000 7f0886544700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f08801992d0 con 0x7f08800737f0 2026-03-10T10:22:34.720 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.719+0000 7f0886544700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0880199410 con 0x7f0880074dc0 2026-03-10T10:22:34.720 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.719+0000 7f0884d41700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0880074dc0 0x7f0880198cb0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.720 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.719+0000 7f0884d41700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0880074dc0 0x7f0880198cb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:51570/0 (socket says 192.168.123.102:51570) 2026-03-10T10:22:34.720 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.719+0000 7f0884d41700 1 -- 192.168.123.102:0/1749101793 learned_addr learned my addr 192.168.123.102:0/1749101793 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:22:34.720 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.720+0000 7f0884d41700 1 -- 192.168.123.102:0/1749101793 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f08800737f0 msgr2=0x7f0880198770 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:22:34.720 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.720+0000 7f0885542700 1 --2- 192.168.123.102:0/1749101793 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f08800737f0 0x7f0880198770 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.720 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.720+0000 7f0884d41700 1 --2- 192.168.123.102:0/1749101793 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f08800737f0 0x7f0880198770 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.720 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.720+0000 7f0884d41700 1 -- 192.168.123.102:0/1749101793 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f08700097e0 con 0x7f0880074dc0 2026-03-10T10:22:34.720 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.720+0000 7f0885542700 1 --2- 192.168.123.102:0/1749101793 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f08800737f0 0x7f0880198770 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:22:34.720 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.720+0000 7f0884d41700 1 --2- 192.168.123.102:0/1749101793 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0880074dc0 0x7f0880198cb0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f087c009fd0 tx=0x7f087c00eea0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:34.721 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.720+0000 7f08767fc700 1 -- 192.168.123.102:0/1749101793 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f087c009980 con 0x7f0880074dc0 2026-03-10T10:22:34.721 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.720+0000 7f0886544700 1 -- 192.168.123.102:0/1749101793 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f088019dec0 con 0x7f0880074dc0 2026-03-10T10:22:34.721 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.721+0000 7f08767fc700 1 -- 192.168.123.102:0/1749101793 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f087c004500 con 0x7f0880074dc0 2026-03-10T10:22:34.721 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.721+0000 7f08767fc700 1 -- 192.168.123.102:0/1749101793 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f087c010430 con 0x7f0880074dc0 2026-03-10T10:22:34.721 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.721+0000 
7f0886544700 1 -- 192.168.123.102:0/1749101793 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f088019e3e0 con 0x7f0880074dc0 2026-03-10T10:22:34.722 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.722+0000 7f0886544700 1 -- 192.168.123.102:0/1749101793 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f088004ea90 con 0x7f0880074dc0 2026-03-10T10:22:34.723 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.722+0000 7f08767fc700 1 -- 192.168.123.102:0/1749101793 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f087c003680 con 0x7f0880074dc0 2026-03-10T10:22:34.723 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.723+0000 7f08767fc700 1 --2- 192.168.123.102:0/1749101793 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f086c0776c0 0x7f086c079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.723 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.723+0000 7f08767fc700 1 -- 192.168.123.102:0/1749101793 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f087c014070 con 0x7f0880074dc0 2026-03-10T10:22:34.723 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.723+0000 7f0885542700 1 --2- 192.168.123.102:0/1749101793 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f086c0776c0 0x7f086c079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.724 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.724+0000 7f0885542700 1 --2- 192.168.123.102:0/1749101793 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f086c0776c0 
0x7f086c079b80 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f0870006010 tx=0x7f0870005050 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:34.726 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.726+0000 7f08767fc700 1 -- 192.168.123.102:0/1749101793 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f087c0627c0 con 0x7f0880074dc0 2026-03-10T10:22:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:34 vm05.local ceph-mon[103593]: from='client.34206 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:34 vm05.local ceph-mon[103593]: from='client.34210 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:34.865 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.865+0000 7f0886544700 1 -- 192.168.123.102:0/1749101793 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f088019e690 con 0x7f0880074dc0 2026-03-10T10:22:34.866 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.866+0000 7f08767fc700 1 -- 192.168.123.102:0/1749101793 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1945 (secure 0 0 0) 0x7f087c061f10 con 0x7f0880074dc0 2026-03-10T10:22:34.866 INFO:teuthology.orchestra.run.vm02.stdout:e15 2026-03-10T10:22:34.866 INFO:teuthology.orchestra.run.vm02.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-10T10:22:34.866 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:22:34.866 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client 
writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:22:34.866 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:max_xattr_size 65536 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline 
data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464} 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:qdb_cluster leader: 0 members: 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 
join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:22:34.867 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:22:34.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.869+0000 7f0886544700 1 -- 192.168.123.102:0/1749101793 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f086c0776c0 msgr2=0x7f086c079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.869+0000 7f0886544700 1 --2- 192.168.123.102:0/1749101793 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f086c0776c0 0x7f086c079b80 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f0870006010 tx=0x7f0870005050 comp rx=0 tx=0).stop 2026-03-10T10:22:34.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.869+0000 7f0886544700 1 -- 192.168.123.102:0/1749101793 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0880074dc0 msgr2=0x7f0880198cb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.869+0000 7f0886544700 1 --2- 192.168.123.102:0/1749101793 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0880074dc0 0x7f0880198cb0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f087c009fd0 tx=0x7f087c00eea0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.869+0000 7f0886544700 1 -- 192.168.123.102:0/1749101793 shutdown_connections 2026-03-10T10:22:34.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.869+0000 7f0886544700 1 --2- 192.168.123.102:0/1749101793 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f08800737f0 0x7f0880198770 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.869+0000 7f0886544700 1 --2- 192.168.123.102:0/1749101793 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f086c0776c0 0x7f086c079b80 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.870 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.869+0000 7f0886544700 1 --2- 192.168.123.102:0/1749101793 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0880074dc0 0x7f0880198cb0 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.870 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.869+0000 7f0886544700 1 -- 192.168.123.102:0/1749101793 >> 192.168.123.102:0/1749101793 conn(0x7f08800fc210 msgr2=0x7f0880106a70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:34.870 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.869+0000 7f0886544700 1 -- 192.168.123.102:0/1749101793 shutdown_connections 2026-03-10T10:22:34.870 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.869+0000 7f0886544700 1 -- 192.168.123.102:0/1749101793 wait complete. 
2026-03-10T10:22:34.870 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15 2026-03-10T10:22:34.937 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.936+0000 7fc702d83700 1 -- 192.168.123.102:0/2574739817 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6fc074dc0 msgr2=0x7fc6fc073220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.937 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.936+0000 7fc702d83700 1 --2- 192.168.123.102:0/2574739817 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6fc074dc0 0x7fc6fc073220 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7fc6e8009a60 tx=0x7fc6e8009d70 comp rx=0 tx=0).stop 2026-03-10T10:22:34.937 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.937+0000 7fc702d83700 1 -- 192.168.123.102:0/2574739817 shutdown_connections 2026-03-10T10:22:34.937 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.937+0000 7fc702d83700 1 --2- 192.168.123.102:0/2574739817 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc6fc0737f0 0x7fc6fc073c70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.937 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.937+0000 7fc702d83700 1 --2- 192.168.123.102:0/2574739817 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6fc074dc0 0x7fc6fc073220 unknown :-1 s=CLOSED pgs=31 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.937 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.937+0000 7fc702d83700 1 -- 192.168.123.102:0/2574739817 >> 192.168.123.102:0/2574739817 conn(0x7fc6fc0fc460 msgr2=0x7fc6fc0fe8e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:34.937 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.937+0000 7fc702d83700 1 -- 192.168.123.102:0/2574739817 shutdown_connections 2026-03-10T10:22:34.937 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.937+0000 7fc702d83700 1 -- 192.168.123.102:0/2574739817 wait complete. 2026-03-10T10:22:34.938 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.937+0000 7fc702d83700 1 Processor -- start 2026-03-10T10:22:34.938 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.938+0000 7fc702d83700 1 -- start start 2026-03-10T10:22:34.938 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.938+0000 7fc702d83700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc6fc0737f0 0x7fc6fc19cf90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.938 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.938+0000 7fc702d83700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6fc19d4d0 0x7fc6fc1a2540 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.938 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.938+0000 7fc702d83700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc6fc19d9e0 con 0x7fc6fc0737f0 2026-03-10T10:22:34.938 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.938+0000 7fc702d83700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc6fc19db50 con 0x7fc6fc19d4d0 2026-03-10T10:22:34.938 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.938+0000 7fc6fbfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6fc19d4d0 0x7fc6fc1a2540 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.938 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.938+0000 7fc700b1f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc6fc0737f0 0x7fc6fc19cf90 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.938 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.938+0000 7fc700b1f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc6fc0737f0 0x7fc6fc19cf90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:36268/0 (socket says 192.168.123.102:36268) 2026-03-10T10:22:34.938 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.938+0000 7fc700b1f700 1 -- 192.168.123.102:0/2072315896 learned_addr learned my addr 192.168.123.102:0/2072315896 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:22:34.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.939+0000 7fc700b1f700 1 -- 192.168.123.102:0/2072315896 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6fc19d4d0 msgr2=0x7fc6fc1a2540 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:34.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.939+0000 7fc700b1f700 1 --2- 192.168.123.102:0/2072315896 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6fc19d4d0 0x7fc6fc1a2540 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:34.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.939+0000 7fc700b1f700 1 -- 192.168.123.102:0/2072315896 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc6f00097e0 con 0x7fc6fc0737f0 2026-03-10T10:22:34.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.939+0000 7fc700b1f700 1 --2- 192.168.123.102:0/2072315896 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc6fc0737f0 0x7fc6fc19cf90 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fc6e8000c00 
tx=0x7fc6e80041d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:34.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.939+0000 7fc6f9ffb700 1 -- 192.168.123.102:0/2072315896 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc6e801d070 con 0x7fc6fc0737f0 2026-03-10T10:22:34.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.939+0000 7fc6f9ffb700 1 -- 192.168.123.102:0/2072315896 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc6e8022950 con 0x7fc6fc0737f0 2026-03-10T10:22:34.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.939+0000 7fc6f9ffb700 1 -- 192.168.123.102:0/2072315896 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc6e800f990 con 0x7fc6fc0737f0 2026-03-10T10:22:34.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.939+0000 7fc702d83700 1 -- 192.168.123.102:0/2072315896 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc6e8009710 con 0x7fc6fc0737f0 2026-03-10T10:22:34.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.939+0000 7fc702d83700 1 -- 192.168.123.102:0/2072315896 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc6fc1a2de0 con 0x7fc6fc0737f0 2026-03-10T10:22:34.940 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.940+0000 7fc702d83700 1 -- 192.168.123.102:0/2072315896 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc6fc066e80 con 0x7fc6fc0737f0 2026-03-10T10:22:34.941 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.940+0000 7fc6f9ffb700 1 -- 192.168.123.102:0/2072315896 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc6e800faf0 con 
0x7fc6fc0737f0 2026-03-10T10:22:34.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.941+0000 7fc6f9ffb700 1 --2- 192.168.123.102:0/2072315896 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc6ec0778c0 0x7fc6ec079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:34.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.941+0000 7fc6f9ffb700 1 -- 192.168.123.102:0/2072315896 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fc6e809b040 con 0x7fc6fc0737f0 2026-03-10T10:22:34.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.942+0000 7fc6fbfff700 1 --2- 192.168.123.102:0/2072315896 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc6ec0778c0 0x7fc6ec079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:34.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.944+0000 7fc6fbfff700 1 --2- 192.168.123.102:0/2072315896 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc6ec0778c0 0x7fc6ec079d80 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fc6fc074af0 tx=0x7fc6f0009500 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:34.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:34.944+0000 7fc6f9ffb700 1 -- 192.168.123.102:0/2072315896 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc6e8063970 con 0x7fc6fc0737f0 2026-03-10T10:22:35.075 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.075+0000 7fc702d83700 1 -- 192.168.123.102:0/2072315896 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- 
mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fc6fc1a31d0 con 0x7fc6ec0778c0 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.076+0000 7fc6f9ffb700 1 -- 192.168.123.102:0/2072315896 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7fc6fc1a31d0 con 0x7fc6ec0778c0 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [ 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout: "mgr", 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout: "mon", 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout: "crash" 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "7/23 daemons upgraded", 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading osd daemons", 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:22:35.076 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:22:35.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.078+0000 7fc702d83700 1 -- 192.168.123.102:0/2072315896 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc6ec0778c0 msgr2=0x7fc6ec079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:35.079 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.078+0000 7fc702d83700 1 --2- 192.168.123.102:0/2072315896 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc6ec0778c0 0x7fc6ec079d80 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fc6fc074af0 tx=0x7fc6f0009500 comp rx=0 tx=0).stop 2026-03-10T10:22:35.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.078+0000 7fc702d83700 1 -- 192.168.123.102:0/2072315896 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc6fc0737f0 msgr2=0x7fc6fc19cf90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:35.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.078+0000 7fc702d83700 1 --2- 192.168.123.102:0/2072315896 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc6fc0737f0 0x7fc6fc19cf90 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fc6e8000c00 tx=0x7fc6e80041d0 comp rx=0 tx=0).stop 2026-03-10T10:22:35.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.079+0000 7fc702d83700 1 -- 192.168.123.102:0/2072315896 shutdown_connections 2026-03-10T10:22:35.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.079+0000 7fc702d83700 1 --2- 192.168.123.102:0/2072315896 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc6fc0737f0 0x7fc6fc19cf90 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:35.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.079+0000 7fc702d83700 1 --2- 192.168.123.102:0/2072315896 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc6ec0778c0 0x7fc6ec079d80 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:35.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.079+0000 7fc702d83700 1 --2- 192.168.123.102:0/2072315896 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7fc6fc19d4d0 0x7fc6fc1a2540 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:35.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.079+0000 7fc702d83700 1 -- 192.168.123.102:0/2072315896 >> 192.168.123.102:0/2072315896 conn(0x7fc6fc0fc460 msgr2=0x7fc6fc102890 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:35.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.079+0000 7fc702d83700 1 -- 192.168.123.102:0/2072315896 shutdown_connections 2026-03-10T10:22:35.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.079+0000 7fc702d83700 1 -- 192.168.123.102:0/2072315896 wait complete. 2026-03-10T10:22:35.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.146+0000 7f128d5f3700 1 -- 192.168.123.102:0/2677208481 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1288074dc0 msgr2=0x7f1288073220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:35.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.146+0000 7f128d5f3700 1 --2- 192.168.123.102:0/2677208481 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1288074dc0 0x7f1288073220 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f1270009b00 tx=0x7f1270009e10 comp rx=0 tx=0).stop 2026-03-10T10:22:35.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.146+0000 7f128d5f3700 1 -- 192.168.123.102:0/2677208481 shutdown_connections 2026-03-10T10:22:35.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.146+0000 7f128d5f3700 1 --2- 192.168.123.102:0/2677208481 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12880737f0 0x7f1288073c70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:35.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.146+0000 7f128d5f3700 1 --2- 192.168.123.102:0/2677208481 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1288074dc0 0x7f1288073220 unknown :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:35.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.146+0000 7f128d5f3700 1 -- 192.168.123.102:0/2677208481 >> 192.168.123.102:0/2677208481 conn(0x7f12880fc4a0 msgr2=0x7f12880fe900 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:35.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.146+0000 7f128d5f3700 1 -- 192.168.123.102:0/2677208481 shutdown_connections 2026-03-10T10:22:35.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.146+0000 7f128d5f3700 1 -- 192.168.123.102:0/2677208481 wait complete. 2026-03-10T10:22:35.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.147+0000 7f128d5f3700 1 Processor -- start 2026-03-10T10:22:35.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.147+0000 7f128d5f3700 1 -- start start 2026-03-10T10:22:35.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.147+0000 7f128d5f3700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12880737f0 0x7f128819d0e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:35.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.147+0000 7f128d5f3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f128819d620 0x7f12881a2690 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:35.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.147+0000 7f128d5f3700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f128819db30 con 0x7f12880737f0 2026-03-10T10:22:35.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.147+0000 7f128d5f3700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f128819dca0 con 0x7f128819d620 2026-03-10T10:22:35.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.147+0000 7f12867fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f128819d620 0x7f12881a2690 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:35.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.148+0000 7f1286ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12880737f0 0x7f128819d0e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:35.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.148+0000 7f1286ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12880737f0 0x7f128819d0e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:36282/0 (socket says 192.168.123.102:36282) 2026-03-10T10:22:35.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.148+0000 7f1286ffd700 1 -- 192.168.123.102:0/2722431559 learned_addr learned my addr 192.168.123.102:0/2722431559 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:22:35.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.148+0000 7f1286ffd700 1 -- 192.168.123.102:0/2722431559 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f128819d620 msgr2=0x7f12881a2690 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:35.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.148+0000 7f1286ffd700 1 --2- 192.168.123.102:0/2722431559 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f128819d620 0x7f12881a2690 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-10T10:22:35.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.148+0000 7f1286ffd700 1 -- 192.168.123.102:0/2722431559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1278009710 con 0x7f12880737f0 2026-03-10T10:22:35.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.148+0000 7f1286ffd700 1 --2- 192.168.123.102:0/2722431559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12880737f0 0x7f128819d0e0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f1270000c00 tx=0x7f127000bbf0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:35.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.148+0000 7f127ffff700 1 -- 192.168.123.102:0/2722431559 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f127001d070 con 0x7f12880737f0 2026-03-10T10:22:35.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.148+0000 7f127ffff700 1 -- 192.168.123.102:0/2722431559 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1270022470 con 0x7f12880737f0 2026-03-10T10:22:35.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.148+0000 7f127ffff700 1 -- 192.168.123.102:0/2722431559 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f127000f670 con 0x7f12880737f0 2026-03-10T10:22:35.150 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.148+0000 7f128d5f3700 1 -- 192.168.123.102:0/2722431559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f12700097e0 con 0x7f12880737f0 2026-03-10T10:22:35.150 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.149+0000 7f128d5f3700 1 -- 192.168.123.102:0/2722431559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({osdmap=0}) v3 -- 0x7f12881a2f30 con 0x7f12880737f0 2026-03-10T10:22:35.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.150+0000 7f127ffff700 1 -- 192.168.123.102:0/2722431559 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f127000f7d0 con 0x7f12880737f0 2026-03-10T10:22:35.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.150+0000 7f128d5f3700 1 -- 192.168.123.102:0/2722431559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1288066e80 con 0x7f12880737f0 2026-03-10T10:22:35.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.150+0000 7f127ffff700 1 --2- 192.168.123.102:0/2722431559 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f12740778c0 0x7f1274079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:22:35.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.150+0000 7f127ffff700 1 -- 192.168.123.102:0/2722431559 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(52..52 src has 1..52) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f127009adc0 con 0x7f12880737f0 2026-03-10T10:22:35.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.151+0000 7f12867fc700 1 --2- 192.168.123.102:0/2722431559 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f12740778c0 0x7f1274079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:22:35.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.151+0000 7f12867fc700 1 --2- 192.168.123.102:0/2722431559 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f12740778c0 0x7f1274079d80 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f1288074af0 tx=0x7f1278009450 comp rx=0 
tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:22:35.153 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.153+0000 7f127ffff700 1 -- 192.168.123.102:0/2722431559 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f127009e030 con 0x7f12880737f0 2026-03-10T10:22:35.322 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.321+0000 7f128d5f3700 1 -- 192.168.123.102:0/2722431559 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f12881a3300 con 0x7f12880737f0 2026-03-10T10:22:35.322 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.322+0000 7f127ffff700 1 -- 192.168.123.102:0/2722431559 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+429 (secure 0 0 0) 0x7f12700225e0 con 0x7f12880737f0 2026-03-10T10:22:35.322 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_WARN Degraded data redundancy: 299/228 objects degraded (131.140%), 4 pgs degraded 2026-03-10T10:22:35.322 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 299/228 objects degraded (131.140%), 4 pgs degraded 2026-03-10T10:22:35.322 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.6 is active+recovery_wait+degraded, acting [0,1,4] 2026-03-10T10:22:35.322 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.c is active+recovery_wait+degraded, acting [5,0,3] 2026-03-10T10:22:35.322 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.10 is active+recovery_wait+degraded, acting [5,0,1] 2026-03-10T10:22:35.322 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.1f is active+recovery_wait+degraded, acting [0,3,2] 2026-03-10T10:22:35.325 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.325+0000 7f128d5f3700 1 -- 192.168.123.102:0/2722431559 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f12740778c0 msgr2=0x7f1274079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:35.325 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.325+0000 7f128d5f3700 1 --2- 192.168.123.102:0/2722431559 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f12740778c0 0x7f1274079d80 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f1288074af0 tx=0x7f1278009450 comp rx=0 tx=0).stop 2026-03-10T10:22:35.325 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.325+0000 7f128d5f3700 1 -- 192.168.123.102:0/2722431559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12880737f0 msgr2=0x7f128819d0e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:22:35.325 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.325+0000 7f128d5f3700 1 --2- 192.168.123.102:0/2722431559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12880737f0 0x7f128819d0e0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f1270000c00 tx=0x7f127000bbf0 comp rx=0 tx=0).stop 2026-03-10T10:22:35.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.326+0000 7f128d5f3700 1 -- 192.168.123.102:0/2722431559 shutdown_connections 2026-03-10T10:22:35.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.326+0000 7f128d5f3700 1 --2- 192.168.123.102:0/2722431559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f12880737f0 0x7f128819d0e0 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:35.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.326+0000 7f128d5f3700 1 --2- 192.168.123.102:0/2722431559 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f12740778c0 0x7f1274079d80 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:22:35.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.326+0000 7f128d5f3700 1 --2- 192.168.123.102:0/2722431559 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f128819d620 0x7f12881a2690 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:22:35.326 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.326+0000 7f128d5f3700 1 -- 192.168.123.102:0/2722431559 >> 192.168.123.102:0/2722431559 conn(0x7f12880fc4a0 msgr2=0x7f12881028b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:22:35.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.327+0000 7f128d5f3700 1 -- 192.168.123.102:0/2722431559 shutdown_connections 2026-03-10T10:22:35.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:22:35.327+0000 7f128d5f3700 1 -- 192.168.123.102:0/2722431559 wait complete. 2026-03-10T10:22:35.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:35 vm02.local ceph-mon[110129]: from='client.34214 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:35.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:35 vm02.local ceph-mon[110129]: pgmap v70: 65 pgs: 4 active+recovery_wait+degraded, 1 active+recovering, 60 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 216 KiB/s rd, 185 KiB/s wr, 71 op/s; 299/228 objects degraded (131.140%); 0 B/s, 10 objects/s recovering 2026-03-10T10:22:35.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:35 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/3770476112' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:22:35.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:35 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/1749101793' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:22:35.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:35 vm02.local ceph-mon[110129]: from='client.34224 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:35.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:35 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/2722431559' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:22:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:35 vm05.local ceph-mon[103593]: from='client.34214 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:35 vm05.local ceph-mon[103593]: pgmap v70: 65 pgs: 4 active+recovery_wait+degraded, 1 active+recovering, 60 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 216 KiB/s rd, 185 KiB/s wr, 71 op/s; 299/228 objects degraded (131.140%); 0 B/s, 10 objects/s recovering 2026-03-10T10:22:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:35 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/3770476112' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:22:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:35 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1749101793' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:22:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:35 vm05.local ceph-mon[103593]: from='client.34224 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:22:35.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:35 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/2722431559' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:22:37.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:37.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:38.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:37 vm02.local ceph-mon[110129]: pgmap v71: 65 pgs: 4 active+recovery_wait+degraded, 1 active+recovering, 60 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 17 KiB/s rd, 17 KiB/s wr, 3 op/s; 299/228 objects degraded (131.140%); 0 B/s, 6 objects/s recovering 2026-03-10T10:22:38.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:37 vm05.local ceph-mon[103593]: pgmap v71: 65 pgs: 4 active+recovery_wait+degraded, 1 active+recovering, 60 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 17 KiB/s rd, 17 KiB/s wr, 3 op/s; 299/228 objects degraded (131.140%); 0 B/s, 6 objects/s recovering 2026-03-10T10:22:40.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:39 vm02.local ceph-mon[110129]: pgmap v72: 65 pgs: 4 active+recovery_wait+degraded, 61 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 18 KiB/s rd, 17 KiB/s wr, 3 op/s; 299/228 objects degraded (131.140%); 0 B/s, 8 objects/s recovering 2026-03-10T10:22:40.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:39 vm05.local ceph-mon[103593]: pgmap v72: 65 pgs: 4 active+recovery_wait+degraded, 61 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 18 KiB/s rd, 17 KiB/s wr, 3 op/s; 299/228 objects degraded (131.140%); 0 B/s, 8 objects/s recovering 
2026-03-10T10:22:41.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:40 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 228/228 objects degraded (100.000%), 3 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:41.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:40 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 228/228 objects degraded (100.000%), 3 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:42.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:41 vm02.local ceph-mon[110129]: pgmap v73: 65 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 61 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 938 B/s rd, 170 B/s wr, 2 op/s; 228/228 objects degraded (100.000%); 0 B/s, 10 objects/s recovering 2026-03-10T10:22:42.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:41 vm05.local ceph-mon[103593]: pgmap v73: 65 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 61 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 938 B/s rd, 170 B/s wr, 2 op/s; 228/228 objects degraded (100.000%); 0 B/s, 10 objects/s recovering 2026-03-10T10:22:44.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:43 vm02.local ceph-mon[110129]: pgmap v74: 65 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 61 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 938 B/s rd, 170 B/s wr, 2 op/s; 228/228 objects degraded (100.000%); 0 B/s, 8 objects/s recovering 2026-03-10T10:22:44.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:43 vm05.local ceph-mon[103593]: pgmap v74: 65 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 61 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 938 B/s rd, 170 B/s wr, 2 op/s; 228/228 objects degraded (100.000%); 0 B/s, 8 objects/s recovering 2026-03-10T10:22:46.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:45 vm05.local ceph-mon[103593]: pgmap v75: 65 pgs: 2 
active+recovery_wait+degraded, 1 active+recovering, 62 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 2 op/s; 164/228 objects degraded (71.930%); 0 B/s, 12 objects/s recovering 2026-03-10T10:22:46.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:45 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 164/228 objects degraded (71.930%), 2 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:46.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:45 vm02.local ceph-mon[110129]: pgmap v75: 65 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 62 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 2 op/s; 164/228 objects degraded (71.930%); 0 B/s, 12 objects/s recovering 2026-03-10T10:22:46.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:45 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 164/228 objects degraded (71.930%), 2 pgs degraded (PG_DEGRADED) 2026-03-10T10:22:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:47 vm05.local ceph-mon[103593]: pgmap v76: 65 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 62 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 164/228 objects degraded (71.930%); 0 B/s, 8 objects/s recovering 2026-03-10T10:22:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:47 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:48.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:47 vm02.local ceph-mon[110129]: pgmap v76: 65 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 62 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 164/228 objects degraded (71.930%); 0 B/s, 8 objects/s recovering 2026-03-10T10:22:48.279 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:47 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:49.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:48 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:49.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:48 vm02.local ceph-mon[110129]: Upgrade: unsafe to stop osd(s) at this time (1 PGs are or would become offline) 2026-03-10T10:22:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:48 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:22:49.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:48 vm05.local ceph-mon[103593]: Upgrade: unsafe to stop osd(s) at this time (1 PGs are or would become offline) 2026-03-10T10:22:50.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:49 vm02.local ceph-mon[110129]: pgmap v77: 65 pgs: 1 active+recovery_wait+degraded, 2 active+recovering, 62 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s; 84/228 objects degraded (36.842%); 0 B/s, 11 objects/s recovering 2026-03-10T10:22:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:49 vm05.local ceph-mon[103593]: pgmap v77: 65 pgs: 1 active+recovery_wait+degraded, 2 active+recovering, 62 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s; 84/228 objects degraded (36.842%); 0 B/s, 11 objects/s recovering 2026-03-10T10:22:51.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:50 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 84/228 objects degraded (36.842%), 1 pg degraded (PG_DEGRADED) 2026-03-10T10:22:51.287 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:50 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 84/228 objects degraded (36.842%), 1 pg degraded (PG_DEGRADED) 2026-03-10T10:22:52.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:51 vm02.local ceph-mon[110129]: pgmap v78: 65 pgs: 1 active+recovery_wait+degraded, 2 active+recovering, 62 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 84/228 objects degraded (36.842%); 0 B/s, 13 objects/s recovering 2026-03-10T10:22:52.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:51 vm05.local ceph-mon[103593]: pgmap v78: 65 pgs: 1 active+recovery_wait+degraded, 2 active+recovering, 62 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 84/228 objects degraded (36.842%); 0 B/s, 13 objects/s recovering 2026-03-10T10:22:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:53.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:52 vm02.local ceph-mon[110129]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 84/228 objects degraded (36.842%), 1 pg degraded) 2026-03-10T10:22:53.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:52 vm02.local ceph-mon[110129]: Cluster is now healthy 2026-03-10T10:22:53.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:52 vm05.local ceph-mon[103593]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 84/228 objects degraded (36.842%), 1 pg degraded) 2026-03-10T10:22:53.287 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:52 vm05.local ceph-mon[103593]: Cluster is now healthy 2026-03-10T10:22:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:53 vm02.local ceph-mon[110129]: pgmap v79: 65 pgs: 2 active+recovering, 63 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 2 op/s; 0 B/s, 15 objects/s recovering 2026-03-10T10:22:54.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:53 vm05.local ceph-mon[103593]: pgmap v79: 65 pgs: 2 active+recovering, 63 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 2 op/s; 0 B/s, 15 objects/s recovering 2026-03-10T10:22:56.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:55 vm02.local ceph-mon[110129]: pgmap v80: 65 pgs: 1 active+recovering, 64 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s; 0 B/s, 16 objects/s recovering 2026-03-10T10:22:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:55 vm05.local ceph-mon[103593]: pgmap v80: 65 pgs: 1 active+recovering, 64 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s; 0 B/s, 16 objects/s recovering 2026-03-10T10:22:58.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:57 vm02.local ceph-mon[110129]: pgmap v81: 65 pgs: 1 active+recovering, 64 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 0 B/s, 12 objects/s recovering 2026-03-10T10:22:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:57 vm05.local ceph-mon[103593]: pgmap v81: 65 pgs: 1 active+recovering, 64 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 0 B/s, 12 objects/s recovering 2026-03-10T10:23:00.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:22:59 vm02.local ceph-mon[110129]: pgmap v82: 65 pgs: 1 active+recovering, 64 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s; 0 B/s, 16 
objects/s recovering 2026-03-10T10:23:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:22:59 vm05.local ceph-mon[103593]: pgmap v82: 65 pgs: 1 active+recovering, 64 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s; 0 B/s, 16 objects/s recovering 2026-03-10T10:23:02.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:02 vm02.local ceph-mon[110129]: pgmap v83: 65 pgs: 1 active+recovering, 64 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 0 B/s, 13 objects/s recovering 2026-03-10T10:23:02.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:02 vm05.local ceph-mon[103593]: pgmap v83: 65 pgs: 1 active+recovering, 64 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 0 B/s, 13 objects/s recovering 2026-03-10T10:23:03.202 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:03 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:23:03.202 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:03 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:03.202 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:03 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T10:23:03.202 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:03 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:03.381 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:03 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: 
dispatch 2026-03-10T10:23:03.382 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:03 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:03.382 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:03 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T10:23:03.382 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:03 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:03.780 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:03 vm02.local systemd[1]: Stopping Ceph osd.1 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 2026-03-10T10:23:03.780 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:03 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[74265]: 2026-03-10T10:23:03.640+0000 7faff2cb4700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T10:23:03.780 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:03 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[74265]: 2026-03-10T10:23:03.640+0000 7faff2cb4700 -1 osd.1 52 *** Got signal Terminated *** 2026-03-10T10:23:03.780 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:03 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[74265]: 2026-03-10T10:23:03.640+0000 7faff2cb4700 -1 osd.1 52 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T10:23:04.352 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123714]: 2026-03-10 10:23:04.157571815 +0000 UTC m=+0.535944374 container died 1b0a42d8ac013f5070edd681fa0b19e0edb4f58277829bb1ea1727c0404254b5 
(image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1, ceph=True, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, org.label-schema.schema-version=1.0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=HEAD, io.buildah.version=1.29.1, maintainer=Guillaume Abrioux , org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_POINT_RELEASE=-18.2.1, GIT_CLEAN=True, org.label-schema.build-date=20240222, org.label-schema.name=CentOS Stream 8 Base Image, GIT_BRANCH=HEAD) 2026-03-10T10:23:04.352 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123714]: 2026-03-10 10:23:04.177170876 +0000 UTC m=+0.555543435 container remove 1b0a42d8ac013f5070edd681fa0b19e0edb4f58277829bb1ea1727c0404254b5 (image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1, org.label-schema.build-date=20240222, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, RELEASE=HEAD, maintainer=Guillaume Abrioux , org.label-schema.name=CentOS Stream 8 Base Image, CEPH_POINT_RELEASE=-18.2.1, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.29.1, org.label-schema.schema-version=1.0, GIT_BRANCH=HEAD, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GIT_CLEAN=True) 2026-03-10T10:23:04.352 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local bash[123714]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1 2026-03-10T10:23:04.352 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:04 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:23:04.352 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:04 vm02.local ceph-mon[110129]: Upgrade: osd.1 is safe to restart 2026-03-10T10:23:04.352 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:04 vm02.local ceph-mon[110129]: pgmap v84: 65 pgs: 65 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 2 op/s; 0 B/s, 11 objects/s recovering 2026-03-10T10:23:04.352 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:04 vm02.local ceph-mon[110129]: Upgrade: Updating osd.1 2026-03-10T10:23:04.352 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:04 vm02.local ceph-mon[110129]: Deploying daemon osd.1 on vm02 2026-03-10T10:23:04.352 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:04 vm02.local ceph-mon[110129]: osd.1 marked itself down and dead 2026-03-10T10:23:04.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:04 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T10:23:04.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:04 vm05.local ceph-mon[103593]: Upgrade: osd.1 is safe to restart 2026-03-10T10:23:04.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:04 vm05.local ceph-mon[103593]: pgmap v84: 65 pgs: 65 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 2 op/s; 0 B/s, 11 objects/s recovering 2026-03-10T10:23:04.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:04 vm05.local ceph-mon[103593]: Upgrade: Updating osd.1 2026-03-10T10:23:04.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:04 vm05.local ceph-mon[103593]: Deploying daemon osd.1 on vm02 2026-03-10T10:23:04.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:04 vm05.local ceph-mon[103593]: osd.1 marked itself down and dead 2026-03-10T10:23:04.618 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123779]: 2026-03-10 10:23:04.379124865 +0000 UTC m=+0.031069639 container create da7d4a926618819e471a00a1ef044e5b207a82b566680dcade7a9059adf109a8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-deactivate, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) 2026-03-10T10:23:04.618 
INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123779]: 2026-03-10 10:23:04.417822901 +0000 UTC m=+0.069767665 container init da7d4a926618819e471a00a1ef044e5b207a82b566680dcade7a9059adf109a8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T10:23:04.618 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123779]: 2026-03-10 10:23:04.422680683 +0000 UTC m=+0.074625457 container start da7d4a926618819e471a00a1ef044e5b207a82b566680dcade7a9059adf109a8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-deactivate, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T10:23:04.618 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123779]: 2026-03-10 10:23:04.424723887 +0000 UTC m=+0.076668661 container attach da7d4a926618819e471a00a1ef044e5b207a82b566680dcade7a9059adf109a8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-deactivate, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default) 2026-03-10T10:23:04.618 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123779]: 2026-03-10 10:23:04.366335649 +0000 UTC m=+0.018280423 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:23:04.618 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local conmon[123790]: conmon da7d4a926618819e471a : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da7d4a926618819e471a00a1ef044e5b207a82b566680dcade7a9059adf109a8.scope/container/memory.events 2026-03-10T10:23:04.619 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123779]: 2026-03-10 10:23:04.579789382 +0000 UTC m=+0.231734156 container died 
da7d4a926618819e471a00a1ef044e5b207a82b566680dcade7a9059adf109a8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-deactivate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2) 2026-03-10T10:23:04.939 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123779]: 2026-03-10 10:23:04.624773583 +0000 UTC m=+0.276718357 container remove da7d4a926618819e471a00a1ef044e5b207a82b566680dcade7a9059adf109a8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-deactivate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T10:23:04.939 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 
vm02.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.1.service: Deactivated successfully. 2026-03-10T10:23:04.939 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.1.service: Unit process 123790 (conmon) remains running after unit stopped. 2026-03-10T10:23:04.939 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local systemd[1]: Stopped Ceph osd.1 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 2026-03-10T10:23:04.939 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.1.service: Consumed 51.845s CPU time, 599.8M memory peak. 2026-03-10T10:23:04.939 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local systemd[1]: Starting Ceph osd.1 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 2026-03-10T10:23:05.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-mon[110129]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T10:23:05.281 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-mon[110129]: osdmap e53: 6 total, 5 up, 6 in 2026-03-10T10:23:05.281 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123889]: 2026-03-10 10:23:04.938284727 +0000 UTC m=+0.025242585 container create a0d36722106fa0f03c0df3b91b82f0066d1716543c24cdb73238cfe4190c1d9a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T10:23:05.281 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123889]: 2026-03-10 10:23:04.992720478 +0000 UTC m=+0.079678335 container init a0d36722106fa0f03c0df3b91b82f0066d1716543c24cdb73238cfe4190c1d9a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T10:23:05.281 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123889]: 2026-03-10 10:23:04.996057563 +0000 UTC m=+0.083015420 container start a0d36722106fa0f03c0df3b91b82f0066d1716543c24cdb73238cfe4190c1d9a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_REF=squid, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T10:23:05.281 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:04 vm02.local podman[123889]: 2026-03-10 10:23:04.996837653 +0000 UTC m=+0.083795501 container attach a0d36722106fa0f03c0df3b91b82f0066d1716543c24cdb73238cfe4190c1d9a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS) 2026-03-10T10:23:05.281 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local podman[123889]: 2026-03-10 10:23:04.930807103 +0000 UTC m=+0.017764961 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:23:05.281 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate[123899]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:23:05.281 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 
vm02.local bash[123889]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:23:05.281 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate[123899]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:23:05.281 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local bash[123889]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:23:05.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.426+0000 7f206bde8700 1 -- 192.168.123.102:0/1012567382 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2064107d90 msgr2=0x7f206410a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:05.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.426+0000 7f206bde8700 1 --2- 192.168.123.102:0/1012567382 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2064107d90 0x7f206410a1c0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f2060009b50 tx=0x7f2060009e60 comp rx=0 tx=0).stop 2026-03-10T10:23:05.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.426+0000 7f206bde8700 1 -- 192.168.123.102:0/1012567382 shutdown_connections 2026-03-10T10:23:05.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.426+0000 7f206bde8700 1 --2- 192.168.123.102:0/1012567382 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f206410a700 0x7f206410cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.426+0000 7f206bde8700 1 --2- 192.168.123.102:0/1012567382 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2064107d90 0x7f206410a1c0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.426+0000 7f206bde8700 1 
-- 192.168.123.102:0/1012567382 >> 192.168.123.102:0/1012567382 conn(0x7f206406dae0 msgr2=0x7f206406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:05.428 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.427+0000 7f206bde8700 1 -- 192.168.123.102:0/1012567382 shutdown_connections 2026-03-10T10:23:05.428 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.427+0000 7f206bde8700 1 -- 192.168.123.102:0/1012567382 wait complete. 2026-03-10T10:23:05.428 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.428+0000 7f206bde8700 1 Processor -- start 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.428+0000 7f206bde8700 1 -- start start 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.428+0000 7f206bde8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2064107d90 0x7f206419cb70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.428+0000 7f206bde8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f206410a700 0x7f206419d0b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.428+0000 7f206bde8700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f206419d6d0 con 0x7f2064107d90 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.428+0000 7f206bde8700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f206419d810 con 0x7f206410a700 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.428+0000 7f2069b84700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2064107d90 0x7f206419cb70 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.428+0000 7f2069b84700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2064107d90 0x7f206419cb70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33216/0 (socket says 192.168.123.102:33216) 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.428+0000 7f2069b84700 1 -- 192.168.123.102:0/3425728781 learned_addr learned my addr 192.168.123.102:0/3425728781 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.429+0000 7f2069383700 1 --2- 192.168.123.102:0/3425728781 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f206410a700 0x7f206419d0b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.429+0000 7f2069b84700 1 -- 192.168.123.102:0/3425728781 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f206410a700 msgr2=0x7f206419d0b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.429+0000 7f2069b84700 1 --2- 192.168.123.102:0/3425728781 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f206410a700 0x7f206419d0b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.429+0000 7f2069b84700 1 -- 192.168.123.102:0/3425728781 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f20600097e0 con 0x7f2064107d90 2026-03-10T10:23:05.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.429+0000 7f2069b84700 1 --2- 192.168.123.102:0/3425728781 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2064107d90 0x7f206419cb70 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f2060006010 tx=0x7f2060004c30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:05.431 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.429+0000 7f205affd700 1 -- 192.168.123.102:0/3425728781 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f206001d070 con 0x7f2064107d90 2026-03-10T10:23:05.431 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.429+0000 7f206bde8700 1 -- 192.168.123.102:0/3425728781 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f20641c4270 con 0x7f2064107d90 2026-03-10T10:23:05.431 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.429+0000 7f206bde8700 1 -- 192.168.123.102:0/3425728781 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f20641c4760 con 0x7f2064107d90 2026-03-10T10:23:05.432 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.431+0000 7f205affd700 1 -- 192.168.123.102:0/3425728781 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f206000bc50 con 0x7f2064107d90 2026-03-10T10:23:05.432 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.431+0000 7f205affd700 1 -- 192.168.123.102:0/3425728781 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f206000f630 con 0x7f2064107d90 2026-03-10T10:23:05.432 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.431+0000 7f205affd700 1 -- 192.168.123.102:0/3425728781 <== mon.0 
v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f206000f850 con 0x7f2064107d90 2026-03-10T10:23:05.432 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.432+0000 7f205affd700 1 --2- 192.168.123.102:0/3425728781 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2050077ab0 0x7f2050079f70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:05.432 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.432+0000 7f2069383700 1 --2- 192.168.123.102:0/3425728781 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2050077ab0 0x7f2050079f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:05.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.433+0000 7f205affd700 1 -- 192.168.123.102:0/3425728781 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(54..54 src has 1..54) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f206009c1a0 con 0x7f2064107d90 2026-03-10T10:23:05.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.433+0000 7f2069383700 1 --2- 192.168.123.102:0/3425728781 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2050077ab0 0x7f2050079f70 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f205c003eb0 tx=0x7f205c00b040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:05.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.433+0000 7f206bde8700 1 -- 192.168.123.102:0/3425728781 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2064196d70 con 0x7f2064107d90 2026-03-10T10:23:05.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.436+0000 7f205affd700 1 -- 192.168.123.102:0/3425728781 <== 
mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f206006b080 con 0x7f2064107d90 2026-03-10T10:23:05.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:05 vm05.local ceph-mon[103593]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T10:23:05.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:05 vm05.local ceph-mon[103593]: osdmap e53: 6 total, 5 up, 6 in 2026-03-10T10:23:05.577 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.576+0000 7f206bde8700 1 -- 192.168.123.102:0/3425728781 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f206402cfd0 con 0x7f2050077ab0 2026-03-10T10:23:05.578 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.578+0000 7f205affd700 1 -- 192.168.123.102:0/3425728781 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f206402cfd0 con 0x7f2050077ab0 2026-03-10T10:23:05.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.582+0000 7f206bde8700 1 -- 192.168.123.102:0/3425728781 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2050077ab0 msgr2=0x7f2050079f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:05.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.582+0000 7f206bde8700 1 --2- 192.168.123.102:0/3425728781 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2050077ab0 0x7f2050079f70 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f205c003eb0 tx=0x7f205c00b040 comp rx=0 tx=0).stop 2026-03-10T10:23:05.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.582+0000 7f206bde8700 1 -- 192.168.123.102:0/3425728781 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2064107d90 
msgr2=0x7f206419cb70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:05.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.582+0000 7f206bde8700 1 --2- 192.168.123.102:0/3425728781 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2064107d90 0x7f206419cb70 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f2060006010 tx=0x7f2060004c30 comp rx=0 tx=0).stop 2026-03-10T10:23:05.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.582+0000 7f206bde8700 1 -- 192.168.123.102:0/3425728781 shutdown_connections 2026-03-10T10:23:05.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.582+0000 7f206bde8700 1 --2- 192.168.123.102:0/3425728781 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2064107d90 0x7f206419cb70 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.582+0000 7f206bde8700 1 --2- 192.168.123.102:0/3425728781 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2050077ab0 0x7f2050079f70 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.582+0000 7f206bde8700 1 --2- 192.168.123.102:0/3425728781 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f206410a700 0x7f206419d0b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.582+0000 7f206bde8700 1 -- 192.168.123.102:0/3425728781 >> 192.168.123.102:0/3425728781 conn(0x7f206406dae0 msgr2=0x7f206406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:05.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.582+0000 7f206bde8700 1 -- 192.168.123.102:0/3425728781 shutdown_connections 2026-03-10T10:23:05.582 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.582+0000 7f206bde8700 1 -- 192.168.123.102:0/3425728781 wait complete. 2026-03-10T10:23:05.596 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:23:05.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.663+0000 7fb4b674a700 1 -- 192.168.123.102:0/4031763293 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4a80a5e10 msgr2=0x7fb4a80a6290 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:05.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.663+0000 7fb4b674a700 1 --2- 192.168.123.102:0/4031763293 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4a80a5e10 0x7fb4a80a6290 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7fb4ac009b00 tx=0x7fb4ac009e10 comp rx=0 tx=0).stop 2026-03-10T10:23:05.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.663+0000 7fb4b674a700 1 -- 192.168.123.102:0/4031763293 shutdown_connections 2026-03-10T10:23:05.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.663+0000 7fb4b674a700 1 --2- 192.168.123.102:0/4031763293 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4a80a5e10 0x7fb4a80a6290 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.663+0000 7fb4b674a700 1 --2- 192.168.123.102:0/4031763293 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb4a80a4cd0 0x7fb4a80a50f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.663+0000 7fb4b674a700 1 -- 192.168.123.102:0/4031763293 >> 192.168.123.102:0/4031763293 conn(0x7fb4a80a0190 msgr2=0x7fb4a80a25f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:05.668 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.668+0000 
7fb4b674a700 1 -- 192.168.123.102:0/4031763293 shutdown_connections 2026-03-10T10:23:05.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.668+0000 7fb4b674a700 1 -- 192.168.123.102:0/4031763293 wait complete. 2026-03-10T10:23:05.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.668+0000 7fb4b674a700 1 Processor -- start 2026-03-10T10:23:05.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.668+0000 7fb4b674a700 1 -- start start 2026-03-10T10:23:05.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.668+0000 7fb4b674a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb4a815c630 0x7fb4a815ca50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:05.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.668+0000 7fb4b674a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4a815cf90 0x7fb4a8009980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:05.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.668+0000 7fb4b674a700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb4a8009ec0 con 0x7fb4a815cf90 2026-03-10T10:23:05.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.668+0000 7fb4b674a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb4a800a030 con 0x7fb4a815c630 2026-03-10T10:23:05.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.668+0000 7fb4b4f47700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4a815cf90 0x7fb4a8009980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:05.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.668+0000 7fb4b4f47700 1 --2- >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4a815cf90 0x7fb4a8009980 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33222/0 (socket says 192.168.123.102:33222) 2026-03-10T10:23:05.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.668+0000 7fb4b4f47700 1 -- 192.168.123.102:0/1620399227 learned_addr learned my addr 192.168.123.102:0/1620399227 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:05.671 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.669+0000 7fb4b4f47700 1 -- 192.168.123.102:0/1620399227 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb4a815c630 msgr2=0x7fb4a815ca50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:05.671 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.669+0000 7fb4b4f47700 1 --2- 192.168.123.102:0/1620399227 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb4a815c630 0x7fb4a815ca50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.671 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.669+0000 7fb4b4f47700 1 -- 192.168.123.102:0/1620399227 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb4ac0097e0 con 0x7fb4a815cf90 2026-03-10T10:23:05.671 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.669+0000 7fb4b4f47700 1 --2- 192.168.123.102:0/1620399227 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4a815cf90 0x7fb4a8009980 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fb4ac009fd0 tx=0x7fb4ac01fa80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:05.671 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.671+0000 7fb4a67fc700 1 -- 192.168.123.102:0/1620399227 <== mon.0 
v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb4ac016070 con 0x7fb4a815cf90 2026-03-10T10:23:05.672 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.671+0000 7fb4b674a700 1 -- 192.168.123.102:0/1620399227 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb4a800a2b0 con 0x7fb4a815cf90 2026-03-10T10:23:05.672 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.671+0000 7fb4b674a700 1 -- 192.168.123.102:0/1620399227 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb4a800a7a0 con 0x7fb4a815cf90 2026-03-10T10:23:05.675 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.672+0000 7fb4b674a700 1 -- 192.168.123.102:0/1620399227 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb4a8004f80 con 0x7fb4a815cf90 2026-03-10T10:23:05.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.675+0000 7fb4a67fc700 1 -- 192.168.123.102:0/1620399227 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb4ac01ceb0 con 0x7fb4a815cf90 2026-03-10T10:23:05.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.675+0000 7fb4a67fc700 1 -- 192.168.123.102:0/1620399227 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb4ac011810 con 0x7fb4a815cf90 2026-03-10T10:23:05.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.675+0000 7fb4a67fc700 1 -- 192.168.123.102:0/1620399227 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fb4ac0119b0 con 0x7fb4a815cf90 2026-03-10T10:23:05.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.675+0000 7fb4a67fc700 1 --2- 192.168.123.102:0/1620399227 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb49c077b80 
0x7fb49c07a040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:05.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.676+0000 7fb4a67fc700 1 -- 192.168.123.102:0/1620399227 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(54..54 src has 1..54) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fb4ac09c580 con 0x7fb4a815cf90 2026-03-10T10:23:05.676 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.676+0000 7fb4b5748700 1 --2- 192.168.123.102:0/1620399227 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb49c077b80 0x7fb49c07a040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:05.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.676+0000 7fb4b5748700 1 --2- 192.168.123.102:0/1620399227 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb49c077b80 0x7fb49c07a040 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7fb4b00665f0 tx=0x7fb4b004f040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:05.677 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.676+0000 7fb4a67fc700 1 -- 192.168.123.102:0/1620399227 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fb4ac09eac0 con 0x7fb4a815cf90 2026-03-10T10:23:05.689 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate[123899]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T10:23:05.689 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local bash[123889]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T10:23:05.689 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 
10:23:05 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate[123899]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:23:05.689 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local bash[123889]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:23:05.689 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate[123899]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:23:05.689 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local bash[123889]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:23:05.689 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate[123899]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-10T10:23:05.689 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local bash[123889]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-10T10:23:05.689 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate[123899]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-66d63b2d-c6bd-4ba3-b408-ea404d3604ad/osd-block-8bd56e09-7dad-4b23-847e-c7afae0d2f41 --path /var/lib/ceph/osd/ceph-1 --no-mon-config 2026-03-10T10:23:05.689 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local bash[123889]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-66d63b2d-c6bd-4ba3-b408-ea404d3604ad/osd-block-8bd56e09-7dad-4b23-847e-c7afae0d2f41 --path /var/lib/ceph/osd/ceph-1 --no-mon-config 2026-03-10T10:23:05.822 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.821+0000 7fb4b674a700 1 -- 192.168.123.102:0/1620399227 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] 
-- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fb4a80aa3f0 con 0x7fb49c077b80 2026-03-10T10:23:05.822 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.822+0000 7fb4a67fc700 1 -- 192.168.123.102:0/1620399227 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7fb4a80aa3f0 con 0x7fb49c077b80 2026-03-10T10:23:05.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.825+0000 7fb49bfff700 1 -- 192.168.123.102:0/1620399227 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb49c077b80 msgr2=0x7fb49c07a040 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:05.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.825+0000 7fb49bfff700 1 --2- 192.168.123.102:0/1620399227 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb49c077b80 0x7fb49c07a040 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7fb4b00665f0 tx=0x7fb4b004f040 comp rx=0 tx=0).stop 2026-03-10T10:23:05.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.825+0000 7fb49bfff700 1 -- 192.168.123.102:0/1620399227 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4a815cf90 msgr2=0x7fb4a8009980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:05.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.825+0000 7fb49bfff700 1 --2- 192.168.123.102:0/1620399227 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4a815cf90 0x7fb4a8009980 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fb4ac009fd0 tx=0x7fb4ac01fa80 comp rx=0 tx=0).stop 2026-03-10T10:23:05.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.825+0000 7fb49bfff700 1 -- 192.168.123.102:0/1620399227 shutdown_connections 2026-03-10T10:23:05.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.825+0000 
7fb49bfff700 1 --2- 192.168.123.102:0/1620399227 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb49c077b80 0x7fb49c07a040 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.825+0000 7fb49bfff700 1 --2- 192.168.123.102:0/1620399227 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb4a815c630 0x7fb4a815ca50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.825+0000 7fb49bfff700 1 --2- 192.168.123.102:0/1620399227 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb4a815cf90 0x7fb4a8009980 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.825+0000 7fb49bfff700 1 -- 192.168.123.102:0/1620399227 >> 192.168.123.102:0/1620399227 conn(0x7fb4a80a0190 msgr2=0x7fb4a80a2430 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:05.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.825+0000 7fb49bfff700 1 -- 192.168.123.102:0/1620399227 shutdown_connections 2026-03-10T10:23:05.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.825+0000 7fb49bfff700 1 -- 192.168.123.102:0/1620399227 wait complete. 
2026-03-10T10:23:05.895 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.894+0000 7f465df1a700 1 -- 192.168.123.102:0/2554722316 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f46500991d0 msgr2=0x7f4650099630 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:05.895 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.894+0000 7f465df1a700 1 --2- 192.168.123.102:0/2554722316 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f46500991d0 0x7f4650099630 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f4644009b50 tx=0x7f4644009e60 comp rx=0 tx=0).stop 2026-03-10T10:23:05.898 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.895+0000 7f465df1a700 1 -- 192.168.123.102:0/2554722316 shutdown_connections 2026-03-10T10:23:05.898 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.895+0000 7f465df1a700 1 --2- 192.168.123.102:0/2554722316 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f46500991d0 0x7f4650099630 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.898 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.895+0000 7f465df1a700 1 --2- 192.168.123.102:0/2554722316 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4650097fd0 0x7f46500983f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.898 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.895+0000 7f465df1a700 1 -- 192.168.123.102:0/2554722316 >> 192.168.123.102:0/2554722316 conn(0x7f4650093550 msgr2=0x7f46500959b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:05.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.908+0000 7f465df1a700 1 -- 192.168.123.102:0/2554722316 shutdown_connections 2026-03-10T10:23:05.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.910+0000 7f465df1a700 1 -- 192.168.123.102:0/2554722316 
wait complete. 2026-03-10T10:23:05.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.911+0000 7f465df1a700 1 Processor -- start 2026-03-10T10:23:05.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.911+0000 7f465df1a700 1 -- start start 2026-03-10T10:23:05.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.911+0000 7f465df1a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4650097fd0 0x7f465012d920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:05.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.911+0000 7f465df1a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f46500991d0 0x7f465012de60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:05.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.911+0000 7f465df1a700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f465012e480 con 0x7f46500991d0 2026-03-10T10:23:05.912 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.911+0000 7f465df1a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f465012e5c0 con 0x7f4650097fd0 2026-03-10T10:23:05.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.912+0000 7f4657fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f46500991d0 0x7f465012de60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:05.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.912+0000 7f4657fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f46500991d0 0x7f465012de60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 
says I am v2:192.168.123.102:33230/0 (socket says 192.168.123.102:33230) 2026-03-10T10:23:05.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.912+0000 7f4657fff700 1 -- 192.168.123.102:0/261020151 learned_addr learned my addr 192.168.123.102:0/261020151 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:05.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.912+0000 7f465cf18700 1 --2- 192.168.123.102:0/261020151 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4650097fd0 0x7f465012d920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:05.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.912+0000 7f4657fff700 1 -- 192.168.123.102:0/261020151 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4650097fd0 msgr2=0x7f465012d920 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:05.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.912+0000 7f4657fff700 1 --2- 192.168.123.102:0/261020151 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4650097fd0 0x7f465012d920 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:05.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.912+0000 7f4657fff700 1 -- 192.168.123.102:0/261020151 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f46440097e0 con 0x7f46500991d0 2026-03-10T10:23:05.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.913+0000 7f4657fff700 1 --2- 192.168.123.102:0/261020151 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f46500991d0 0x7f465012de60 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f464400b5c0 tx=0x7f4644004a20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:23:05.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.913+0000 7f4655ffb700 1 -- 192.168.123.102:0/261020151 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f464401d070 con 0x7f46500991d0 2026-03-10T10:23:05.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.913+0000 7f465df1a700 1 -- 192.168.123.102:0/261020151 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4650133010 con 0x7f46500991d0 2026-03-10T10:23:05.913 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.913+0000 7f465df1a700 1 -- 192.168.123.102:0/261020151 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4650133500 con 0x7f46500991d0 2026-03-10T10:23:05.914 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.914+0000 7f4655ffb700 1 -- 192.168.123.102:0/261020151 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f464400bc50 con 0x7f46500991d0 2026-03-10T10:23:05.914 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.914+0000 7f4655ffb700 1 -- 192.168.123.102:0/261020151 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f464400f670 con 0x7f46500991d0 2026-03-10T10:23:05.914 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.914+0000 7f465df1a700 1 -- 192.168.123.102:0/261020151 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f463c005320 con 0x7f46500991d0 2026-03-10T10:23:05.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.917+0000 7f4655ffb700 1 -- 192.168.123.102:0/261020151 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f4644022470 con 0x7f46500991d0 2026-03-10T10:23:05.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.917+0000 
7f4655ffb700 1 --2- 192.168.123.102:0/261020151 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f4648077710 0x7f4648079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:05.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.917+0000 7f4655ffb700 1 -- 192.168.123.102:0/261020151 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(54..54 src has 1..54) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f464409b870 con 0x7f46500991d0 2026-03-10T10:23:05.918 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.918+0000 7f465cf18700 1 --2- 192.168.123.102:0/261020151 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f4648077710 0x7f4648079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:05.918 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.918+0000 7f4655ffb700 1 -- 192.168.123.102:0/261020151 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f46440640d0 con 0x7f46500991d0 2026-03-10T10:23:05.929 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:05.928+0000 7f465cf18700 1 --2- 192.168.123.102:0/261020151 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f4648077710 0x7f4648079bd0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f4650099030 tx=0x7f464c009450 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:06.032 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate[123899]: Running command: /usr/bin/ln -snf /dev/ceph-66d63b2d-c6bd-4ba3-b408-ea404d3604ad/osd-block-8bd56e09-7dad-4b23-847e-c7afae0d2f41 /var/lib/ceph/osd/ceph-1/block 2026-03-10T10:23:06.033 
INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local bash[123889]: Running command: /usr/bin/ln -snf /dev/ceph-66d63b2d-c6bd-4ba3-b408-ea404d3604ad/osd-block-8bd56e09-7dad-4b23-847e-c7afae0d2f41 /var/lib/ceph/osd/ceph-1/block 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate[123899]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local bash[123889]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate[123899]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local bash[123889]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate[123899]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local bash[123889]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate[123899]: --> ceph-volume lvm activate successful for osd ID: 1 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local bash[123889]: --> ceph-volume lvm activate successful for osd ID: 1 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local conmon[123899]: conmon a0d36722106fa0f03c0d : Failed to open cgroups file: 
/sys/fs/cgroup/machine.slice/libpod-a0d36722106fa0f03c0df3b91b82f0066d1716543c24cdb73238cfe4190c1d9a.scope/container/memory.events 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local podman[123889]: 2026-03-10 10:23:05.727167731 +0000 UTC m=+0.814125588 container died a0d36722106fa0f03c0df3b91b82f0066d1716543c24cdb73238cfe4190c1d9a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.build-date=20260223, CEPH_REF=squid, ceph=True) 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local podman[123889]: 2026-03-10 10:23:05.757054826 +0000 UTC m=+0.844012683 container remove a0d36722106fa0f03c0df3b91b82f0066d1716543c24cdb73238cfe4190c1d9a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223) 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local podman[124206]: 2026-03-10 10:23:05.870434854 +0000 UTC m=+0.020624721 container create 6b6be7f62bd3c4b52f1b9176d517329b91b750cea5194c64b9d7b03023dde879 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local podman[124206]: 2026-03-10 10:23:05.903834163 +0000 UTC m=+0.054024030 container init 6b6be7f62bd3c4b52f1b9176d517329b91b750cea5194c64b9d7b03023dde879 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True) 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local podman[124206]: 2026-03-10 10:23:05.906730965 +0000 UTC m=+0.056920821 container start 6b6be7f62bd3c4b52f1b9176d517329b91b750cea5194c64b9d7b03023dde879 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local bash[124206]: 6b6be7f62bd3c4b52f1b9176d517329b91b750cea5194c64b9d7b03023dde879 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local podman[124206]: 2026-03-10 10:23:05.859298111 +0000 UTC m=+0.009487968 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:23:06.033 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:05 vm02.local systemd[1]: Started Ceph osd.1 for 
d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 2026-03-10T10:23:06.049 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.048+0000 7f465df1a700 1 -- 192.168.123.102:0/261020151 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f463c000bf0 con 0x7f4648077710 2026-03-10T10:23:06.056 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.056+0000 7f4655ffb700 1 -- 192.168.123.102:0/261020151 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f463c000bf0 con 0x7f4648077710 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (7m) 112s ago 8m 23.2M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (8m) 112s ago 8m 8820k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (7m) 2m ago 7m 11.1M - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (2m) 112s ago 8m 7838k - 19.2.3-678-ge911bdeb 654f31e6858e c494730ab019 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (2m) 2m ago 7m 7852k - 19.2.3-678-ge911bdeb 654f31e6858e 1dc17b49fee4 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (7m) 112s ago 8m 88.8M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (6m) 112s ago 6m 15.9M - 18.2.1 5be31c24972a e97c369450c8 2026-03-10T10:23:06.060 
INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (6m) 112s ago 6m 237M - 18.2.1 5be31c24972a 56b76ae59bcb 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (6m) 2m ago 6m 15.9M - 18.2.1 5be31c24972a 02b882918ab0 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (6m) 2m ago 6m 146M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (3m) 112s ago 8m 614M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (2m) 2m ago 7m 487M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (2m) 112s ago 8m 56.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a2a2cb182f4 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (2m) 2m ago 7m 49.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 3fb75dafefb6 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (8m) 112s ago 8m 16.3M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (7m) 2m ago 7m 15.4M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (115s) 112s ago 7m 30.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 319155aac718 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 starting - - - 4096M 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (7m) 112s ago 7m 305M 4096M 18.2.1 5be31c24972a 567f579c058e 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (6m) 2m ago 6m 426M 4096M 18.2.1 5be31c24972a 
80ac26035893 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (6m) 2m ago 6m 407M 4096M 18.2.1 5be31c24972a c8a0a41b6654 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (6m) 2m ago 6m 319M 4096M 18.2.1 5be31c24972a e9be055e12ba 2026-03-10T10:23:06.060 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (2m) 112s ago 7m 47.7M - 2.43.0 a07b618ecd1d 5ebb885bd417 2026-03-10T10:23:06.061 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.059+0000 7f465df1a700 1 -- 192.168.123.102:0/261020151 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f4648077710 msgr2=0x7f4648079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.061 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.060+0000 7f465df1a700 1 --2- 192.168.123.102:0/261020151 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f4648077710 0x7f4648079bd0 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f4650099030 tx=0x7f464c009450 comp rx=0 tx=0).stop 2026-03-10T10:23:06.061 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.060+0000 7f465df1a700 1 -- 192.168.123.102:0/261020151 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f46500991d0 msgr2=0x7f465012de60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.061 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.060+0000 7f465df1a700 1 --2- 192.168.123.102:0/261020151 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f46500991d0 0x7f465012de60 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f464400b5c0 tx=0x7f4644004a20 comp rx=0 tx=0).stop 2026-03-10T10:23:06.061 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.060+0000 7f465df1a700 1 -- 192.168.123.102:0/261020151 shutdown_connections 2026-03-10T10:23:06.061 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.060+0000 7f465df1a700 1 --2- 192.168.123.102:0/261020151 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f4648077710 0x7f4648079bd0 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.061 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.060+0000 7f465df1a700 1 --2- 192.168.123.102:0/261020151 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4650097fd0 0x7f465012d920 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.061 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.060+0000 7f465df1a700 1 --2- 192.168.123.102:0/261020151 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f46500991d0 0x7f465012de60 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.061 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.060+0000 7f465df1a700 1 -- 192.168.123.102:0/261020151 >> 192.168.123.102:0/261020151 conn(0x7f4650093550 msgr2=0x7f465009c400 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:06.061 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.060+0000 7f465df1a700 1 -- 192.168.123.102:0/261020151 shutdown_connections 2026-03-10T10:23:06.061 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.060+0000 7f465df1a700 1 -- 192.168.123.102:0/261020151 wait complete. 
2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.138+0000 7fd72e17e700 1 -- 192.168.123.102:0/4291290794 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd728100c90 msgr2=0x7fd7281010b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.138+0000 7fd72e17e700 1 --2- 192.168.123.102:0/4291290794 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd728100c90 0x7fd7281010b0 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7fd724009b00 tx=0x7fd724009e10 comp rx=0 tx=0).stop 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.138+0000 7fd72e17e700 1 -- 192.168.123.102:0/4291290794 shutdown_connections 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.138+0000 7fd72e17e700 1 --2- 192.168.123.102:0/4291290794 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd728101e90 0x7fd7281022f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.138+0000 7fd72e17e700 1 --2- 192.168.123.102:0/4291290794 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd728100c90 0x7fd7281010b0 unknown :-1 s=CLOSED pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.138+0000 7fd72e17e700 1 -- 192.168.123.102:0/4291290794 >> 192.168.123.102:0/4291290794 conn(0x7fd7280fc210 msgr2=0x7fd7280fe670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.138+0000 7fd72e17e700 1 -- 192.168.123.102:0/4291290794 shutdown_connections 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.138+0000 7fd72e17e700 1 -- 192.168.123.102:0/4291290794 
wait complete. 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.139+0000 7fd72e17e700 1 Processor -- start 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.139+0000 7fd72e17e700 1 -- start start 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.139+0000 7fd72e17e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd728101e90 0x7fd72810f710 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.139+0000 7fd72e17e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd72810fc50 0x7fd728112cb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.139+0000 7fd72e17e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd728110160 con 0x7fd72810fc50 2026-03-10T10:23:06.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.139+0000 7fd72e17e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd7281102d0 con 0x7fd728101e90 2026-03-10T10:23:06.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.139+0000 7fd72d17c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd728101e90 0x7fd72810f710 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:06.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.139+0000 7fd72d17c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd728101e90 0x7fd72810f710 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.102:33840/0 (socket says 192.168.123.102:33840) 2026-03-10T10:23:06.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.139+0000 7fd72d17c700 1 -- 192.168.123.102:0/558658425 learned_addr learned my addr 192.168.123.102:0/558658425 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:06.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.139+0000 7fd72c97b700 1 --2- 192.168.123.102:0/558658425 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd72810fc50 0x7fd728112cb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:06.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.140+0000 7fd72d17c700 1 -- 192.168.123.102:0/558658425 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd72810fc50 msgr2=0x7fd728112cb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.140+0000 7fd72d17c700 1 --2- 192.168.123.102:0/558658425 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd72810fc50 0x7fd728112cb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.140+0000 7fd72d17c700 1 -- 192.168.123.102:0/558658425 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd7240097e0 con 0x7fd728101e90 2026-03-10T10:23:06.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.140+0000 7fd72d17c700 1 --2- 192.168.123.102:0/558658425 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd728101e90 0x7fd72810f710 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fd724000c00 tx=0x7fd72400f780 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:23:06.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.140+0000 7fd71e7fc700 1 -- 192.168.123.102:0/558658425 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd72401c070 con 0x7fd728101e90 2026-03-10T10:23:06.141 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.140+0000 7fd72e17e700 1 -- 192.168.123.102:0/558658425 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd7281131f0 con 0x7fd728101e90 2026-03-10T10:23:06.141 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.140+0000 7fd72e17e700 1 -- 192.168.123.102:0/558658425 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd7281136e0 con 0x7fd728101e90 2026-03-10T10:23:06.141 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.141+0000 7fd71e7fc700 1 -- 192.168.123.102:0/558658425 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd72400fb90 con 0x7fd728101e90 2026-03-10T10:23:06.141 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.141+0000 7fd71e7fc700 1 -- 192.168.123.102:0/558658425 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd7240178b0 con 0x7fd728101e90 2026-03-10T10:23:06.142 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.141+0000 7fd72e17e700 1 -- 192.168.123.102:0/558658425 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd70c005320 con 0x7fd728101e90 2026-03-10T10:23:06.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.147+0000 7fd71e7fc700 1 -- 192.168.123.102:0/558658425 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fd72400fd00 con 0x7fd728101e90 2026-03-10T10:23:06.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.147+0000 
7fd71e7fc700 1 --2- 192.168.123.102:0/558658425 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fd714077910 0x7fd714079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:06.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.147+0000 7fd71e7fc700 1 -- 192.168.123.102:0/558658425 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(54..54 src has 1..54) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fd72409aea0 con 0x7fd728101e90 2026-03-10T10:23:06.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.147+0000 7fd71e7fc700 1 -- 192.168.123.102:0/558658425 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fd72409b320 con 0x7fd728101e90 2026-03-10T10:23:06.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.147+0000 7fd72c97b700 1 --2- 192.168.123.102:0/558658425 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fd714077910 0x7fd714079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:06.151 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.148+0000 7fd72c97b700 1 --2- 192.168.123.102:0/558658425 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fd714077910 0x7fd714079dd0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7fd718009910 tx=0x7fd718008040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:06.316 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:06 vm02.local ceph-mon[110129]: pgmap v86: 65 pgs: 12 stale+active+clean, 53 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1023 B/s rd, 1 op/s; 0 B/s, 7 objects/s recovering 2026-03-10T10:23:06.316 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 
10 10:23:06 vm02.local ceph-mon[110129]: osdmap e54: 6 total, 5 up, 6 in 2026-03-10T10:23:06.316 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:06.316 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:06.316 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:06.345 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:23:06.345 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:23:06.345 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:23:06.345 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:23:06.345 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:23:06.345 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:23:06.345 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:23:06.345 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:23:06.345 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4, 2026-03-10T10:23:06.345 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-10T10:23:06.345 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:23:06.346 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:23:06.346 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 
(7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4 2026-03-10T10:23:06.346 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:23:06.346 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:23:06.346 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 8, 2026-03-10T10:23:06.346 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5 2026-03-10T10:23:06.346 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:23:06.346 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:23:06.346 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.344+0000 7fd72e17e700 1 -- 192.168.123.102:0/558658425 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fd70c005cc0 con 0x7fd728101e90 2026-03-10T10:23:06.346 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.345+0000 7fd71e7fc700 1 -- 192.168.123.102:0/558658425 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+799 (secure 0 0 0) 0x7fd724063700 con 0x7fd728101e90 2026-03-10T10:23:06.349 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.349+0000 7fd713fff700 1 -- 192.168.123.102:0/558658425 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fd714077910 msgr2=0x7fd714079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.349 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.349+0000 7fd713fff700 1 --2- 192.168.123.102:0/558658425 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fd714077910 0x7fd714079dd0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7fd718009910 tx=0x7fd718008040 comp rx=0 tx=0).stop 2026-03-10T10:23:06.351 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.349+0000 7fd713fff700 1 -- 192.168.123.102:0/558658425 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd728101e90 msgr2=0x7fd72810f710 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.349+0000 7fd713fff700 1 --2- 192.168.123.102:0/558658425 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd728101e90 0x7fd72810f710 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fd724000c00 tx=0x7fd72400f780 comp rx=0 tx=0).stop 2026-03-10T10:23:06.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.351+0000 7fd713fff700 1 -- 192.168.123.102:0/558658425 shutdown_connections 2026-03-10T10:23:06.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.351+0000 7fd713fff700 1 --2- 192.168.123.102:0/558658425 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fd714077910 0x7fd714079dd0 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.351+0000 7fd713fff700 1 --2- 192.168.123.102:0/558658425 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd728101e90 0x7fd72810f710 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.351+0000 7fd713fff700 1 --2- 192.168.123.102:0/558658425 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd72810fc50 0x7fd728112cb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.351+0000 7fd713fff700 1 -- 192.168.123.102:0/558658425 >> 192.168.123.102:0/558658425 conn(0x7fd7280fc210 msgr2=0x7fd7280fe440 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-10T10:23:06.352 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.352+0000 7fd713fff700 1 -- 192.168.123.102:0/558658425 shutdown_connections 2026-03-10T10:23:06.352 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.352+0000 7fd713fff700 1 -- 192.168.123.102:0/558658425 wait complete. 2026-03-10T10:23:06.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.435+0000 7fbe41782700 1 -- 192.168.123.102:0/4021444756 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbe3c075a10 msgr2=0x7fbe3c077ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.435+0000 7fbe41782700 1 --2- 192.168.123.102:0/4021444756 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbe3c075a10 0x7fbe3c077ea0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fbe3400b600 tx=0x7fbe3400b910 comp rx=0 tx=0).stop 2026-03-10T10:23:06.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.435+0000 7fbe41782700 1 -- 192.168.123.102:0/4021444756 shutdown_connections 2026-03-10T10:23:06.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.435+0000 7fbe41782700 1 --2- 192.168.123.102:0/4021444756 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbe3c075a10 0x7fbe3c077ea0 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.435+0000 7fbe41782700 1 --2- 192.168.123.102:0/4021444756 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe3c072b20 0x7fbe3c072f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.435+0000 7fbe41782700 1 -- 192.168.123.102:0/4021444756 >> 192.168.123.102:0/4021444756 conn(0x7fbe3c06daa0 msgr2=0x7fbe3c06ff00 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:23:06.437 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.435+0000 7fbe41782700 1 -- 192.168.123.102:0/4021444756 shutdown_connections 2026-03-10T10:23:06.437 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe41782700 1 -- 192.168.123.102:0/4021444756 wait complete. 2026-03-10T10:23:06.437 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe41782700 1 Processor -- start 2026-03-10T10:23:06.437 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe41782700 1 -- start start 2026-03-10T10:23:06.440 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe41782700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbe3c072b20 0x7fbe3c082f80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:06.440 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe41782700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe3c0834c0 0x7fbe3c083940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:06.440 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe41782700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbe3c1b3430 con 0x7fbe3c072b20 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe41782700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbe3c1b35a0 con 0x7fbe3c0834c0 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe3bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbe3c072b20 0x7fbe3c082f80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe3bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbe3c072b20 0x7fbe3c082f80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:33268/0 (socket says 192.168.123.102:33268) 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe3bfff700 1 -- 192.168.123.102:0/2700824754 learned_addr learned my addr 192.168.123.102:0/2700824754 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe3bfff700 1 -- 192.168.123.102:0/2700824754 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe3c0834c0 msgr2=0x7fbe3c083940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe3bfff700 1 --2- 192.168.123.102:0/2700824754 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe3c0834c0 0x7fbe3c083940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.436+0000 7fbe3bfff700 1 -- 192.168.123.102:0/2700824754 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbe3400b050 con 0x7fbe3c072b20 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.437+0000 7fbe3bfff700 1 --2- 192.168.123.102:0/2700824754 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbe3c072b20 0x7fbe3c082f80 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7fbe2c00ba70 tx=0x7fbe2c00be30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:06.441 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.437+0000 7fbe397fa700 1 -- 192.168.123.102:0/2700824754 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbe2c00c760 con 0x7fbe3c072b20 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.437+0000 7fbe397fa700 1 -- 192.168.123.102:0/2700824754 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fbe2c00cda0 con 0x7fbe3c072b20 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.437+0000 7fbe397fa700 1 -- 192.168.123.102:0/2700824754 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbe2c012550 con 0x7fbe3c072b20 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.437+0000 7fbe41782700 1 -- 192.168.123.102:0/2700824754 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbe3c1b3880 con 0x7fbe3c072b20 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.437+0000 7fbe41782700 1 -- 192.168.123.102:0/2700824754 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbe3c1b3dd0 con 0x7fbe3c072b20 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.438+0000 7fbe41782700 1 -- 192.168.123.102:0/2700824754 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbe3c04ea90 con 0x7fbe3c072b20 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.439+0000 7fbe397fa700 1 -- 192.168.123.102:0/2700824754 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fbe2c014440 con 0x7fbe3c072b20 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.439+0000 7fbe397fa700 1 --2- 
192.168.123.102:0/2700824754 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbe240779e0 0x7fbe24079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.440+0000 7fbe3b7fe700 1 --2- 192.168.123.102:0/2700824754 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbe240779e0 0x7fbe24079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.440+0000 7fbe3b7fe700 1 --2- 192.168.123.102:0/2700824754 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbe240779e0 0x7fbe24079ea0 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7fbe3400b600 tx=0x7fbe3400bd10 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.440+0000 7fbe397fa700 1 -- 192.168.123.102:0/2700824754 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(54..54 src has 1..54) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fbe2c0993f0 con 0x7fbe3c072b20 2026-03-10T10:23:06.441 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.441+0000 7fbe397fa700 1 -- 192.168.123.102:0/2700824754 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fbe2c061d60 con 0x7fbe3c072b20 2026-03-10T10:23:06.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:06 vm05.local ceph-mon[103593]: pgmap v86: 65 pgs: 12 stale+active+clean, 53 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1023 B/s rd, 1 op/s; 0 B/s, 7 objects/s recovering 2026-03-10T10:23:06.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:06 
vm05.local ceph-mon[103593]: osdmap e54: 6 total, 5 up, 6 in 2026-03-10T10:23:06.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:06.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:06.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:06.603 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.602+0000 7fbe41782700 1 -- 192.168.123.102:0/2700824754 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fbe3c1b4300 con 0x7fbe3c072b20 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:e15 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15 2026-03-10T10:23:06.611 
INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:max_xattr_size 65536 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464} 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:23:06.611 
INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:23:06.611 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stdout:qdb_cluster leader: 0 members: 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.606+0000 7fbe397fa700 1 -- 192.168.123.102:0/2700824754 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 
v15) v1 ==== 76+0+1945 (secure 0 0 0) 0x7fbe2c0614b0 con 0x7fbe3c072b20 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.611+0000 7fbe22ffd700 1 -- 192.168.123.102:0/2700824754 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbe240779e0 msgr2=0x7fbe24079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.611+0000 7fbe22ffd700 1 --2- 192.168.123.102:0/2700824754 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbe240779e0 0x7fbe24079ea0 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7fbe3400b600 tx=0x7fbe3400bd10 comp rx=0 tx=0).stop 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.611+0000 7fbe22ffd700 1 -- 192.168.123.102:0/2700824754 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbe3c072b20 msgr2=0x7fbe3c082f80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.611+0000 7fbe22ffd700 1 --2- 192.168.123.102:0/2700824754 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbe3c072b20 0x7fbe3c082f80 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7fbe2c00ba70 tx=0x7fbe2c00be30 comp rx=0 tx=0).stop 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.612+0000 7fbe22ffd700 1 -- 192.168.123.102:0/2700824754 shutdown_connections 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.612+0000 7fbe22ffd700 1 --2- 192.168.123.102:0/2700824754 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbe3c072b20 0x7fbe3c082f80 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.612+0000 7fbe22ffd700 1 --2- 
192.168.123.102:0/2700824754 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbe240779e0 0x7fbe24079ea0 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.612+0000 7fbe22ffd700 1 --2- 192.168.123.102:0/2700824754 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe3c0834c0 0x7fbe3c083940 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.612 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.612+0000 7fbe22ffd700 1 -- 192.168.123.102:0/2700824754 >> 192.168.123.102:0/2700824754 conn(0x7fbe3c06daa0 msgr2=0x7fbe3c06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:06.613 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.612+0000 7fbe22ffd700 1 -- 192.168.123.102:0/2700824754 shutdown_connections 2026-03-10T10:23:06.613 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.613+0000 7fbe22ffd700 1 -- 192.168.123.102:0/2700824754 wait complete. 
2026-03-10T10:23:06.622 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15 2026-03-10T10:23:06.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.698+0000 7f8d67fff700 1 -- 192.168.123.102:0/982122270 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d6810a700 msgr2=0x7f8d6810cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.698+0000 7f8d67fff700 1 --2- 192.168.123.102:0/982122270 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d6810a700 0x7f8d6810cb90 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f8d6000b3a0 tx=0x7f8d6000b6b0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.698+0000 7f8d67fff700 1 -- 192.168.123.102:0/982122270 shutdown_connections 2026-03-10T10:23:06.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.698+0000 7f8d67fff700 1 --2- 192.168.123.102:0/982122270 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d6810a700 0x7f8d6810cb90 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.698+0000 7f8d67fff700 1 --2- 192.168.123.102:0/982122270 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d68107d90 0x7f8d6810a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.698+0000 7f8d67fff700 1 -- 192.168.123.102:0/982122270 >> 192.168.123.102:0/982122270 conn(0x7f8d6806daa0 msgr2=0x7f8d6806ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:06.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.698+0000 7f8d67fff700 1 -- 192.168.123.102:0/982122270 shutdown_connections 2026-03-10T10:23:06.699 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.698+0000 7f8d67fff700 1 -- 192.168.123.102:0/982122270 wait complete. 2026-03-10T10:23:06.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.699+0000 7f8d67fff700 1 Processor -- start 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.699+0000 7f8d67fff700 1 -- start start 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.699+0000 7f8d67fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d68107d90 0x7f8d68116a80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.699+0000 7f8d67fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d6810a700 0x7f8d68116fc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.699+0000 7f8d67fff700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8d681175e0 con 0x7f8d68107d90 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.699+0000 7f8d67fff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8d681b3170 con 0x7f8d6810a700 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.699+0000 7f8d66ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d68107d90 0x7f8d68116a80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.699+0000 7f8d66ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d68107d90 0x7f8d68116a80 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:44504/0 (socket says 192.168.123.102:44504) 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.699+0000 7f8d66ffd700 1 -- 192.168.123.102:0/2080976430 learned_addr learned my addr 192.168.123.102:0/2080976430 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.699+0000 7f8d667fc700 1 --2- 192.168.123.102:0/2080976430 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d6810a700 0x7f8d68116fc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.700+0000 7f8d667fc700 1 -- 192.168.123.102:0/2080976430 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d68107d90 msgr2=0x7f8d68116a80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.700+0000 7f8d667fc700 1 --2- 192.168.123.102:0/2080976430 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d68107d90 0x7f8d68116a80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.700+0000 7f8d667fc700 1 -- 192.168.123.102:0/2080976430 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8d6000b050 con 0x7f8d6810a700 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.700+0000 7f8d667fc700 1 --2- 192.168.123.102:0/2080976430 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d6810a700 0x7f8d68116fc0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto 
rx=0x7f8d60000f80 tx=0x7f8d60008ef0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:06.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.700+0000 7f8d4ffff700 1 -- 192.168.123.102:0/2080976430 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8d6000e050 con 0x7f8d6810a700 2026-03-10T10:23:06.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.700+0000 7f8d67fff700 1 -- 192.168.123.102:0/2080976430 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8d681b3310 con 0x7f8d6810a700 2026-03-10T10:23:06.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.700+0000 7f8d67fff700 1 -- 192.168.123.102:0/2080976430 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8d681b3890 con 0x7f8d6810a700 2026-03-10T10:23:06.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.701+0000 7f8d4ffff700 1 -- 192.168.123.102:0/2080976430 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8d600047d0 con 0x7f8d6810a700 2026-03-10T10:23:06.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.701+0000 7f8d4ffff700 1 -- 192.168.123.102:0/2080976430 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8d6001daa0 con 0x7f8d6810a700 2026-03-10T10:23:06.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.701+0000 7f8d67fff700 1 -- 192.168.123.102:0/2080976430 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8d54005320 con 0x7f8d6810a700 2026-03-10T10:23:06.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.702+0000 7f8d4ffff700 1 -- 192.168.123.102:0/2080976430 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 
0x7f8d60019040 con 0x7f8d6810a700 2026-03-10T10:23:06.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.702+0000 7f8d4ffff700 1 --2- 192.168.123.102:0/2080976430 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8d50077920 0x7f8d50079de0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:06.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.702+0000 7f8d66ffd700 1 --2- 192.168.123.102:0/2080976430 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8d50077920 0x7f8d50079de0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:06.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.703+0000 7f8d4ffff700 1 -- 192.168.123.102:0/2080976430 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(54..54 src has 1..54) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f8d6009b0d0 con 0x7f8d6810a700 2026-03-10T10:23:06.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.703+0000 7f8d66ffd700 1 --2- 192.168.123.102:0/2080976430 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8d50077920 0x7f8d50079de0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f8d58009cc0 tx=0x7f8d58009480 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:06.706 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.705+0000 7f8d4ffff700 1 -- 192.168.123.102:0/2080976430 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f8d600638b0 con 0x7f8d6810a700 2026-03-10T10:23:06.839 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.838+0000 7f8d67fff700 1 -- 192.168.123.102:0/2080976430 --> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f8d54000bf0 con 0x7f8d50077920 2026-03-10T10:23:06.841 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:23:06.841 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T10:23:06.841 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:23:06.841 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:23:06.841 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [ 2026-03-10T10:23:06.841 INFO:teuthology.orchestra.run.vm02.stdout: "mgr", 2026-03-10T10:23:06.841 INFO:teuthology.orchestra.run.vm02.stdout: "mon", 2026-03-10T10:23:06.841 INFO:teuthology.orchestra.run.vm02.stdout: "crash" 2026-03-10T10:23:06.841 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T10:23:06.841 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "7/23 daemons upgraded", 2026-03-10T10:23:06.842 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading osd daemons", 2026-03-10T10:23:06.842 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:23:06.842 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:23:06.842 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.839+0000 7f8d4ffff700 1 -- 192.168.123.102:0/2080976430 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f8d54000bf0 con 0x7f8d50077920 2026-03-10T10:23:06.843 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.842+0000 7f8d4dffb700 1 -- 192.168.123.102:0/2080976430 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8d50077920 msgr2=0x7f8d50079de0 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.844 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.842+0000 7f8d4dffb700 1 --2- 192.168.123.102:0/2080976430 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8d50077920 0x7f8d50079de0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f8d58009cc0 tx=0x7f8d58009480 comp rx=0 tx=0).stop 2026-03-10T10:23:06.844 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.842+0000 7f8d4dffb700 1 -- 192.168.123.102:0/2080976430 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d6810a700 msgr2=0x7f8d68116fc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.844 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.842+0000 7f8d4dffb700 1 --2- 192.168.123.102:0/2080976430 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d6810a700 0x7f8d68116fc0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f8d60000f80 tx=0x7f8d60008ef0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.844 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.843+0000 7f8d4dffb700 1 -- 192.168.123.102:0/2080976430 shutdown_connections 2026-03-10T10:23:06.844 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.843+0000 7f8d4dffb700 1 --2- 192.168.123.102:0/2080976430 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8d68107d90 0x7f8d68116a80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.844 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.843+0000 7f8d4dffb700 1 --2- 192.168.123.102:0/2080976430 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8d50077920 0x7f8d50079de0 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.844 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.843+0000 7f8d4dffb700 1 --2- 
192.168.123.102:0/2080976430 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d6810a700 0x7f8d68116fc0 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.844 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.843+0000 7f8d4dffb700 1 -- 192.168.123.102:0/2080976430 >> 192.168.123.102:0/2080976430 conn(0x7f8d6806daa0 msgr2=0x7f8d6806ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:06.844 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.843+0000 7f8d4dffb700 1 -- 192.168.123.102:0/2080976430 shutdown_connections 2026-03-10T10:23:06.844 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.843+0000 7f8d4dffb700 1 -- 192.168.123.102:0/2080976430 wait complete. 2026-03-10T10:23:06.885 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:06 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[124217]: 2026-03-10T10:23:06.746+0000 7fa79fee9740 -1 Falling back to public interface 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.915+0000 7ffa63fff700 1 -- 192.168.123.102:0/537810306 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffa5c0a4ca0 msgr2=0x7ffa5c0a50c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.915+0000 7ffa63fff700 1 --2- 192.168.123.102:0/537810306 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffa5c0a4ca0 0x7ffa5c0a50c0 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7ffa64066a30 tx=0x7ffa64069a30 comp rx=0 tx=0).stop 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.915+0000 7ffa63fff700 1 -- 192.168.123.102:0/537810306 shutdown_connections 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.915+0000 7ffa63fff700 1 --2- 192.168.123.102:0/537810306 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffa5c0a5de0 0x7ffa5c0a6260 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.915+0000 7ffa63fff700 1 --2- 192.168.123.102:0/537810306 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffa5c0a4ca0 0x7ffa5c0a50c0 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.915+0000 7ffa63fff700 1 -- 192.168.123.102:0/537810306 >> 192.168.123.102:0/537810306 conn(0x7ffa5c0a0160 msgr2=0x7ffa5c0a25c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.916+0000 7ffa63fff700 1 -- 192.168.123.102:0/537810306 shutdown_connections 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.917+0000 7ffa63fff700 1 -- 192.168.123.102:0/537810306 wait complete. 
2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.917+0000 7ffa63fff700 1 Processor -- start 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.917+0000 7ffa63fff700 1 -- start start 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.917+0000 7ffa63fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffa5c0a4ca0 0x7ffa5c0b3e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.917+0000 7ffa63fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffa5c0a5de0 0x7ffa5c0b43d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.917+0000 7ffa63fff700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ffa5c0b4a10 con 0x7ffa5c0a4ca0 2026-03-10T10:23:06.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.917+0000 7ffa63fff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ffa5c0b4b80 con 0x7ffa5c0a5de0 2026-03-10T10:23:06.922 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.918+0000 7ffa627fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffa5c0a5de0 0x7ffa5c0b43d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:06.922 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.919+0000 7ffa627fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffa5c0a5de0 0x7ffa5c0b43d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:41566/0 (socket says 192.168.123.102:41566) 2026-03-10T10:23:06.922 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.919+0000 7ffa627fc700 1 -- 192.168.123.102:0/1695082063 learned_addr learned my addr 192.168.123.102:0/1695082063 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:06.922 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.921+0000 7ffa627fc700 1 -- 192.168.123.102:0/1695082063 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffa5c0a4ca0 msgr2=0x7ffa5c0b3e90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:06.922 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.921+0000 7ffa627fc700 1 --2- 192.168.123.102:0/1695082063 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffa5c0a4ca0 0x7ffa5c0b3e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:06.923 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.921+0000 7ffa627fc700 1 -- 192.168.123.102:0/1695082063 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ffa58009710 con 0x7ffa5c0a5de0 2026-03-10T10:23:06.925 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.922+0000 7ffa627fc700 1 --2- 192.168.123.102:0/1695082063 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffa5c0a5de0 0x7ffa5c0b43d0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7ffa5800ec80 tx=0x7ffa5800c5b0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:06.925 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.923+0000 7ffa4bfff700 1 -- 192.168.123.102:0/1695082063 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ffa5800cd50 con 0x7ffa5c0a5de0 2026-03-10T10:23:06.925 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.923+0000 7ffa63fff700 1 -- 
192.168.123.102:0/1695082063 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ffa64067090 con 0x7ffa5c0a5de0 2026-03-10T10:23:06.925 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.923+0000 7ffa63fff700 1 -- 192.168.123.102:0/1695082063 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ffa5c13fc70 con 0x7ffa5c0a5de0 2026-03-10T10:23:06.925 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.923+0000 7ffa4bfff700 1 -- 192.168.123.102:0/1695082063 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ffa5800ceb0 con 0x7ffa5c0a5de0 2026-03-10T10:23:06.925 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.924+0000 7ffa4bfff700 1 -- 192.168.123.102:0/1695082063 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ffa58018a00 con 0x7ffa5c0a5de0 2026-03-10T10:23:06.925 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.924+0000 7ffa63fff700 1 -- 192.168.123.102:0/1695082063 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ffa50005320 con 0x7ffa5c0a5de0 2026-03-10T10:23:06.926 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.925+0000 7ffa4bfff700 1 -- 192.168.123.102:0/1695082063 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ffa58018b60 con 0x7ffa5c0a5de0 2026-03-10T10:23:06.926 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.925+0000 7ffa4bfff700 1 --2- 192.168.123.102:0/1695082063 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ffa4c0778c0 0x7ffa4c079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:06.926 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.925+0000 7ffa4bfff700 1 -- 
192.168.123.102:0/1695082063 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(54..54 src has 1..54) v4 ==== 6222+0+0 (secure 0 0 0) 0x7ffa58014070 con 0x7ffa5c0a5de0 2026-03-10T10:23:06.926 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.925+0000 7ffa62ffd700 1 --2- 192.168.123.102:0/1695082063 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ffa4c0778c0 0x7ffa4c079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:06.926 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.926+0000 7ffa62ffd700 1 --2- 192.168.123.102:0/1695082063 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ffa4c0778c0 0x7ffa4c079d80 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7ffa64066b20 tx=0x7ffa64066cd0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:06.935 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:06.929+0000 7ffa4bfff700 1 -- 192.168.123.102:0/1695082063 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ffa58062a30 con 0x7ffa5c0a5de0 2026-03-10T10:23:07.120 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.119+0000 7ffa63fff700 1 -- 192.168.123.102:0/1695082063 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7ffa50005190 con 0x7ffa5c0a5de0 2026-03-10T10:23:07.120 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.120+0000 7ffa4bfff700 1 -- 192.168.123.102:0/1695082063 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+95 (secure 0 0 0) 0x7ffa58062180 con 0x7ffa5c0a5de0 2026-03-10T10:23:07.120 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_WARN 
1 osds down 2026-03-10T10:23:07.120 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] OSD_DOWN: 1 osds down 2026-03-10T10:23:07.120 INFO:teuthology.orchestra.run.vm02.stdout: osd.1 (root=default,host=vm02) is down 2026-03-10T10:23:07.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.126+0000 7ffa49ffb700 1 -- 192.168.123.102:0/1695082063 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ffa4c0778c0 msgr2=0x7ffa4c079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:07.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.126+0000 7ffa49ffb700 1 --2- 192.168.123.102:0/1695082063 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ffa4c0778c0 0x7ffa4c079d80 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7ffa64066b20 tx=0x7ffa64066cd0 comp rx=0 tx=0).stop 2026-03-10T10:23:07.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.126+0000 7ffa49ffb700 1 -- 192.168.123.102:0/1695082063 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffa5c0a5de0 msgr2=0x7ffa5c0b43d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:07.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.126+0000 7ffa49ffb700 1 --2- 192.168.123.102:0/1695082063 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffa5c0a5de0 0x7ffa5c0b43d0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7ffa5800ec80 tx=0x7ffa5800c5b0 comp rx=0 tx=0).stop 2026-03-10T10:23:07.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.127+0000 7ffa49ffb700 1 -- 192.168.123.102:0/1695082063 shutdown_connections 2026-03-10T10:23:07.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.127+0000 7ffa49ffb700 1 --2- 192.168.123.102:0/1695082063 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ffa5c0a4ca0 0x7ffa5c0b3e90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T10:23:07.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.127+0000 7ffa49ffb700 1 --2- 192.168.123.102:0/1695082063 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ffa4c0778c0 0x7ffa4c079d80 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:07.127 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.127+0000 7ffa49ffb700 1 --2- 192.168.123.102:0/1695082063 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffa5c0a5de0 0x7ffa5c0b43d0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:07.128 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.127+0000 7ffa49ffb700 1 -- 192.168.123.102:0/1695082063 >> 192.168.123.102:0/1695082063 conn(0x7ffa5c0a0160 msgr2=0x7ffa5c0a9010 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:07.128 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.127+0000 7ffa49ffb700 1 -- 192.168.123.102:0/1695082063 shutdown_connections 2026-03-10T10:23:07.128 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:07.127+0000 7ffa49ffb700 1 -- 192.168.123.102:0/1695082063 wait complete. 
2026-03-10T10:23:07.175 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:07 vm02.local ceph-mon[110129]: from='client.34232 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:23:07.175 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:07 vm02.local ceph-mon[110129]: from='client.34236 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:23:07.175 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:07 vm02.local ceph-mon[110129]: from='client.34240 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:23:07.175 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:07 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/558658425' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:07.175 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:07 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/2700824754' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:23:07.175 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:07.175 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:07.439 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:07 vm05.local ceph-mon[103593]: from='client.34232 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:23:07.439 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:07 vm05.local ceph-mon[103593]: from='client.34236 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:23:07.439 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:07 vm05.local ceph-mon[103593]: from='client.34240 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:23:07.439 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:07 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/558658425' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:07.439 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:07 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/2700824754' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:23:07.439 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:07.439 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:08.371 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:08 vm02.local ceph-mon[110129]: pgmap v88: 65 pgs: 12 stale+active+clean, 53 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 639 B/s rd, 1 op/s; 0 B/s, 3 objects/s recovering 2026-03-10T10:23:08.371 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:08 vm02.local ceph-mon[110129]: from='client.44205 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:23:08.371 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:08 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/1695082063' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:23:08.371 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:08 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:08.371 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:08 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:08.371 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:08 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:08.371 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:08 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:08.410 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:08 vm05.local ceph-mon[103593]: pgmap v88: 65 pgs: 12 stale+active+clean, 53 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 639 B/s rd, 1 op/s; 0 B/s, 3 objects/s recovering 2026-03-10T10:23:08.410 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:08 vm05.local ceph-mon[103593]: from='client.44205 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:23:08.410 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:08 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/1695082063' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:23:08.410 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:08 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:08.410 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:08 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:08.410 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:08 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:08.411 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:08 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: pgmap v89: 65 pgs: 18 active+undersized, 3 stale+active+clean, 11 active+undersized+degraded, 33 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 1 op/s; 27/228 objects degraded (11.842%); 0 B/s, 3 objects/s recovering 2026-03-10T10:23:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth 
get", "entity": "client.admin"}]: dispatch 2026-03-10T10:23:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:10.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T10:23:10.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T10:23:10.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: Upgrade: unsafe to stop osd(s) at this time (12 PGs are or would become offline) 2026-03-10T10:23:10.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:09 vm02.local ceph-mon[110129]: Health check failed: Degraded data redundancy: 27/228 objects degraded (11.842%), 11 pgs degraded (PG_DEGRADED) 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: pgmap v89: 65 pgs: 18 active+undersized, 3 stale+active+clean, 11 active+undersized+degraded, 33 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 1 op/s; 27/228 objects degraded (11.842%); 0 B/s, 3 objects/s recovering 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: Upgrade: unsafe to stop osd(s) at this time (12 PGs are or would become offline) 2026-03-10T10:23:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:09 vm05.local ceph-mon[103593]: Health check failed: Degraded data redundancy: 27/228 objects degraded (11.842%), 11 pgs degraded (PG_DEGRADED) 2026-03-10T10:23:11.029 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:10 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[124217]: 2026-03-10T10:23:10.772+0000 7fa79fee9740 -1 osd.1 0 read_superblock omap replica is missing. 
2026-03-10T10:23:11.529 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:11 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[124217]: 2026-03-10T10:23:11.031+0000 7fa79fee9740 -1 osd.1 52 log_to_monitors true
2026-03-10T10:23:12.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:11 vm02.local ceph-mon[110129]: pgmap v90: 65 pgs: 22 active+undersized, 12 active+undersized+degraded, 31 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 28/228 objects degraded (12.281%)
2026-03-10T10:23:12.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:11 vm02.local ceph-mon[110129]: from='osd.1 [v2:192.168.123.102:6810/3598719675,v1:192.168.123.102:6811/3598719675]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T10:23:12.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:11 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:11 vm05.local ceph-mon[103593]: pgmap v90: 65 pgs: 22 active+undersized, 12 active+undersized+degraded, 31 active+clean; 209 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 28/228 objects degraded (12.281%)
2026-03-10T10:23:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:11 vm05.local ceph-mon[103593]: from='osd.1 [v2:192.168.123.102:6810/3598719675,v1:192.168.123.102:6811/3598719675]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T10:23:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:11 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:13.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:13 vm02.local ceph-mon[110129]: from='osd.1 [v2:192.168.123.102:6810/3598719675,v1:192.168.123.102:6811/3598719675]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T10:23:13.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:13 vm02.local ceph-mon[110129]: osdmap e55: 6 total, 5 up, 6 in
2026-03-10T10:23:13.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:13 vm02.local ceph-mon[110129]: from='osd.1 [v2:192.168.123.102:6810/3598719675,v1:192.168.123.102:6811/3598719675]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T10:23:13.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:13 vm02.local ceph-mon[110129]: from='osd.1 [v2:192.168.123.102:6810/3598719675,v1:192.168.123.102:6811/3598719675]' entity='osd.1'
2026-03-10T10:23:13.280 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:23:12 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[124217]: 2026-03-10T10:23:12.919+0000 7fa797482640 -1 osd.1 52 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T10:23:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:13 vm05.local ceph-mon[103593]: from='osd.1 [v2:192.168.123.102:6810/3598719675,v1:192.168.123.102:6811/3598719675]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T10:23:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:13 vm05.local ceph-mon[103593]: osdmap e55: 6 total, 5 up, 6 in
2026-03-10T10:23:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:13 vm05.local ceph-mon[103593]: from='osd.1 [v2:192.168.123.102:6810/3598719675,v1:192.168.123.102:6811/3598719675]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T10:23:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:13 vm05.local ceph-mon[103593]: from='osd.1 [v2:192.168.123.102:6810/3598719675,v1:192.168.123.102:6811/3598719675]' entity='osd.1'
2026-03-10T10:23:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:14 vm02.local ceph-mon[110129]: pgmap v92: 65 pgs: 22 active+undersized, 12 active+undersized+degraded, 31 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 28/228 objects degraded (12.281%)
2026-03-10T10:23:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:14 vm02.local ceph-mon[110129]: Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T10:23:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:14 vm02.local ceph-mon[110129]: osd.1 [v2:192.168.123.102:6810/3598719675,v1:192.168.123.102:6811/3598719675] boot
2026-03-10T10:23:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:14 vm02.local ceph-mon[110129]: osdmap e56: 6 total, 6 up, 6 in
2026-03-10T10:23:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:14 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:23:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:14 vm05.local ceph-mon[103593]: pgmap v92: 65 pgs: 22 active+undersized, 12 active+undersized+degraded, 31 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 28/228 objects degraded (12.281%)
2026-03-10T10:23:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:14 vm05.local ceph-mon[103593]: Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T10:23:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:14 vm05.local ceph-mon[103593]: osd.1 [v2:192.168.123.102:6810/3598719675,v1:192.168.123.102:6811/3598719675] boot
2026-03-10T10:23:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:14 vm05.local ceph-mon[103593]: osdmap e56: 6 total, 6 up, 6 in
2026-03-10T10:23:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:14 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:23:16.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:15 vm02.local ceph-mon[110129]: pgmap v94: 65 pgs: 22 active+undersized, 12 active+undersized+degraded, 31 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 1 op/s; 28/228 objects degraded (12.281%)
2026-03-10T10:23:16.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:15 vm02.local ceph-mon[110129]: osdmap e57: 6 total, 6 up, 6 in
2026-03-10T10:23:16.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:15 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 28/228 objects degraded (12.281%), 12 pgs degraded (PG_DEGRADED)
2026-03-10T10:23:16.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:15 vm05.local ceph-mon[103593]: pgmap v94: 65 pgs: 22 active+undersized, 12 active+undersized+degraded, 31 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 1 op/s; 28/228 objects degraded (12.281%)
2026-03-10T10:23:16.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:15 vm05.local ceph-mon[103593]: osdmap e57: 6 total, 6 up, 6 in
2026-03-10T10:23:16.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:15 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 28/228 objects degraded (12.281%), 12 pgs degraded (PG_DEGRADED)
2026-03-10T10:23:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:17 vm02.local ceph-mon[110129]: pgmap v96: 65 pgs: 22 active+undersized, 12 active+undersized+degraded, 31 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 28/228 objects degraded (12.281%)
2026-03-10T10:23:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:17 vm05.local ceph-mon[103593]: pgmap v96: 65 pgs: 22 active+undersized, 12 active+undersized+degraded, 31 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 28/228 objects degraded (12.281%)
2026-03-10T10:23:19.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:19 vm02.local ceph-mon[110129]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 28/228 objects degraded (12.281%), 12 pgs degraded)
2026-03-10T10:23:19.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:19 vm02.local ceph-mon[110129]: Cluster is now healthy
2026-03-10T10:23:19.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:19 vm05.local ceph-mon[103593]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 28/228 objects degraded (12.281%), 12 pgs degraded)
2026-03-10T10:23:19.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:19 vm05.local ceph-mon[103593]: Cluster is now healthy
2026-03-10T10:23:20.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:20 vm02.local ceph-mon[110129]: pgmap v97: 65 pgs: 2 active+undersized, 63 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 1.4 KiB/s rd, 2 op/s
2026-03-10T10:23:20.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:20 vm05.local ceph-mon[103593]: pgmap v97: 65 pgs: 2 active+undersized, 63 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 1.4 KiB/s rd, 2 op/s
2026-03-10T10:23:22.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:22 vm02.local ceph-mon[110129]: pgmap v98: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T10:23:22.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:22 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:22.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:22 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:23:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:22 vm05.local ceph-mon[103593]: pgmap v98: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T10:23:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:22 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:22 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:23:24.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:24 vm05.local ceph-mon[103593]: pgmap v99: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 471 B/s rd, 1 op/s
2026-03-10T10:23:24.323 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:24 vm02.local ceph-mon[110129]: pgmap v99: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 471 B/s rd, 1 op/s
2026-03-10T10:23:25.104 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:25 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch
2026-03-10T10:23:25.104 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:25 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch
2026-03-10T10:23:25.104 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:25 vm02.local ceph-mon[110129]: Upgrade: osd.2 is safe to restart
2026-03-10T10:23:25.104 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:25 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:25.104 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:25 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T10:23:25.104 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:25 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:23:25.105 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:25 vm02.local systemd[1]: Stopping Ceph osd.2 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:23:25.530 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:25 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[81491]: 2026-03-10T10:23:25.167+0000 7f586309a700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T10:23:25.530 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:25 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[81491]: 2026-03-10T10:23:25.167+0000 7f586309a700 -1 osd.2 57 *** Got signal Terminated ***
2026-03-10T10:23:25.530 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:25 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[81491]: 2026-03-10T10:23:25.167+0000 7f586309a700 -1 osd.2 57 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T10:23:25.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:25 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch
2026-03-10T10:23:25.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:25 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch
2026-03-10T10:23:25.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:25 vm05.local ceph-mon[103593]: Upgrade: osd.2 is safe to restart
2026-03-10T10:23:25.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:25 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:25.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:25 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T10:23:25.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:25 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:23:26.295 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129026]: 2026-03-10 10:23:26.114810345 +0000 UTC m=+0.959601871 container died 567f579c058e97b20ad1c196e3ee15084de8ca9d56af0f5e0ce49932fe00713c (image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2, GIT_REPO=https://github.com/ceph/ceph-container.git, org.label-schema.schema-version=1.0, CEPH_POINT_RELEASE=-18.2.1, io.buildah.version=1.29.1, org.label-schema.build-date=20240222, org.label-schema.license=GPLv2, maintainer=Guillaume Abrioux , org.label-schema.vendor=CentOS, GIT_CLEAN=True, RELEASE=HEAD, ceph=True, org.label-schema.name=CentOS Stream 8 Base Image, GIT_BRANCH=HEAD, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd)
2026-03-10T10:23:26.295 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129026]: 2026-03-10 10:23:26.146881816 +0000 UTC m=+0.991673342 container remove 567f579c058e97b20ad1c196e3ee15084de8ca9d56af0f5e0ce49932fe00713c (image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GIT_CLEAN=True, RELEASE=HEAD, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.29.1, org.label-schema.build-date=20240222, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.vendor=CentOS, CEPH_POINT_RELEASE=-18.2.1, GIT_BRANCH=HEAD, ceph=True, maintainer=Guillaume Abrioux )
2026-03-10T10:23:26.295 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local bash[129026]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2
2026-03-10T10:23:26.295 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:26 vm02.local ceph-mon[110129]: Upgrade: Updating osd.2
2026-03-10T10:23:26.295 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:26 vm02.local ceph-mon[110129]: Deploying daemon osd.2 on vm02
2026-03-10T10:23:26.295 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:26 vm02.local ceph-mon[110129]: pgmap v100: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 921 B/s rd, 1 op/s
2026-03-10T10:23:26.295 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:26 vm02.local ceph-mon[110129]: osd.2 marked itself down and dead
2026-03-10T10:23:26.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:26 vm05.local ceph-mon[103593]: Upgrade: Updating osd.2
2026-03-10T10:23:26.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:26 vm05.local ceph-mon[103593]: Deploying daemon osd.2 on vm02
2026-03-10T10:23:26.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:26 vm05.local ceph-mon[103593]: pgmap v100: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 921 B/s rd, 1 op/s
2026-03-10T10:23:26.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:26 vm05.local ceph-mon[103593]: osd.2 marked itself down and dead
2026-03-10T10:23:26.572 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129095]: 2026-03-10 10:23:26.295164573 +0000 UTC m=+0.017208632 container create d176bccd09f4f5d471bd73c328c5087cf30011a3d5cc20153a80c6d356108bbb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-deactivate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T10:23:26.573 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129095]: 2026-03-10 10:23:26.335490647 +0000 UTC m=+0.057534706 container init d176bccd09f4f5d471bd73c328c5087cf30011a3d5cc20153a80c6d356108bbb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-deactivate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
2026-03-10T10:23:26.573 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129095]: 2026-03-10 10:23:26.338916719 +0000 UTC m=+0.060960778 container start d176bccd09f4f5d471bd73c328c5087cf30011a3d5cc20153a80c6d356108bbb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-deactivate, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T10:23:26.573 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129095]: 2026-03-10 10:23:26.34008707 +0000 UTC m=+0.062131129 container attach d176bccd09f4f5d471bd73c328c5087cf30011a3d5cc20153a80c6d356108bbb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-deactivate, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223)
2026-03-10T10:23:26.573 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129095]: 2026-03-10 10:23:26.288290356 +0000 UTC m=+0.010334416 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T10:23:26.573 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129095]: 2026-03-10 10:23:26.466273015 +0000 UTC m=+0.188317065 container died d176bccd09f4f5d471bd73c328c5087cf30011a3d5cc20153a80c6d356108bbb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
2026-03-10T10:23:26.573 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129095]: 2026-03-10 10:23:26.484105284 +0000 UTC m=+0.206149343 container remove d176bccd09f4f5d471bd73c328c5087cf30011a3d5cc20153a80c6d356108bbb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
2026-03-10T10:23:26.573 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.2.service: Deactivated successfully.
2026-03-10T10:23:26.573 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local systemd[1]: Stopped Ceph osd.2 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.
2026-03-10T10:23:26.573 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.2.service: Consumed 41.809s CPU time.
2026-03-10T10:23:27.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local systemd[1]: Starting Ceph osd.2 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:23:27.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129198]: 2026-03-10 10:23:26.786226526 +0000 UTC m=+0.020070659 container create 63c425f660649619bb94e6604a1277fbee75890285acbde30ddf66f2d2cbdbf4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T10:23:27.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129198]: 2026-03-10 10:23:26.825435208 +0000 UTC m=+0.059279341 container init 63c425f660649619bb94e6604a1277fbee75890285acbde30ddf66f2d2cbdbf4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
2026-03-10T10:23:27.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129198]: 2026-03-10 10:23:26.828088134 +0000 UTC m=+0.061932267 container start 63c425f660649619bb94e6604a1277fbee75890285acbde30ddf66f2d2cbdbf4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T10:23:27.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129198]: 2026-03-10 10:23:26.828921904 +0000 UTC m=+0.062766037 container attach 63c425f660649619bb94e6604a1277fbee75890285acbde30ddf66f2d2cbdbf4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team )
2026-03-10T10:23:27.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local podman[129198]: 2026-03-10 10:23:26.778499173 +0000 UTC m=+0.012343306 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T10:23:27.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate[129208]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:27.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local bash[129198]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:27.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate[129208]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:27.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:26 vm02.local bash[129198]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:27.372 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-mon[110129]: Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T10:23:27.372 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-mon[110129]: osdmap e58: 6 total, 5 up, 6 in
2026-03-10T10:23:27.372 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:27.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:27 vm05.local ceph-mon[103593]: Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T10:23:27.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:27 vm05.local ceph-mon[103593]: osdmap e58: 6 total, 5 up, 6 in
2026-03-10T10:23:27.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:27 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:27.717 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate[129208]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T10:23:27.718 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate[129208]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:27.718 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local bash[129198]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T10:23:27.718 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local bash[129198]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:27.718 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate[129208]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:27.718 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local bash[129198]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:27.718 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate[129208]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-10T10:23:27.718 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local bash[129198]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-10T10:23:27.718 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate[129208]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-92e2ebbd-6cf7-4382-984b-cd14147b1c6c/osd-block-1ccdc548-a0cb-41e0-bc7a-21b41198ffea --path /var/lib/ceph/osd/ceph-2 --no-mon-config
2026-03-10T10:23:27.718 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local bash[129198]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-92e2ebbd-6cf7-4382-984b-cd14147b1c6c/osd-block-1ccdc548-a0cb-41e0-bc7a-21b41198ffea --path /var/lib/ceph/osd/ceph-2 --no-mon-config
2026-03-10T10:23:27.718 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate[129208]: Running command: /usr/bin/ln -snf /dev/ceph-92e2ebbd-6cf7-4382-984b-cd14147b1c6c/osd-block-1ccdc548-a0cb-41e0-bc7a-21b41198ffea /var/lib/ceph/osd/ceph-2/block
2026-03-10T10:23:27.718 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local bash[129198]: Running command: /usr/bin/ln -snf /dev/ceph-92e2ebbd-6cf7-4382-984b-cd14147b1c6c/osd-block-1ccdc548-a0cb-41e0-bc7a-21b41198ffea /var/lib/ceph/osd/ceph-2/block
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate[129208]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local bash[129198]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate[129208]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local bash[129198]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate[129208]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local bash[129198]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate[129208]: --> ceph-volume lvm activate successful for osd ID: 2
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local bash[129198]: --> ceph-volume lvm activate successful for osd ID: 2
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local podman[129198]: 2026-03-10 10:23:27.74596345 +0000 UTC m=+0.979807583 container died 63c425f660649619bb94e6604a1277fbee75890285acbde30ddf66f2d2cbdbf4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True)
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local podman[129198]: 2026-03-10 10:23:27.764139983 +0000 UTC m=+0.997984116 container remove 63c425f660649619bb94e6604a1277fbee75890285acbde30ddf66f2d2cbdbf4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local podman[129466]: 2026-03-10 10:23:27.854765717 +0000 UTC m=+0.015441495 container create 745b9931485f3ac4dcf7a8b986a17d05af607019958af3969fd02bfe351b3fc4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local podman[129466]: 2026-03-10 10:23:27.899541759 +0000 UTC m=+0.060217527 container init 745b9931485f3ac4dcf7a8b986a17d05af607019958af3969fd02bfe351b3fc4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local podman[129466]: 2026-03-10 10:23:27.902414406 +0000 UTC m=+0.063090184 container start 745b9931485f3ac4dcf7a8b986a17d05af607019958af3969fd02bfe351b3fc4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local bash[129466]: 745b9931485f3ac4dcf7a8b986a17d05af607019958af3969fd02bfe351b3fc4 2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local podman[129466]: 2026-03-10 10:23:27.848611557 +0000 UTC m=+0.009287335 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:23:28.030 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:27 vm02.local systemd[1]: Started Ceph osd.2 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 2026-03-10T10:23:28.432 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:28 vm02.local ceph-mon[110129]: pgmap v102: 65 pgs: 9 stale+active+clean, 56 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 921 B/s rd, 1 op/s 2026-03-10T10:23:28.432 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:28 vm02.local ceph-mon[110129]: osdmap e59: 6 total, 5 up, 6 in 2026-03-10T10:23:28.432 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:28.432 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:28.432 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:28.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:28 vm05.local ceph-mon[103593]: pgmap v102: 65 pgs: 9 stale+active+clean, 56 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 921 B/s rd, 1 op/s 2026-03-10T10:23:28.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:28 vm05.local 
ceph-mon[103593]: osdmap e59: 6 total, 5 up, 6 in 2026-03-10T10:23:28.518 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:28.518 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:28.518 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:29.029 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:28 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[129476]: 2026-03-10T10:23:28.748+0000 7fe1a6920740 -1 Falling back to public interface 2026-03-10T10:23:29.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:29 vm02.local ceph-mon[110129]: Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY) 2026-03-10T10:23:29.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:29 vm02.local ceph-mon[110129]: Health check failed: Degraded data redundancy: 10/228 objects degraded (4.386%), 5 pgs degraded (PG_DEGRADED) 2026-03-10T10:23:29.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:29 vm05.local ceph-mon[103593]: Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY) 2026-03-10T10:23:29.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:29 vm05.local ceph-mon[103593]: Health check failed: Degraded data redundancy: 10/228 objects degraded (4.386%), 5 pgs degraded (PG_DEGRADED) 2026-03-10T10:23:30.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:30 vm02.local ceph-mon[110129]: pgmap v104: 65 pgs: 7 active+undersized, 7 peering, 3 stale+active+clean, 5 active+undersized+degraded, 43 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 1.1 KiB/s 
rd, 2 op/s; 10/228 objects degraded (4.386%) 2026-03-10T10:23:30.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:30 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:30.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:30 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:30.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:30 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:30.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:30 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:30.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:30 vm05.local ceph-mon[103593]: pgmap v104: 65 pgs: 7 active+undersized, 7 peering, 3 stale+active+clean, 5 active+undersized+degraded, 43 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 1.1 KiB/s rd, 2 op/s; 10/228 objects degraded (4.386%) 2026-03-10T10:23:30.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:30 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:30.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:30 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:30.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:30 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:30.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:30 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: pgmap v105: 65 pgs: 11 active+undersized, 7 peering, 9 
active+undersized+degraded, 38 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 18/228 objects degraded (7.895%) 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-10T10:23:32.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:31 vm02.local ceph-mon[110129]: Upgrade: unsafe to stop osd(s) at this time (11 PGs are or would become offline) 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 vm05.local ceph-mon[103593]: pgmap v105: 65 pgs: 11 active+undersized, 7 peering, 9 active+undersized+degraded, 38 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 18/228 objects degraded (7.895%) 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 
vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-10T10:23:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:31 vm05.local ceph-mon[103593]: Upgrade: unsafe to stop osd(s) at this time (11 PGs are or would become offline) 2026-03-10T10:23:32.529 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:32 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[129476]: 2026-03-10T10:23:32.177+0000 7fe1a6920740 -1 osd.2 0 read_superblock omap replica is missing. 
2026-03-10T10:23:32.529 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:32 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[129476]: 2026-03-10T10:23:32.398+0000 7fe1a6920740 -1 osd.2 57 log_to_monitors true 2026-03-10T10:23:33.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:32 vm02.local ceph-mon[110129]: from='osd.2 [v2:192.168.123.102:6818/2713277313,v1:192.168.123.102:6819/2713277313]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T10:23:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:32 vm05.local ceph-mon[103593]: from='osd.2 [v2:192.168.123.102:6818/2713277313,v1:192.168.123.102:6819/2713277313]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T10:23:34.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:33 vm02.local ceph-mon[110129]: pgmap v106: 65 pgs: 15 active+undersized, 12 active+undersized+degraded, 38 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 29/228 objects degraded (12.719%) 2026-03-10T10:23:34.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:33 vm02.local ceph-mon[110129]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 4 pgs peering) 2026-03-10T10:23:34.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:33 vm02.local ceph-mon[110129]: from='osd.2 [v2:192.168.123.102:6818/2713277313,v1:192.168.123.102:6819/2713277313]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T10:23:34.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:33 vm02.local ceph-mon[110129]: osdmap e60: 6 total, 5 up, 6 in 2026-03-10T10:23:34.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:33 vm02.local ceph-mon[110129]: from='osd.2 [v2:192.168.123.102:6818/2713277313,v1:192.168.123.102:6819/2713277313]' entity='osd.2' cmd=[{"prefix": "osd 
crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T10:23:34.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:33 vm02.local ceph-mon[110129]: from='osd.2 [v2:192.168.123.102:6818/2713277313,v1:192.168.123.102:6819/2713277313]' entity='osd.2' 2026-03-10T10:23:34.279 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:23:33 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[129476]: 2026-03-10T10:23:33.819+0000 7fe19deb9640 -1 osd.2 57 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T10:23:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:33 vm05.local ceph-mon[103593]: pgmap v106: 65 pgs: 15 active+undersized, 12 active+undersized+degraded, 38 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 29/228 objects degraded (12.719%) 2026-03-10T10:23:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:33 vm05.local ceph-mon[103593]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 4 pgs peering) 2026-03-10T10:23:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:33 vm05.local ceph-mon[103593]: from='osd.2 [v2:192.168.123.102:6818/2713277313,v1:192.168.123.102:6819/2713277313]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T10:23:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:33 vm05.local ceph-mon[103593]: osdmap e60: 6 total, 5 up, 6 in 2026-03-10T10:23:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:33 vm05.local ceph-mon[103593]: from='osd.2 [v2:192.168.123.102:6818/2713277313,v1:192.168.123.102:6819/2713277313]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T10:23:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:33 vm05.local ceph-mon[103593]: 
from='osd.2 [v2:192.168.123.102:6818/2713277313,v1:192.168.123.102:6819/2713277313]' entity='osd.2' 2026-03-10T10:23:35.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:34 vm02.local ceph-mon[110129]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T10:23:35.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:34 vm02.local ceph-mon[110129]: osd.2 [v2:192.168.123.102:6818/2713277313,v1:192.168.123.102:6819/2713277313] boot 2026-03-10T10:23:35.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:34 vm02.local ceph-mon[110129]: osdmap e61: 6 total, 6 up, 6 in 2026-03-10T10:23:35.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:23:35.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:34 vm05.local ceph-mon[103593]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T10:23:35.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:34 vm05.local ceph-mon[103593]: osd.2 [v2:192.168.123.102:6818/2713277313,v1:192.168.123.102:6819/2713277313] boot 2026-03-10T10:23:35.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:34 vm05.local ceph-mon[103593]: osdmap e61: 6 total, 6 up, 6 in 2026-03-10T10:23:35.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:23:36.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:35 vm02.local ceph-mon[110129]: pgmap v108: 65 pgs: 15 active+undersized, 12 active+undersized+degraded, 38 active+clean; 209 MiB data, 2.3 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 1 op/s; 29/228 objects degraded (12.719%) 2026-03-10T10:23:36.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:35 vm02.local ceph-mon[110129]: Health check update: 
Degraded data redundancy: 29/228 objects degraded (12.719%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T10:23:36.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:35 vm02.local ceph-mon[110129]: osdmap e62: 6 total, 6 up, 6 in 2026-03-10T10:23:36.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:35 vm05.local ceph-mon[103593]: pgmap v108: 65 pgs: 15 active+undersized, 12 active+undersized+degraded, 38 active+clean; 209 MiB data, 2.3 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 1 op/s; 29/228 objects degraded (12.719%) 2026-03-10T10:23:36.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:35 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 29/228 objects degraded (12.719%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T10:23:36.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:35 vm05.local ceph-mon[103593]: osdmap e62: 6 total, 6 up, 6 in 2026-03-10T10:23:37.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.196+0000 7f6567010700 1 -- 192.168.123.102:0/2440391356 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6560104320 msgr2=0x7f6560104780 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.197 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.196+0000 7f6567010700 1 --2- 192.168.123.102:0/2440391356 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6560104320 0x7f6560104780 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f6550009b50 tx=0x7f6550009e60 comp rx=0 tx=0).stop 2026-03-10T10:23:37.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.197+0000 7f6567010700 1 -- 192.168.123.102:0/2440391356 shutdown_connections 2026-03-10T10:23:37.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.197+0000 7f6567010700 1 --2- 192.168.123.102:0/2440391356 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6560104320 0x7f6560104780 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 
tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.197+0000 7f6567010700 1 --2- 192.168.123.102:0/2440391356 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6560103120 0x7f6560103540 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.197+0000 7f6567010700 1 -- 192.168.123.102:0/2440391356 >> 192.168.123.102:0/2440391356 conn(0x7f65600fe6c0 msgr2=0x7f6560100b00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:37.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.198+0000 7f6567010700 1 -- 192.168.123.102:0/2440391356 shutdown_connections 2026-03-10T10:23:37.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.198+0000 7f6567010700 1 -- 192.168.123.102:0/2440391356 wait complete. 2026-03-10T10:23:37.198 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.198+0000 7f6567010700 1 Processor -- start 2026-03-10T10:23:37.199 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.198+0000 7f6567010700 1 -- start start 2026-03-10T10:23:37.199 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.199+0000 7f6567010700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6560103120 0x7f65601989a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:37.199 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.199+0000 7f6567010700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6560104320 0x7f6560198ee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:37.199 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.199+0000 7f6567010700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6560199500 con 0x7f6560103120 
2026-03-10T10:23:37.199 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.199+0000 7f6567010700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6560199640 con 0x7f6560104320 2026-03-10T10:23:37.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.199+0000 7f655ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6560104320 0x7f6560198ee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:37.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.199+0000 7f655ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6560104320 0x7f6560198ee0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:48724/0 (socket says 192.168.123.102:48724) 2026-03-10T10:23:37.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.199+0000 7f6564dac700 1 --2- 192.168.123.102:0/2740835124 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6560103120 0x7f65601989a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:37.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.199+0000 7f655ffff700 1 -- 192.168.123.102:0/2740835124 learned_addr learned my addr 192.168.123.102:0/2740835124 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:37.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.200+0000 7f6564dac700 1 -- 192.168.123.102:0/2740835124 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6560104320 msgr2=0x7f6560198ee0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.200+0000 
7f6564dac700 1 --2- 192.168.123.102:0/2740835124 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6560104320 0x7f6560198ee0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.200+0000 7f6564dac700 1 -- 192.168.123.102:0/2740835124 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f65500097e0 con 0x7f6560103120 2026-03-10T10:23:37.200 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.200+0000 7f655ffff700 1 --2- 192.168.123.102:0/2740835124 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6560104320 0x7f6560198ee0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-10T10:23:37.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.200+0000 7f6564dac700 1 --2- 192.168.123.102:0/2740835124 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6560103120 0x7f65601989a0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f655400ea80 tx=0x7f655400ee40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:37.201 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.200+0000 7f655dffb700 1 -- 192.168.123.102:0/2740835124 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f655400cbe0 con 0x7f6560103120 2026-03-10T10:23:37.202 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.200+0000 7f655dffb700 1 -- 192.168.123.102:0/2740835124 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f6554004500 con 0x7f6560103120 2026-03-10T10:23:37.202 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.200+0000 7f655dffb700 1 -- 192.168.123.102:0/2740835124 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map 
magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6554010430 con 0x7f6560103120 2026-03-10T10:23:37.202 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.201+0000 7f6567010700 1 -- 192.168.123.102:0/2740835124 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f656019e0f0 con 0x7f6560103120 2026-03-10T10:23:37.202 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.201+0000 7f6567010700 1 -- 192.168.123.102:0/2740835124 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f656019e640 con 0x7f6560103120 2026-03-10T10:23:37.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.202+0000 7f655dffb700 1 -- 192.168.123.102:0/2740835124 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f6554010590 con 0x7f6560103120 2026-03-10T10:23:37.203 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.202+0000 7f6567010700 1 -- 192.168.123.102:0/2740835124 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6560066e80 con 0x7f6560103120 2026-03-10T10:23:37.205 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.205+0000 7f655dffb700 1 --2- 192.168.123.102:0/2740835124 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f6548077910 0x7f6548079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:37.205 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.205+0000 7f655dffb700 1 -- 192.168.123.102:0/2740835124 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(62..62 src has 1..62) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f6554014070 con 0x7f6560103120 2026-03-10T10:23:37.205 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.205+0000 7f655dffb700 1 -- 192.168.123.102:0/2740835124 <== mon.0 v2:192.168.123.102:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f6554063720 con 0x7f6560103120 2026-03-10T10:23:37.206 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.205+0000 7f655ffff700 1 --2- 192.168.123.102:0/2740835124 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f6548077910 0x7f6548079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:37.206 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.206+0000 7f655ffff700 1 --2- 192.168.123.102:0/2740835124 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f6548077910 0x7f6548079dd0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f6550006010 tx=0x7f655000b540 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:37.341 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.340+0000 7f6567010700 1 -- 192.168.123.102:0/2740835124 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f656019e920 con 0x7f6548077910 2026-03-10T10:23:37.342 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.342+0000 7f655dffb700 1 -- 192.168.123.102:0/2740835124 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f656019e920 con 0x7f6548077910 2026-03-10T10:23:37.344 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.344+0000 7f6567010700 1 -- 192.168.123.102:0/2740835124 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f6548077910 msgr2=0x7f6548079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.345 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.344+0000 
7f6567010700 1 --2- 192.168.123.102:0/2740835124 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f6548077910 0x7f6548079dd0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f6550006010 tx=0x7f655000b540 comp rx=0 tx=0).stop 2026-03-10T10:23:37.345 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.345+0000 7f6567010700 1 -- 192.168.123.102:0/2740835124 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6560103120 msgr2=0x7f65601989a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.345 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.345+0000 7f6567010700 1 --2- 192.168.123.102:0/2740835124 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6560103120 0x7f65601989a0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f655400ea80 tx=0x7f655400ee40 comp rx=0 tx=0).stop 2026-03-10T10:23:37.345 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.345+0000 7f6567010700 1 -- 192.168.123.102:0/2740835124 shutdown_connections 2026-03-10T10:23:37.345 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.345+0000 7f6567010700 1 --2- 192.168.123.102:0/2740835124 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6560103120 0x7f65601989a0 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.345 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.345+0000 7f6567010700 1 --2- 192.168.123.102:0/2740835124 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f6548077910 0x7f6548079dd0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.345 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.345+0000 7f6567010700 1 --2- 192.168.123.102:0/2740835124 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6560104320 0x7f6560198ee0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.346 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.345+0000 7f6567010700 1 -- 192.168.123.102:0/2740835124 >> 192.168.123.102:0/2740835124 conn(0x7f65600fe6c0 msgr2=0x7f6560107550 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:37.346 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.346+0000 7f6567010700 1 -- 192.168.123.102:0/2740835124 shutdown_connections 2026-03-10T10:23:37.346 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.346+0000 7f6567010700 1 -- 192.168.123.102:0/2740835124 wait complete. 2026-03-10T10:23:37.355 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:23:37.412 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.412+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/2127130978 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbc28101a90 msgr2=0x7fbc28103e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.412 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.412+0000 7fbc2d1e5700 1 --2- 192.168.123.102:0/2127130978 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbc28101a90 0x7fbc28103e80 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fbc10009b00 tx=0x7fbc10009e10 comp rx=0 tx=0).stop 2026-03-10T10:23:37.412 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.412+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/2127130978 shutdown_connections 2026-03-10T10:23:37.412 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.412+0000 7fbc2d1e5700 1 --2- 192.168.123.102:0/2127130978 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc281043c0 0x7fbc281067b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.412 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.412+0000 7fbc2d1e5700 1 --2- 192.168.123.102:0/2127130978 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbc28101a90 0x7fbc28103e80 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.412 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.412+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/2127130978 >> 192.168.123.102:0/2127130978 conn(0x7fbc280fb3c0 msgr2=0x7fbc280fd840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:37.413 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.412+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/2127130978 shutdown_connections 2026-03-10T10:23:37.413 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.412+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/2127130978 wait complete. 2026-03-10T10:23:37.413 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.413+0000 7fbc2d1e5700 1 Processor -- start 2026-03-10T10:23:37.413 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.413+0000 7fbc2d1e5700 1 -- start start 2026-03-10T10:23:37.413 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.413+0000 7fbc2d1e5700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbc281043c0 0x7fbc28198c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:37.414 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.413+0000 7fbc2d1e5700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc281991a0 0x7fbc2819e200 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:37.414 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.413+0000 7fbc2d1e5700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbc28199620 con 0x7fbc281043c0 2026-03-10T10:23:37.414 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.413+0000 7fbc2d1e5700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7fbc28199790 con 0x7fbc281991a0 2026-03-10T10:23:37.414 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.414+0000 7fbc2659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc281991a0 0x7fbc2819e200 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:37.414 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.414+0000 7fbc2659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc281991a0 0x7fbc2819e200 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:48732/0 (socket says 192.168.123.102:48732) 2026-03-10T10:23:37.414 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.414+0000 7fbc2659c700 1 -- 192.168.123.102:0/3363608537 learned_addr learned my addr 192.168.123.102:0/3363608537 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:37.414 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.414+0000 7fbc2659c700 1 -- 192.168.123.102:0/3363608537 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbc281043c0 msgr2=0x7fbc28198c60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.414 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.414+0000 7fbc26d9d700 1 --2- 192.168.123.102:0/3363608537 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbc281043c0 0x7fbc28198c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:37.414 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.414+0000 7fbc2659c700 1 --2- 192.168.123.102:0/3363608537 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbc281043c0 0x7fbc28198c60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.415 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.414+0000 7fbc2659c700 1 -- 192.168.123.102:0/3363608537 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbc100097e0 con 0x7fbc281991a0 2026-03-10T10:23:37.415 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.414+0000 7fbc26d9d700 1 --2- 192.168.123.102:0/3363608537 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbc281043c0 0x7fbc28198c60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:23:37.415 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.414+0000 7fbc2659c700 1 --2- 192.168.123.102:0/3363608537 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc281991a0 0x7fbc2819e200 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fbc1800c930 tx=0x7fbc1800ccf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:37.415 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.415+0000 7fbc1ffff700 1 -- 192.168.123.102:0/3363608537 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbc18007ab0 con 0x7fbc281991a0 2026-03-10T10:23:37.415 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.415+0000 7fbc1ffff700 1 -- 192.168.123.102:0/3363608537 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fbc18007c10 con 0x7fbc281991a0 2026-03-10T10:23:37.415 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.415+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/3363608537 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbc2819e740 con 0x7fbc281991a0 2026-03-10T10:23:37.416 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.415+0000 7fbc1ffff700 1 -- 
192.168.123.102:0/3363608537 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbc18018730 con 0x7fbc281991a0 2026-03-10T10:23:37.416 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.415+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/3363608537 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbc2819ec40 con 0x7fbc281991a0 2026-03-10T10:23:37.416 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.416+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/3363608537 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbc28066e80 con 0x7fbc281991a0 2026-03-10T10:23:37.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.417+0000 7fbc1ffff700 1 -- 192.168.123.102:0/3363608537 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fbc1801f030 con 0x7fbc281991a0 2026-03-10T10:23:37.418 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.417+0000 7fbc1ffff700 1 --2- 192.168.123.102:0/3363608537 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbc14077870 0x7fbc14079d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:37.418 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.417+0000 7fbc1ffff700 1 -- 192.168.123.102:0/3363608537 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(62..62 src has 1..62) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fbc18099f40 con 0x7fbc281991a0 2026-03-10T10:23:37.418 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.418+0000 7fbc26d9d700 1 --2- 192.168.123.102:0/3363608537 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbc14077870 0x7fbc14079d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:23:37.418 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.418+0000 7fbc26d9d700 1 --2- 192.168.123.102:0/3363608537 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbc14077870 0x7fbc14079d30 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7fbc100052d0 tx=0x7fbc10005fd0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:37.420 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.420+0000 7fbc1ffff700 1 -- 192.168.123.102:0/3363608537 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fbc180627a0 con 0x7fbc281991a0 2026-03-10T10:23:37.557 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.557+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/3363608537 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fbc2819ef20 con 0x7fbc14077870 2026-03-10T10:23:37.558 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.558+0000 7fbc1ffff700 1 -- 192.168.123.102:0/3363608537 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7fbc2819ef20 con 0x7fbc14077870 2026-03-10T10:23:37.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.560+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/3363608537 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbc14077870 msgr2=0x7fbc14079d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.560+0000 7fbc2d1e5700 1 --2- 192.168.123.102:0/3363608537 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbc14077870 0x7fbc14079d30 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto 
rx=0x7fbc100052d0 tx=0x7fbc10005fd0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.560+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/3363608537 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc281991a0 msgr2=0x7fbc2819e200 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.560+0000 7fbc2d1e5700 1 --2- 192.168.123.102:0/3363608537 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc281991a0 0x7fbc2819e200 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fbc1800c930 tx=0x7fbc1800ccf0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.560+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/3363608537 shutdown_connections 2026-03-10T10:23:37.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.560+0000 7fbc2d1e5700 1 --2- 192.168.123.102:0/3363608537 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbc281043c0 0x7fbc28198c60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.560+0000 7fbc2d1e5700 1 --2- 192.168.123.102:0/3363608537 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbc14077870 0x7fbc14079d30 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.561+0000 7fbc2d1e5700 1 --2- 192.168.123.102:0/3363608537 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc281991a0 0x7fbc2819e200 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.561+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/3363608537 >> 
192.168.123.102:0/3363608537 conn(0x7fbc280fb3c0 msgr2=0x7fbc280ff880 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:37.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.561+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/3363608537 shutdown_connections 2026-03-10T10:23:37.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.561+0000 7fbc2d1e5700 1 -- 192.168.123.102:0/3363608537 wait complete. 2026-03-10T10:23:37.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.630+0000 7f1cc4992700 1 -- 192.168.123.102:0/1718230243 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1cc0107d90 msgr2=0x7f1cc010a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.630+0000 7f1cc4992700 1 --2- 192.168.123.102:0/1718230243 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1cc0107d90 0x7f1cc010a1c0 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f1cb001acd0 tx=0x7f1cb001c270 comp rx=0 tx=0).stop 2026-03-10T10:23:37.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.630+0000 7f1cc4992700 1 -- 192.168.123.102:0/1718230243 shutdown_connections 2026-03-10T10:23:37.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.630+0000 7f1cc4992700 1 --2- 192.168.123.102:0/1718230243 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1cc010a700 0x7f1cc010cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.630+0000 7f1cc4992700 1 --2- 192.168.123.102:0/1718230243 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1cc0107d90 0x7f1cc010a1c0 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.630+0000 7f1cc4992700 1 -- 
192.168.123.102:0/1718230243 >> 192.168.123.102:0/1718230243 conn(0x7f1cc006daa0 msgr2=0x7f1cc006ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:37.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.630+0000 7f1cc4992700 1 -- 192.168.123.102:0/1718230243 shutdown_connections 2026-03-10T10:23:37.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.630+0000 7f1cc4992700 1 -- 192.168.123.102:0/1718230243 wait complete. 2026-03-10T10:23:37.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.630+0000 7f1cc4992700 1 Processor -- start 2026-03-10T10:23:37.631 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.630+0000 7f1cc4992700 1 -- start start 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.631+0000 7f1cc4992700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1cc010a700 0x7f1cc019cb20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.631+0000 7f1cc4992700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1cc019d060 0x7f1cc01a20d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.631+0000 7f1cc4992700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1cc019d570 con 0x7f1cc010a700 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.631+0000 7f1cc4992700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1cc019d6e0 con 0x7f1cc019d060 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.631+0000 7f1cbe59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1cc019d060 0x7f1cc01a20d0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.631+0000 7f1cbe59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1cc019d060 0x7f1cc01a20d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:48756/0 (socket says 192.168.123.102:48756) 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.631+0000 7f1cbe59c700 1 -- 192.168.123.102:0/1693915196 learned_addr learned my addr 192.168.123.102:0/1693915196 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.631+0000 7f1cbed9d700 1 --2- 192.168.123.102:0/1693915196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1cc010a700 0x7f1cc019cb20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.631+0000 7f1cbe59c700 1 -- 192.168.123.102:0/1693915196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1cc010a700 msgr2=0x7f1cc019cb20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.631+0000 7f1cbe59c700 1 --2- 192.168.123.102:0/1693915196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1cc010a700 0x7f1cc019cb20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.631+0000 7f1cbe59c700 1 -- 192.168.123.102:0/1693915196 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1cb001a720 con 0x7f1cc019d060 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.631+0000 7f1cbed9d700 1 --2- 192.168.123.102:0/1693915196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1cc010a700 0x7f1cc019cb20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.632+0000 7f1cbe59c700 1 --2- 192.168.123.102:0/1693915196 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1cc019d060 0x7f1cc01a20d0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f1cb800e3f0 tx=0x7f1cb800e700 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.632+0000 7f1ca7fff700 1 -- 192.168.123.102:0/1693915196 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1cb800eea0 con 0x7f1cc019d060 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.632+0000 7f1ca7fff700 1 -- 192.168.123.102:0/1693915196 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1cb800f040 con 0x7f1cc019d060 2026-03-10T10:23:37.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.632+0000 7f1ca7fff700 1 -- 192.168.123.102:0/1693915196 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1cb8014690 con 0x7f1cc019d060 2026-03-10T10:23:37.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.632+0000 7f1cc4992700 1 -- 192.168.123.102:0/1693915196 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1cc01a2610 con 0x7f1cc019d060 2026-03-10T10:23:37.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.632+0000 
7f1cc4992700 1 -- 192.168.123.102:0/1693915196 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1cc01a2b60 con 0x7f1cc019d060 2026-03-10T10:23:37.634 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.633+0000 7f1cc4992700 1 -- 192.168.123.102:0/1693915196 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1cc0196d50 con 0x7f1cc019d060 2026-03-10T10:23:37.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.635+0000 7f1ca7fff700 1 -- 192.168.123.102:0/1693915196 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f1cb8007950 con 0x7f1cc019d060 2026-03-10T10:23:37.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.635+0000 7f1ca7fff700 1 --2- 192.168.123.102:0/1693915196 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1ca80779e0 0x7f1ca8079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:37.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.635+0000 7f1ca7fff700 1 -- 192.168.123.102:0/1693915196 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(62..62 src has 1..62) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f1cb809a150 con 0x7f1cc019d060 2026-03-10T10:23:37.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.635+0000 7f1cbed9d700 1 --2- 192.168.123.102:0/1693915196 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1ca80779e0 0x7f1ca8079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:37.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.636+0000 7f1cbed9d700 1 --2- 192.168.123.102:0/1693915196 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1ca80779e0 
0x7f1ca8079ea0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f1cb001c640 tx=0x7f1cb00058e0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:37.637 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.636+0000 7f1ca7fff700 1 -- 192.168.123.102:0/1693915196 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f1cb80629b0 con 0x7f1cc019d060 2026-03-10T10:23:37.770 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.770+0000 7f1cc4992700 1 -- 192.168.123.102:0/1693915196 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f1cc00611d0 con 0x7f1ca80779e0 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.775+0000 7f1ca7fff700 1 -- 192.168.123.102:0/1693915196 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3552 (secure 0 0 0) 0x7f1cc00611d0 con 0x7f1ca80779e0 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (8m) 8s ago 8m 23.3M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (8m) 8s ago 8m 9433k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (8m) 2m ago 8m 11.1M - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (2m) 8s ago 8m 7838k - 19.2.3-678-ge911bdeb 654f31e6858e c494730ab019 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 
running (2m) 2m ago 8m 7852k - 19.2.3-678-ge911bdeb 654f31e6858e 1dc17b49fee4 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (7m) 8s ago 8m 89.9M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (6m) 8s ago 6m 16.9M - 18.2.1 5be31c24972a e97c369450c8 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (6m) 8s ago 6m 173M - 18.2.1 5be31c24972a 56b76ae59bcb 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (6m) 2m ago 6m 15.9M - 18.2.1 5be31c24972a 02b882918ab0 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (6m) 2m ago 6m 146M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (3m) 8s ago 9m 619M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (3m) 2m ago 8m 487M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (3m) 8s ago 9m 61.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a2a2cb182f4 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (2m) 2m ago 8m 49.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 3fb75dafefb6 2026-03-10T10:23:37.775 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (8m) 8s ago 8m 16.4M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:23:37.776 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (8m) 2m ago 8m 15.4M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:23:37.776 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (2m) 8s ago 7m 228M 4096M 19.2.3-678-ge911bdeb 
654f31e6858e 319155aac718 2026-03-10T10:23:37.776 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (31s) 8s ago 7m 119M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 6b6be7f62bd3 2026-03-10T10:23:37.776 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (9s) 8s ago 7m 13.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 745b9931485f 2026-03-10T10:23:37.776 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (7m) 2m ago 7m 426M 4096M 18.2.1 5be31c24972a 80ac26035893 2026-03-10T10:23:37.776 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (7m) 2m ago 7m 407M 4096M 18.2.1 5be31c24972a c8a0a41b6654 2026-03-10T10:23:37.776 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (7m) 2m ago 7m 319M 4096M 18.2.1 5be31c24972a e9be055e12ba 2026-03-10T10:23:37.776 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (3m) 8s ago 8m 64.2M - 2.43.0 a07b618ecd1d 5ebb885bd417 2026-03-10T10:23:37.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.778+0000 7f1ca5ffb700 1 -- 192.168.123.102:0/1693915196 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1ca80779e0 msgr2=0x7f1ca8079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.778+0000 7f1ca5ffb700 1 --2- 192.168.123.102:0/1693915196 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1ca80779e0 0x7f1ca8079ea0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f1cb001c640 tx=0x7f1cb00058e0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.778+0000 7f1ca5ffb700 1 -- 192.168.123.102:0/1693915196 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1cc019d060 msgr2=0x7f1cc01a20d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.778 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.778+0000 7f1ca5ffb700 1 --2- 192.168.123.102:0/1693915196 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1cc019d060 0x7f1cc01a20d0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f1cb800e3f0 tx=0x7f1cb800e700 comp rx=0 tx=0).stop 2026-03-10T10:23:37.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.778+0000 7f1ca5ffb700 1 -- 192.168.123.102:0/1693915196 shutdown_connections 2026-03-10T10:23:37.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.778+0000 7f1ca5ffb700 1 --2- 192.168.123.102:0/1693915196 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1cc010a700 0x7f1cc019cb20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.778+0000 7f1ca5ffb700 1 --2- 192.168.123.102:0/1693915196 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1ca80779e0 0x7f1ca8079ea0 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.778+0000 7f1ca5ffb700 1 --2- 192.168.123.102:0/1693915196 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1cc019d060 0x7f1cc01a20d0 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.778 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.778+0000 7f1ca5ffb700 1 -- 192.168.123.102:0/1693915196 >> 192.168.123.102:0/1693915196 conn(0x7f1cc006daa0 msgr2=0x7f1cc0109fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:37.779 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.778+0000 7f1ca5ffb700 1 -- 192.168.123.102:0/1693915196 shutdown_connections 2026-03-10T10:23:37.779 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.778+0000 7f1ca5ffb700 1 -- 
192.168.123.102:0/1693915196 wait complete. 2026-03-10T10:23:37.826 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:37 vm02.local ceph-mon[110129]: pgmap v111: 65 pgs: 15 active+undersized, 12 active+undersized+degraded, 38 active+clean; 209 MiB data, 2.3 GiB used, 118 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 29/228 objects degraded (12.719%) 2026-03-10T10:23:37.826 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:37.826 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 -- 192.168.123.102:0/3420615654 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb410075a10 msgr2=0x7fb410077ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 --2- 192.168.123.102:0/3420615654 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb410075a10 0x7fb410077ea0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7fb40800cd40 tx=0x7fb40800a320 comp rx=0 tx=0).stop 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 -- 192.168.123.102:0/3420615654 shutdown_connections 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 --2- 192.168.123.102:0/3420615654 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb410075a10 0x7fb410077ea0 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 
7fb416835700 1 --2- 192.168.123.102:0/3420615654 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb410072b20 0x7fb410072f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 -- 192.168.123.102:0/3420615654 >> 192.168.123.102:0/3420615654 conn(0x7fb41006daa0 msgr2=0x7fb41006ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 -- 192.168.123.102:0/3420615654 shutdown_connections 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 -- 192.168.123.102:0/3420615654 wait complete. 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 Processor -- start 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 -- start start 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb410072b20 0x7fb410080cd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb410075a10 0x7fb410081210 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb410081830 con 0x7fb410072b20 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb416835700 1 -- --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb410081970 con 0x7fb410075a10 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb415833700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb410072b20 0x7fb410080cd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb415833700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb410072b20 0x7fb410080cd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:38332/0 (socket says 192.168.123.102:38332) 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.854+0000 7fb415833700 1 -- 192.168.123.102:0/25693495 learned_addr learned my addr 192.168.123.102:0/25693495 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.855+0000 7fb415032700 1 --2- 192.168.123.102:0/25693495 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb410075a10 0x7fb410081210 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.855+0000 7fb415833700 1 -- 192.168.123.102:0/25693495 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb410075a10 msgr2=0x7fb410081210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:37.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.855+0000 7fb415833700 1 --2- 192.168.123.102:0/25693495 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb410075a10 
0x7fb410081210 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.855+0000 7fb415833700 1 -- 192.168.123.102:0/25693495 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb40800c9f0 con 0x7fb410072b20 2026-03-10T10:23:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.855+0000 7fb415833700 1 --2- 192.168.123.102:0/25693495 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb410072b20 0x7fb410080cd0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7fb40c04f750 tx=0x7fb40c04fb10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.855+0000 7fb406ffd700 1 -- 192.168.123.102:0/25693495 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb40c051830 con 0x7fb410072b20 2026-03-10T10:23:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.855+0000 7fb416835700 1 -- 192.168.123.102:0/25693495 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb41012e5e0 con 0x7fb410072b20 2026-03-10T10:23:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.855+0000 7fb416835700 1 -- 192.168.123.102:0/25693495 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb41012ebb0 con 0x7fb410072b20 2026-03-10T10:23:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.856+0000 7fb406ffd700 1 -- 192.168.123.102:0/25693495 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb40c051e70 con 0x7fb410072b20 2026-03-10T10:23:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.856+0000 7fb416835700 1 -- 192.168.123.102:0/25693495 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb41007af20 con 0x7fb410072b20 2026-03-10T10:23:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.857+0000 7fb406ffd700 1 -- 192.168.123.102:0/25693495 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb40c05b610 con 0x7fb410072b20 2026-03-10T10:23:37.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.858+0000 7fb406ffd700 1 -- 192.168.123.102:0/25693495 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fb40c052450 con 0x7fb410072b20 2026-03-10T10:23:37.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.858+0000 7fb406ffd700 1 --2- 192.168.123.102:0/25693495 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb3fc077870 0x7fb3fc079d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:37.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.858+0000 7fb406ffd700 1 -- 192.168.123.102:0/25693495 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(62..62 src has 1..62) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fb40c0dd8d0 con 0x7fb410072b20 2026-03-10T10:23:37.867 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.861+0000 7fb415032700 1 --2- 192.168.123.102:0/25693495 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb3fc077870 0x7fb3fc079d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:37.867 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.861+0000 7fb415032700 1 --2- 192.168.123.102:0/25693495 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb3fc077870 0x7fb3fc079d30 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto 
rx=0x7fb40800a7a0 tx=0x7fb4080095d0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:37.867 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:37.861+0000 7fb406ffd700 1 -- 192.168.123.102:0/25693495 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fb40c0a61f0 con 0x7fb410072b20 2026-03-10T10:23:38.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.021+0000 7fb416835700 1 -- 192.168.123.102:0/25693495 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fb41002cc70 con 0x7fb410072b20 2026-03-10T10:23:38.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.022+0000 7fb406ffd700 1 -- 192.168.123.102:0/25693495 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+799 (secure 0 0 0) 0x7fb40c0a5940 con 0x7fb410072b20 2026-03-10T10:23:38.022 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:23:38.022 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:23:38.022 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:23:38.022 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:23:38.022 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:23:38.022 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 3, 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout: 
"ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 7, 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 7 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:23:38.023 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:23:38.025 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.024+0000 7fb404ef9700 1 -- 192.168.123.102:0/25693495 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb3fc077870 msgr2=0x7fb3fc079d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.025 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.024+0000 7fb404ef9700 1 --2- 192.168.123.102:0/25693495 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb3fc077870 0x7fb3fc079d30 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7fb40800a7a0 tx=0x7fb4080095d0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.025 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.025+0000 7fb404ef9700 1 -- 192.168.123.102:0/25693495 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb410072b20 msgr2=0x7fb410080cd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.025 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.025+0000 7fb404ef9700 1 --2- 192.168.123.102:0/25693495 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb410072b20 0x7fb410080cd0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7fb40c04f750 tx=0x7fb40c04fb10 comp rx=0 tx=0).stop 2026-03-10T10:23:38.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.025+0000 7fb404ef9700 1 -- 192.168.123.102:0/25693495 shutdown_connections 2026-03-10T10:23:38.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.025+0000 7fb404ef9700 1 --2- 192.168.123.102:0/25693495 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fb410072b20 0x7fb410080cd0 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.025+0000 7fb404ef9700 1 --2- 192.168.123.102:0/25693495 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fb3fc077870 0x7fb3fc079d30 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.025+0000 7fb404ef9700 1 --2- 192.168.123.102:0/25693495 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb410075a10 0x7fb410081210 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.026+0000 7fb404ef9700 1 -- 192.168.123.102:0/25693495 >> 192.168.123.102:0/25693495 conn(0x7fb41006daa0 msgr2=0x7fb41006ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:38.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.026+0000 7fb404ef9700 1 -- 192.168.123.102:0/25693495 shutdown_connections 2026-03-10T10:23:38.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.026+0000 7fb404ef9700 1 -- 192.168.123.102:0/25693495 wait 
complete. 2026-03-10T10:23:38.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:37 vm05.local ceph-mon[103593]: pgmap v111: 65 pgs: 15 active+undersized, 12 active+undersized+degraded, 38 active+clean; 209 MiB data, 2.3 GiB used, 118 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 29/228 objects degraded (12.719%) 2026-03-10T10:23:38.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:38.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:38.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.098+0000 7f4806f89700 1 -- 192.168.123.102:0/4032146573 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4800101740 msgr2=0x7f4800103b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.098+0000 7f4806f89700 1 --2- 192.168.123.102:0/4032146573 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4800101740 0x7f4800103b30 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f47f4009b50 tx=0x7f47f4009e60 comp rx=0 tx=0).stop 2026-03-10T10:23:38.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.099+0000 7f4806f89700 1 -- 192.168.123.102:0/4032146573 shutdown_connections 2026-03-10T10:23:38.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.099+0000 7f4806f89700 1 --2- 192.168.123.102:0/4032146573 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4800104070 0x7f4800106460 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.099+0000 7f4806f89700 1 --2- 
192.168.123.102:0/4032146573 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4800101740 0x7f4800103b30 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.099+0000 7f4806f89700 1 -- 192.168.123.102:0/4032146573 >> 192.168.123.102:0/4032146573 conn(0x7f48000fb110 msgr2=0x7f48000fd590 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:38.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.099+0000 7f4806f89700 1 -- 192.168.123.102:0/4032146573 shutdown_connections 2026-03-10T10:23:38.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.099+0000 7f4806f89700 1 -- 192.168.123.102:0/4032146573 wait complete. 2026-03-10T10:23:38.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.100+0000 7f4806f89700 1 Processor -- start 2026-03-10T10:23:38.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.100+0000 7f4806f89700 1 -- start start 2026-03-10T10:23:38.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.100+0000 7f4806f89700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4800101740 0x7f4800194140 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:38.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.100+0000 7f4806f89700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4800104070 0x7f4800194680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:38.101 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.100+0000 7f4804d25700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4800101740 0x7f4800194140 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:38.101 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.101+0000 7f4804d25700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4800101740 0x7f4800194140 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:48786/0 (socket says 192.168.123.102:48786) 2026-03-10T10:23:38.101 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.101+0000 7f47fffff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4800104070 0x7f4800194680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:38.101 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.101+0000 7f47fffff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4800104070 0x7f4800194680 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:38356/0 (socket says 192.168.123.102:38356) 2026-03-10T10:23:38.101 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.101+0000 7f4804d25700 1 -- 192.168.123.102:0/2448188944 learned_addr learned my addr 192.168.123.102:0/2448188944 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:38.101 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.101+0000 7f4806f89700 1 -- 192.168.123.102:0/2448188944 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4800194ca0 con 0x7f4800104070 2026-03-10T10:23:38.101 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.101+0000 7f4806f89700 1 -- 192.168.123.102:0/2448188944 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4800194de0 con 0x7f4800101740 2026-03-10T10:23:38.102 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.101+0000 7f47fffff700 1 -- 192.168.123.102:0/2448188944 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4800101740 msgr2=0x7f4800194140 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.102 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.101+0000 7f47fffff700 1 --2- 192.168.123.102:0/2448188944 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4800101740 0x7f4800194140 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.102 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.101+0000 7f47fffff700 1 -- 192.168.123.102:0/2448188944 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f47f40097e0 con 0x7f4800104070 2026-03-10T10:23:38.102 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.102+0000 7f47fffff700 1 --2- 192.168.123.102:0/2448188944 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4800104070 0x7f4800194680 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f47ec00d940 tx=0x7f47ec00dc50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:38.103 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.102+0000 7f47fdffb700 1 -- 192.168.123.102:0/2448188944 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f47ec0098e0 con 0x7f4800104070 2026-03-10T10:23:38.103 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.102+0000 7f47fdffb700 1 -- 192.168.123.102:0/2448188944 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f47ec00de90 con 0x7f4800104070 2026-03-10T10:23:38.103 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.103+0000 7f47fdffb700 1 -- 192.168.123.102:0/2448188944 <== mon.0 v2:192.168.123.102:3300/0 
3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f47ec00f3c0 con 0x7f4800104070 2026-03-10T10:23:38.104 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.103+0000 7f4806f89700 1 -- 192.168.123.102:0/2448188944 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4800199890 con 0x7f4800104070 2026-03-10T10:23:38.104 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.103+0000 7f4806f89700 1 -- 192.168.123.102:0/2448188944 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4800199d60 con 0x7f4800104070 2026-03-10T10:23:38.108 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.105+0000 7f47fdffb700 1 -- 192.168.123.102:0/2448188944 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f47ec010460 con 0x7f4800104070 2026-03-10T10:23:38.108 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.105+0000 7f4806f89700 1 -- 192.168.123.102:0/2448188944 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f480004ea90 con 0x7f4800104070 2026-03-10T10:23:38.108 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.105+0000 7f47fdffb700 1 --2- 192.168.123.102:0/2448188944 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f47f007bb20 0x7f47f007dfe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:38.108 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.105+0000 7f47fdffb700 1 -- 192.168.123.102:0/2448188944 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(62..62 src has 1..62) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f47ec099240 con 0x7f4800104070 2026-03-10T10:23:38.109 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.108+0000 7f47fdffb700 1 -- 192.168.123.102:0/2448188944 <== mon.0 v2:192.168.123.102:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f47ec061aa0 con 0x7f4800104070 2026-03-10T10:23:38.109 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.109+0000 7f4804d25700 1 --2- 192.168.123.102:0/2448188944 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f47f007bb20 0x7f47f007dfe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:38.113 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.113+0000 7f4804d25700 1 --2- 192.168.123.102:0/2448188944 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f47f007bb20 0x7f47f007dfe0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f47f4005b20 tx=0x7f47f4005a90 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:38.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.254+0000 7f4806f89700 1 -- 192.168.123.102:0/2448188944 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f480019a040 con 0x7f4800104070 2026-03-10T10:23:38.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.255+0000 7f47fdffb700 1 -- 192.168.123.102:0/2448188944 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1945 (secure 0 0 0) 0x7f47ec0611f0 con 0x7f4800104070 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:e15 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable 
ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:max_xattr_size 65536 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no 
anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464} 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:qdb_cluster leader: 0 members: 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 
addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:23:38.257 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:23:38.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.259+0000 7f4806f89700 1 -- 192.168.123.102:0/2448188944 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f47f007bb20 msgr2=0x7f47f007dfe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.259+0000 7f4806f89700 1 --2- 192.168.123.102:0/2448188944 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f47f007bb20 0x7f47f007dfe0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f47f4005b20 tx=0x7f47f4005a90 comp rx=0 tx=0).stop 2026-03-10T10:23:38.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.259+0000 7f4806f89700 1 -- 192.168.123.102:0/2448188944 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4800104070 msgr2=0x7f4800194680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.259+0000 7f4806f89700 1 --2- 192.168.123.102:0/2448188944 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4800104070 0x7f4800194680 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f47ec00d940 tx=0x7f47ec00dc50 comp rx=0 tx=0).stop 2026-03-10T10:23:38.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.259+0000 7f4806f89700 1 -- 192.168.123.102:0/2448188944 shutdown_connections 2026-03-10T10:23:38.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.259+0000 7f4806f89700 1 --2- 192.168.123.102:0/2448188944 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f47f007bb20 0x7f47f007dfe0 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.259+0000 7f4806f89700 1 --2- 192.168.123.102:0/2448188944 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4800101740 0x7f4800194140 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.259+0000 7f4806f89700 1 --2- 192.168.123.102:0/2448188944 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4800104070 0x7f4800194680 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.259+0000 7f4806f89700 1 -- 192.168.123.102:0/2448188944 >> 192.168.123.102:0/2448188944 conn(0x7f48000fb110 msgr2=0x7f48000fd590 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:38.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.259+0000 7f4806f89700 1 -- 192.168.123.102:0/2448188944 shutdown_connections 2026-03-10T10:23:38.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.260+0000 7f4806f89700 1 -- 192.168.123.102:0/2448188944 wait complete. 
2026-03-10T10:23:38.260 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15 2026-03-10T10:23:38.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.326+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2008966501 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b58101a90 msgr2=0x7f0b58103e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.326+0000 7f0b5cea0700 1 --2- 192.168.123.102:0/2008966501 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b58101a90 0x7f0b58103e80 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f0b40009b00 tx=0x7f0b40009e10 comp rx=0 tx=0).stop 2026-03-10T10:23:38.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.327+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2008966501 shutdown_connections 2026-03-10T10:23:38.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.327+0000 7f0b5cea0700 1 --2- 192.168.123.102:0/2008966501 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0b581043c0 0x7f0b581067b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.327+0000 7f0b5cea0700 1 --2- 192.168.123.102:0/2008966501 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b58101a90 0x7f0b58103e80 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.327+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2008966501 >> 192.168.123.102:0/2008966501 conn(0x7f0b580fb3c0 msgr2=0x7f0b580fd840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:38.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.327+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2008966501 shutdown_connections 2026-03-10T10:23:38.327 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.327+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2008966501 wait complete. 2026-03-10T10:23:38.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.327+0000 7f0b5cea0700 1 Processor -- start 2026-03-10T10:23:38.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.327+0000 7f0b5cea0700 1 -- start start 2026-03-10T10:23:38.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.328+0000 7f0b5cea0700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b58101a90 0x7f0b581945b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:38.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.328+0000 7f0b5cea0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0b581043c0 0x7f0b58194af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:38.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.328+0000 7f0b5cea0700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0b58195110 con 0x7f0b58101a90 2026-03-10T10:23:38.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.328+0000 7f0b5cea0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0b58195250 con 0x7f0b581043c0 2026-03-10T10:23:38.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.328+0000 7f0b55d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0b581043c0 0x7f0b58194af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:38.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.328+0000 7f0b55d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0b581043c0 0x7f0b58194af0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:48798/0 (socket says 192.168.123.102:48798) 2026-03-10T10:23:38.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.328+0000 7f0b55d9b700 1 -- 192.168.123.102:0/2051866948 learned_addr learned my addr 192.168.123.102:0/2051866948 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:38.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.328+0000 7f0b5659c700 1 --2- 192.168.123.102:0/2051866948 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b58101a90 0x7f0b581945b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:38.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.328+0000 7f0b5659c700 1 -- 192.168.123.102:0/2051866948 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0b581043c0 msgr2=0x7f0b58194af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.328+0000 7f0b5659c700 1 --2- 192.168.123.102:0/2051866948 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0b581043c0 0x7f0b58194af0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.329 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.328+0000 7f0b5659c700 1 -- 192.168.123.102:0/2051866948 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0b400097e0 con 0x7f0b58101a90 2026-03-10T10:23:38.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.329+0000 7f0b5659c700 1 --2- 192.168.123.102:0/2051866948 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b58101a90 0x7f0b581945b0 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto 
rx=0x7f0b40009ad0 tx=0x7f0b40004a00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:38.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.329+0000 7f0b4f7fe700 1 -- 192.168.123.102:0/2051866948 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0b4001d070 con 0x7f0b58101a90 2026-03-10T10:23:38.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.329+0000 7f0b4f7fe700 1 -- 192.168.123.102:0/2051866948 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f0b4000bc50 con 0x7f0b58101a90 2026-03-10T10:23:38.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.329+0000 7f0b4f7fe700 1 -- 192.168.123.102:0/2051866948 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0b4000f740 con 0x7f0b58101a90 2026-03-10T10:23:38.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.329+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2051866948 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0b58199ca0 con 0x7f0b58101a90 2026-03-10T10:23:38.330 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.329+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2051866948 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0b5819a190 con 0x7f0b58101a90 2026-03-10T10:23:38.331 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.331+0000 7f0b4f7fe700 1 -- 192.168.123.102:0/2051866948 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f0b4000f8a0 con 0x7f0b58101a90 2026-03-10T10:23:38.331 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.331+0000 7f0b4f7fe700 1 --2- 192.168.123.102:0/2051866948 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0b4407bdf0 0x7f0b4407e2b0 unknown :-1 
s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:38.332 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.331+0000 7f0b55d9b700 1 --2- 192.168.123.102:0/2051866948 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0b4407bdf0 0x7f0b4407e2b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:38.332 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.331+0000 7f0b4f7fe700 1 -- 192.168.123.102:0/2051866948 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(62..62 src has 1..62) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f0b4009c1b0 con 0x7f0b58101a90 2026-03-10T10:23:38.332 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.332+0000 7f0b55d9b700 1 --2- 192.168.123.102:0/2051866948 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0b4407bdf0 0x7f0b4407e2b0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f0b48005950 tx=0x7f0b4800a400 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:38.332 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.332+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2051866948 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0b5818e840 con 0x7f0b58101a90 2026-03-10T10:23:38.335 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.335+0000 7f0b4f7fe700 1 -- 192.168.123.102:0/2051866948 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f0b40064ac0 con 0x7f0b58101a90 2026-03-10T10:23:38.459 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.459+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2051866948 --> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f0b580611d0 con 0x7f0b4407bdf0 2026-03-10T10:23:38.460 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.460+0000 7f0b4f7fe700 1 -- 192.168.123.102:0/2051866948 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f0b580611d0 con 0x7f0b4407bdf0 2026-03-10T10:23:38.462 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:23:38.462 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T10:23:38.462 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:23:38.462 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:23:38.462 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [ 2026-03-10T10:23:38.462 INFO:teuthology.orchestra.run.vm02.stdout: "mgr", 2026-03-10T10:23:38.462 INFO:teuthology.orchestra.run.vm02.stdout: "mon", 2026-03-10T10:23:38.462 INFO:teuthology.orchestra.run.vm02.stdout: "crash" 2026-03-10T10:23:38.462 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T10:23:38.462 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "9/23 daemons upgraded", 2026-03-10T10:23:38.462 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading osd daemons", 2026-03-10T10:23:38.463 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:23:38.463 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:23:38.465 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.464+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2051866948 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0b4407bdf0 msgr2=0x7f0b4407e2b0 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.465 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.464+0000 7f0b5cea0700 1 --2- 192.168.123.102:0/2051866948 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0b4407bdf0 0x7f0b4407e2b0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f0b48005950 tx=0x7f0b4800a400 comp rx=0 tx=0).stop 2026-03-10T10:23:38.465 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.464+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2051866948 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b58101a90 msgr2=0x7f0b581945b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.465 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.464+0000 7f0b5cea0700 1 --2- 192.168.123.102:0/2051866948 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b58101a90 0x7f0b581945b0 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f0b40009ad0 tx=0x7f0b40004a00 comp rx=0 tx=0).stop 2026-03-10T10:23:38.465 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.464+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2051866948 shutdown_connections 2026-03-10T10:23:38.465 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.464+0000 7f0b5cea0700 1 --2- 192.168.123.102:0/2051866948 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b58101a90 0x7f0b581945b0 secure :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f0b40009ad0 tx=0x7f0b40004a00 comp rx=0 tx=0).stop 2026-03-10T10:23:38.465 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.464+0000 7f0b5cea0700 1 --2- 192.168.123.102:0/2051866948 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0b4407bdf0 0x7f0b4407e2b0 secure :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f0b48005950 tx=0x7f0b4800a400 comp rx=0 tx=0).stop 2026-03-10T10:23:38.465 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.464+0000 7f0b5cea0700 1 --2- 192.168.123.102:0/2051866948 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0b581043c0 0x7f0b58194af0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.466 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.464+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2051866948 >> 192.168.123.102:0/2051866948 conn(0x7f0b580fb3c0 msgr2=0x7f0b580fd840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:38.466 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.465+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2051866948 shutdown_connections 2026-03-10T10:23:38.466 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.465+0000 7f0b5cea0700 1 -- 192.168.123.102:0/2051866948 wait complete. 2026-03-10T10:23:38.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.533+0000 7fc9867c3700 1 -- 192.168.123.102:0/1845234760 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc980101930 msgr2=0x7fc980103d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.533+0000 7fc9867c3700 1 --2- 192.168.123.102:0/1845234760 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc980101930 0x7fc980103d20 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7fc968009b00 tx=0x7fc968009e10 comp rx=0 tx=0).stop 2026-03-10T10:23:38.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.533+0000 7fc9867c3700 1 -- 192.168.123.102:0/1845234760 shutdown_connections 2026-03-10T10:23:38.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.533+0000 7fc9867c3700 1 --2- 192.168.123.102:0/1845234760 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc980104260 0x7fc980106650 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:23:38.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.533+0000 7fc9867c3700 1 --2- 192.168.123.102:0/1845234760 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc980101930 0x7fc980103d20 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.533+0000 7fc9867c3700 1 -- 192.168.123.102:0/1845234760 >> 192.168.123.102:0/1845234760 conn(0x7fc9800fb260 msgr2=0x7fc9800fd6e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:38.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.533+0000 7fc9867c3700 1 -- 192.168.123.102:0/1845234760 shutdown_connections 2026-03-10T10:23:38.534 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.533+0000 7fc9867c3700 1 -- 192.168.123.102:0/1845234760 wait complete. 2026-03-10T10:23:38.534 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.534+0000 7fc9867c3700 1 Processor -- start 2026-03-10T10:23:38.534 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.534+0000 7fc9867c3700 1 -- start start 2026-03-10T10:23:38.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.534+0000 7fc9867c3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc980104260 0x7fc980196970 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:38.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.534+0000 7fc9867c3700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc980196eb0 0x7fc98019bf20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:38.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.534+0000 7fc9867c3700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc9801973c0 con 0x7fc980196eb0 2026-03-10T10:23:38.535 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.535+0000 7fc9867c3700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc980197530 con 0x7fc980104260 2026-03-10T10:23:38.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.535+0000 7fc97f7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc980196eb0 0x7fc98019bf20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:38.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.535+0000 7fc97f7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc980196eb0 0x7fc98019bf20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:38394/0 (socket says 192.168.123.102:38394) 2026-03-10T10:23:38.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.535+0000 7fc97f7fe700 1 -- 192.168.123.102:0/2289298223 learned_addr learned my addr 192.168.123.102:0/2289298223 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:23:38.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.535+0000 7fc97f7fe700 1 -- 192.168.123.102:0/2289298223 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc980104260 msgr2=0x7fc980196970 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:23:38.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.535+0000 7fc97f7fe700 1 --2- 192.168.123.102:0/2289298223 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc980104260 0x7fc980196970 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.535+0000 7fc97f7fe700 1 -- 192.168.123.102:0/2289298223 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc9680097e0 con 0x7fc980196eb0 2026-03-10T10:23:38.536 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.535+0000 7fc97f7fe700 1 --2- 192.168.123.102:0/2289298223 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc980196eb0 0x7fc98019bf20 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7fc97000cc60 tx=0x7fc9700074a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:38.537 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.535+0000 7fc97d7fa700 1 -- 192.168.123.102:0/2289298223 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc970007af0 con 0x7fc980196eb0 2026-03-10T10:23:38.537 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.535+0000 7fc97d7fa700 1 -- 192.168.123.102:0/2289298223 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc970004d10 con 0x7fc980196eb0 2026-03-10T10:23:38.537 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.535+0000 7fc97d7fa700 1 -- 192.168.123.102:0/2289298223 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc970005710 con 0x7fc980196eb0 2026-03-10T10:23:38.537 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.535+0000 7fc9867c3700 1 -- 192.168.123.102:0/2289298223 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc9800ff780 con 0x7fc980196eb0 2026-03-10T10:23:38.537 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.536+0000 7fc9867c3700 1 -- 192.168.123.102:0/2289298223 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc9800ffc50 con 0x7fc980196eb0 2026-03-10T10:23:38.538 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.537+0000 7fc9867c3700 1 -- 
192.168.123.102:0/2289298223 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc980190910 con 0x7fc980196eb0 2026-03-10T10:23:38.541 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.540+0000 7fc97d7fa700 1 -- 192.168.123.102:0/2289298223 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc970004750 con 0x7fc980196eb0 2026-03-10T10:23:38.541 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.541+0000 7fc97d7fa700 1 --2- 192.168.123.102:0/2289298223 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc96c077910 0x7fc96c079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:23:38.541 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.541+0000 7fc97d7fa700 1 -- 192.168.123.102:0/2289298223 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(62..62 src has 1..62) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fc970013070 con 0x7fc980196eb0 2026-03-10T10:23:38.542 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.541+0000 7fc97d7fa700 1 -- 192.168.123.102:0/2289298223 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc97009a2e0 con 0x7fc980196eb0 2026-03-10T10:23:38.542 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.541+0000 7fc97ffff700 1 --2- 192.168.123.102:0/2289298223 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc96c077910 0x7fc96c079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:23:38.542 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.542+0000 7fc97ffff700 1 --2- 192.168.123.102:0/2289298223 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] 
conn(0x7fc96c077910 0x7fc96c079dd0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7fc968006010 tx=0x7fc968005ad0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:23:38.706 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.706+0000 7fc9867c3700 1 -- 192.168.123.102:0/2289298223 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fc98004eac0 con 0x7fc980196eb0 2026-03-10T10:23:38.707 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.706+0000 7fc97d7fa700 1 -- 192.168.123.102:0/2289298223 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+291 (secure 0 0 0) 0x7fc9700628f0 con 0x7fc980196eb0 2026-03-10T10:23:38.707 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_WARN Degraded data redundancy: 3/228 objects degraded (1.316%), 2 pgs degraded 2026-03-10T10:23:38.707 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 3/228 objects degraded (1.316%), 2 pgs degraded 2026-03-10T10:23:38.707 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.13 is active+undersized+degraded, acting [0,4] 2026-03-10T10:23:38.707 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.19 is active+undersized+degraded, acting [0,4] 2026-03-10T10:23:38.709 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.709+0000 7fc9867c3700 1 -- 192.168.123.102:0/2289298223 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc96c077910 msgr2=0x7fc96c079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.709 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.709+0000 7fc9867c3700 1 --2- 192.168.123.102:0/2289298223 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc96c077910 0x7fc96c079dd0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto 
rx=0x7fc968006010 tx=0x7fc968005ad0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.709 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.709+0000 7fc9867c3700 1 -- 192.168.123.102:0/2289298223 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc980196eb0 msgr2=0x7fc98019bf20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:23:38.709 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.709+0000 7fc9867c3700 1 --2- 192.168.123.102:0/2289298223 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc980196eb0 0x7fc98019bf20 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7fc97000cc60 tx=0x7fc9700074a0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.709 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:38 vm02.local ceph-mon[110129]: from='client.34264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:23:38.709 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:38 vm02.local ceph-mon[110129]: from='client.44219 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:23:38.709 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:38 vm02.local ceph-mon[110129]: from='client.44223 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:23:38.709 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:38 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/25693495' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:38.709 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:38 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/2448188944' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:23:38.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.710+0000 7fc9867c3700 1 -- 192.168.123.102:0/2289298223 shutdown_connections 2026-03-10T10:23:38.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.710+0000 7fc9867c3700 1 --2- 192.168.123.102:0/2289298223 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc96c077910 0x7fc96c079dd0 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.710+0000 7fc9867c3700 1 --2- 192.168.123.102:0/2289298223 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc980104260 0x7fc980196970 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.710+0000 7fc9867c3700 1 --2- 192.168.123.102:0/2289298223 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc980196eb0 0x7fc98019bf20 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:23:38.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.710+0000 7fc9867c3700 1 -- 192.168.123.102:0/2289298223 >> 192.168.123.102:0/2289298223 conn(0x7fc9800fb260 msgr2=0x7fc9800fd6e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:23:38.711 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.710+0000 7fc9867c3700 1 -- 192.168.123.102:0/2289298223 shutdown_connections 2026-03-10T10:23:38.711 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:23:38.710+0000 7fc9867c3700 1 -- 192.168.123.102:0/2289298223 wait complete. 
2026-03-10T10:23:39.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:38 vm05.local ceph-mon[103593]: from='client.34264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:23:39.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:38 vm05.local ceph-mon[103593]: from='client.44219 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:23:39.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:38 vm05.local ceph-mon[103593]: from='client.44223 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:23:39.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:38 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/25693495' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:23:39.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:38 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/2448188944' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-10T10:23:40.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:39 vm02.local ceph-mon[110129]: from='client.34282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:23:40.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:39 vm02.local ceph-mon[110129]: pgmap v112: 65 pgs: 3 active+undersized, 2 active+undersized+degraded, 60 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 1.5 KiB/s rd, 2 op/s; 3/228 objects degraded (1.316%)
2026-03-10T10:23:40.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:39 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/2289298223' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T10:23:40.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:39 vm05.local ceph-mon[103593]: from='client.34282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:23:40.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:39 vm05.local ceph-mon[103593]: pgmap v112: 65 pgs: 3 active+undersized, 2 active+undersized+degraded, 60 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 1.5 KiB/s rd, 2 op/s; 3/228 objects degraded (1.316%)
2026-03-10T10:23:40.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:39 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/2289298223' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T10:23:41.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:40 vm02.local ceph-mon[110129]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/228 objects degraded (1.316%), 2 pgs degraded)
2026-03-10T10:23:41.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:40 vm02.local ceph-mon[110129]: Cluster is now healthy
2026-03-10T10:23:41.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:40 vm05.local ceph-mon[103593]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/228 objects degraded (1.316%), 2 pgs degraded)
2026-03-10T10:23:41.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:40 vm05.local ceph-mon[103593]: Cluster is now healthy
2026-03-10T10:23:42.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:41 vm02.local ceph-mon[110129]: pgmap v113: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 525 B/s rd, 1 op/s
2026-03-10T10:23:42.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:41 vm05.local ceph-mon[103593]: pgmap v113: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 525 B/s rd, 1 op/s
2026-03-10T10:23:43.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:43 vm02.local ceph-mon[110129]: pgmap v114: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 511 B/s rd, 1 op/s
2026-03-10T10:23:44.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:43 vm05.local ceph-mon[103593]: pgmap v114: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 511 B/s rd, 1 op/s
2026-03-10T10:23:46.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:45 vm02.local ceph-mon[110129]: pgmap v115: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 942 B/s rd, 1 op/s
2026-03-10T10:23:46.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:45 vm05.local ceph-mon[103593]: pgmap v115: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 942 B/s rd, 1 op/s
2026-03-10T10:23:46.879 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:46 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch
2026-03-10T10:23:46.879 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:46 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch
2026-03-10T10:23:46.879 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:46 vm05.local ceph-mon[103593]: Upgrade: osd.3 is safe to restart
2026-03-10T10:23:46.880 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:46 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:46.880 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:46 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T10:23:46.880 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:46 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:23:46.880 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:46 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:47.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:46 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch
2026-03-10T10:23:47.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:46 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch
2026-03-10T10:23:47.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:46 vm02.local ceph-mon[110129]: Upgrade: osd.3 is safe to restart
2026-03-10T10:23:47.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:46 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:47.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:46 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T10:23:47.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:46 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:23:47.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:46 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:47.148 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:47 vm05.local systemd[1]: Stopping Ceph osd.3 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:23:47.537 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:47 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[64918]: 2026-03-10T10:23:47.213+0000 7fd95b714700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T10:23:47.537 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:47 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[64918]: 2026-03-10T10:23:47.213+0000 7fd95b714700 -1 osd.3 62 *** Got signal Terminated ***
2026-03-10T10:23:47.537 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:47 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[64918]: 2026-03-10T10:23:47.213+0000 7fd95b714700 -1 osd.3 62 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T10:23:47.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:47 vm05.local ceph-mon[103593]: Upgrade: Updating osd.3
2026-03-10T10:23:47.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:47 vm05.local ceph-mon[103593]: Deploying daemon osd.3 on vm05
2026-03-10T10:23:47.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:47 vm05.local ceph-mon[103593]: pgmap v116: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 855 B/s rd, 1 op/s
2026-03-10T10:23:47.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:47 vm05.local ceph-mon[103593]: osd.3 marked itself down and dead
2026-03-10T10:23:47.995 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:47 vm05.local podman[112567]: 2026-03-10 10:23:47.838422761 +0000 UTC m=+0.637385440 container died 80ac260358930ac76db0202d654ded78ca4317d59f3c62f36aed61c7899bd9dc (image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3, org.label-schema.name=CentOS Stream 8 Base Image, GIT_BRANCH=HEAD, RELEASE=HEAD, org.label-schema.vendor=CentOS, CEPH_POINT_RELEASE=-18.2.1, GIT_CLEAN=True, io.buildah.version=1.29.1, org.label-schema.schema-version=1.0, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , ceph=True, org.label-schema.build-date=20240222, org.label-schema.license=GPLv2)
2026-03-10T10:23:47.995 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:47 vm05.local podman[112567]: 2026-03-10 10:23:47.851811626 +0000 UTC m=+0.650774314 container remove 80ac260358930ac76db0202d654ded78ca4317d59f3c62f36aed61c7899bd9dc (image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3, org.label-schema.name=CentOS Stream 8 Base Image, GIT_CLEAN=True, RELEASE=HEAD, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_POINT_RELEASE=-18.2.1, io.buildah.version=1.29.1, org.label-schema.license=GPLv2, GIT_BRANCH=HEAD, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , org.label-schema.build-date=20240222)
2026-03-10T10:23:47.995 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:47 vm05.local bash[112567]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3
2026-03-10T10:23:48.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:47 vm02.local ceph-mon[110129]: Upgrade: Updating osd.3
2026-03-10T10:23:48.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:47 vm02.local ceph-mon[110129]: Deploying daemon osd.3 on vm05
2026-03-10T10:23:48.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:47 vm02.local ceph-mon[110129]: pgmap v116: 65 pgs: 65 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 855 B/s rd, 1 op/s
2026-03-10T10:23:48.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:47 vm02.local ceph-mon[110129]: osd.3 marked itself down and dead
2026-03-10T10:23:48.257 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:47 vm05.local podman[112634]: 2026-03-10 10:23:47.995394885 +0000 UTC m=+0.016408929 container create ab716f246e9e8053cc4f6a1c2e70a9475cdcd6e9ea8ac30a97e2dd355e1d1c40 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-deactivate, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T10:23:48.257 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local podman[112634]: 2026-03-10 10:23:48.024112333 +0000 UTC m=+0.045126377 container init ab716f246e9e8053cc4f6a1c2e70a9475cdcd6e9ea8ac30a97e2dd355e1d1c40 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-deactivate, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T10:23:48.257 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local podman[112634]: 2026-03-10 10:23:48.029261684 +0000 UTC m=+0.050275717 container start ab716f246e9e8053cc4f6a1c2e70a9475cdcd6e9ea8ac30a97e2dd355e1d1c40 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS)
2026-03-10T10:23:48.258 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local podman[112634]: 2026-03-10 10:23:48.030332798 +0000 UTC m=+0.051346832 container attach ab716f246e9e8053cc4f6a1c2e70a9475cdcd6e9ea8ac30a97e2dd355e1d1c40 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T10:23:48.258 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local podman[112634]: 2026-03-10 10:23:47.989202081 +0000 UTC m=+0.010216125 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T10:23:48.258 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local conmon[112645]: conmon ab716f246e9e8053cc4f : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab716f246e9e8053cc4f6a1c2e70a9475cdcd6e9ea8ac30a97e2dd355e1d1c40.scope/container/memory.events
2026-03-10T10:23:48.258 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local podman[112634]: 2026-03-10 10:23:48.158198764 +0000 UTC m=+0.179212808 container died ab716f246e9e8053cc4f6a1c2e70a9475cdcd6e9ea8ac30a97e2dd355e1d1c40 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T10:23:48.258 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local podman[112634]: 2026-03-10 10:23:48.175418029 +0000 UTC m=+0.196432063 container remove ab716f246e9e8053cc4f6a1c2e70a9475cdcd6e9ea8ac30a97e2dd355e1d1c40 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-deactivate, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team )
2026-03-10T10:23:48.258 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.3.service: Deactivated successfully.
2026-03-10T10:23:48.258 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.3.service: Unit process 112645 (conmon) remains running after unit stopped.
2026-03-10T10:23:48.258 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local systemd[1]: Stopped Ceph osd.3 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.
2026-03-10T10:23:48.258 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.3.service: Consumed 59.674s CPU time, 1.1G memory peak.
2026-03-10T10:23:48.630 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local systemd[1]: Starting Ceph osd.3 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:23:48.630 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local podman[112735]: 2026-03-10 10:23:48.459539245 +0000 UTC m=+0.016388019 container create fb166d9a03ba6320b4a776e04281e631b10144bd1d3459c6950753a8d1515686 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate, org.label-schema.build-date=20260223, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T10:23:48.631 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local podman[112735]: 2026-03-10 10:23:48.495756674 +0000 UTC m=+0.052605449 container init fb166d9a03ba6320b4a776e04281e631b10144bd1d3459c6950753a8d1515686 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20260223)
2026-03-10T10:23:48.631 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local podman[112735]: 2026-03-10 10:23:48.49864428 +0000 UTC m=+0.055493065 container start fb166d9a03ba6320b4a776e04281e631b10144bd1d3459c6950753a8d1515686 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T10:23:48.631 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local podman[112735]: 2026-03-10 10:23:48.499544195 +0000 UTC m=+0.056392980 container attach fb166d9a03ba6320b4a776e04281e631b10144bd1d3459c6950753a8d1515686 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
2026-03-10T10:23:48.631 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local podman[112735]: 2026-03-10 10:23:48.452881641 +0000 UTC m=+0.009730436 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T10:23:48.631 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate[112747]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:48.631 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local bash[112735]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:48.631 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate[112747]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:48.631 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:48 vm05.local bash[112735]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:49.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:48 vm02.local ceph-mon[110129]: Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T10:23:49.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:48 vm02.local ceph-mon[110129]: osdmap e63: 6 total, 5 up, 6 in
2026-03-10T10:23:49.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:48 vm05.local ceph-mon[103593]: Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T10:23:49.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:48 vm05.local ceph-mon[103593]: osdmap e63: 6 total, 5 up, 6 in
2026-03-10T10:23:49.411 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate[112747]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T10:23:49.411 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local bash[112735]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T10:23:49.411 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate[112747]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:49.411 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local bash[112735]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:49.411 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate[112747]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:49.411 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local bash[112735]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T10:23:49.412 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate[112747]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
2026-03-10T10:23:49.412 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local bash[112735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
2026-03-10T10:23:49.412 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate[112747]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-374d491f-229c-4b78-ae96-8215a0ff7b1f/osd-block-70fa78db-d544-4037-a4e5-e2b601b924d7 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
2026-03-10T10:23:49.412 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local bash[112735]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-374d491f-229c-4b78-ae96-8215a0ff7b1f/osd-block-70fa78db-d544-4037-a4e5-e2b601b924d7 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
2026-03-10T10:23:49.412 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate[112747]: Running command: /usr/bin/ln -snf /dev/ceph-374d491f-229c-4b78-ae96-8215a0ff7b1f/osd-block-70fa78db-d544-4037-a4e5-e2b601b924d7 /var/lib/ceph/osd/ceph-3/block
2026-03-10T10:23:49.412 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local bash[112735]: Running command: /usr/bin/ln -snf /dev/ceph-374d491f-229c-4b78-ae96-8215a0ff7b1f/osd-block-70fa78db-d544-4037-a4e5-e2b601b924d7 /var/lib/ceph/osd/ceph-3/block
2026-03-10T10:23:49.743 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-mon[103593]: pgmap v118: 65 pgs: 4 peering, 15 stale+active+clean, 46 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 1023 B/s rd, 1 op/s
2026-03-10T10:23:49.744 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-mon[103593]: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-10T10:23:49.744 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-mon[103593]: osdmap e64: 6 total, 5 up, 6 in
2026-03-10T10:23:49.744 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate[112747]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local bash[112735]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate[112747]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local bash[112735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate[112747]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local bash[112735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate[112747]: --> ceph-volume lvm activate successful for osd ID: 3
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local bash[112735]: --> ceph-volume lvm activate successful for osd ID: 3
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local podman[112735]: 2026-03-10 10:23:49.441493728 +0000 UTC m=+0.998342513 container died fb166d9a03ba6320b4a776e04281e631b10144bd1d3459c6950753a8d1515686 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid)
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local podman[112735]: 2026-03-10 10:23:49.461189949 +0000 UTC m=+1.018038725 container remove fb166d9a03ba6320b4a776e04281e631b10144bd1d3459c6950753a8d1515686 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local podman[113002]: 2026-03-10 10:23:49.55344446 +0000 UTC m=+0.016517000 container create fe29904ecf52192debc50149842b405666ee59a003daedcb382192e10ec2f386 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local podman[113002]: 2026-03-10 10:23:49.593821315 +0000 UTC m=+0.056893846 container init fe29904ecf52192debc50149842b405666ee59a003daedcb382192e10ec2f386 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local podman[113002]: 2026-03-10 10:23:49.59654808 +0000 UTC m=+0.059620620 container start fe29904ecf52192debc50149842b405666ee59a003daedcb382192e10ec2f386 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local bash[113002]: fe29904ecf52192debc50149842b405666ee59a003daedcb382192e10ec2f386
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local podman[113002]: 2026-03-10 10:23:49.546390916 +0000 UTC m=+0.009463456 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local systemd[1]: Started Ceph osd.3 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d.
2026-03-10T10:23:49.744 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-osd[113016]: -- 192.168.123.105:0/2298218507 <== mon.1 v2:192.168.123.105:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55915f84a960 con 0x55915f9fa000 2026-03-10T10:23:50.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:49 vm02.local ceph-mon[110129]: pgmap v118: 65 pgs: 4 peering, 15 stale+active+clean, 46 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 1023 B/s rd, 1 op/s 2026-03-10T10:23:50.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:49 vm02.local ceph-mon[110129]: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-10T10:23:50.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:49 vm02.local ceph-mon[110129]: osdmap e64: 6 total, 5 up, 6 in 2026-03-10T10:23:50.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:50.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:50.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:50.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:50.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:50.537 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:50 vm05.local 
ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[113012]: 2026-03-10T10:23:50.227+0000 7fd144585740 -1 Falling back to public interface 2026-03-10T10:23:51.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:51.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:51.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:51 vm05.local ceph-mon[103593]: pgmap v120: 65 pgs: 11 peering, 12 stale+active+clean, 42 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s 2026-03-10T10:23:51.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:51.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:52.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:52.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:52.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:51 vm02.local ceph-mon[110129]: pgmap v120: 65 pgs: 11 peering, 12 stale+active+clean, 42 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s 2026-03-10T10:23:52.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:52.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:51 
vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T10:23:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:52 vm02.local ceph-mon[110129]: Upgrade: unsafe to stop osd(s) at this time (5 PGs are or would become offline) 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local 
ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T10:23:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:52 vm05.local ceph-mon[103593]: Upgrade: unsafe to stop osd(s) at this time (5 PGs are or would become offline) 2026-03-10T10:23:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:53 vm02.local ceph-mon[110129]: pgmap v121: 65 pgs: 14 active+undersized, 11 peering, 13 active+undersized+degraded, 27 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 42/228 objects degraded (18.421%) 2026-03-10T10:23:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:53 vm02.local ceph-mon[110129]: Health check failed: Degraded data redundancy: 42/228 objects degraded (18.421%), 13 pgs degraded (PG_DEGRADED) 2026-03-10T10:23:54.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:53 vm05.local ceph-mon[103593]: pgmap v121: 65 pgs: 14 active+undersized, 11 peering, 13 active+undersized+degraded, 27 active+clean; 209 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 42/228 objects degraded (18.421%) 2026-03-10T10:23:54.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:53 vm05.local ceph-mon[103593]: Health check failed: Degraded data redundancy: 42/228 objects degraded (18.421%), 13 pgs degraded (PG_DEGRADED) 2026-03-10T10:23:55.014 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:54 vm05.local ceph-mon[103593]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 4 pgs peering) 2026-03-10T10:23:55.014 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:54 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[113012]: 2026-03-10T10:23:54.656+0000 7fd144585740 -1 osd.3 0 read_superblock omap replica is missing. 
2026-03-10T10:23:55.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:54 vm02.local ceph-mon[110129]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 4 pgs peering) 2026-03-10T10:23:55.287 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:55 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[113012]: 2026-03-10T10:23:55.014+0000 7fd144585740 -1 osd.3 62 log_to_monitors true 2026-03-10T10:23:56.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:55 vm02.local ceph-mon[110129]: pgmap v122: 65 pgs: 20 active+undersized, 18 active+undersized+degraded, 27 active+clean; 209 MiB data, 2.3 GiB used, 118 GiB / 120 GiB avail; 54/228 objects degraded (23.684%) 2026-03-10T10:23:56.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:55 vm02.local ceph-mon[110129]: from='osd.3 [v2:192.168.123.105:6800/1933315126,v1:192.168.123.105:6801/1933315126]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T10:23:56.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:55 vm02.local ceph-mon[110129]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T10:23:56.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:55 vm05.local ceph-mon[103593]: pgmap v122: 65 pgs: 20 active+undersized, 18 active+undersized+degraded, 27 active+clean; 209 MiB data, 2.3 GiB used, 118 GiB / 120 GiB avail; 54/228 objects degraded (23.684%) 2026-03-10T10:23:56.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:55 vm05.local ceph-mon[103593]: from='osd.3 [v2:192.168.123.105:6800/1933315126,v1:192.168.123.105:6801/1933315126]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T10:23:56.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:55 vm05.local ceph-mon[103593]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T10:23:56.037 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:23:55 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[113012]: 2026-03-10T10:23:55.739+0000 7fd13c31f640 -1 osd.3 62 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T10:23:57.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:56 vm02.local ceph-mon[110129]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T10:23:57.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:56 vm02.local ceph-mon[110129]: osdmap e65: 6 total, 5 up, 6 in 2026-03-10T10:23:57.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:56 vm02.local ceph-mon[110129]: from='osd.3 [v2:192.168.123.105:6800/1933315126,v1:192.168.123.105:6801/1933315126]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:23:57.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:56 vm02.local ceph-mon[110129]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:23:57.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:56 vm05.local ceph-mon[103593]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T10:23:57.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:56 vm05.local ceph-mon[103593]: osdmap e65: 6 total, 5 up, 6 in 2026-03-10T10:23:57.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:56 vm05.local ceph-mon[103593]: from='osd.3 [v2:192.168.123.105:6800/1933315126,v1:192.168.123.105:6801/1933315126]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", 
"root=default"]}]: dispatch 2026-03-10T10:23:57.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:56 vm05.local ceph-mon[103593]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:23:58.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:57 vm02.local ceph-mon[110129]: pgmap v124: 65 pgs: 20 active+undersized, 18 active+undersized+degraded, 27 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 54/228 objects degraded (23.684%) 2026-03-10T10:23:58.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:57 vm02.local ceph-mon[110129]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T10:23:58.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:57 vm02.local ceph-mon[110129]: osd.3 [v2:192.168.123.105:6800/1933315126,v1:192.168.123.105:6801/1933315126] boot 2026-03-10T10:23:58.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:57 vm02.local ceph-mon[110129]: osdmap e66: 6 total, 6 up, 6 in 2026-03-10T10:23:58.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:57 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:23:58.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:57 vm02.local ceph-mon[110129]: osdmap e67: 6 total, 6 up, 6 in 2026-03-10T10:23:58.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:57 vm05.local ceph-mon[103593]: pgmap v124: 65 pgs: 20 active+undersized, 18 active+undersized+degraded, 27 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 54/228 objects degraded (23.684%) 2026-03-10T10:23:58.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:57 vm05.local ceph-mon[103593]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T10:23:58.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:57 vm05.local 
ceph-mon[103593]: osd.3 [v2:192.168.123.105:6800/1933315126,v1:192.168.123.105:6801/1933315126] boot 2026-03-10T10:23:58.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:57 vm05.local ceph-mon[103593]: osdmap e66: 6 total, 6 up, 6 in 2026-03-10T10:23:58.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:57 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T10:23:58.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:57 vm05.local ceph-mon[103593]: osdmap e67: 6 total, 6 up, 6 in 2026-03-10T10:23:59.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:58 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 29/228 objects degraded (12.719%), 10 pgs degraded (PG_DEGRADED) 2026-03-10T10:23:59.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:58 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 29/228 objects degraded (12.719%), 10 pgs degraded (PG_DEGRADED) 2026-03-10T10:24:00.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:23:59 vm02.local ceph-mon[110129]: pgmap v127: 65 pgs: 15 peering, 10 active+undersized, 10 active+undersized+degraded, 30 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 29/228 objects degraded (12.719%) 2026-03-10T10:24:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:23:59 vm05.local ceph-mon[103593]: pgmap v127: 65 pgs: 15 peering, 10 active+undersized, 10 active+undersized+degraded, 30 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 29/228 objects degraded (12.719%) 2026-03-10T10:24:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:01 vm02.local ceph-mon[110129]: pgmap v128: 65 pgs: 15 peering, 7 active+undersized, 9 active+undersized+degraded, 34 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 26/228 objects 
degraded (11.404%) 2026-03-10T10:24:02.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:01 vm05.local ceph-mon[103593]: pgmap v128: 65 pgs: 15 peering, 7 active+undersized, 9 active+undersized+degraded, 34 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 26/228 objects degraded (11.404%) 2026-03-10T10:24:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:02 vm02.local ceph-mon[110129]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 26/228 objects degraded (11.404%), 9 pgs degraded) 2026-03-10T10:24:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:02 vm02.local ceph-mon[110129]: Cluster is now healthy 2026-03-10T10:24:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:02 vm05.local ceph-mon[103593]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 26/228 objects degraded (11.404%), 9 pgs degraded) 2026-03-10T10:24:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:02 vm05.local ceph-mon[103593]: Cluster is now healthy 2026-03-10T10:24:04.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:03 vm05.local ceph-mon[103593]: pgmap v129: 65 pgs: 65 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 889 B/s rd, 2 op/s 2026-03-10T10:24:04.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:03 vm02.local ceph-mon[110129]: pgmap v129: 65 pgs: 65 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 889 B/s rd, 2 op/s 2026-03-10T10:24:06.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:05 vm02.local ceph-mon[110129]: pgmap v130: 65 pgs: 65 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T10:24:06.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:05 vm05.local ceph-mon[103593]: pgmap v130: 65 pgs: 65 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T10:24:07.779 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:07 vm02.local ceph-mon[110129]: pgmap v131: 65 pgs: 65 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.1 KiB/s rd, 2 op/s 2026-03-10T10:24:07.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:07.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:24:07.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T10:24:07.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:07 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T10:24:07.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:07 vm02.local ceph-mon[110129]: Upgrade: osd.4 is safe to restart 2026-03-10T10:24:07.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:07.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T10:24:07.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:07.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:07 vm05.local ceph-mon[103593]: pgmap v131: 65 pgs: 65 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.1 KiB/s rd, 2 op/s 2026-03-10T10:24:07.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:07.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:24:07.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T10:24:07.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:07 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T10:24:07.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:07 vm05.local ceph-mon[103593]: Upgrade: osd.4 is safe to restart 2026-03-10T10:24:07.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:07.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T10:24:07.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:08.156 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:08 vm05.local systemd[1]: Stopping Ceph osd.4 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 
2026-03-10T10:24:08.537 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:08 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[70837]: 2026-03-10T10:24:08.224+0000 7f4f7c3f5700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T10:24:08.537 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:08 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[70837]: 2026-03-10T10:24:08.224+0000 7f4f7c3f5700 -1 osd.4 67 *** Got signal Terminated *** 2026-03-10T10:24:08.537 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:08 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[70837]: 2026-03-10T10:24:08.224+0000 7f4f7c3f5700 -1 osd.4 67 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T10:24:08.786 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.786+0000 7f0f9bac4700 1 -- 192.168.123.102:0/4132566107 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f94101ab0 msgr2=0x7f0f94103ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:08.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.786+0000 7f0f9bac4700 1 --2- 192.168.123.102:0/4132566107 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f94101ab0 0x7f0f94103ea0 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f0f90009b00 tx=0x7f0f90009e10 comp rx=0 tx=0).stop 2026-03-10T10:24:08.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.786+0000 7f0f9bac4700 1 -- 192.168.123.102:0/4132566107 shutdown_connections 2026-03-10T10:24:08.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.786+0000 7f0f9bac4700 1 --2- 192.168.123.102:0/4132566107 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0f941043e0 0x7f0f941067d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T10:24:08.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.786+0000 7f0f9bac4700 1 --2- 192.168.123.102:0/4132566107 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f94101ab0 0x7f0f94103ea0 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:08.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.786+0000 7f0f9bac4700 1 -- 192.168.123.102:0/4132566107 >> 192.168.123.102:0/4132566107 conn(0x7f0f940fb3c0 msgr2=0x7f0f940fd840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:08.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.786+0000 7f0f9bac4700 1 -- 192.168.123.102:0/4132566107 shutdown_connections 2026-03-10T10:24:08.790 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.786+0000 7f0f9bac4700 1 -- 192.168.123.102:0/4132566107 wait complete. 2026-03-10T10:24:08.791 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.788+0000 7f0f9bac4700 1 Processor -- start 2026-03-10T10:24:08.793 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.793+0000 7f0f9bac4700 1 -- start start 2026-03-10T10:24:08.793 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.793+0000 7f0f9bac4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0f94101ab0 0x7f0f94071cb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:08.793 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.793+0000 7f0f9bac4700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f941043e0 0x7f0f940721f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:08.793 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.793+0000 7f0f9bac4700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0f94072760 con 0x7f0f941043e0 2026-03-10T10:24:08.793 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.793+0000 7f0f9bac4700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0f940728d0 con 0x7f0f94101ab0 2026-03-10T10:24:08.793 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.793+0000 7f0f9905f700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f941043e0 0x7f0f940721f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:08.793 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.793+0000 7f0f99860700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0f94101ab0 0x7f0f94071cb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:08.794 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.793+0000 7f0f99860700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0f94101ab0 0x7f0f94071cb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:54248/0 (socket says 192.168.123.102:54248) 2026-03-10T10:24:08.794 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.793+0000 7f0f99860700 1 -- 192.168.123.102:0/1084894120 learned_addr learned my addr 192.168.123.102:0/1084894120 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:08.794 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.793+0000 7f0f9905f700 1 --2- 192.168.123.102:0/1084894120 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f941043e0 0x7f0f940721f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:35866/0 (socket says 192.168.123.102:35866) 
2026-03-10T10:24:08.794 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.794+0000 7f0f9905f700 1 -- 192.168.123.102:0/1084894120 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0f94101ab0 msgr2=0x7f0f94071cb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:08.794 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.794+0000 7f0f9905f700 1 --2- 192.168.123.102:0/1084894120 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0f94101ab0 0x7f0f94071cb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:08.794 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.794+0000 7f0f9905f700 1 -- 192.168.123.102:0/1084894120 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0f900097e0 con 0x7f0f941043e0 2026-03-10T10:24:08.794 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.794+0000 7f0f9905f700 1 --2- 192.168.123.102:0/1084894120 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f941043e0 0x7f0f940721f0 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f0f8400cc60 tx=0x7f0f8400cf70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:08.795 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.794+0000 7f0f8affd700 1 -- 192.168.123.102:0/1084894120 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0f840041e0 con 0x7f0f941043e0 2026-03-10T10:24:08.795 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.795+0000 7f0f9bac4700 1 -- 192.168.123.102:0/1084894120 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0f9419e1c0 con 0x7f0f941043e0 2026-03-10T10:24:08.795 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.795+0000 7f0f9bac4700 1 -- 192.168.123.102:0/1084894120 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0f9419e710 con 0x7f0f941043e0 2026-03-10T10:24:08.795 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.795+0000 7f0f8affd700 1 -- 192.168.123.102:0/1084894120 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f0f84004d10 con 0x7f0f941043e0 2026-03-10T10:24:08.796 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.795+0000 7f0f8affd700 1 -- 192.168.123.102:0/1084894120 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0f84005020 con 0x7f0f941043e0 2026-03-10T10:24:08.796 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.796+0000 7f0f9bac4700 1 -- 192.168.123.102:0/1084894120 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0f78005320 con 0x7f0f941043e0 2026-03-10T10:24:08.797 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.797+0000 7f0f8affd700 1 -- 192.168.123.102:0/1084894120 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f0f84004340 con 0x7f0f941043e0 2026-03-10T10:24:08.797 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.797+0000 7f0f8affd700 1 --2- 192.168.123.102:0/1084894120 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0f800777d0 0x7f0f80079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:08.798 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.797+0000 7f0f8affd700 1 -- 192.168.123.102:0/1084894120 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(68..68 src has 1..68) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f0f84099730 con 0x7f0f941043e0 2026-03-10T10:24:08.799 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.799+0000 7f0f99860700 1 --2- 192.168.123.102:0/1084894120 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0f800777d0 0x7f0f80079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:08.799 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.799+0000 7f0f99860700 1 --2- 192.168.123.102:0/1084894120 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0f800777d0 0x7f0f80079c90 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f0f9000b5c0 tx=0x7f0f90005fb0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:08.800 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.800+0000 7f0f8affd700 1 -- 192.168.123.102:0/1084894120 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f0f84061f90 con 0x7f0f941043e0 2026-03-10T10:24:08.933 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:08 vm05.local ceph-mon[103593]: Upgrade: Updating osd.4 2026-03-10T10:24:08.933 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:08 vm05.local ceph-mon[103593]: Deploying daemon osd.4 on vm05 2026-03-10T10:24:08.933 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:08 vm05.local ceph-mon[103593]: osd.4 marked itself down and dead 2026-03-10T10:24:08.933 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.932+0000 7f0f9bac4700 1 -- 192.168.123.102:0/1084894120 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f0f78000bf0 con 0x7f0f800777d0 2026-03-10T10:24:08.933 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:08 vm05.local podman[117907]: 2026-03-10 10:24:08.756244896 +0000 UTC m=+0.546086268 container died c8a0a41b66543fe34be3bc82e36449beb63623e2466f99d090ead26c8681c0de 
(image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4, org.label-schema.name=CentOS Stream 8 Base Image, CEPH_POINT_RELEASE=-18.2.1, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.29.1, maintainer=Guillaume Abrioux , GIT_CLEAN=True, org.label-schema.build-date=20240222, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GIT_BRANCH=HEAD, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, RELEASE=HEAD) 2026-03-10T10:24:08.933 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:08 vm05.local podman[117907]: 2026-03-10 10:24:08.780859666 +0000 UTC m=+0.570701037 container remove c8a0a41b66543fe34be3bc82e36449beb63623e2466f99d090ead26c8681c0de (image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4, RELEASE=HEAD, io.buildah.version=1.29.1, org.label-schema.schema-version=1.0, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , org.label-schema.build-date=20240222, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_POINT_RELEASE=-18.2.1, org.label-schema.license=GPLv2, GIT_BRANCH=HEAD, GIT_CLEAN=True) 2026-03-10T10:24:08.933 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:08 vm05.local bash[117907]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4 2026-03-10T10:24:08.933 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:08 vm02.local ceph-mon[110129]: Upgrade: Updating osd.4 2026-03-10T10:24:08.933 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:08 vm02.local ceph-mon[110129]: Deploying daemon osd.4 on vm05 2026-03-10T10:24:08.933 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:08 vm02.local ceph-mon[110129]: osd.4 marked itself down and dead 
2026-03-10T10:24:08.935 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.935+0000 7f0f8affd700 1 -- 192.168.123.102:0/1084894120 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+401 (secure 0 0 0) 0x7f0f78000bf0 con 0x7f0f800777d0 2026-03-10T10:24:08.938 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.938+0000 7f0f88f39700 1 -- 192.168.123.102:0/1084894120 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0f800777d0 msgr2=0x7f0f80079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:08.938 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.938+0000 7f0f88f39700 1 --2- 192.168.123.102:0/1084894120 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0f800777d0 0x7f0f80079c90 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f0f9000b5c0 tx=0x7f0f90005fb0 comp rx=0 tx=0).stop 2026-03-10T10:24:08.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.938+0000 7f0f88f39700 1 -- 192.168.123.102:0/1084894120 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f941043e0 msgr2=0x7f0f940721f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:08.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.939+0000 7f0f88f39700 1 --2- 192.168.123.102:0/1084894120 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f941043e0 0x7f0f940721f0 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f0f8400cc60 tx=0x7f0f8400cf70 comp rx=0 tx=0).stop 2026-03-10T10:24:08.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.939+0000 7f0f88f39700 1 -- 192.168.123.102:0/1084894120 shutdown_connections 2026-03-10T10:24:08.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.939+0000 7f0f88f39700 1 --2- 192.168.123.102:0/1084894120 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] 
conn(0x7f0f800777d0 0x7f0f80079c90 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:08.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.939+0000 7f0f88f39700 1 --2- 192.168.123.102:0/1084894120 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0f94101ab0 0x7f0f94071cb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:08.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.939+0000 7f0f88f39700 1 --2- 192.168.123.102:0/1084894120 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0f941043e0 0x7f0f940721f0 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:08.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.939+0000 7f0f88f39700 1 -- 192.168.123.102:0/1084894120 >> 192.168.123.102:0/1084894120 conn(0x7f0f940fb3c0 msgr2=0x7f0f940fd840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:08.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.939+0000 7f0f88f39700 1 -- 192.168.123.102:0/1084894120 shutdown_connections 2026-03-10T10:24:08.939 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:08.939+0000 7f0f88f39700 1 -- 192.168.123.102:0/1084894120 wait complete. 
2026-03-10T10:24:08.957 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:24:09.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.019+0000 7f341884e700 1 -- 192.168.123.102:0/1566275475 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3414100ca0 msgr2=0x7f34141010c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.019+0000 7f341884e700 1 --2- 192.168.123.102:0/1566275475 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3414100ca0 0x7f34141010c0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f3404009b00 tx=0x7f3404009e10 comp rx=0 tx=0).stop 2026-03-10T10:24:09.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.020+0000 7f341884e700 1 -- 192.168.123.102:0/1566275475 shutdown_connections 2026-03-10T10:24:09.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.020+0000 7f341884e700 1 --2- 192.168.123.102:0/1566275475 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3414101ea0 0x7f3414102300 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.020+0000 7f341884e700 1 --2- 192.168.123.102:0/1566275475 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3414100ca0 0x7f34141010c0 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.020+0000 7f341884e700 1 -- 192.168.123.102:0/1566275475 >> 192.168.123.102:0/1566275475 conn(0x7f34140fc240 msgr2=0x7f34140fe680 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:09.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.020+0000 7f341884e700 1 -- 192.168.123.102:0/1566275475 shutdown_connections 2026-03-10T10:24:09.020 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.020+0000 7f341884e700 1 -- 192.168.123.102:0/1566275475 wait complete. 2026-03-10T10:24:09.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.020+0000 7f341884e700 1 Processor -- start 2026-03-10T10:24:09.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.020+0000 7f341884e700 1 -- start start 2026-03-10T10:24:09.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.020+0000 7f341884e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3414100ca0 0x7f3414071ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.020+0000 7f341884e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3414101ea0 0x7f34140721e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.020+0000 7f341884e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3414072800 con 0x7f3414101ea0 2026-03-10T10:24:09.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.020+0000 7f341884e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3414072940 con 0x7f3414100ca0 2026-03-10T10:24:09.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.021+0000 7f341259c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3414101ea0 0x7f34140721e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.021+0000 7f341259c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3414101ea0 0x7f34140721e0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:35890/0 (socket says 192.168.123.102:35890) 2026-03-10T10:24:09.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.021+0000 7f341259c700 1 -- 192.168.123.102:0/1383659065 learned_addr learned my addr 192.168.123.102:0/1383659065 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:09.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.021+0000 7f3412d9d700 1 --2- 192.168.123.102:0/1383659065 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3414100ca0 0x7f3414071ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.021+0000 7f341259c700 1 -- 192.168.123.102:0/1383659065 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3414100ca0 msgr2=0x7f3414071ca0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.021+0000 7f341259c700 1 --2- 192.168.123.102:0/1383659065 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3414100ca0 0x7f3414071ca0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.021+0000 7f341259c700 1 -- 192.168.123.102:0/1383659065 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f34040097e0 con 0x7f3414101ea0 2026-03-10T10:24:09.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.022+0000 7f3412d9d700 1 --2- 192.168.123.102:0/1383659065 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3414100ca0 0x7f3414071ca0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-10T10:24:09.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.022+0000 7f341259c700 1 --2- 192.168.123.102:0/1383659065 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3414101ea0 0x7f34140721e0 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f340800b700 tx=0x7f340800bac0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:09.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.023+0000 7f3403fff700 1 -- 192.168.123.102:0/1383659065 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3408010820 con 0x7f3414101ea0 2026-03-10T10:24:09.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.023+0000 7f341884e700 1 -- 192.168.123.102:0/1383659065 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f341410c5b0 con 0x7f3414101ea0 2026-03-10T10:24:09.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.023+0000 7f341884e700 1 -- 192.168.123.102:0/1383659065 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f341410cb00 con 0x7f3414101ea0 2026-03-10T10:24:09.024 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.023+0000 7f3403fff700 1 -- 192.168.123.102:0/1383659065 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3408010e60 con 0x7f3414101ea0 2026-03-10T10:24:09.024 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.023+0000 7f341884e700 1 -- 192.168.123.102:0/1383659065 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f341404ea90 con 0x7f3414101ea0 2026-03-10T10:24:09.024 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.024+0000 7f3403fff700 1 -- 192.168.123.102:0/1383659065 <== mon.0 
v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3408017570 con 0x7f3414101ea0 2026-03-10T10:24:09.025 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.025+0000 7f3403fff700 1 -- 192.168.123.102:0/1383659065 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f340800f3c0 con 0x7f3414101ea0 2026-03-10T10:24:09.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.025+0000 7f3403fff700 1 --2- 192.168.123.102:0/1383659065 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f33fc0779e0 0x7f33fc079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.026+0000 7f3412d9d700 1 --2- 192.168.123.102:0/1383659065 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f33fc0779e0 0x7f33fc079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.026+0000 7f3412d9d700 1 --2- 192.168.123.102:0/1383659065 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f33fc0779e0 0x7f33fc079ea0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f340400b5c0 tx=0x7f3404005c60 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:09.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.026+0000 7f3403fff700 1 -- 192.168.123.102:0/1383659065 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(68..68 src has 1..68) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f3408099750 con 0x7f3414101ea0 2026-03-10T10:24:09.028 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.028+0000 7f3403fff700 1 -- 192.168.123.102:0/1383659065 <== mon.0 v2:192.168.123.102:3300/0 6 
==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3408061fb0 con 0x7f3414101ea0 2026-03-10T10:24:09.153 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.152+0000 7f341884e700 1 -- 192.168.123.102:0/1383659065 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f34141067f0 con 0x7f33fc0779e0 2026-03-10T10:24:09.154 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.154+0000 7f3403fff700 1 -- 192.168.123.102:0/1383659065 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+401 (secure 0 0 0) 0x7f34141067f0 con 0x7f33fc0779e0 2026-03-10T10:24:09.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.160+0000 7f341884e700 1 -- 192.168.123.102:0/1383659065 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f33fc0779e0 msgr2=0x7f33fc079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.160+0000 7f341884e700 1 --2- 192.168.123.102:0/1383659065 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f33fc0779e0 0x7f33fc079ea0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f340400b5c0 tx=0x7f3404005c60 comp rx=0 tx=0).stop 2026-03-10T10:24:09.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.160+0000 7f341884e700 1 -- 192.168.123.102:0/1383659065 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3414101ea0 msgr2=0x7f34140721e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.161+0000 7f341884e700 1 --2- 192.168.123.102:0/1383659065 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3414101ea0 0x7f34140721e0 secure :-1 s=READY 
pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f340800b700 tx=0x7f340800bac0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.161+0000 7f341884e700 1 -- 192.168.123.102:0/1383659065 shutdown_connections 2026-03-10T10:24:09.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.161+0000 7f341884e700 1 --2- 192.168.123.102:0/1383659065 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f33fc0779e0 0x7f33fc079ea0 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.161+0000 7f341884e700 1 --2- 192.168.123.102:0/1383659065 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3414100ca0 0x7f3414071ca0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.161+0000 7f341884e700 1 --2- 192.168.123.102:0/1383659065 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3414101ea0 0x7f34140721e0 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.161+0000 7f341884e700 1 -- 192.168.123.102:0/1383659065 >> 192.168.123.102:0/1383659065 conn(0x7f34140fc240 msgr2=0x7f34141050d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:09.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.161+0000 7f341884e700 1 -- 192.168.123.102:0/1383659065 shutdown_connections 2026-03-10T10:24:09.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.162+0000 7f341884e700 1 -- 192.168.123.102:0/1383659065 wait complete. 
2026-03-10T10:24:09.231 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:08 vm05.local podman[117974]: 2026-03-10 10:24:08.933002728 +0000 UTC m=+0.015856075 container create df84d89f8b8efd682012c229a376472c722de3cb953a5862840b00e37bba8f7f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-deactivate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3) 2026-03-10T10:24:09.231 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:08 vm05.local podman[117974]: 2026-03-10 10:24:08.975226702 +0000 UTC m=+0.058080058 container init df84d89f8b8efd682012c229a376472c722de3cb953a5862840b00e37bba8f7f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, 
org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, ceph=True) 2026-03-10T10:24:09.231 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:08 vm05.local podman[117974]: 2026-03-10 10:24:08.978946475 +0000 UTC m=+0.061799831 container start df84d89f8b8efd682012c229a376472c722de3cb953a5862840b00e37bba8f7f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, ceph=True) 2026-03-10T10:24:09.231 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:08 vm05.local podman[117974]: 2026-03-10 10:24:08.98191334 +0000 UTC m=+0.064766696 container attach df84d89f8b8efd682012c229a376472c722de3cb953a5862840b00e37bba8f7f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-deactivate, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0) 2026-03-10T10:24:09.231 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local podman[117974]: 2026-03-10 10:24:08.926503432 +0000 UTC m=+0.009356797 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:24:09.231 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local podman[117993]: 2026-03-10 10:24:09.121159178 +0000 UTC m=+0.011681228 container died df84d89f8b8efd682012c229a376472c722de3cb953a5862840b00e37bba8f7f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-deactivate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default) 2026-03-10T10:24:09.231 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local podman[117993]: 2026-03-10 10:24:09.136504044 +0000 UTC m=+0.027026104 container remove df84d89f8b8efd682012c229a376472c722de3cb953a5862840b00e37bba8f7f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-deactivate, 
OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T10:24:09.231 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.4.service: Deactivated successfully. 2026-03-10T10:24:09.231 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local systemd[1]: Stopped Ceph osd.4 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 2026-03-10T10:24:09.231 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.4.service: Consumed 50.111s CPU time. 
2026-03-10T10:24:09.233 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.230+0000 7f838d72e700 1 -- 192.168.123.102:0/978376675 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f83880737f0 msgr2=0x7f8388073c70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.233 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.230+0000 7f838d72e700 1 --2- 192.168.123.102:0/978376675 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f83880737f0 0x7f8388073c70 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f837c009b50 tx=0x7f837c009e60 comp rx=0 tx=0).stop 2026-03-10T10:24:09.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.234+0000 7f838d72e700 1 -- 192.168.123.102:0/978376675 shutdown_connections 2026-03-10T10:24:09.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.234+0000 7f838d72e700 1 --2- 192.168.123.102:0/978376675 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f83880737f0 0x7f8388073c70 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.234+0000 7f838d72e700 1 --2- 192.168.123.102:0/978376675 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8388074dc0 0x7f8388073220 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.234+0000 7f838d72e700 1 -- 192.168.123.102:0/978376675 >> 192.168.123.102:0/978376675 conn(0x7f83880fc240 msgr2=0x7f83880fe6a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:09.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.234+0000 7f838d72e700 1 -- 192.168.123.102:0/978376675 shutdown_connections 2026-03-10T10:24:09.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.234+0000 7f838d72e700 1 -- 192.168.123.102:0/978376675 wait 
complete. 2026-03-10T10:24:09.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.235+0000 7f838d72e700 1 Processor -- start 2026-03-10T10:24:09.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.235+0000 7f838d72e700 1 -- start start 2026-03-10T10:24:09.236 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.236+0000 7f838d72e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83880737f0 0x7f838819cc50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.236 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.236+0000 7f838d72e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8388074dc0 0x7f838819d190 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.236 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.236+0000 7f838d72e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f838819d7b0 con 0x7f8388074dc0 2026-03-10T10:24:09.237 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.236+0000 7f8386ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83880737f0 0x7f838819cc50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.237 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.236+0000 7f8386ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83880737f0 0x7f838819cc50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:54276/0 (socket says 192.168.123.102:54276) 2026-03-10T10:24:09.237 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.236+0000 7f8386ffd700 1 -- 192.168.123.102:0/2069001078 learned_addr learned my 
addr 192.168.123.102:0/2069001078 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:09.237 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.237+0000 7f83867fc700 1 --2- 192.168.123.102:0/2069001078 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8388074dc0 0x7f838819d190 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.237 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.237+0000 7f838d72e700 1 -- 192.168.123.102:0/2069001078 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f83881a21c0 con 0x7f83880737f0 2026-03-10T10:24:09.238 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.238+0000 7f8386ffd700 1 -- 192.168.123.102:0/2069001078 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8388074dc0 msgr2=0x7f838819d190 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.238 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.238+0000 7f8386ffd700 1 --2- 192.168.123.102:0/2069001078 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8388074dc0 0x7f838819d190 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.238 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.238+0000 7f8386ffd700 1 -- 192.168.123.102:0/2069001078 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f837c0097e0 con 0x7f83880737f0 2026-03-10T10:24:09.238 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.238+0000 7f8386ffd700 1 --2- 192.168.123.102:0/2069001078 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83880737f0 0x7f838819cc50 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f837800b700 tx=0x7f837800bac0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 
server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:09.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.239+0000 7f836ffff700 1 -- 192.168.123.102:0/2069001078 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8378010820 con 0x7f83880737f0 2026-03-10T10:24:09.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.239+0000 7f838d72e700 1 -- 192.168.123.102:0/2069001078 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f83881a23c0 con 0x7f83880737f0 2026-03-10T10:24:09.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.239+0000 7f838d72e700 1 -- 192.168.123.102:0/2069001078 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f83881a28e0 con 0x7f83880737f0 2026-03-10T10:24:09.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.240+0000 7f836ffff700 1 -- 192.168.123.102:0/2069001078 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8378010e60 con 0x7f83880737f0 2026-03-10T10:24:09.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.240+0000 7f838d72e700 1 -- 192.168.123.102:0/2069001078 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f838804ea90 con 0x7f83880737f0 2026-03-10T10:24:09.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.241+0000 7f836ffff700 1 -- 192.168.123.102:0/2069001078 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8378017570 con 0x7f83880737f0 2026-03-10T10:24:09.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.242+0000 7f836ffff700 1 -- 192.168.123.102:0/2069001078 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f8378017790 con 0x7f83880737f0 2026-03-10T10:24:09.242 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.242+0000 7f836ffff700 1 --2- 192.168.123.102:0/2069001078 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8370077920 0x7f8370079de0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.242+0000 7f836ffff700 1 -- 192.168.123.102:0/2069001078 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(68..68 src has 1..68) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f837809a900 con 0x7f83880737f0 2026-03-10T10:24:09.243 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.242+0000 7f83867fc700 1 --2- 192.168.123.102:0/2069001078 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8370077920 0x7f8370079de0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.243 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.243+0000 7f83867fc700 1 --2- 192.168.123.102:0/2069001078 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8370077920 0x7f8370079de0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f837c005310 tx=0x7f837c00b560 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:09.244 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.243+0000 7f836ffff700 1 -- 192.168.123.102:0/2069001078 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f83780630e0 con 0x7f83880737f0 2026-03-10T10:24:09.367 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.367+0000 7f838d72e700 1 -- 192.168.123.102:0/2069001078 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch ps", 
"target": ["mon-mgr", ""]}) v1 -- 0x7f83881a2b90 con 0x7f8370077920 2026-03-10T10:24:09.372 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.372+0000 7f836ffff700 1 -- 192.168.123.102:0/2069001078 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3552 (secure 0 0 0) 0x7f83881a2b90 con 0x7f8370077920 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (8m) 40s ago 9m 23.3M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (9m) 40s ago 9m 9433k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (8m) 18s ago 8m 11.3M - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (3m) 40s ago 9m 7838k - 19.2.3-678-ge911bdeb 654f31e6858e c494730ab019 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (3m) 18s ago 8m 7852k - 19.2.3-678-ge911bdeb 654f31e6858e 1dc17b49fee4 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (8m) 40s ago 9m 89.9M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (7m) 40s ago 7m 16.9M - 18.2.1 5be31c24972a e97c369450c8 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (7m) 40s ago 7m 173M - 18.2.1 5be31c24972a 56b76ae59bcb 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (7m) 18s ago 7m 17.1M - 18.2.1 5be31c24972a 02b882918ab0 2026-03-10T10:24:09.373 
INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (7m) 18s ago 7m 95.6M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (4m) 40s ago 9m 619M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (3m) 18s ago 8m 489M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (3m) 40s ago 10m 61.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a2a2cb182f4 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (3m) 18s ago 8m 51.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 3fb75dafefb6 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (9m) 40s ago 9m 16.4M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (8m) 18s ago 8m 15.5M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (2m) 40s ago 8m 228M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 319155aac718 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (63s) 40s ago 8m 119M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 6b6be7f62bd3 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (41s) 40s ago 8m 13.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 745b9931485f 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (19s) 18s ago 7m 30.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e fe29904ecf52 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (7m) 18s ago 7m 486M 4096M 18.2.1 5be31c24972a c8a0a41b6654 2026-03-10T10:24:09.373 
INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (7m) 18s ago 7m 400M 4096M 18.2.1 5be31c24972a e9be055e12ba 2026-03-10T10:24:09.373 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (3m) 40s ago 9m 64.2M - 2.43.0 a07b618ecd1d 5ebb885bd417 2026-03-10T10:24:09.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.376+0000 7f836dffb700 1 -- 192.168.123.102:0/2069001078 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8370077920 msgr2=0x7f8370079de0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.376+0000 7f836dffb700 1 --2- 192.168.123.102:0/2069001078 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8370077920 0x7f8370079de0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f837c005310 tx=0x7f837c00b560 comp rx=0 tx=0).stop 2026-03-10T10:24:09.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.376+0000 7f836dffb700 1 -- 192.168.123.102:0/2069001078 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83880737f0 msgr2=0x7f838819cc50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.376+0000 7f836dffb700 1 --2- 192.168.123.102:0/2069001078 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83880737f0 0x7f838819cc50 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f837800b700 tx=0x7f837800bac0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.376+0000 7f836dffb700 1 -- 192.168.123.102:0/2069001078 shutdown_connections 2026-03-10T10:24:09.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.376+0000 7f836dffb700 1 --2- 192.168.123.102:0/2069001078 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8370077920 
0x7f8370079de0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.376+0000 7f836dffb700 1 --2- 192.168.123.102:0/2069001078 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83880737f0 0x7f838819cc50 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.376+0000 7f836dffb700 1 --2- 192.168.123.102:0/2069001078 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8388074dc0 0x7f838819d190 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.376+0000 7f836dffb700 1 -- 192.168.123.102:0/2069001078 >> 192.168.123.102:0/2069001078 conn(0x7f83880fc240 msgr2=0x7f8388102650 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:09.378 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.378+0000 7f836dffb700 1 -- 192.168.123.102:0/2069001078 shutdown_connections 2026-03-10T10:24:09.378 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.378+0000 7f836dffb700 1 -- 192.168.123.102:0/2069001078 wait complete. 
2026-03-10T10:24:09.449 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.448+0000 7f4659fe0700 1 -- 192.168.123.102:0/1220633294 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4654107d90 msgr2=0x7f465410a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.449 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.448+0000 7f4659fe0700 1 --2- 192.168.123.102:0/1220633294 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4654107d90 0x7f465410a1c0 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f464401acd0 tx=0x7f464401c270 comp rx=0 tx=0).stop 2026-03-10T10:24:09.449 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.449+0000 7f4659fe0700 1 -- 192.168.123.102:0/1220633294 shutdown_connections 2026-03-10T10:24:09.449 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.449+0000 7f4659fe0700 1 --2- 192.168.123.102:0/1220633294 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f465410a700 0x7f465410cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.449 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.449+0000 7f4659fe0700 1 --2- 192.168.123.102:0/1220633294 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4654107d90 0x7f465410a1c0 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.449 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.449+0000 7f4659fe0700 1 -- 192.168.123.102:0/1220633294 >> 192.168.123.102:0/1220633294 conn(0x7f465406daa0 msgr2=0x7f465406ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:09.449 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.449+0000 7f4659fe0700 1 -- 192.168.123.102:0/1220633294 shutdown_connections 2026-03-10T10:24:09.449 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.449+0000 7f4659fe0700 1 -- 192.168.123.102:0/1220633294 
wait complete. 2026-03-10T10:24:09.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4659fe0700 1 Processor -- start 2026-03-10T10:24:09.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4659fe0700 1 -- start start 2026-03-10T10:24:09.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4659fe0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f465410a700 0x7f465419cb20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4659fe0700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f465419d060 0x7f46541a20d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4659fe0700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f465419d570 con 0x7f465419d060 2026-03-10T10:24:09.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4659fe0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f465419d6e0 con 0x7f465410a700 2026-03-10T10:24:09.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4653fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f465419d060 0x7f46541a20d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.452 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4653fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f465419d060 0x7f46541a20d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 
says I am v2:192.168.123.102:35932/0 (socket says 192.168.123.102:35932) 2026-03-10T10:24:09.452 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4653fff700 1 -- 192.168.123.102:0/1372777640 learned_addr learned my addr 192.168.123.102:0/1372777640 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:09.452 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4653fff700 1 -- 192.168.123.102:0/1372777640 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f465410a700 msgr2=0x7f465419cb20 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:24:09.452 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4653fff700 1 --2- 192.168.123.102:0/1372777640 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f465410a700 0x7f465419cb20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.452 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4653fff700 1 -- 192.168.123.102:0/1372777640 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f464401a720 con 0x7f465419d060 2026-03-10T10:24:09.452 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.450+0000 7f4653fff700 1 --2- 192.168.123.102:0/1372777640 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f465419d060 0x7f46541a20d0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f464c00e530 tx=0x7f464c00e8f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:09.452 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.452+0000 7f4651ffb700 1 -- 192.168.123.102:0/1372777640 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f464c0090d0 con 0x7f465419d060 2026-03-10T10:24:09.452 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.452+0000 7f4659fe0700 1 -- 
192.168.123.102:0/1372777640 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f46541a2610 con 0x7f465419d060 2026-03-10T10:24:09.453 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.452+0000 7f4659fe0700 1 -- 192.168.123.102:0/1372777640 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f46541a2b60 con 0x7f465419d060 2026-03-10T10:24:09.453 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.453+0000 7f4651ffb700 1 -- 192.168.123.102:0/1372777640 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f464c00f040 con 0x7f465419d060 2026-03-10T10:24:09.453 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.453+0000 7f4651ffb700 1 -- 192.168.123.102:0/1372777640 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f464c014790 con 0x7f465419d060 2026-03-10T10:24:09.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.454+0000 7f4651ffb700 1 -- 192.168.123.102:0/1372777640 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f464c0148f0 con 0x7f465419d060 2026-03-10T10:24:09.455 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.455+0000 7f4651ffb700 1 --2- 192.168.123.102:0/1372777640 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f463c0779e0 0x7f463c079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.455 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.455+0000 7f463b7fe700 1 -- 192.168.123.102:0/1372777640 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4640005320 con 0x7f465419d060 2026-03-10T10:24:09.455 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.455+0000 7f4658fde700 1 --2- 
192.168.123.102:0/1372777640 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f463c0779e0 0x7f463c079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.455+0000 7f4658fde700 1 --2- 192.168.123.102:0/1372777640 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f463c0779e0 0x7f463c079ea0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f4644000c00 tx=0x7f464401f3f0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:09.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.456+0000 7f4651ffb700 1 -- 192.168.123.102:0/1372777640 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(68..68 src has 1..68) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f464c09adf0 con 0x7f465419d060 2026-03-10T10:24:09.464 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.463+0000 7f4651ffb700 1 -- 192.168.123.102:0/1372777640 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f464c063650 con 0x7f465419d060 2026-03-10T10:24:09.635 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.634+0000 7f463b7fe700 1 -- 192.168.123.102:0/1372777640 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f4640006160 con 0x7f465419d060 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.635+0000 7f4651ffb700 1 -- 192.168.123.102:0/1372777640 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+799 (secure 0 0 0) 0x7f464c062da0 con 0x7f465419d060 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:24:09.636 
INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 1, 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 4 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 5, 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:24:09.636 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:24:09.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.638+0000 7f463b7fe700 1 -- 
192.168.123.102:0/1372777640 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f463c0779e0 msgr2=0x7f463c079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.638+0000 7f463b7fe700 1 --2- 192.168.123.102:0/1372777640 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f463c0779e0 0x7f463c079ea0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f4644000c00 tx=0x7f464401f3f0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.639+0000 7f463b7fe700 1 -- 192.168.123.102:0/1372777640 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f465419d060 msgr2=0x7f46541a20d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.639+0000 7f463b7fe700 1 --2- 192.168.123.102:0/1372777640 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f465419d060 0x7f46541a20d0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f464c00e530 tx=0x7f464c00e8f0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.639+0000 7f463b7fe700 1 -- 192.168.123.102:0/1372777640 shutdown_connections 2026-03-10T10:24:09.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.639+0000 7f463b7fe700 1 --2- 192.168.123.102:0/1372777640 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f463c0779e0 0x7f463c079ea0 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.639+0000 7f463b7fe700 1 --2- 192.168.123.102:0/1372777640 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f465410a700 0x7f465419cb20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-10T10:24:09.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.639+0000 7f463b7fe700 1 --2- 192.168.123.102:0/1372777640 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f465419d060 0x7f46541a20d0 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.639+0000 7f463b7fe700 1 -- 192.168.123.102:0/1372777640 >> 192.168.123.102:0/1372777640 conn(0x7f465406daa0 msgr2=0x7f4654109fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:09.639 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.639+0000 7f463b7fe700 1 -- 192.168.123.102:0/1372777640 shutdown_connections 2026-03-10T10:24:09.640 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.639+0000 7f463b7fe700 1 -- 192.168.123.102:0/1372777640 wait complete. 2026-03-10T10:24:09.664 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local systemd[1]: Starting Ceph osd.4 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 
2026-03-10T10:24:09.664 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local podman[118077]: 2026-03-10 10:24:09.419156561 +0000 UTC m=+0.016208755 container create 65402a5e8b7d65eb7b1cf47dec5ee611dc7a46e6091e8400d919cae63306a88d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T10:24:09.664 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local podman[118077]: 2026-03-10 10:24:09.466803356 +0000 UTC m=+0.063855540 container init 65402a5e8b7d65eb7b1cf47dec5ee611dc7a46e6091e8400d919cae63306a88d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, 
org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.schema-version=1.0) 2026-03-10T10:24:09.664 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local podman[118077]: 2026-03-10 10:24:09.469618375 +0000 UTC m=+0.066670569 container start 65402a5e8b7d65eb7b1cf47dec5ee611dc7a46e6091e8400d919cae63306a88d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T10:24:09.664 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local podman[118077]: 2026-03-10 10:24:09.471602289 +0000 UTC m=+0.068654483 container attach 65402a5e8b7d65eb7b1cf47dec5ee611dc7a46e6091e8400d919cae63306a88d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, 
org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T10:24:09.664 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local podman[118077]: 2026-03-10 10:24:09.412584097 +0000 UTC m=+0.009636291 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:24:09.664 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate[118088]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:09.664 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local bash[118077]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:09.664 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate[118088]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:09.664 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local bash[118077]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:09.665 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:09 vm05.local ceph-mon[103593]: pgmap v132: 65 pgs: 65 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T10:24:09.665 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:09 vm05.local ceph-mon[103593]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T10:24:09.665 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:09 vm05.local ceph-mon[103593]: osdmap e68: 6 total, 5 up, 6 in 2026-03-10T10:24:09.665 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:09 vm05.local ceph-mon[103593]: from='client.34292 -' entity='client.admin' cmd=[{"prefix": "orch upgrade 
status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:09.665 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:09 vm05.local ceph-mon[103593]: from='client.34296 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:09.665 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:09 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1372777640' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:09.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.729+0000 7f024108e700 1 -- 192.168.123.102:0/1879196281 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f023c075a10 msgr2=0x7f023c077ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.729+0000 7f024108e700 1 --2- 192.168.123.102:0/1879196281 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f023c075a10 0x7f023c077ea0 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f023400a390 tx=0x7f023400a6a0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f024108e700 1 -- 192.168.123.102:0/1879196281 shutdown_connections 2026-03-10T10:24:09.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f024108e700 1 --2- 192.168.123.102:0/1879196281 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f023c075a10 0x7f023c077ea0 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f024108e700 1 --2- 192.168.123.102:0/1879196281 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f023c072b20 0x7f023c072f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.730 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f024108e700 1 -- 192.168.123.102:0/1879196281 >> 192.168.123.102:0/1879196281 conn(0x7f023c06daa0 msgr2=0x7f023c06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:09.730 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f024108e700 1 -- 192.168.123.102:0/1879196281 shutdown_connections 2026-03-10T10:24:09.731 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f024108e700 1 -- 192.168.123.102:0/1879196281 wait complete. 2026-03-10T10:24:09.731 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f024108e700 1 Processor -- start 2026-03-10T10:24:09.731 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f024108e700 1 -- start start 2026-03-10T10:24:09.731 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f024108e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f023c072b20 0x7f023c0830e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.731 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f024108e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f023c083620 0x7f023c1b3170 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.731 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f024108e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f023c083b30 con 0x7f023c072b20 2026-03-10T10:24:09.731 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f024108e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f023c083ca0 con 0x7f023c083620 2026-03-10T10:24:09.731 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f023bfff700 1 --2- >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f023c072b20 0x7f023c0830e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.731 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f023b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f023c083620 0x7f023c1b3170 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.731 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f023b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f023c083620 0x7f023c1b3170 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:54288/0 (socket says 192.168.123.102:54288) 2026-03-10T10:24:09.731 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.730+0000 7f023b7fe700 1 -- 192.168.123.102:0/1180781022 learned_addr learned my addr 192.168.123.102:0/1180781022 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:09.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.732+0000 7f023b7fe700 1 -- 192.168.123.102:0/1180781022 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f023c072b20 msgr2=0x7f023c0830e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.732+0000 7f023b7fe700 1 --2- 192.168.123.102:0/1180781022 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f023c072b20 0x7f023c0830e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.732+0000 7f023b7fe700 1 -- 192.168.123.102:0/1180781022 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f023400a040 con 0x7f023c083620 2026-03-10T10:24:09.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.733+0000 7f023b7fe700 1 --2- 192.168.123.102:0/1180781022 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f023c083620 0x7f023c1b3170 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f0234009750 tx=0x7f0234004b30 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:09.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.733+0000 7f02397fa700 1 -- 192.168.123.102:0/1180781022 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f02340093e0 con 0x7f023c083620 2026-03-10T10:24:09.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.733+0000 7f024108e700 1 -- 192.168.123.102:0/1180781022 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f023c1b36b0 con 0x7f023c083620 2026-03-10T10:24:09.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.733+0000 7f024108e700 1 -- 192.168.123.102:0/1180781022 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f023c1b3c00 con 0x7f023c083620 2026-03-10T10:24:09.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.734+0000 7f02397fa700 1 -- 192.168.123.102:0/1180781022 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f023401c070 con 0x7f023c083620 2026-03-10T10:24:09.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.734+0000 7f02397fa700 1 -- 192.168.123.102:0/1180781022 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f02340074e0 con 0x7f023c083620 2026-03-10T10:24:09.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.734+0000 7f024108e700 1 -- 
192.168.123.102:0/1180781022 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f023c07d270 con 0x7f023c083620 2026-03-10T10:24:09.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.736+0000 7f02397fa700 1 -- 192.168.123.102:0/1180781022 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f023401e030 con 0x7f023c083620 2026-03-10T10:24:09.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.736+0000 7f02397fa700 1 --2- 192.168.123.102:0/1180781022 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f02240778c0 0x7f0224079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.737+0000 7f02397fa700 1 -- 192.168.123.102:0/1180781022 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(69..69 src has 1..69) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f023409e000 con 0x7f023c083620 2026-03-10T10:24:09.737 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:09 vm02.local ceph-mon[110129]: pgmap v132: 65 pgs: 65 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T10:24:09.737 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:09 vm02.local ceph-mon[110129]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T10:24:09.737 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:09 vm02.local ceph-mon[110129]: osdmap e68: 6 total, 5 up, 6 in 2026-03-10T10:24:09.737 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:09 vm02.local ceph-mon[110129]: from='client.34292 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:09.737 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:09 vm02.local ceph-mon[110129]: from='client.34296 -' entity='client.admin' cmd=[{"prefix": "orch 
upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:09.737 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:09 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1372777640' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:09.740 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.739+0000 7f02397fa700 1 -- 192.168.123.102:0/1180781022 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f0234066970 con 0x7f023c083620 2026-03-10T10:24:09.746 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.744+0000 7f023bfff700 1 --2- 192.168.123.102:0/1180781022 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f02240778c0 0x7f0224079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.747 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.746+0000 7f023bfff700 1 --2- 192.168.123.102:0/1180781022 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f02240778c0 0x7f0224079d80 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f022c005950 tx=0x7f022c00b770 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:09.880 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.880+0000 7f024108e700 1 -- 192.168.123.102:0/1180781022 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f023c04ea90 con 0x7f023c083620 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.880+0000 7f02397fa700 1 -- 192.168.123.102:0/1180781022 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 15 v15) v1 ==== 76+0+1945 (secure 0 0 0) 0x7f02340660c0 con 
0x7f023c083620 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:e15 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:epoch 15 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:17:02.433444+0000 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:max_xattr_size 65536 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 
2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 39 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14464} 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:24:09.881 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout:qdb_cluster leader: 0 members: 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:14464} state up:active seq 5 join_fscid=1 addr [v2:192.168.123.102:6826/658252295,v1:192.168.123.102:6827/658252295] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{0:14484} state up:standby-replay seq 2 join_fscid=1 addr 
[v2:192.168.123.105:6826/4269439469,v1:192.168.123.105:6827/4269439469] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:14494} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:24:09.882 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:24:09.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.884+0000 7f024108e700 1 -- 192.168.123.102:0/1180781022 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f02240778c0 msgr2=0x7f0224079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.884+0000 7f024108e700 1 --2- 192.168.123.102:0/1180781022 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f02240778c0 0x7f0224079d80 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f022c005950 tx=0x7f022c00b770 comp rx=0 tx=0).stop 2026-03-10T10:24:09.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.884+0000 7f024108e700 1 -- 192.168.123.102:0/1180781022 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f023c083620 msgr2=0x7f023c1b3170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.884+0000 7f024108e700 1 --2- 192.168.123.102:0/1180781022 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f023c083620 0x7f023c1b3170 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f0234009750 tx=0x7f0234004b30 comp rx=0 tx=0).stop 2026-03-10T10:24:09.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.884+0000 7f024108e700 1 -- 192.168.123.102:0/1180781022 shutdown_connections 2026-03-10T10:24:09.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.884+0000 7f024108e700 1 --2- 192.168.123.102:0/1180781022 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f023c072b20 0x7f023c0830e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.884+0000 7f024108e700 1 --2- 192.168.123.102:0/1180781022 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f02240778c0 0x7f0224079d80 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.884+0000 7f024108e700 1 --2- 192.168.123.102:0/1180781022 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f023c083620 0x7f023c1b3170 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.884+0000 7f024108e700 1 -- 192.168.123.102:0/1180781022 >> 192.168.123.102:0/1180781022 conn(0x7f023c06daa0 msgr2=0x7f023c06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:09.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.884+0000 7f024108e700 1 -- 192.168.123.102:0/1180781022 shutdown_connections 2026-03-10T10:24:09.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.884+0000 7f024108e700 1 -- 192.168.123.102:0/1180781022 wait complete. 
2026-03-10T10:24:09.885 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15 2026-03-10T10:24:09.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.949+0000 7fc02a369700 1 -- 192.168.123.102:0/1634838933 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc024104350 msgr2=0x7fc0241047b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.949+0000 7fc02a369700 1 --2- 192.168.123.102:0/1634838933 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc024104350 0x7fc0241047b0 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7fc014009b50 tx=0x7fc014009e60 comp rx=0 tx=0).stop 2026-03-10T10:24:09.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.950+0000 7fc02a369700 1 -- 192.168.123.102:0/1634838933 shutdown_connections 2026-03-10T10:24:09.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.950+0000 7fc02a369700 1 --2- 192.168.123.102:0/1634838933 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc024104350 0x7fc0241047b0 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.950+0000 7fc02a369700 1 --2- 192.168.123.102:0/1634838933 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc024103150 0x7fc024103570 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.950+0000 7fc02a369700 1 -- 192.168.123.102:0/1634838933 >> 192.168.123.102:0/1634838933 conn(0x7fc0240fe6d0 msgr2=0x7fc024100b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:09.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.950+0000 7fc02a369700 1 -- 192.168.123.102:0/1634838933 shutdown_connections 2026-03-10T10:24:09.950 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.950+0000 7fc02a369700 1 -- 192.168.123.102:0/1634838933 wait complete. 2026-03-10T10:24:09.950 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.950+0000 7fc02a369700 1 Processor -- start 2026-03-10T10:24:09.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.950+0000 7fc02a369700 1 -- start start 2026-03-10T10:24:09.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.951+0000 7fc02a369700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc024103150 0x7fc024198ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.951+0000 7fc02a369700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc024104350 0x7fc024199020 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.951+0000 7fc02a369700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc024199640 con 0x7fc024103150 2026-03-10T10:24:09.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.951+0000 7fc02a369700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc024199780 con 0x7fc024104350 2026-03-10T10:24:09.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.951+0000 7fc023fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc024103150 0x7fc024198ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.951+0000 7fc0237fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc024104350 0x7fc024199020 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.951+0000 7fc0237fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc024104350 0x7fc024199020 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:54300/0 (socket says 192.168.123.102:54300) 2026-03-10T10:24:09.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.951+0000 7fc0237fe700 1 -- 192.168.123.102:0/174796080 learned_addr learned my addr 192.168.123.102:0/174796080 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:09.952 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.952+0000 7fc0237fe700 1 -- 192.168.123.102:0/174796080 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc024103150 msgr2=0x7fc024198ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:09.952 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.952+0000 7fc0237fe700 1 --2- 192.168.123.102:0/174796080 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc024103150 0x7fc024198ae0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:09.952 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.952+0000 7fc0237fe700 1 -- 192.168.123.102:0/174796080 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc0140097e0 con 0x7fc024104350 2026-03-10T10:24:09.952 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.952+0000 7fc023fff700 1 --2- 192.168.123.102:0/174796080 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc024103150 0x7fc024198ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).handle_auth_done state changed! 2026-03-10T10:24:09.952 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.952+0000 7fc0237fe700 1 --2- 192.168.123.102:0/174796080 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc024104350 0x7fc024199020 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7fc014005270 tx=0x7fc014005710 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:09.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.952+0000 7fc0217fa700 1 -- 192.168.123.102:0/174796080 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc01401d070 con 0x7fc024104350 2026-03-10T10:24:09.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.953+0000 7fc02a369700 1 -- 192.168.123.102:0/174796080 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc02419e1d0 con 0x7fc024104350 2026-03-10T10:24:09.953 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.953+0000 7fc02a369700 1 -- 192.168.123.102:0/174796080 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc02419e6c0 con 0x7fc024104350 2026-03-10T10:24:09.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.953+0000 7fc0217fa700 1 -- 192.168.123.102:0/174796080 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc01400bc30 con 0x7fc024104350 2026-03-10T10:24:09.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.953+0000 7fc0217fa700 1 -- 192.168.123.102:0/174796080 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc0140217b0 con 0x7fc024104350 2026-03-10T10:24:09.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.954+0000 7fc0217fa700 1 -- 192.168.123.102:0/174796080 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 
(secure 0 0 0) 0x7fc01402b430 con 0x7fc024104350 2026-03-10T10:24:09.954 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.954+0000 7fc02a369700 1 -- 192.168.123.102:0/174796080 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc024066e80 con 0x7fc024104350 2026-03-10T10:24:09.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.955+0000 7fc0217fa700 1 --2- 192.168.123.102:0/174796080 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc00c077870 0x7fc00c079d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:09.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.955+0000 7fc0217fa700 1 -- 192.168.123.102:0/174796080 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(69..69 src has 1..69) v4 ==== 6222+0+0 (secure 0 0 0) 0x7fc01409af70 con 0x7fc024104350 2026-03-10T10:24:09.955 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.955+0000 7fc023fff700 1 --2- 192.168.123.102:0/174796080 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc00c077870 0x7fc00c079d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:09.956 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.955+0000 7fc023fff700 1 --2- 192.168.123.102:0/174796080 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc00c077870 0x7fc00c079d30 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fc018006fd0 tx=0x7fc018009380 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:09.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:09.958+0000 7fc0217fa700 1 -- 192.168.123.102:0/174796080 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc014063750 con 0x7fc024104350 2026-03-10T10:24:09.988 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate[118088]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T10:24:09.988 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate[118088]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:10.085 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.085+0000 7fc02a369700 1 -- 192.168.123.102:0/174796080 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fc024108ca0 con 0x7fc00c077870 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.086+0000 7fc0217fa700 1 -- 192.168.123.102:0/174796080 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+401 (secure 0 0 0) 0x7fc024108ca0 con 0x7fc00c077870 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [ 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stdout: "mgr", 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stdout: "mon", 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stdout: "crash" 2026-03-10T10:24:10.086 
INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "10/23 daemons upgraded", 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading osd daemons", 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:24:10.086 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:24:10.089 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.089+0000 7fc02a369700 1 -- 192.168.123.102:0/174796080 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc00c077870 msgr2=0x7fc00c079d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:10.089 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.089+0000 7fc02a369700 1 --2- 192.168.123.102:0/174796080 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc00c077870 0x7fc00c079d30 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fc018006fd0 tx=0x7fc018009380 comp rx=0 tx=0).stop 2026-03-10T10:24:10.089 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.089+0000 7fc02a369700 1 -- 192.168.123.102:0/174796080 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc024104350 msgr2=0x7fc024199020 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:10.089 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.089+0000 7fc02a369700 1 --2- 192.168.123.102:0/174796080 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc024104350 0x7fc024199020 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7fc014005270 tx=0x7fc014005710 comp rx=0 tx=0).stop 2026-03-10T10:24:10.089 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.089+0000 7fc02a369700 1 -- 192.168.123.102:0/174796080 shutdown_connections 2026-03-10T10:24:10.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.089+0000 7fc02a369700 1 
--2- 192.168.123.102:0/174796080 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc024103150 0x7fc024198ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:10.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.089+0000 7fc02a369700 1 --2- 192.168.123.102:0/174796080 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc00c077870 0x7fc00c079d30 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:10.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.089+0000 7fc02a369700 1 --2- 192.168.123.102:0/174796080 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc024104350 0x7fc024199020 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:10.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.089+0000 7fc02a369700 1 -- 192.168.123.102:0/174796080 >> 192.168.123.102:0/174796080 conn(0x7fc0240fe6d0 msgr2=0x7fc024107580 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:10.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.090+0000 7fc02a369700 1 -- 192.168.123.102:0/174796080 shutdown_connections 2026-03-10T10:24:10.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.090+0000 7fc02a369700 1 -- 192.168.123.102:0/174796080 wait complete. 
2026-03-10T10:24:10.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.156+0000 7f55b399b700 1 -- 192.168.123.102:0/3921313557 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55ac104320 msgr2=0x7f55ac104780 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:10.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.156+0000 7f55b399b700 1 --2- 192.168.123.102:0/3921313557 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55ac104320 0x7f55ac104780 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f55a8009b50 tx=0x7f55a8009e60 comp rx=0 tx=0).stop 2026-03-10T10:24:10.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.156+0000 7f55b399b700 1 -- 192.168.123.102:0/3921313557 shutdown_connections 2026-03-10T10:24:10.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.156+0000 7f55b399b700 1 --2- 192.168.123.102:0/3921313557 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55ac104320 0x7f55ac104780 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:10.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.156+0000 7f55b399b700 1 --2- 192.168.123.102:0/3921313557 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f55ac103120 0x7f55ac103540 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:10.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.156+0000 7f55b399b700 1 -- 192.168.123.102:0/3921313557 >> 192.168.123.102:0/3921313557 conn(0x7f55ac0fe6c0 msgr2=0x7f55ac100b00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:10.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.156+0000 7f55b399b700 1 -- 192.168.123.102:0/3921313557 shutdown_connections 2026-03-10T10:24:10.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.156+0000 7f55b399b700 1 -- 192.168.123.102:0/3921313557 
wait complete. 2026-03-10T10:24:10.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.157+0000 7f55b399b700 1 Processor -- start 2026-03-10T10:24:10.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.157+0000 7f55b399b700 1 -- start start 2026-03-10T10:24:10.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.157+0000 7f55b399b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55ac103120 0x7f55ac198ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:10.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.157+0000 7f55b399b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f55ac104320 0x7f55ac199000 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:10.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.157+0000 7f55b399b700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f55ac199620 con 0x7f55ac103120 2026-03-10T10:24:10.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.157+0000 7f55b399b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f55ac199760 con 0x7f55ac104320 2026-03-10T10:24:10.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.158+0000 7f55b1737700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55ac103120 0x7f55ac198ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:10.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.158+0000 7f55b0f36700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f55ac104320 0x7f55ac199000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-10T10:24:10.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.158+0000 7f55b0f36700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f55ac104320 0x7f55ac199000 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:54312/0 (socket says 192.168.123.102:54312) 2026-03-10T10:24:10.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.158+0000 7f55b0f36700 1 -- 192.168.123.102:0/2857564074 learned_addr learned my addr 192.168.123.102:0/2857564074 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:10.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.158+0000 7f55b1737700 1 -- 192.168.123.102:0/2857564074 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f55ac104320 msgr2=0x7f55ac199000 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:10.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.158+0000 7f55b1737700 1 --2- 192.168.123.102:0/2857564074 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f55ac104320 0x7f55ac199000 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:10.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.158+0000 7f55b1737700 1 -- 192.168.123.102:0/2857564074 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f55a80097e0 con 0x7f55ac103120 2026-03-10T10:24:10.159 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.158+0000 7f55b1737700 1 --2- 192.168.123.102:0/2857564074 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55ac103120 0x7f55ac198ac0 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f55ac104180 tx=0x7f559c00bb10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:10.160 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.159+0000 7f55a27fc700 1 -- 192.168.123.102:0/2857564074 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f559c00d610 con 0x7f55ac103120 2026-03-10T10:24:10.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.159+0000 7f55a27fc700 1 -- 192.168.123.102:0/2857564074 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f559c00dc50 con 0x7f55ac103120 2026-03-10T10:24:10.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.159+0000 7f55a27fc700 1 -- 192.168.123.102:0/2857564074 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f559c017400 con 0x7f55ac103120 2026-03-10T10:24:10.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.159+0000 7f55b399b700 1 -- 192.168.123.102:0/2857564074 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f55ac19e210 con 0x7f55ac103120 2026-03-10T10:24:10.160 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.159+0000 7f55b399b700 1 -- 192.168.123.102:0/2857564074 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f55ac075460 con 0x7f55ac103120 2026-03-10T10:24:10.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.160+0000 7f55a27fc700 1 -- 192.168.123.102:0/2857564074 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f559c00ddc0 con 0x7f55ac103120 2026-03-10T10:24:10.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.160+0000 7f55b399b700 1 -- 192.168.123.102:0/2857564074 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f55ac066e80 con 0x7f55ac103120 2026-03-10T10:24:10.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.161+0000 7f55a27fc700 1 --2- 
192.168.123.102:0/2857564074 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f55980778c0 0x7f5598079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:10.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.161+0000 7f55a27fc700 1 -- 192.168.123.102:0/2857564074 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(69..69 src has 1..69) v4 ==== 6222+0+0 (secure 0 0 0) 0x7f559c0991d0 con 0x7f55ac103120 2026-03-10T10:24:10.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.163+0000 7f55b0f36700 1 --2- 192.168.123.102:0/2857564074 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f55980778c0 0x7f5598079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:10.164 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.163+0000 7f55a27fc700 1 -- 192.168.123.102:0/2857564074 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f559c061a30 con 0x7f55ac103120 2026-03-10T10:24:10.164 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.164+0000 7f55b0f36700 1 --2- 192.168.123.102:0/2857564074 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f55980778c0 0x7f5598079d80 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f55a8005310 tx=0x7f55a800b580 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:10.287 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local bash[118077]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T10:24:10.287 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local bash[118077]: Running command: /usr/bin/ceph-authtool --gen-print-key 
2026-03-10T10:24:10.287 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate[118088]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:10.287 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:09 vm05.local bash[118077]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:10.287 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate[118088]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-10T10:24:10.287 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local bash[118077]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-10T10:24:10.287 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate[118088]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-558b6764-3f53-4348-8f1d-0b130646d804/osd-block-d0b95380-36d0-4fea-a134-f6abcd77b2ee --path /var/lib/ceph/osd/ceph-4 --no-mon-config 2026-03-10T10:24:10.287 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local bash[118077]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-558b6764-3f53-4348-8f1d-0b130646d804/osd-block-d0b95380-36d0-4fea-a134-f6abcd77b2ee --path /var/lib/ceph/osd/ceph-4 --no-mon-config 2026-03-10T10:24:10.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.320+0000 7f55b399b700 1 -- 192.168.123.102:0/2857564074 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f55ac075e80 con 0x7f55ac103120 2026-03-10T10:24:10.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.320+0000 7f55a27fc700 1 -- 192.168.123.102:0/2857564074 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": 
"health", "detail": "detail"}]=0 v0) v1 ==== 74+0+95 (secure 0 0 0) 0x7f559c061180 con 0x7f55ac103120 2026-03-10T10:24:10.321 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_WARN 1 osds down 2026-03-10T10:24:10.321 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] OSD_DOWN: 1 osds down 2026-03-10T10:24:10.321 INFO:teuthology.orchestra.run.vm02.stdout: osd.4 (root=default,host=vm05) is down 2026-03-10T10:24:10.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.322+0000 7f55b399b700 1 -- 192.168.123.102:0/2857564074 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f55980778c0 msgr2=0x7f5598079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:10.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.322+0000 7f55b399b700 1 --2- 192.168.123.102:0/2857564074 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f55980778c0 0x7f5598079d80 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f55a8005310 tx=0x7f55a800b580 comp rx=0 tx=0).stop 2026-03-10T10:24:10.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.322+0000 7f55b399b700 1 -- 192.168.123.102:0/2857564074 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55ac103120 msgr2=0x7f55ac198ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:10.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.322+0000 7f55b399b700 1 --2- 192.168.123.102:0/2857564074 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55ac103120 0x7f55ac198ac0 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f55ac104180 tx=0x7f559c00bb10 comp rx=0 tx=0).stop 2026-03-10T10:24:10.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.323+0000 7f55b399b700 1 -- 192.168.123.102:0/2857564074 shutdown_connections 2026-03-10T10:24:10.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.323+0000 7f55b399b700 1 --2- 
192.168.123.102:0/2857564074 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f55ac103120 0x7f55ac198ac0 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:10.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.323+0000 7f55b399b700 1 --2- 192.168.123.102:0/2857564074 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f55980778c0 0x7f5598079d80 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:10.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.323+0000 7f55b399b700 1 --2- 192.168.123.102:0/2857564074 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f55ac104320 0x7f55ac199000 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:10.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.323+0000 7f55b399b700 1 -- 192.168.123.102:0/2857564074 >> 192.168.123.102:0/2857564074 conn(0x7f55ac0fe6c0 msgr2=0x7f55ac107550 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:10.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.323+0000 7f55b399b700 1 -- 192.168.123.102:0/2857564074 shutdown_connections 2026-03-10T10:24:10.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:10.323+0000 7f55b399b700 1 -- 192.168.123.102:0/2857564074 wait complete. 
2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate[118088]: Running command: /usr/bin/ln -snf /dev/ceph-558b6764-3f53-4348-8f1d-0b130646d804/osd-block-d0b95380-36d0-4fea-a134-f6abcd77b2ee /var/lib/ceph/osd/ceph-4/block 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local bash[118077]: Running command: /usr/bin/ln -snf /dev/ceph-558b6764-3f53-4348-8f1d-0b130646d804/osd-block-d0b95380-36d0-4fea-a134-f6abcd77b2ee /var/lib/ceph/osd/ceph-4/block 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate[118088]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local bash[118077]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate[118088]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local bash[118077]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate[118088]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local bash[118077]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate[118088]: --> ceph-volume lvm activate successful for osd ID: 4 2026-03-10T10:24:10.654 
INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local bash[118077]: --> ceph-volume lvm activate successful for osd ID: 4 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local conmon[118088]: conmon 65402a5e8b7d65eb7b1c : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65402a5e8b7d65eb7b1cf47dec5ee611dc7a46e6091e8400d919cae63306a88d.scope/container/memory.events 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local podman[118077]: 2026-03-10 10:24:10.36894565 +0000 UTC m=+0.965997844 container died 65402a5e8b7d65eb7b1cf47dec5ee611dc7a46e6091e8400d919cae63306a88d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local podman[118077]: 2026-03-10 10:24:10.39482715 +0000 UTC m=+0.991879344 container remove 65402a5e8b7d65eb7b1cf47dec5ee611dc7a46e6091e8400d919cae63306a88d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local podman[118338]: 2026-03-10 10:24:10.524718527 +0000 UTC m=+0.025714619 container create fe0b3f802cec8cd51fa95c03893dc79ef1887c437250693623f042848aa7a479 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local podman[118338]: 2026-03-10 10:24:10.558960839 +0000 UTC m=+0.059956931 container init fe0b3f802cec8cd51fa95c03893dc79ef1887c437250693623f042848aa7a479 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, OSD_FLAVOR=default) 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local podman[118338]: 2026-03-10 10:24:10.562001391 +0000 UTC m=+0.062997483 container start fe0b3f802cec8cd51fa95c03893dc79ef1887c437250693623f042848aa7a479 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local bash[118338]: fe0b3f802cec8cd51fa95c03893dc79ef1887c437250693623f042848aa7a479 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local podman[118338]: 2026-03-10 10:24:10.517598879 +0000 UTC m=+0.018594971 
image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:24:10.654 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:10 vm05.local systemd[1]: Started Ceph osd.4 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 2026-03-10T10:24:10.910 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-mon[103593]: from='client.44241 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:10.910 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-mon[103593]: osdmap e69: 6 total, 5 up, 6 in 2026-03-10T10:24:10.910 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1180781022' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:24:10.910 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-mon[103593]: from='client.44249 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:10.910 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/2857564074' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:24:10.910 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:10.910 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:10.910 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:11.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:10 vm02.local ceph-mon[110129]: from='client.44241 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:11.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:10 vm02.local ceph-mon[110129]: osdmap e69: 6 total, 5 up, 6 in 2026-03-10T10:24:11.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:10 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1180781022' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:24:11.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:10 vm02.local ceph-mon[110129]: from='client.44249 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:11.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:10 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/2857564074' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:24:11.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:11.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:11.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:12.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:11 vm02.local ceph-mon[110129]: pgmap v135: 65 pgs: 6 stale+active+clean, 59 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 639 B/s rd, 1 op/s 2026-03-10T10:24:12.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:11 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:12.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:11 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:12.037 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:11 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[118348]: 2026-03-10T10:24:11.910+0000 7f9313bd6740 -1 Falling back to public interface 2026-03-10T10:24:12.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:11 vm05.local ceph-mon[103593]: pgmap v135: 65 pgs: 6 stale+active+clean, 59 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 639 B/s rd, 1 op/s 2026-03-10T10:24:12.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:11 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:12.037 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:11 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:12.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:12 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:12.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:12 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:12.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:12 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:13.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:12 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:13.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:12 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:13.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:12 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:14.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: pgmap v136: 65 pgs: 13 active+undersized, 3 stale+active+clean, 11 active+undersized+degraded, 38 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 767 B/s rd, 1 op/s; 31/228 objects degraded (13.596%) 2026-03-10T10:24:14.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:14.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:14.037 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:14.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:14.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:14.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: Health check failed: Degraded data redundancy: 31/228 objects degraded (13.596%), 11 pgs degraded (PG_DEGRADED) 2026-03-10T10:24:14.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:14.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:14.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:14.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:14.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], 
"max": 16}]: dispatch 2026-03-10T10:24:14.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T10:24:14.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:13 vm05.local ceph-mon[103593]: Upgrade: unsafe to stop osd(s) at this time (9 PGs are or would become offline) 2026-03-10T10:24:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: pgmap v136: 65 pgs: 13 active+undersized, 3 stale+active+clean, 11 active+undersized+degraded, 38 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 767 B/s rd, 1 op/s; 31/228 objects degraded (13.596%) 2026-03-10T10:24:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: Health check failed: Degraded data redundancy: 31/228 objects degraded (13.596%), 11 pgs degraded 
(PG_DEGRADED) 2026-03-10T10:24:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:14.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T10:24:14.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T10:24:14.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:13 vm02.local ceph-mon[110129]: Upgrade: unsafe to stop osd(s) at this time (9 PGs are or would become offline) 2026-03-10T10:24:15.954 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:15 vm05.local ceph-mon[103593]: pgmap v137: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 40/228 objects degraded (17.544%) 2026-03-10T10:24:15.954 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:15 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[118348]: 2026-03-10T10:24:15.690+0000 7f9313bd6740 -1 osd.4 0 read_superblock omap replica is missing. 2026-03-10T10:24:16.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:15 vm02.local ceph-mon[110129]: pgmap v137: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 40/228 objects degraded (17.544%) 2026-03-10T10:24:16.287 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:15 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[118348]: 2026-03-10T10:24:15.953+0000 7f9313bd6740 -1 osd.4 67 log_to_monitors true 2026-03-10T10:24:17.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:16 vm02.local ceph-mon[110129]: from='osd.4 [v2:192.168.123.105:6808/1991291011,v1:192.168.123.105:6809/1991291011]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T10:24:17.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:16 vm02.local ceph-mon[110129]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T10:24:17.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:16 vm05.local ceph-mon[103593]: from='osd.4 
[v2:192.168.123.105:6808/1991291011,v1:192.168.123.105:6809/1991291011]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T10:24:17.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:16 vm05.local ceph-mon[103593]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T10:24:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:17 vm02.local ceph-mon[110129]: pgmap v138: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 209 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 40/228 objects degraded (17.544%) 2026-03-10T10:24:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:17 vm02.local ceph-mon[110129]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T10:24:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:17 vm02.local ceph-mon[110129]: osdmap e70: 6 total, 5 up, 6 in 2026-03-10T10:24:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:17 vm02.local ceph-mon[110129]: from='osd.4 [v2:192.168.123.105:6808/1991291011,v1:192.168.123.105:6809/1991291011]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:24:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:17 vm02.local ceph-mon[110129]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:24:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:17 vm02.local ceph-mon[110129]: from='osd.4 ' entity='osd.4' 2026-03-10T10:24:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:17 vm05.local ceph-mon[103593]: pgmap v138: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 209 
MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 40/228 objects degraded (17.544%) 2026-03-10T10:24:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:17 vm05.local ceph-mon[103593]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T10:24:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:17 vm05.local ceph-mon[103593]: osdmap e70: 6 total, 5 up, 6 in 2026-03-10T10:24:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:17 vm05.local ceph-mon[103593]: from='osd.4 [v2:192.168.123.105:6808/1991291011,v1:192.168.123.105:6809/1991291011]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:24:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:17 vm05.local ceph-mon[103593]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:24:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:17 vm05.local ceph-mon[103593]: from='osd.4 ' entity='osd.4' 2026-03-10T10:24:18.287 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:24:17 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[118348]: 2026-03-10T10:24:17.833+0000 7f930b16f640 -1 osd.4 67 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T10:24:19.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:18 vm02.local ceph-mon[110129]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T10:24:19.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:18 vm02.local ceph-mon[110129]: osd.4 [v2:192.168.123.105:6808/1991291011,v1:192.168.123.105:6809/1991291011] boot 2026-03-10T10:24:19.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:18 vm02.local ceph-mon[110129]: osdmap e71: 6 total, 6 up, 6 in 
2026-03-10T10:24:19.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:18 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:24:19.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:18 vm02.local ceph-mon[110129]: osdmap e72: 6 total, 6 up, 6 in 2026-03-10T10:24:19.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:18 vm05.local ceph-mon[103593]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T10:24:19.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:18 vm05.local ceph-mon[103593]: osd.4 [v2:192.168.123.105:6808/1991291011,v1:192.168.123.105:6809/1991291011] boot 2026-03-10T10:24:19.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:18 vm05.local ceph-mon[103593]: osdmap e71: 6 total, 6 up, 6 in 2026-03-10T10:24:19.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:18 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T10:24:19.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:18 vm05.local ceph-mon[103593]: osdmap e72: 6 total, 6 up, 6 in 2026-03-10T10:24:20.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:19 vm02.local ceph-mon[110129]: pgmap v141: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 209 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 40/228 objects degraded (17.544%) 2026-03-10T10:24:20.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:19 vm05.local ceph-mon[103593]: pgmap v141: 65 pgs: 19 active+undersized, 15 active+undersized+degraded, 31 active+clean; 209 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 40/228 objects degraded (17.544%) 2026-03-10T10:24:21.104 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:21 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 40/228 objects 
degraded (17.544%), 15 pgs degraded (PG_DEGRADED) 2026-03-10T10:24:21.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:21 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 40/228 objects degraded (17.544%), 15 pgs degraded (PG_DEGRADED) 2026-03-10T10:24:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:22 vm05.local ceph-mon[103593]: pgmap v143: 65 pgs: 6 peering, 16 active+undersized, 12 active+undersized+degraded, 31 active+clean; 209 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 0 op/s; 34/228 objects degraded (14.912%) 2026-03-10T10:24:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:22 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:22 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:24:22.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:22 vm02.local ceph-mon[110129]: pgmap v143: 65 pgs: 6 peering, 16 active+undersized, 12 active+undersized+degraded, 31 active+clean; 209 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 0 op/s; 34/228 objects degraded (14.912%) 2026-03-10T10:24:22.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:22 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:22.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:22 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:24:23.504 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:23 vm02.local ceph-mon[110129]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 34/228 objects degraded (14.912%), 12 pgs degraded) 
2026-03-10T10:24:23.504 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:23 vm02.local ceph-mon[110129]: Cluster is now healthy 2026-03-10T10:24:23.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:23 vm05.local ceph-mon[103593]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 34/228 objects degraded (14.912%), 12 pgs degraded) 2026-03-10T10:24:23.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:23 vm05.local ceph-mon[103593]: Cluster is now healthy 2026-03-10T10:24:24.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:24 vm02.local ceph-mon[110129]: pgmap v144: 65 pgs: 6 peering, 1 active+undersized, 58 active+clean; 209 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 682 B/s rd, 1 op/s 2026-03-10T10:24:24.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:24 vm05.local ceph-mon[103593]: pgmap v144: 65 pgs: 6 peering, 1 active+undersized, 58 active+clean; 209 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 682 B/s rd, 1 op/s 2026-03-10T10:24:26.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:26 vm02.local ceph-mon[110129]: pgmap v145: 65 pgs: 65 active+clean; 209 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 531 B/s rd, 1 op/s 2026-03-10T10:24:26.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:26 vm05.local ceph-mon[103593]: pgmap v145: 65 pgs: 65 active+clean; 209 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 531 B/s rd, 1 op/s 2026-03-10T10:24:28.465 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:28 vm05.local ceph-mon[103593]: pgmap v146: 65 pgs: 65 active+clean; 209 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T10:24:28.465 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T10:24:28.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:28 
vm02.local ceph-mon[110129]: pgmap v146: 65 pgs: 65 active+clean; 209 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T10:24:28.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T10:24:29.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:29 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T10:24:29.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:29 vm05.local ceph-mon[103593]: Upgrade: osd.5 is safe to restart 2026-03-10T10:24:29.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:29 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:29.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:29 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T10:24:29.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:29 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:29.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:29 vm05.local ceph-mon[103593]: osd.5 marked itself down and dead 2026-03-10T10:24:29.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local systemd[1]: Stopping Ceph osd.5 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 
2026-03-10T10:24:29.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[77624]: 2026-03-10T10:24:29.075+0000 7fbf9b044700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T10:24:29.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[77624]: 2026-03-10T10:24:29.075+0000 7fbf9b044700 -1 osd.5 72 *** Got signal Terminated *** 2026-03-10T10:24:29.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[77624]: 2026-03-10T10:24:29.075+0000 7fbf9b044700 -1 osd.5 72 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T10:24:29.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:29 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T10:24:29.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:29 vm02.local ceph-mon[110129]: Upgrade: osd.5 is safe to restart 2026-03-10T10:24:29.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:29 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:29.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:29 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T10:24:29.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:29 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:29.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:29 vm02.local ceph-mon[110129]: osd.5 marked itself down and dead 2026-03-10T10:24:29.646 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local podman[123092]: 2026-03-10 10:24:29.376438069 +0000 UTC m=+0.316732510 container died e9be055e12ba7b70250ccb9231d87d2fb68fd565027a66d54de6e69fb1721158 (image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5, ceph=True, maintainer=Guillaume Abrioux , org.label-schema.license=GPLv2, CEPH_POINT_RELEASE=-18.2.1, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.label-schema.build-date=20240222, org.label-schema.name=CentOS Stream 8 Base Image, RELEASE=HEAD, io.buildah.version=1.29.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GIT_BRANCH=HEAD, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd) 2026-03-10T10:24:29.646 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local podman[123092]: 2026-03-10 10:24:29.402515994 +0000 UTC 
m=+0.342810425 container remove e9be055e12ba7b70250ccb9231d87d2fb68fd565027a66d54de6e69fb1721158 (image=quay.io/ceph/ceph@sha256:9f35728f6070a596500c0804814a12ab6b98e05067316dc64876fb4b28d04af3, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5, GIT_BRANCH=HEAD, GIT_REPO=https://github.com/ceph/ceph-container.git, org.label-schema.vendor=CentOS, GIT_CLEAN=True, RELEASE=HEAD, ceph=True, io.buildah.version=1.29.1, maintainer=Guillaume Abrioux , org.label-schema.build-date=20240222, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.schema-version=1.0, GIT_COMMIT=1617517b9622744995c4661989e3c30d036a7cfd, org.label-schema.license=GPLv2, CEPH_POINT_RELEASE=-18.2.1) 2026-03-10T10:24:29.646 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local bash[123092]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5 2026-03-10T10:24:29.646 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local podman[123160]: 2026-03-10 10:24:29.554895212 +0000 UTC m=+0.018549854 container create 54358ee705e8cb146bd652a5a53b86cfcc38dc81dbbc9ddb2b45e6a5332a8c12 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) 2026-03-10T10:24:29.646 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local podman[123160]: 
2026-03-10 10:24:29.594118714 +0000 UTC m=+0.057773356 container init 54358ee705e8cb146bd652a5a53b86cfcc38dc81dbbc9ddb2b45e6a5332a8c12 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-deactivate, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T10:24:29.646 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local podman[123160]: 2026-03-10 10:24:29.596827925 +0000 UTC m=+0.060482567 container start 54358ee705e8cb146bd652a5a53b86cfcc38dc81dbbc9ddb2b45e6a5332a8c12 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-deactivate, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223) 
2026-03-10T10:24:29.646 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local podman[123160]: 2026-03-10 10:24:29.603921033 +0000 UTC m=+0.067575675 container attach 54358ee705e8cb146bd652a5a53b86cfcc38dc81dbbc9ddb2b45e6a5332a8c12 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-deactivate, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T10:24:29.914 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local podman[123160]: 2026-03-10 10:24:29.547663494 +0000 UTC m=+0.011318147 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:24:29.914 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local podman[123160]: 2026-03-10 10:24:29.721853039 +0000 UTC m=+0.185507681 container died 54358ee705e8cb146bd652a5a53b86cfcc38dc81dbbc9ddb2b45e6a5332a8c12 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-deactivate, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, 
org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T10:24:29.914 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local podman[123160]: 2026-03-10 10:24:29.743370447 +0000 UTC m=+0.207025089 container remove 54358ee705e8cb146bd652a5a53b86cfcc38dc81dbbc9ddb2b45e6a5332a8c12 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T10:24:29.914 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.5.service: Deactivated successfully. 2026-03-10T10:24:29.914 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.5.service: Unit process 123171 (conmon) remains running after unit stopped. 
2026-03-10T10:24:29.914 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.5.service: Unit process 123179 (podman) remains running after unit stopped. 2026-03-10T10:24:29.914 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local systemd[1]: Stopped Ceph osd.5 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 2026-03-10T10:24:29.914 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local systemd[1]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.5.service: Consumed 46.902s CPU time, 905.6M memory peak. 2026-03-10T10:24:29.914 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:29 vm05.local systemd[1]: Starting Ceph osd.5 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d... 2026-03-10T10:24:30.176 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-mon[103593]: Upgrade: Updating osd.5 2026-03-10T10:24:30.176 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-mon[103593]: Deploying daemon osd.5 on vm05 2026-03-10T10:24:30.176 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-mon[103593]: pgmap v147: 65 pgs: 65 active+clean; 209 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 921 B/s rd, 2 op/s 2026-03-10T10:24:30.176 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-mon[103593]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T10:24:30.176 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-mon[103593]: Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED) 2026-03-10T10:24:30.176 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-mon[103593]: osdmap e73: 6 total, 5 up, 6 in 2026-03-10T10:24:30.176 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local podman[123262]: 2026-03-10 10:24:30.000320691 +0000 UTC m=+0.015410359 container create 
63971789795a7555ff72c0e26bc20cfe2db1708576b394e7d8220bbf499fa084 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T10:24:30.177 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local podman[123262]: 2026-03-10 10:24:30.041796738 +0000 UTC m=+0.056886406 container init 63971789795a7555ff72c0e26bc20cfe2db1708576b394e7d8220bbf499fa084 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20260223, io.buildah.version=1.41.3) 2026-03-10T10:24:30.177 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local 
podman[123262]: 2026-03-10 10:24:30.047714416 +0000 UTC m=+0.062804084 container start 63971789795a7555ff72c0e26bc20cfe2db1708576b394e7d8220bbf499fa084 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default) 2026-03-10T10:24:30.177 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local podman[123262]: 2026-03-10 10:24:30.048710552 +0000 UTC m=+0.063800220 container attach 63971789795a7555ff72c0e26bc20cfe2db1708576b394e7d8220bbf499fa084 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS) 
2026-03-10T10:24:30.177 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local podman[123262]: 2026-03-10 10:24:29.99407036 +0000 UTC m=+0.009160028 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:24:30.177 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate[123273]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:30.177 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local bash[123262]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:30.177 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate[123273]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:30.177 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local bash[123262]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:30 vm02.local ceph-mon[110129]: Upgrade: Updating osd.5 2026-03-10T10:24:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:30 vm02.local ceph-mon[110129]: Deploying daemon osd.5 on vm05 2026-03-10T10:24:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:30 vm02.local ceph-mon[110129]: pgmap v147: 65 pgs: 65 active+clean; 209 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 921 B/s rd, 2 op/s 2026-03-10T10:24:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:30 vm02.local ceph-mon[110129]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T10:24:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:30 vm02.local ceph-mon[110129]: Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED) 2026-03-10T10:24:30.529 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:30 vm02.local ceph-mon[110129]: osdmap e73: 6 total, 5 up, 6 in 2026-03-10T10:24:30.994 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate[123273]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T10:24:30.994 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate[123273]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:30.994 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local bash[123262]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T10:24:30.994 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local bash[123262]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:30.994 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate[123273]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:30.994 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local bash[123262]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T10:24:30.994 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate[123273]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-10T10:24:30.994 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local bash[123262]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-10T10:24:30.994 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate[123273]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev 
/dev/ceph-a223f893-3b72-434f-a735-892f26c15123/osd-block-bf16e555-2559-41cf-b9cc-38646188d928 --path /var/lib/ceph/osd/ceph-5 --no-mon-config 2026-03-10T10:24:30.994 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local bash[123262]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-a223f893-3b72-434f-a735-892f26c15123/osd-block-bf16e555-2559-41cf-b9cc-38646188d928 --path /var/lib/ceph/osd/ceph-5 --no-mon-config 2026-03-10T10:24:31.287 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate[123273]: Running command: /usr/bin/ln -snf /dev/ceph-a223f893-3b72-434f-a735-892f26c15123/osd-block-bf16e555-2559-41cf-b9cc-38646188d928 /var/lib/ceph/osd/ceph-5/block 2026-03-10T10:24:31.287 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:30 vm05.local bash[123262]: Running command: /usr/bin/ln -snf /dev/ceph-a223f893-3b72-434f-a735-892f26c15123/osd-block-bf16e555-2559-41cf-b9cc-38646188d928 /var/lib/ceph/osd/ceph-5/block 2026-03-10T10:24:31.287 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate[123273]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block 2026-03-10T10:24:31.287 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local bash[123262]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block 2026-03-10T10:24:31.287 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate[123273]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-10T10:24:31.287 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local bash[123262]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-10T10:24:31.287 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate[123273]: Running 
command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-10T10:24:31.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local bash[123262]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-10T10:24:31.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate[123273]: --> ceph-volume lvm activate successful for osd ID: 5 2026-03-10T10:24:31.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local bash[123262]: --> ceph-volume lvm activate successful for osd ID: 5 2026-03-10T10:24:31.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local conmon[123273]: conmon 63971789795a7555ff72 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-63971789795a7555ff72c0e26bc20cfe2db1708576b394e7d8220bbf499fa084.scope/container/memory.events 2026-03-10T10:24:31.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local podman[123262]: 2026-03-10 10:24:31.020931223 +0000 UTC m=+1.036020891 container died 63971789795a7555ff72c0e26bc20cfe2db1708576b394e7d8220bbf499fa084 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=squid) 2026-03-10T10:24:31.288 
INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local podman[123262]: 2026-03-10 10:24:31.043497525 +0000 UTC m=+1.058587193 container remove 63971789795a7555ff72c0e26bc20cfe2db1708576b394e7d8220bbf499fa084 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-activate, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T10:24:31.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local podman[123542]: 2026-03-10 10:24:31.133534895 +0000 UTC m=+0.016939430 container create c60f7383494fc3060f444b458f96ceaed9016f9dd747044bc574ab6497b83ba1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default) 2026-03-10T10:24:31.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local podman[123542]: 2026-03-10 10:24:31.17469337 +0000 UTC m=+0.058097905 container init c60f7383494fc3060f444b458f96ceaed9016f9dd747044bc574ab6497b83ba1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T10:24:31.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local podman[123542]: 2026-03-10 10:24:31.17795215 +0000 UTC m=+0.061356685 container start c60f7383494fc3060f444b458f96ceaed9016f9dd747044bc574ab6497b83ba1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T10:24:31.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local bash[123542]: c60f7383494fc3060f444b458f96ceaed9016f9dd747044bc574ab6497b83ba1 2026-03-10T10:24:31.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local podman[123542]: 2026-03-10 10:24:31.126601076 +0000 UTC m=+0.010005611 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:24:31.288 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local systemd[1]: Started Ceph osd.5 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d. 2026-03-10T10:24:31.578 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:31 vm05.local ceph-mon[103593]: osdmap e74: 6 total, 5 up, 6 in 2026-03-10T10:24:31.578 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:31 vm05.local ceph-mon[103593]: pgmap v150: 65 pgs: 13 stale+active+clean, 52 active+clean; 209 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 639 B/s rd, 1 op/s 2026-03-10T10:24:31.578 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:31 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:31.578 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:31 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:31.578 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:31 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:31.578 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:31 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:24:31.520+0000 
7f5292c37740 -1 Falling back to public interface 2026-03-10T10:24:31.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:31 vm02.local ceph-mon[110129]: osdmap e74: 6 total, 5 up, 6 in 2026-03-10T10:24:31.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:31 vm02.local ceph-mon[110129]: pgmap v150: 65 pgs: 13 stale+active+clean, 52 active+clean; 209 MiB data, 1.6 GiB used, 118 GiB / 120 GiB avail; 639 B/s rd, 1 op/s 2026-03-10T10:24:31.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:31 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:31.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:31 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:31.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:31 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:32.666 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:32 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:32.666 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:32 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:32.666 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:32 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:32.666 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:32 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:32.666 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:32 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:32.709 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:32 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:32.709 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:32 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:32.709 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:32 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:32.709 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:32 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:32.709 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:32 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:33.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:33 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:33.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:33 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:33.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:33 vm02.local ceph-mon[110129]: pgmap v151: 65 pgs: 10 active+undersized, 5 stale+active+clean, 10 active+undersized+degraded, 40 active+clean; 209 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 895 B/s rd, 1 op/s; 32/228 objects degraded (14.035%) 2026-03-10T10:24:33.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:33 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:33.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:33 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:33.787 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:33 vm05.local ceph-mon[103593]: pgmap v151: 65 pgs: 10 active+undersized, 5 stale+active+clean, 10 active+undersized+degraded, 40 active+clean; 209 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 895 B/s rd, 1 op/s; 32/228 objects degraded (14.035%) 2026-03-10T10:24:34.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: Health check failed: Degraded data redundancy: 32/228 objects degraded (14.035%), 10 pgs degraded (PG_DEGRADED) 2026-03-10T10:24:34.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:34.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:34.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:34.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:34.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:34.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: Upgrade: Setting container_image for all osd 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 
cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-10T10:24:34.780 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: Upgrade: Setting require_osd_release to 19 squid 2026-03-10T10:24:34.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: Health check failed: Degraded data redundancy: 32/228 objects degraded (14.035%), 10 pgs degraded (PG_DEGRADED) 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 
vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: Upgrade: Setting container_image for all osd 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local 
ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-10T10:24:34.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-10T10:24:34.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-10T10:24:34.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", 
"who": "osd.4"}]': finished 2026-03-10T10:24:34.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-10T10:24:34.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-10T10:24:34.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: Upgrade: Setting require_osd_release to 19 squid 2026-03-10T10:24:34.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T10:24:35.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:35 vm05.local ceph-mon[103593]: pgmap v152: 65 pgs: 16 active+undersized, 12 active+undersized+degraded, 37 active+clean; 209 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 255 B/s rd, 1 op/s; 35/228 objects degraded (15.351%) 2026-03-10T10:24:35.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:35 vm05.local ceph-mon[103593]: Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-10T10:24:35.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:35 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-10T10:24:35.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:35 vm05.local ceph-mon[103593]: osdmap e75: 6 total, 5 up, 6 in 2026-03-10T10:24:35.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:35 
vm05.local ceph-mon[103593]: Upgrade: Disabling standby-replay for filesystem cephfs 2026-03-10T10:24:35.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:35 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:35.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:35 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "0"}]: dispatch 2026-03-10T10:24:35.537 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:35 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:24:35.271+0000 7f5292c37740 -1 osd.5 0 read_superblock omap replica is missing. 2026-03-10T10:24:35.537 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:35 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:24:35.507+0000 7f5292c37740 -1 osd.5 72 log_to_monitors true 2026-03-10T10:24:35.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:35 vm02.local ceph-mon[110129]: pgmap v152: 65 pgs: 16 active+undersized, 12 active+undersized+degraded, 37 active+clean; 209 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 255 B/s rd, 1 op/s; 35/228 objects degraded (15.351%) 2026-03-10T10:24:35.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:35 vm02.local ceph-mon[110129]: Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-10T10:24:35.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:35 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-10T10:24:35.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:35 vm02.local ceph-mon[110129]: osdmap e75: 6 total, 5 up, 6 in 2026-03-10T10:24:35.780 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:35 vm02.local ceph-mon[110129]: Upgrade: Disabling standby-replay for filesystem cephfs 2026-03-10T10:24:35.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:35 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:35.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:35 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "0"}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: osdmap e76: 6 total, 5 up, 6 in 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='osd.5 [v2:192.168.123.105:6816/3443749848,v1:192.168.123.105:6817/3443749848]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "0"}]': finished 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 2 up:standby 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:36.525 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds ok-to-stop", "ids": 
["cephfs.vm02.zymcrs"]}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: Upgrade: It appears safe to stop mds.cephfs.vm02.zymcrs 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: Upgrade: Updating mds.cephfs.vm02.zymcrs 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.zymcrs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:36.525 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-mon[110129]: Deploying daemon mds.cephfs.vm02.zymcrs on vm02 2026-03-10T10:24:36.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:36 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02[110125]: 2026-03-10T10:24:36.594+0000 7f2532449640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-10T10:24:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: osdmap e76: 6 total, 5 up, 6 in 2026-03-10T10:24:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='osd.5 [v2:192.168.123.105:6816/3443749848,v1:192.168.123.105:6817/3443749848]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": 
["5"]}]: dispatch 2026-03-10T10:24:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T10:24:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "0"}]': finished 2026-03-10T10:24:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 2 up:standby 2026-03-10T10:24:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:36.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:36.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:36.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:36.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm02.zymcrs"]}]: dispatch 2026-03-10T10:24:36.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: Upgrade: It appears safe to stop mds.cephfs.vm02.zymcrs 2026-03-10T10:24:36.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: Upgrade: Updating mds.cephfs.vm02.zymcrs 2026-03-10T10:24:36.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:36.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.zymcrs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:24:36.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:36.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:36 vm05.local ceph-mon[103593]: Deploying daemon mds.cephfs.vm02.zymcrs on vm02 2026-03-10T10:24:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T10:24:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: osdmap e77: 6 total, 5 up, 6 in 2026-03-10T10:24:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: from='osd.5 [v2:192.168.123.105:6816/3443749848,v1:192.168.123.105:6817/3443749848]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:24:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:24:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-10T10:24:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-10T10:24:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: osdmap e78: 6 total, 5 up, 6 in 2026-03-10T10:24:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: mds.? 
[v2:192.168.123.105:6826/3693577687,v1:192.168.123.105:6827/3693577687] up:boot 2026-03-10T10:24:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: Standby daemon mds.cephfs.vm02.stcvsz assigned to filesystem cephfs as rank 0 2026-03-10T10:24:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-10T10:24:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-10T10:24:37.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch 2026-03-10T10:24:37.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: fsmap cephfs:1/1 {0=cephfs.vm02.stcvsz=up:replay} 2 up:standby 2026-03-10T10:24:37.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: pgmap v157: 65 pgs: 16 active+undersized, 12 active+undersized+degraded, 37 active+clean; 209 MiB data, 913 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s; 35/228 objects degraded (15.351%) 2026-03-10T10:24:37.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:37.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:24:37.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:37 vm02.local ceph-mon[110129]: from='osd.5 ' entity='osd.5' 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local 
ceph-mon[103593]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: osdmap e77: 6 total, 5 up, 6 in 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: from='osd.5 [v2:192.168.123.105:6816/3443749848,v1:192.168.123.105:6817/3443749848]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: osdmap e78: 6 total, 5 up, 6 in 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: mds.? 
[v2:192.168.123.105:6826/3693577687,v1:192.168.123.105:6827/3693577687] up:boot 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: Standby daemon mds.cephfs.vm02.stcvsz assigned to filesystem cephfs as rank 0 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: fsmap cephfs:1/1 {0=cephfs.vm02.stcvsz=up:replay} 2 up:standby 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: pgmap v157: 65 pgs: 16 active+undersized, 12 active+undersized+degraded, 37 active+clean; 209 MiB data, 913 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s; 35/228 objects degraded (15.351%) 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:24:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:37 vm05.local ceph-mon[103593]: from='osd.5 ' entity='osd.5' 2026-03-10T10:24:37.787 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:24:37 vm05.local 
ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:24:37.493+0000 7f528a1d0640 -1 osd.5 72 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T10:24:39.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:39 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.zymcrs"}]: dispatch 2026-03-10T10:24:39.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:39 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.zymcrs"}]: dispatch 2026-03-10T10:24:40.401 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 -- 192.168.123.102:0/1145059162 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faab0072b20 msgr2=0x7faab0072f40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:40.401 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 --2- 192.168.123.102:0/1145059162 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faab0072b20 0x7faab0072f40 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7faaa0007780 tx=0x7faaa000c050 comp rx=0 tx=0).stop 2026-03-10T10:24:40.401 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 -- 192.168.123.102:0/1145059162 shutdown_connections 2026-03-10T10:24:40.401 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 --2- 192.168.123.102:0/1145059162 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faab0075a10 0x7faab0077ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.401 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 --2- 192.168.123.102:0/1145059162 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faab0072b20 0x7faab0072f40 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.401 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 -- 192.168.123.102:0/1145059162 >> 192.168.123.102:0/1145059162 conn(0x7faab006daa0 msgr2=0x7faab006ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:40.401 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 -- 192.168.123.102:0/1145059162 shutdown_connections 2026-03-10T10:24:40.401 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 -- 192.168.123.102:0/1145059162 wait complete. 2026-03-10T10:24:40.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 Processor -- start 2026-03-10T10:24:40.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 -- start start 2026-03-10T10:24:40.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faab0075a10 0x7faab0083970 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:40.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faab012bdb0 0x7faab012e240 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:40.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7faab012e780 con 0x7faab0075a10 2026-03-10T10:24:40.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.399+0000 7faab58a7700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7faab012e8f0 con 0x7faab012bdb0 2026-03-10T10:24:40.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.400+0000 7faaaffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faab012bdb0 0x7faab012e240 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:40.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.400+0000 7faaaffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faab012bdb0 0x7faab012e240 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:43818/0 (socket says 192.168.123.102:43818) 2026-03-10T10:24:40.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.400+0000 7faaaffff700 1 -- 192.168.123.102:0/2047608937 learned_addr learned my addr 192.168.123.102:0/2047608937 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:40.402 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:40 vm02.local ceph-mon[110129]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T10:24:40.402 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:40 vm02.local ceph-mon[110129]: pgmap v158: 65 pgs: 16 active+undersized, 12 active+undersized+degraded, 37 active+clean; 209 MiB data, 913 MiB used, 119 GiB / 120 GiB avail; 11 MiB/s rd, 6 op/s; 35/228 objects degraded (15.351%) 2026-03-10T10:24:40.402 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:40 vm02.local ceph-mon[110129]: osd.5 [v2:192.168.123.105:6816/3443749848,v1:192.168.123.105:6817/3443749848] boot 2026-03-10T10:24:40.402 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:40 vm02.local ceph-mon[110129]: osdmap e79: 6 total, 6 up, 6 in 2026-03-10T10:24:40.402 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:40 vm02.local ceph-mon[110129]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.400+0000 7faab48a5700 1 --2- 192.168.123.102:0/2047608937 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faab0075a10 0x7faab0083970 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.400+0000 7faaaffff700 1 -- 192.168.123.102:0/2047608937 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faab0075a10 msgr2=0x7faab0083970 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.400+0000 7faaaffff700 1 --2- 192.168.123.102:0/2047608937 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faab0075a10 0x7faab0083970 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.400+0000 7faaaffff700 1 -- 192.168.123.102:0/2047608937 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7faaa0007430 con 0x7faab012bdb0 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.400+0000 7faaaffff700 1 --2- 192.168.123.102:0/2047608937 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faab012bdb0 0x7faab012e240 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7faaa800d350 tx=0x7faaa800d710 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.400+0000 7faaadffb700 1 -- 192.168.123.102:0/2047608937 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 
==== 327+0+0 (secure 0 0 0) 0x7faaa80155b0 con 0x7faab012bdb0 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.400+0000 7faab58a7700 1 -- 192.168.123.102:0/2047608937 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7faab012ebd0 con 0x7faab012bdb0 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.400+0000 7faab58a7700 1 -- 192.168.123.102:0/2047608937 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7faab012f120 con 0x7faab012bdb0 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.401+0000 7faaadffb700 1 -- 192.168.123.102:0/2047608937 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7faaa800f040 con 0x7faab012bdb0 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.401+0000 7faaadffb700 1 -- 192.168.123.102:0/2047608937 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7faaa80149c0 con 0x7faab012bdb0 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.401+0000 7faab58a7700 1 -- 192.168.123.102:0/2047608937 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7faa9c005320 con 0x7faab012bdb0 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.402+0000 7faaadffb700 1 -- 192.168.123.102:0/2047608937 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7faaa8014be0 con 0x7faab012bdb0 2026-03-10T10:24:40.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.402+0000 7faaadffb700 1 --2- 192.168.123.102:0/2047608937 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7faa98077910 0x7faa98079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 
crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:40.405 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.403+0000 7faaadffb700 1 -- 192.168.123.102:0/2047608937 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(80..80 src has 1..80) v4 ==== 6308+0+0 (secure 0 0 0) 0x7faaa8099840 con 0x7faab012bdb0 2026-03-10T10:24:40.405 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.403+0000 7faab48a5700 1 --2- 192.168.123.102:0/2047608937 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7faa98077910 0x7faa98079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:40.405 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.403+0000 7faab48a5700 1 --2- 192.168.123.102:0/2047608937 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7faa98077910 0x7faa98079dd0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7faaa000c4d0 tx=0x7faaa0015040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:40.405 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.404+0000 7faaadffb700 1 -- 192.168.123.102:0/2047608937 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7faaa80620f0 con 0x7faab012bdb0 2026-03-10T10:24:40.530 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.529+0000 7faab58a7700 1 -- 192.168.123.102:0/2047608937 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7faa9c000bf0 con 0x7faa98077910 2026-03-10T10:24:40.530 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.530+0000 7faaadffb700 1 -- 192.168.123.102:0/2047608937 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== 
mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 0x7faa9c000bf0 con 0x7faa98077910 2026-03-10T10:24:40.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.532+0000 7faa977fe700 1 -- 192.168.123.102:0/2047608937 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7faa98077910 msgr2=0x7faa98079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:40.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.532+0000 7faa977fe700 1 --2- 192.168.123.102:0/2047608937 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7faa98077910 0x7faa98079dd0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7faaa000c4d0 tx=0x7faaa0015040 comp rx=0 tx=0).stop 2026-03-10T10:24:40.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.532+0000 7faa977fe700 1 -- 192.168.123.102:0/2047608937 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faab012bdb0 msgr2=0x7faab012e240 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:40.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.532+0000 7faa977fe700 1 --2- 192.168.123.102:0/2047608937 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faab012bdb0 0x7faab012e240 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7faaa800d350 tx=0x7faaa800d710 comp rx=0 tx=0).stop 2026-03-10T10:24:40.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.533+0000 7faa977fe700 1 -- 192.168.123.102:0/2047608937 shutdown_connections 2026-03-10T10:24:40.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.533+0000 7faa977fe700 1 --2- 192.168.123.102:0/2047608937 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7faab0075a10 0x7faab0083970 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.533+0000 7faa977fe700 1 
--2- 192.168.123.102:0/2047608937 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7faa98077910 0x7faa98079dd0 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.533+0000 7faa977fe700 1 --2- 192.168.123.102:0/2047608937 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faab012bdb0 0x7faab012e240 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.533+0000 7faa977fe700 1 -- 192.168.123.102:0/2047608937 >> 192.168.123.102:0/2047608937 conn(0x7faab006daa0 msgr2=0x7faab006ff00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:40.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.533+0000 7faa977fe700 1 -- 192.168.123.102:0/2047608937 shutdown_connections 2026-03-10T10:24:40.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.533+0000 7faa977fe700 1 -- 192.168.123.102:0/2047608937 wait complete. 
2026-03-10T10:24:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:40 vm05.local ceph-mon[103593]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T10:24:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:40 vm05.local ceph-mon[103593]: pgmap v158: 65 pgs: 16 active+undersized, 12 active+undersized+degraded, 37 active+clean; 209 MiB data, 913 MiB used, 119 GiB / 120 GiB avail; 11 MiB/s rd, 6 op/s; 35/228 objects degraded (15.351%) 2026-03-10T10:24:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:40 vm05.local ceph-mon[103593]: osd.5 [v2:192.168.123.105:6816/3443749848,v1:192.168.123.105:6817/3443749848] boot 2026-03-10T10:24:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:40 vm05.local ceph-mon[103593]: osdmap e79: 6 total, 6 up, 6 in 2026-03-10T10:24:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:40 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T10:24:40.543 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:24:40.603 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.602+0000 7f972388a700 1 -- 192.168.123.102:0/588496897 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f971c075a40 msgr2=0x7f971c077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:40.603 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.602+0000 7f972388a700 1 --2- 192.168.123.102:0/588496897 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f971c075a40 0x7f971c077ed0 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f971400d3f0 tx=0x7f971400d700 comp rx=0 tx=0).stop 2026-03-10T10:24:40.603 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.602+0000 7f972388a700 1 -- 192.168.123.102:0/588496897 shutdown_connections 2026-03-10T10:24:40.603 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.602+0000 7f972388a700 1 --2- 192.168.123.102:0/588496897 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f971c075a40 0x7f971c077ed0 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.603 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.602+0000 7f972388a700 1 --2- 192.168.123.102:0/588496897 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f971c072b50 0x7f971c072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.603 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.602+0000 7f972388a700 1 -- 192.168.123.102:0/588496897 >> 192.168.123.102:0/588496897 conn(0x7f971c06dae0 msgr2=0x7f971c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:40.604 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.603+0000 7f972388a700 1 -- 192.168.123.102:0/588496897 shutdown_connections 2026-03-10T10:24:40.604 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.603+0000 7f972388a700 1 -- 192.168.123.102:0/588496897 wait complete. 
2026-03-10T10:24:40.604 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.603+0000 7f972388a700 1 Processor -- start 2026-03-10T10:24:40.604 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.603+0000 7f972388a700 1 -- start start 2026-03-10T10:24:40.604 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.603+0000 7f972388a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f971c072b50 0x7f971c083100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:40.604 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.603+0000 7f972388a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f971c083640 0x7f971c12e400 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:40.604 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.603+0000 7f972388a700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f971c083b80 con 0x7f971c083640 2026-03-10T10:24:40.605 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.603+0000 7f972388a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f971c083cf0 con 0x7f971c072b50 2026-03-10T10:24:40.605 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.603+0000 7f9720e25700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f971c083640 0x7f971c12e400 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:40.605 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.603+0000 7f9720e25700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f971c083640 0x7f971c12e400 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:46970/0 (socket says 192.168.123.102:46970) 2026-03-10T10:24:40.605 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.603+0000 7f9720e25700 1 -- 192.168.123.102:0/1911234197 learned_addr learned my addr 192.168.123.102:0/1911234197 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:40.605 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.604+0000 7f9721626700 1 --2- 192.168.123.102:0/1911234197 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f971c072b50 0x7f971c083100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:40.605 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.604+0000 7f9720e25700 1 -- 192.168.123.102:0/1911234197 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f971c072b50 msgr2=0x7f971c083100 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:40.605 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.604+0000 7f9720e25700 1 --2- 192.168.123.102:0/1911234197 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f971c072b50 0x7f971c083100 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.605 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.604+0000 7f9720e25700 1 -- 192.168.123.102:0/1911234197 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9714007ed0 con 0x7f971c083640 2026-03-10T10:24:40.605 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.604+0000 7f9720e25700 1 --2- 192.168.123.102:0/1911234197 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f971c083640 0x7f971c12e400 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f9714003c60 tx=0x7f9714003d40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:24:40.606 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.606+0000 7f97127fc700 1 -- 192.168.123.102:0/1911234197 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f971401c070 con 0x7f971c083640 2026-03-10T10:24:40.606 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.606+0000 7f972388a700 1 -- 192.168.123.102:0/1911234197 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f971c12ea00 con 0x7f971c083640 2026-03-10T10:24:40.607 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.606+0000 7f972388a700 1 -- 192.168.123.102:0/1911234197 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f971c12eea0 con 0x7f971c083640 2026-03-10T10:24:40.607 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.607+0000 7f97127fc700 1 -- 192.168.123.102:0/1911234197 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f971400fb40 con 0x7f971c083640 2026-03-10T10:24:40.607 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.607+0000 7f97127fc700 1 -- 192.168.123.102:0/1911234197 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9714021cb0 con 0x7f971c083640 2026-03-10T10:24:40.608 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.608+0000 7f97127fc700 1 -- 192.168.123.102:0/1911234197 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f971402b430 con 0x7f971c083640 2026-03-10T10:24:40.608 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.608+0000 7f97127fc700 1 --2- 192.168.123.102:0/1911234197 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9708077ab0 0x7f9708079f70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:40.609 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.608+0000 7f9721626700 1 --2- 192.168.123.102:0/1911234197 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9708077ab0 0x7f9708079f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:40.609 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.609+0000 7f97127fc700 1 -- 192.168.123.102:0/1911234197 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(80..80 src has 1..80) v4 ==== 6308+0+0 (secure 0 0 0) 0x7f9714013070 con 0x7f971c083640 2026-03-10T10:24:40.609 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.609+0000 7f972388a700 1 -- 192.168.123.102:0/1911234197 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9700005320 con 0x7f971c083640 2026-03-10T10:24:40.610 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.609+0000 7f9721626700 1 --2- 192.168.123.102:0/1911234197 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9708077ab0 0x7f9708079f70 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f9718009990 tx=0x7f9718008040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:40.618 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.618+0000 7f97127fc700 1 -- 192.168.123.102:0/1911234197 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f97140650f0 con 0x7f971c083640 2026-03-10T10:24:40.749 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.748+0000 7f972388a700 1 -- 192.168.123.102:0/1911234197 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 
-- 0x7f9700000bf0 con 0x7f9708077ab0 2026-03-10T10:24:40.751 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.751+0000 7f97127fc700 1 -- 192.168.123.102:0/1911234197 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 0x7f9700000bf0 con 0x7f9708077ab0 2026-03-10T10:24:40.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.758+0000 7f9707fff700 1 -- 192.168.123.102:0/1911234197 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9708077ab0 msgr2=0x7f9708079f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:40.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.758+0000 7f9707fff700 1 --2- 192.168.123.102:0/1911234197 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9708077ab0 0x7f9708079f70 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f9718009990 tx=0x7f9718008040 comp rx=0 tx=0).stop 2026-03-10T10:24:40.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.758+0000 7f9707fff700 1 -- 192.168.123.102:0/1911234197 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f971c083640 msgr2=0x7f971c12e400 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:40.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.758+0000 7f9707fff700 1 --2- 192.168.123.102:0/1911234197 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f971c083640 0x7f971c12e400 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f9714003c60 tx=0x7f9714003d40 comp rx=0 tx=0).stop 2026-03-10T10:24:40.758 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.758+0000 7f9707fff700 1 -- 192.168.123.102:0/1911234197 shutdown_connections 2026-03-10T10:24:40.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.758+0000 7f9707fff700 1 --2- 192.168.123.102:0/1911234197 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9708077ab0 0x7f9708079f70 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.758+0000 7f9707fff700 1 --2- 192.168.123.102:0/1911234197 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f971c072b50 0x7f971c083100 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.758+0000 7f9707fff700 1 --2- 192.168.123.102:0/1911234197 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f971c083640 0x7f971c12e400 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.759+0000 7f9707fff700 1 -- 192.168.123.102:0/1911234197 >> 192.168.123.102:0/1911234197 conn(0x7f971c06dae0 msgr2=0x7f971c06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:40.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.759+0000 7f9707fff700 1 -- 192.168.123.102:0/1911234197 shutdown_connections 2026-03-10T10:24:40.759 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.759+0000 7f9707fff700 1 -- 192.168.123.102:0/1911234197 wait complete. 
2026-03-10T10:24:40.824 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.824+0000 7f07d1598700 1 -- 192.168.123.102:0/2393855264 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f07cc072b50 msgr2=0x7f07cc072f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:40.824 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.824+0000 7f07d1598700 1 --2- 192.168.123.102:0/2393855264 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f07cc072b50 0x7f07cc072f70 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f07bc007780 tx=0x7f07bc007a90 comp rx=0 tx=0).stop 2026-03-10T10:24:40.824 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.824+0000 7f07d1598700 1 -- 192.168.123.102:0/2393855264 shutdown_connections 2026-03-10T10:24:40.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.824+0000 7f07d1598700 1 --2- 192.168.123.102:0/2393855264 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07cc075a40 0x7f07cc077ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.824+0000 7f07d1598700 1 --2- 192.168.123.102:0/2393855264 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f07cc072b50 0x7f07cc072f70 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.824+0000 7f07d1598700 1 -- 192.168.123.102:0/2393855264 >> 192.168.123.102:0/2393855264 conn(0x7f07cc06dae0 msgr2=0x7f07cc06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:40.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.824+0000 7f07d1598700 1 -- 192.168.123.102:0/2393855264 shutdown_connections 2026-03-10T10:24:40.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.824+0000 7f07d1598700 1 -- 192.168.123.102:0/2393855264 
wait complete. 2026-03-10T10:24:40.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.825+0000 7f07d1598700 1 Processor -- start 2026-03-10T10:24:40.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.825+0000 7f07d1598700 1 -- start start 2026-03-10T10:24:40.827 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.825+0000 7f07d1598700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f07cc075a40 0x7f07cc0830e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:40.827 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.825+0000 7f07d1598700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07cc083620 0x7f07cc1b3170 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:40.827 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.825+0000 7f07d1598700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f07cc083b30 con 0x7f07cc075a40 2026-03-10T10:24:40.827 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.825+0000 7f07d1598700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f07cc083ca0 con 0x7f07cc083620 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.826+0000 7f07ca7fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07cc083620 0x7f07cc1b3170 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.826+0000 7f07ca7fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07cc083620 0x7f07cc1b3170 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.102:43862/0 (socket says 192.168.123.102:43862) 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.826+0000 7f07ca7fc700 1 -- 192.168.123.102:0/1618685288 learned_addr learned my addr 192.168.123.102:0/1618685288 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.826+0000 7f07caffd700 1 --2- 192.168.123.102:0/1618685288 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f07cc075a40 0x7f07cc0830e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.826+0000 7f07ca7fc700 1 -- 192.168.123.102:0/1618685288 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f07cc075a40 msgr2=0x7f07cc0830e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.826+0000 7f07ca7fc700 1 --2- 192.168.123.102:0/1618685288 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f07cc075a40 0x7f07cc0830e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.826+0000 7f07ca7fc700 1 -- 192.168.123.102:0/1618685288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f07bc007430 con 0x7f07cc083620 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.826+0000 7f07ca7fc700 1 --2- 192.168.123.102:0/1618685288 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07cc083620 0x7f07cc1b3170 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f07c400c390 tx=0x7f07c400c6a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.827+0000 7f07b3fff700 1 -- 192.168.123.102:0/1618685288 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f07c400e030 con 0x7f07cc083620 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.827+0000 7f07d1598700 1 -- 192.168.123.102:0/1618685288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f07cc1c4720 con 0x7f07cc083620 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.827+0000 7f07d1598700 1 -- 192.168.123.102:0/1618685288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f07cc1c4c40 con 0x7f07cc083620 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.827+0000 7f07b3fff700 1 -- 192.168.123.102:0/1618685288 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f07c400f040 con 0x7f07cc083620 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.827+0000 7f07b3fff700 1 -- 192.168.123.102:0/1618685288 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f07c4014650 con 0x7f07cc083620 2026-03-10T10:24:40.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.828+0000 7f07d1598700 1 -- 192.168.123.102:0/1618685288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f07cc04ea90 con 0x7f07cc083620 2026-03-10T10:24:40.829 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.829+0000 7f07b3fff700 1 -- 192.168.123.102:0/1618685288 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f07c4009110 con 0x7f07cc083620 2026-03-10T10:24:40.832 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.829+0000 
7f07b3fff700 1 --2- 192.168.123.102:0/1618685288 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f07b4077910 0x7f07b4079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:40.832 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.830+0000 7f07b3fff700 1 -- 192.168.123.102:0/1618685288 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(80..80 src has 1..80) v4 ==== 6308+0+0 (secure 0 0 0) 0x7f07c4099e40 con 0x7f07cc083620 2026-03-10T10:24:40.832 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.830+0000 7f07caffd700 1 --2- 192.168.123.102:0/1618685288 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f07b4077910 0x7f07b4079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:40.832 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.830+0000 7f07caffd700 1 --2- 192.168.123.102:0/1618685288 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f07b4077910 0x7f07b4079dd0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f07bc007750 tx=0x7f07bc01f040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:40.832 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.831+0000 7f07b3fff700 1 -- 192.168.123.102:0/1618685288 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f07c4062640 con 0x7f07cc083620 2026-03-10T10:24:40.949 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.948+0000 7f07d1598700 1 -- 192.168.123.102:0/1618685288 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f07cc1c4f20 con 0x7f07b4077910 
2026-03-10T10:24:40.957 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (9m) 71s ago 9m 23.3M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (9m) 71s ago 9m 9433k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (9m) 8s ago 9m 11.8M - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (3m) 71s ago 9m 7838k - 19.2.3-678-ge911bdeb 654f31e6858e c494730ab019 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (3m) 8s ago 9m 7852k - 19.2.3-678-ge911bdeb 654f31e6858e 1dc17b49fee4 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (9m) 71s ago 9m 89.9M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (7m) 71s ago 7m 16.9M - 18.2.1 5be31c24972a e97c369450c8 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (7m) 71s ago 7m 173M - 18.2.1 5be31c24972a 56b76ae59bcb 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (7m) 8s ago 7m 17.4M - 18.2.1 5be31c24972a 02b882918ab0 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (7m) 8s ago 7m 95.6M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (4m) 71s ago 10m 619M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq 
vm05 *:8443,9283,8765 running (4m) 8s ago 9m 489M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (4m) 71s ago 10m 61.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a2a2cb182f4 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (3m) 8s ago 9m 53.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 3fb75dafefb6 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (9m) 71s ago 9m 16.4M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (9m) 8s ago 9m 15.6M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (3m) 71s ago 8m 228M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 319155aac718 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (95s) 71s ago 8m 119M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 6b6be7f62bd3 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (73s) 71s ago 8m 13.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 745b9931485f 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (51s) 8s ago 8m 159M 4096M 19.2.3-678-ge911bdeb 654f31e6858e fe29904ecf52 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (30s) 8s ago 8m 142M 4096M 19.2.3-678-ge911bdeb 654f31e6858e fe0b3f802cec 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (9s) 8s ago 8m 12.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c60f7383494f 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (4m) 71s ago 9m 64.2M - 2.43.0 a07b618ecd1d 5ebb885bd417 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.955+0000 7f07b3fff700 1 -- 
192.168.123.102:0/1618685288 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3552 (secure 0 0 0) 0x7f07cc1c4f20 con 0x7f07b4077910 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.957+0000 7f07b1ffb700 1 -- 192.168.123.102:0/1618685288 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f07b4077910 msgr2=0x7f07b4079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.957+0000 7f07b1ffb700 1 --2- 192.168.123.102:0/1618685288 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f07b4077910 0x7f07b4079dd0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f07bc007750 tx=0x7f07bc01f040 comp rx=0 tx=0).stop 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.957+0000 7f07b1ffb700 1 -- 192.168.123.102:0/1618685288 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07cc083620 msgr2=0x7f07cc1b3170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.957+0000 7f07b1ffb700 1 --2- 192.168.123.102:0/1618685288 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07cc083620 0x7f07cc1b3170 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f07c400c390 tx=0x7f07c400c6a0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.958+0000 7f07b1ffb700 1 -- 192.168.123.102:0/1618685288 shutdown_connections 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.958+0000 7f07b1ffb700 1 --2- 192.168.123.102:0/1618685288 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f07cc075a40 0x7f07cc0830e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.958 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.958+0000 7f07b1ffb700 1 --2- 192.168.123.102:0/1618685288 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f07b4077910 0x7f07b4079dd0 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.958+0000 7f07b1ffb700 1 --2- 192.168.123.102:0/1618685288 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07cc083620 0x7f07cc1b3170 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.958+0000 7f07b1ffb700 1 -- 192.168.123.102:0/1618685288 >> 192.168.123.102:0/1618685288 conn(0x7f07cc06dae0 msgr2=0x7f07cc06ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.958+0000 7f07b1ffb700 1 -- 192.168.123.102:0/1618685288 shutdown_connections 2026-03-10T10:24:40.958 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:40.958+0000 7f07b1ffb700 1 -- 192.168.123.102:0/1618685288 wait complete. 
2026-03-10T10:24:41.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.036+0000 7f98fd01e700 1 -- 192.168.123.102:0/3822580034 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98f810a700 msgr2=0x7f98f810cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.036+0000 7f98fd01e700 1 --2- 192.168.123.102:0/3822580034 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98f810a700 0x7f98f810cb90 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f98ec009b00 tx=0x7f98ec009e10 comp rx=0 tx=0).stop 2026-03-10T10:24:41.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.036+0000 7f98fd01e700 1 -- 192.168.123.102:0/3822580034 shutdown_connections 2026-03-10T10:24:41.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.036+0000 7f98fd01e700 1 --2- 192.168.123.102:0/3822580034 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98f810a700 0x7f98f810cb90 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.036+0000 7f98fd01e700 1 --2- 192.168.123.102:0/3822580034 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98f8107d90 0x7f98f810a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.036+0000 7f98fd01e700 1 -- 192.168.123.102:0/3822580034 >> 192.168.123.102:0/3822580034 conn(0x7f98f806dae0 msgr2=0x7f98f806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:41.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.036+0000 7f98fd01e700 1 -- 192.168.123.102:0/3822580034 shutdown_connections 2026-03-10T10:24:41.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.037+0000 7f98fd01e700 1 -- 192.168.123.102:0/3822580034 
wait complete. 2026-03-10T10:24:41.037 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.037+0000 7f98fd01e700 1 Processor -- start 2026-03-10T10:24:41.038 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.037+0000 7f98fd01e700 1 -- start start 2026-03-10T10:24:41.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98fd01e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98f8107d90 0x7f98f81a5380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:41.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98fd01e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98f810a700 0x7f98f81a58c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:41.039 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98fd01e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f98f81a5ee0 con 0x7f98f8107d90 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98fd01e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f98f81a6020 con 0x7f98f810a700 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98f6d9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98f8107d90 0x7f98f81a5380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98f6d9d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98f8107d90 0x7f98f81a5380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 
says I am v2:192.168.123.102:46992/0 (socket says 192.168.123.102:46992) 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98f6d9d700 1 -- 192.168.123.102:0/251654834 learned_addr learned my addr 192.168.123.102:0/251654834 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98f659c700 1 --2- 192.168.123.102:0/251654834 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98f810a700 0x7f98f81a58c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98f6d9d700 1 -- 192.168.123.102:0/251654834 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98f810a700 msgr2=0x7f98f81a58c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98f6d9d700 1 --2- 192.168.123.102:0/251654834 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98f810a700 0x7f98f81a58c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98f6d9d700 1 -- 192.168.123.102:0/251654834 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f98ec0097e0 con 0x7f98f8107d90 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98f6d9d700 1 --2- 192.168.123.102:0/251654834 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98f8107d90 0x7f98f81a5380 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7f98f000c3b0 tx=0x7f98f000c6c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.038+0000 7f98dffff700 1 -- 192.168.123.102:0/251654834 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f98f0017070 con 0x7f98f8107d90 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.039+0000 7f98dffff700 1 -- 192.168.123.102:0/251654834 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f98f000f040 con 0x7f98f8107d90 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.039+0000 7f98dffff700 1 -- 192.168.123.102:0/251654834 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f98f000e050 con 0x7f98f8107d90 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.039+0000 7f98fd01e700 1 -- 192.168.123.102:0/251654834 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f98f81aaa70 con 0x7f98f8107d90 2026-03-10T10:24:41.040 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.039+0000 7f98fd01e700 1 -- 192.168.123.102:0/251654834 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f98f81aaf40 con 0x7f98f8107d90 2026-03-10T10:24:41.043 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.040+0000 7f98ddffb700 1 -- 192.168.123.102:0/251654834 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f98f804ea90 con 0x7f98f8107d90 2026-03-10T10:24:41.043 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.041+0000 7f98dffff700 1 -- 192.168.123.102:0/251654834 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f98f0007500 con 0x7f98f8107d90 2026-03-10T10:24:41.043 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.041+0000 
7f98dffff700 1 --2- 192.168.123.102:0/251654834 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f98e00776c0 0x7f98e0079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:41.043 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.042+0000 7f98f659c700 1 --2- 192.168.123.102:0/251654834 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f98e00776c0 0x7f98e0079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:41.043 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.042+0000 7f98f659c700 1 --2- 192.168.123.102:0/251654834 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f98e00776c0 0x7f98e0079b80 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f98ec005270 tx=0x7f98ec01a040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:41.043 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.042+0000 7f98dffff700 1 -- 192.168.123.102:0/251654834 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(80..80 src has 1..80) v4 ==== 6308+0+0 (secure 0 0 0) 0x7f98f00669d0 con 0x7f98f8107d90 2026-03-10T10:24:41.045 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.045+0000 7f98dffff700 1 -- 192.168.123.102:0/251654834 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f98f0061d20 con 0x7f98f8107d90 2026-03-10T10:24:41.211 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.211+0000 7f98ddffb700 1 -- 192.168.123.102:0/251654834 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f98f80619a0 con 0x7f98f8107d90 2026-03-10T10:24:41.213 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.212+0000 7f98dffff700 1 -- 192.168.123.102:0/251654834 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+709 (secure 0 0 0) 0x7f98f0061b40 con 0x7f98f8107d90 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 3 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)": 3, 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 10 2026-03-10T10:24:41.213 
INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:24:41.213 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:24:41.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.217+0000 7f98fd01e700 1 -- 192.168.123.102:0/251654834 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f98e00776c0 msgr2=0x7f98e0079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.217+0000 7f98fd01e700 1 --2- 192.168.123.102:0/251654834 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f98e00776c0 0x7f98e0079b80 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f98ec005270 tx=0x7f98ec01a040 comp rx=0 tx=0).stop 2026-03-10T10:24:41.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.217+0000 7f98fd01e700 1 -- 192.168.123.102:0/251654834 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98f8107d90 msgr2=0x7f98f81a5380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.217+0000 7f98fd01e700 1 --2- 192.168.123.102:0/251654834 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98f8107d90 0x7f98f81a5380 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7f98f000c3b0 tx=0x7f98f000c6c0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.217+0000 7f98fd01e700 1 -- 192.168.123.102:0/251654834 shutdown_connections 2026-03-10T10:24:41.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.217+0000 7f98fd01e700 1 --2- 192.168.123.102:0/251654834 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f98f8107d90 0x7f98f81a5380 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.217+0000 
7f98fd01e700 1 --2- 192.168.123.102:0/251654834 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f98e00776c0 0x7f98e0079b80 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.217+0000 7f98fd01e700 1 --2- 192.168.123.102:0/251654834 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f98f810a700 0x7f98f81a58c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.217+0000 7f98fd01e700 1 -- 192.168.123.102:0/251654834 >> 192.168.123.102:0/251654834 conn(0x7f98f806dae0 msgr2=0x7f98f806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:41.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.217+0000 7f98fd01e700 1 -- 192.168.123.102:0/251654834 shutdown_connections 2026-03-10T10:24:41.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.217+0000 7f98fd01e700 1 -- 192.168.123.102:0/251654834 wait complete. 
2026-03-10T10:24:41.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.301+0000 7fbd1759e700 1 -- 192.168.123.102:0/1082174623 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd180ffb80 msgr2=0x7fbd180fffa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.302 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.301+0000 7fbd1759e700 1 --2- 192.168.123.102:0/1082174623 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd180ffb80 0x7fbd180fffa0 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fbd08009b50 tx=0x7fbd08009e60 comp rx=0 tx=0).stop 2026-03-10T10:24:41.302 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:41 vm02.local ceph-mon[110129]: osdmap e80: 6 total, 6 up, 6 in 2026-03-10T10:24:41.302 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:41 vm02.local ceph-mon[110129]: Health check update: Degraded data redundancy: 35/228 objects degraded (15.351%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T10:24:41.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.304+0000 7fbd1759e700 1 -- 192.168.123.102:0/1082174623 shutdown_connections 2026-03-10T10:24:41.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.304+0000 7fbd1759e700 1 --2- 192.168.123.102:0/1082174623 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd18100d80 0x7fbd181011e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.304+0000 7fbd1759e700 1 --2- 192.168.123.102:0/1082174623 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd180ffb80 0x7fbd180fffa0 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.304+0000 7fbd1759e700 1 -- 192.168.123.102:0/1082174623 >> 192.168.123.102:0/1082174623 conn(0x7fbd180fb0e0 
msgr2=0x7fbd180fd560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:41.304 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.304+0000 7fbd1759e700 1 -- 192.168.123.102:0/1082174623 shutdown_connections 2026-03-10T10:24:41.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.304+0000 7fbd1759e700 1 -- 192.168.123.102:0/1082174623 wait complete. 2026-03-10T10:24:41.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.304+0000 7fbd1759e700 1 Processor -- start 2026-03-10T10:24:41.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.305+0000 7fbd1759e700 1 -- start start 2026-03-10T10:24:41.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.305+0000 7fbd1759e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd180ffb80 0x7fbd181a6c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:41.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.305+0000 7fbd1759e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd18100d80 0x7fbd181a7150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:41.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.305+0000 7fbd1759e700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbd181a5260 con 0x7fbd18100d80 2026-03-10T10:24:41.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.305+0000 7fbd1759e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbd181a53a0 con 0x7fbd180ffb80 2026-03-10T10:24:41.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.305+0000 7fbd1659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd180ffb80 0x7fbd181a6c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:41.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.305+0000 7fbd15d9b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd18100d80 0x7fbd181a7150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:41.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.305+0000 7fbd15d9b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd18100d80 0x7fbd181a7150 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:47024/0 (socket says 192.168.123.102:47024) 2026-03-10T10:24:41.305 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.305+0000 7fbd15d9b700 1 -- 192.168.123.102:0/797886975 learned_addr learned my addr 192.168.123.102:0/797886975 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:41.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.306+0000 7fbd1659c700 1 -- 192.168.123.102:0/797886975 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd18100d80 msgr2=0x7fbd181a7150 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.306+0000 7fbd1659c700 1 --2- 192.168.123.102:0/797886975 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd18100d80 0x7fbd181a7150 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.306 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.306+0000 7fbd1659c700 1 -- 192.168.123.102:0/797886975 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbd080097e0 con 0x7fbd180ffb80 2026-03-10T10:24:41.306 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.306+0000 7fbd1659c700 1 --2- 192.168.123.102:0/797886975 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd180ffb80 0x7fbd181a6c10 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7fbd08005310 tx=0x7fbd0800baa0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:41.309 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.309+0000 7fbd077fe700 1 -- 192.168.123.102:0/797886975 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbd0801d070 con 0x7fbd180ffb80 2026-03-10T10:24:41.309 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.309+0000 7fbd1759e700 1 -- 192.168.123.102:0/797886975 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbd181a5620 con 0x7fbd180ffb80 2026-03-10T10:24:41.309 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.309+0000 7fbd1759e700 1 -- 192.168.123.102:0/797886975 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbd181a5a80 con 0x7fbd180ffb80 2026-03-10T10:24:41.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.309+0000 7fbd077fe700 1 -- 192.168.123.102:0/797886975 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fbd08004500 con 0x7fbd180ffb80 2026-03-10T10:24:41.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.309+0000 7fbd077fe700 1 -- 192.168.123.102:0/797886975 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbd0800f810 con 0x7fbd180ffb80 2026-03-10T10:24:41.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.310+0000 7fbd1759e700 1 -- 192.168.123.102:0/797886975 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbd1804ea90 con 
0x7fbd180ffb80 2026-03-10T10:24:41.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.310+0000 7fbd077fe700 1 -- 192.168.123.102:0/797886975 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fbd08003b10 con 0x7fbd180ffb80 2026-03-10T10:24:41.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.311+0000 7fbd077fe700 1 --2- 192.168.123.102:0/797886975 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbd000779e0 0x7fbd00079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:41.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.311+0000 7fbd077fe700 1 -- 192.168.123.102:0/797886975 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(80..80 src has 1..80) v4 ==== 6308+0+0 (secure 0 0 0) 0x7fbd0809b110 con 0x7fbd180ffb80 2026-03-10T10:24:41.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.311+0000 7fbd15d9b700 1 --2- 192.168.123.102:0/797886975 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbd000779e0 0x7fbd00079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:41.312 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.311+0000 7fbd15d9b700 1 --2- 192.168.123.102:0/797886975 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbd000779e0 0x7fbd00079ea0 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7fbd0c009cc0 tx=0x7fbd0c009400 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:41.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.314+0000 7fbd077fe700 1 -- 192.168.123.102:0/797886975 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 
0x7fbd08063910 con 0x7fbd180ffb80 2026-03-10T10:24:41.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.450+0000 7fbd1759e700 1 -- 192.168.123.102:0/797886975 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fbd181a5d60 con 0x7fbd180ffb80 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.450+0000 7fbd077fe700 1 -- 192.168.123.102:0/797886975 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 18 v18) v1 ==== 76+0+1743 (secure 0 0 0) 0x7fbd08063060 con 0x7fbd180ffb80 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:e18 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:btime 2026-03-10T10:24:36:615911+0000 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:epoch 18 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:flags 12 joinable allow_snaps allow_multimds_snaps 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:24:36.615892+0000 2026-03-10T10:24:41.451 
INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:max_xattr_size 65536 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 78 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:up {0=14494} 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:24:41.451 
INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:qdb_cluster leader: 0 members: 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{0:14494} state up:replay seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:24299} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:24:41.451 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{-1:34328} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6826/3693577687,v1:192.168.123.105:6827/3693577687] compat {c=[1],r=[1],i=[7ff]}] 2026-03-10T10:24:41.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.454+0000 7fbd057fa700 1 -- 192.168.123.102:0/797886975 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbd000779e0 msgr2=0x7fbd00079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.454+0000 7fbd057fa700 1 --2- 192.168.123.102:0/797886975 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbd000779e0 0x7fbd00079ea0 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7fbd0c009cc0 tx=0x7fbd0c009400 comp rx=0 tx=0).stop 2026-03-10T10:24:41.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.454+0000 7fbd057fa700 1 -- 192.168.123.102:0/797886975 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd180ffb80 msgr2=0x7fbd181a6c10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.454+0000 7fbd057fa700 1 --2- 192.168.123.102:0/797886975 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd180ffb80 0x7fbd181a6c10 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7fbd08005310 tx=0x7fbd0800baa0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.454+0000 7fbd057fa700 1 -- 192.168.123.102:0/797886975 shutdown_connections 2026-03-10T10:24:41.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.454+0000 7fbd057fa700 1 --2- 192.168.123.102:0/797886975 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbd000779e0 0x7fbd00079ea0 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.454+0000 7fbd057fa700 1 --2- 192.168.123.102:0/797886975 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd180ffb80 0x7fbd181a6c10 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.454+0000 7fbd057fa700 1 --2- 192.168.123.102:0/797886975 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd18100d80 0x7fbd181a7150 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.454+0000 7fbd057fa700 1 -- 192.168.123.102:0/797886975 >> 192.168.123.102:0/797886975 conn(0x7fbd180fb0e0 msgr2=0x7fbd180684d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:41.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.454+0000 7fbd057fa700 1 -- 
192.168.123.102:0/797886975 shutdown_connections 2026-03-10T10:24:41.454 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.454+0000 7fbd057fa700 1 -- 192.168.123.102:0/797886975 wait complete. 2026-03-10T10:24:41.456 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 18 2026-03-10T10:24:41.526 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.525+0000 7f13b1ada700 1 -- 192.168.123.102:0/961388073 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13ac10a700 msgr2=0x7f13ac10cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.526 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.525+0000 7f13b1ada700 1 --2- 192.168.123.102:0/961388073 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13ac10a700 0x7f13ac10cb90 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f13a400b3a0 tx=0x7f13a400b6b0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.526 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.526+0000 7f13b1ada700 1 -- 192.168.123.102:0/961388073 shutdown_connections 2026-03-10T10:24:41.526 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.526+0000 7f13b1ada700 1 --2- 192.168.123.102:0/961388073 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13ac10a700 0x7f13ac10cb90 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.526 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.526+0000 7f13b1ada700 1 --2- 192.168.123.102:0/961388073 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13ac107d90 0x7f13ac10a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.526 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.526+0000 7f13b1ada700 1 -- 192.168.123.102:0/961388073 >> 192.168.123.102:0/961388073 conn(0x7f13ac06daa0 msgr2=0x7f13ac06ff00 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-10T10:24:41.527 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.526+0000 7f13b1ada700 1 -- 192.168.123.102:0/961388073 shutdown_connections 2026-03-10T10:24:41.527 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.526+0000 7f13b1ada700 1 -- 192.168.123.102:0/961388073 wait complete. 2026-03-10T10:24:41.527 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.526+0000 7f13b1ada700 1 Processor -- start 2026-03-10T10:24:41.527 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.526+0000 7f13b1ada700 1 -- start start 2026-03-10T10:24:41.527 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.527+0000 7f13b1ada700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13ac107d90 0x7f13ac1a53c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:41.527 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.527+0000 7f13b1ada700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13ac1a5900 0x7f13ac076fe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:41.528 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.527+0000 7f13b1ada700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f13ac1a5e10 con 0x7f13ac107d90 2026-03-10T10:24:41.528 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.527+0000 7f13b1ada700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f13ac1a5f80 con 0x7f13ac1a5900 2026-03-10T10:24:41.528 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.527+0000 7f13abfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13ac1a5900 0x7f13ac076fe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:41.528 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.527+0000 7f13abfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13ac1a5900 0x7f13ac076fe0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:43892/0 (socket says 192.168.123.102:43892) 2026-03-10T10:24:41.528 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.527+0000 7f13abfff700 1 -- 192.168.123.102:0/1152372915 learned_addr learned my addr 192.168.123.102:0/1152372915 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:41.528 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.527+0000 7f13b0ad8700 1 --2- 192.168.123.102:0/1152372915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13ac107d90 0x7f13ac1a53c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:41.528 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.528+0000 7f13abfff700 1 -- 192.168.123.102:0/1152372915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13ac107d90 msgr2=0x7f13ac1a53c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.528 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.528+0000 7f13abfff700 1 --2- 192.168.123.102:0/1152372915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13ac107d90 0x7f13ac1a53c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.528 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.528+0000 7f13abfff700 1 -- 192.168.123.102:0/1152372915 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f13a400b050 con 0x7f13ac1a5900 2026-03-10T10:24:41.528 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.528+0000 7f13abfff700 1 --2- 192.168.123.102:0/1152372915 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13ac1a5900 0x7f13ac076fe0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f13a4000f80 tx=0x7f13a4007b60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:41.528 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.528+0000 7f13a9ffb700 1 -- 192.168.123.102:0/1152372915 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f13a400e050 con 0x7f13ac1a5900 2026-03-10T10:24:41.529 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.528+0000 7f13b1ada700 1 -- 192.168.123.102:0/1152372915 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f13ac077520 con 0x7f13ac1a5900 2026-03-10T10:24:41.529 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.528+0000 7f13b1ada700 1 -- 192.168.123.102:0/1152372915 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f13ac077a70 con 0x7f13ac1a5900 2026-03-10T10:24:41.529 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.529+0000 7f13b1ada700 1 -- 192.168.123.102:0/1152372915 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f13ac118760 con 0x7f13ac1a5900 2026-03-10T10:24:41.529 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.529+0000 7f13a9ffb700 1 -- 192.168.123.102:0/1152372915 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f13a4007c00 con 0x7f13ac1a5900 2026-03-10T10:24:41.530 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.529+0000 7f13a9ffb700 1 -- 192.168.123.102:0/1152372915 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 
0x7f13a401b9a0 con 0x7f13ac1a5900 2026-03-10T10:24:41.531 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.531+0000 7f13a9ffb700 1 -- 192.168.123.102:0/1152372915 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f13a4019040 con 0x7f13ac1a5900 2026-03-10T10:24:41.531 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.531+0000 7f13a9ffb700 1 --2- 192.168.123.102:0/1152372915 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1394077910 0x7f1394079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:41.532 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.531+0000 7f13b0ad8700 1 --2- 192.168.123.102:0/1152372915 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1394077910 0x7f1394079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:41.532 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.531+0000 7f13a9ffb700 1 -- 192.168.123.102:0/1152372915 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(80..80 src has 1..80) v4 ==== 6308+0+0 (secure 0 0 0) 0x7f13a409b460 con 0x7f13ac1a5900 2026-03-10T10:24:41.532 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.532+0000 7f13b0ad8700 1 --2- 192.168.123.102:0/1152372915 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1394077910 0x7f1394079dd0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f139c005950 tx=0x7f139c00b410 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:41.533 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.533+0000 7f13a9ffb700 1 -- 192.168.123.102:0/1152372915 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 
72+0+195034 (secure 0 0 0) 0x7f13a4063be0 con 0x7f13ac1a5900 2026-03-10T10:24:41.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:41 vm05.local ceph-mon[103593]: osdmap e80: 6 total, 6 up, 6 in 2026-03-10T10:24:41.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:41 vm05.local ceph-mon[103593]: Health check update: Degraded data redundancy: 35/228 objects degraded (15.351%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T10:24:41.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.658+0000 7f13b1ada700 1 -- 192.168.123.102:0/1152372915 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f13ac0776b0 con 0x7f1394077910 2026-03-10T10:24:41.663 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.663+0000 7f13a9ffb700 1 -- 192.168.123.102:0/1152372915 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 0x7f13ac0776b0 con 0x7f1394077910 2026-03-10T10:24:41.663 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [ 2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout: "mgr", 2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout: "mon", 2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout: "osd", 2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout: "crash" 2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout: ], 
2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "12/23 daemons upgraded", 2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading mds daemons", 2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:24:41.664 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:24:41.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.668+0000 7f13937fe700 1 -- 192.168.123.102:0/1152372915 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1394077910 msgr2=0x7f1394079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.668+0000 7f13937fe700 1 --2- 192.168.123.102:0/1152372915 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1394077910 0x7f1394079dd0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f139c005950 tx=0x7f139c00b410 comp rx=0 tx=0).stop 2026-03-10T10:24:41.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.668+0000 7f13937fe700 1 -- 192.168.123.102:0/1152372915 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13ac1a5900 msgr2=0x7f13ac076fe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.668+0000 7f13937fe700 1 --2- 192.168.123.102:0/1152372915 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13ac1a5900 0x7f13ac076fe0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f13a4000f80 tx=0x7f13a4007b60 comp rx=0 tx=0).stop 2026-03-10T10:24:41.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.669+0000 7f13937fe700 1 -- 192.168.123.102:0/1152372915 shutdown_connections 2026-03-10T10:24:41.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.669+0000 7f13937fe700 1 --2- 192.168.123.102:0/1152372915 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13ac107d90 0x7f13ac1a53c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.669+0000 7f13937fe700 1 --2- 192.168.123.102:0/1152372915 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1394077910 0x7f1394079dd0 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.669+0000 7f13937fe700 1 --2- 192.168.123.102:0/1152372915 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13ac1a5900 0x7f13ac076fe0 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.669+0000 7f13937fe700 1 -- 192.168.123.102:0/1152372915 >> 192.168.123.102:0/1152372915 conn(0x7f13ac06daa0 msgr2=0x7f13ac06e780 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:41.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.669+0000 7f13937fe700 1 -- 192.168.123.102:0/1152372915 shutdown_connections 2026-03-10T10:24:41.669 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.669+0000 7f13937fe700 1 -- 192.168.123.102:0/1152372915 wait complete. 
2026-03-10T10:24:41.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.733+0000 7f33aaef9700 1 -- 192.168.123.102:0/3648307113 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f33a410a700 msgr2=0x7f33a410cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.733+0000 7f33aaef9700 1 --2- 192.168.123.102:0/3648307113 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f33a410a700 0x7f33a410cb90 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f339c00b3a0 tx=0x7f339c00b6b0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.733+0000 7f33aaef9700 1 -- 192.168.123.102:0/3648307113 shutdown_connections 2026-03-10T10:24:41.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.733+0000 7f33aaef9700 1 --2- 192.168.123.102:0/3648307113 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f33a410a700 0x7f33a410cb90 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.733+0000 7f33aaef9700 1 --2- 192.168.123.102:0/3648307113 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33a4107d90 0x7f33a410a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.733+0000 7f33aaef9700 1 -- 192.168.123.102:0/3648307113 >> 192.168.123.102:0/3648307113 conn(0x7f33a406dae0 msgr2=0x7f33a406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:41.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.733+0000 7f33aaef9700 1 -- 192.168.123.102:0/3648307113 shutdown_connections 2026-03-10T10:24:41.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.733+0000 7f33aaef9700 1 -- 192.168.123.102:0/3648307113 
wait complete. 2026-03-10T10:24:41.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.733+0000 7f33aaef9700 1 Processor -- start 2026-03-10T10:24:41.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.733+0000 7f33aaef9700 1 -- start start 2026-03-10T10:24:41.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.734+0000 7f33aaef9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f33a4107d90 0x7f33a4116b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:41.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.734+0000 7f33aaef9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33a4117060 0x7f33a41b31a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:41.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.734+0000 7f33aaef9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f33a4117570 con 0x7f33a4117060 2026-03-10T10:24:41.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.734+0000 7f33aaef9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f33a41176e0 con 0x7f33a4107d90 2026-03-10T10:24:41.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.734+0000 7f33a3fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33a4117060 0x7f33a41b31a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:41.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.734+0000 7f33a3fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33a4117060 0x7f33a41b31a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 
says I am v2:192.168.123.102:47034/0 (socket says 192.168.123.102:47034) 2026-03-10T10:24:41.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.734+0000 7f33a3fff700 1 -- 192.168.123.102:0/682858000 learned_addr learned my addr 192.168.123.102:0/682858000 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:24:41.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.734+0000 7f33a8c95700 1 --2- 192.168.123.102:0/682858000 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f33a4107d90 0x7f33a4116b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:41.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.734+0000 7f33a3fff700 1 -- 192.168.123.102:0/682858000 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f33a4107d90 msgr2=0x7f33a4116b20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.734+0000 7f33a3fff700 1 --2- 192.168.123.102:0/682858000 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f33a4107d90 0x7f33a4116b20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.734+0000 7f33a3fff700 1 -- 192.168.123.102:0/682858000 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f339c00b050 con 0x7f33a4117060 2026-03-10T10:24:41.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.735+0000 7f33a3fff700 1 --2- 192.168.123.102:0/682858000 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33a4117060 0x7f33a41b31a0 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f339c003c30 tx=0x7f339c003d10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:24:41.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.736+0000 7f33a1ffb700 1 -- 192.168.123.102:0/682858000 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f339c00e050 con 0x7f33a4117060 2026-03-10T10:24:41.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.736+0000 7f33aaef9700 1 -- 192.168.123.102:0/682858000 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f33a41b3740 con 0x7f33a4117060 2026-03-10T10:24:41.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.736+0000 7f33aaef9700 1 -- 192.168.123.102:0/682858000 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f33a41b3c60 con 0x7f33a4117060 2026-03-10T10:24:41.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.736+0000 7f33a1ffb700 1 -- 192.168.123.102:0/682858000 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f339c007b10 con 0x7f33a4117060 2026-03-10T10:24:41.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.736+0000 7f33a1ffb700 1 -- 192.168.123.102:0/682858000 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f339c01bce0 con 0x7f33a4117060 2026-03-10T10:24:41.737 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.737+0000 7f33aaef9700 1 -- 192.168.123.102:0/682858000 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3390005320 con 0x7f33a4117060 2026-03-10T10:24:41.739 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.738+0000 7f33a1ffb700 1 -- 192.168.123.102:0/682858000 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f339c019040 con 0x7f33a4117060 2026-03-10T10:24:41.739 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.738+0000 
7f33a1ffb700 1 --2- 192.168.123.102:0/682858000 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f338c0779e0 0x7f338c079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:24:41.739 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.738+0000 7f33a1ffb700 1 -- 192.168.123.102:0/682858000 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(80..80 src has 1..80) v4 ==== 6308+0+0 (secure 0 0 0) 0x7f339c01be40 con 0x7f33a4117060 2026-03-10T10:24:41.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.741+0000 7f33a8c95700 1 --2- 192.168.123.102:0/682858000 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f338c0779e0 0x7f338c079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:24:41.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.742+0000 7f33a1ffb700 1 -- 192.168.123.102:0/682858000 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f339c063640 con 0x7f33a4117060 2026-03-10T10:24:41.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.742+0000 7f33a8c95700 1 --2- 192.168.123.102:0/682858000 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f338c0779e0 0x7f338c079ea0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f3398009d20 tx=0x7f3398009450 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:24:41.920 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.920+0000 7f33aaef9700 1 -- 192.168.123.102:0/682858000 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f3390005190 con 0x7f33a4117060 2026-03-10T10:24:41.923 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.923+0000 7f33a1ffb700 1 -- 192.168.123.102:0/682858000 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+948 (secure 0 0 0) 0x7f339c062d90 con 0x7f33a4117060 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_WARN 1 filesystem is degraded; Degraded data redundancy: 35/228 objects degraded (15.351%), 12 pgs degraded 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] FS_DEGRADED: 1 filesystem is degraded 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: fs cephfs is degraded 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 35/228 objects degraded (15.351%), 12 pgs degraded 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.2 is active+undersized+degraded, acting [1,0] 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.3 is active+undersized+degraded, acting [2,1] 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.8 is active+undersized+degraded, acting [3,0] 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.b is active+undersized+degraded, acting [3,4] 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.f is active+undersized+degraded, acting [4,0] 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.14 is active+undersized+degraded, acting [3,4] 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.16 is active+undersized+degraded, acting [3,2] 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.18 is active+undersized+degraded, acting [4,3] 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.1a is active+undersized+degraded, acting [3,4] 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.1c is 
active+undersized+degraded, acting [4,2] 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.1d is active+undersized+degraded, acting [3,0] 2026-03-10T10:24:41.923 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.1e is active+undersized+degraded, acting [2,0] 2026-03-10T10:24:41.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.931+0000 7f338b7fe700 1 -- 192.168.123.102:0/682858000 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f338c0779e0 msgr2=0x7f338c079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.931+0000 7f338b7fe700 1 --2- 192.168.123.102:0/682858000 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f338c0779e0 0x7f338c079ea0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f3398009d20 tx=0x7f3398009450 comp rx=0 tx=0).stop 2026-03-10T10:24:41.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.931+0000 7f338b7fe700 1 -- 192.168.123.102:0/682858000 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33a4117060 msgr2=0x7f33a41b31a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:24:41.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.931+0000 7f338b7fe700 1 --2- 192.168.123.102:0/682858000 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33a4117060 0x7f33a41b31a0 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f339c003c30 tx=0x7f339c003d10 comp rx=0 tx=0).stop 2026-03-10T10:24:41.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.931+0000 7f338b7fe700 1 -- 192.168.123.102:0/682858000 shutdown_connections 2026-03-10T10:24:41.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.931+0000 7f338b7fe700 1 --2- 192.168.123.102:0/682858000 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f338c0779e0 
0x7f338c079ea0 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.931+0000 7f338b7fe700 1 --2- 192.168.123.102:0/682858000 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f33a4107d90 0x7f33a4116b20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.931+0000 7f338b7fe700 1 --2- 192.168.123.102:0/682858000 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f33a4117060 0x7f33a41b31a0 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:24:41.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.931+0000 7f338b7fe700 1 -- 192.168.123.102:0/682858000 >> 192.168.123.102:0/682858000 conn(0x7f33a406dae0 msgr2=0x7f33a406e7c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:24:41.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.931+0000 7f338b7fe700 1 -- 192.168.123.102:0/682858000 shutdown_connections 2026-03-10T10:24:41.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:24:41.931+0000 7f338b7fe700 1 -- 192.168.123.102:0/682858000 wait complete. 
2026-03-10T10:24:42.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:42 vm02.local ceph-mon[110129]: from='client.44267 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:42.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:42 vm02.local ceph-mon[110129]: pgmap v161: 65 pgs: 16 active+undersized, 12 active+undersized+degraded, 37 active+clean; 209 MiB data, 913 MiB used, 119 GiB / 120 GiB avail; 13 MiB/s rd, 6 op/s; 35/228 objects degraded (15.351%) 2026-03-10T10:24:42.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:42 vm02.local ceph-mon[110129]: from='client.34336 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:42.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:42 vm02.local ceph-mon[110129]: from='client.44273 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:42.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:42 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/251654834' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:42.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:42 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/797886975' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:24:42.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:42 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/682858000' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:24:42.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:42 vm05.local ceph-mon[103593]: from='client.44267 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:42.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:42 vm05.local ceph-mon[103593]: pgmap v161: 65 pgs: 16 active+undersized, 12 active+undersized+degraded, 37 active+clean; 209 MiB data, 913 MiB used, 119 GiB / 120 GiB avail; 13 MiB/s rd, 6 op/s; 35/228 objects degraded (15.351%) 2026-03-10T10:24:42.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:42 vm05.local ceph-mon[103593]: from='client.34336 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:42.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:42 vm05.local ceph-mon[103593]: from='client.44273 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:42.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:42 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/251654834' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:42.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:42 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/797886975' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:24:42.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:42 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/682858000' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:24:43.503 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:43 vm02.local ceph-mon[110129]: from='client.44283 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:43.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:43 vm05.local ceph-mon[103593]: from='client.44283 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:24:44.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:44 vm05.local ceph-mon[103593]: pgmap v162: 65 pgs: 2 active+undersized, 1 active+undersized+degraded, 62 active+clean; 209 MiB data, 914 MiB used, 119 GiB / 120 GiB avail; 25 MiB/s rd, 9 op/s; 2/228 objects degraded (0.877%) 2026-03-10T10:24:44.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:44 vm05.local ceph-mon[103593]: mds.? [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] up:reconnect 2026-03-10T10:24:44.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:44 vm05.local ceph-mon[103593]: fsmap cephfs:1/1 {0=cephfs.vm02.stcvsz=up:reconnect} 2 up:standby 2026-03-10T10:24:44.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:44 vm05.local ceph-mon[103593]: reconnect by client.14516 192.168.144.1:0/2782098490 after 0 2026-03-10T10:24:44.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:44 vm05.local ceph-mon[103593]: reconnect by client.24319 192.168.144.1:0/2118398450 after 0.002 2026-03-10T10:24:44.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:44 vm02.local ceph-mon[110129]: pgmap v162: 65 pgs: 2 active+undersized, 1 active+undersized+degraded, 62 active+clean; 209 MiB data, 914 MiB used, 119 GiB / 120 GiB avail; 25 MiB/s rd, 9 op/s; 2/228 objects degraded (0.877%) 2026-03-10T10:24:44.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:44 
vm02.local ceph-mon[110129]: mds.? [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] up:reconnect 2026-03-10T10:24:44.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:44 vm02.local ceph-mon[110129]: fsmap cephfs:1/1 {0=cephfs.vm02.stcvsz=up:reconnect} 2 up:standby 2026-03-10T10:24:44.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:44 vm02.local ceph-mon[110129]: reconnect by client.14516 192.168.144.1:0/2782098490 after 0 2026-03-10T10:24:44.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:44 vm02.local ceph-mon[110129]: reconnect by client.24319 192.168.144.1:0/2118398450 after 0.002 2026-03-10T10:24:45.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:45 vm02.local ceph-mon[110129]: mds.? [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] up:rejoin 2026-03-10T10:24:45.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:45 vm02.local ceph-mon[110129]: fsmap cephfs:1/1 {0=cephfs.vm02.stcvsz=up:rejoin} 2 up:standby 2026-03-10T10:24:45.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:45 vm02.local ceph-mon[110129]: daemon mds.cephfs.vm02.stcvsz is now active in filesystem cephfs as rank 0 2026-03-10T10:24:45.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:45 vm05.local ceph-mon[103593]: mds.? 
[v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] up:rejoin 2026-03-10T10:24:45.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:45 vm05.local ceph-mon[103593]: fsmap cephfs:1/1 {0=cephfs.vm02.stcvsz=up:rejoin} 2 up:standby 2026-03-10T10:24:45.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:45 vm05.local ceph-mon[103593]: daemon mds.cephfs.vm02.stcvsz is now active in filesystem cephfs as rank 0 2026-03-10T10:24:46.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:46 vm02.local ceph-mon[110129]: pgmap v163: 65 pgs: 65 active+clean; 209 MiB data, 914 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 7 op/s 2026-03-10T10:24:46.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:46 vm02.local ceph-mon[110129]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/228 objects degraded (0.877%), 1 pg degraded) 2026-03-10T10:24:46.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:46 vm02.local ceph-mon[110129]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-10T10:24:46.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:46 vm02.local ceph-mon[110129]: mds.? 
[v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] up:active 2026-03-10T10:24:46.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:46 vm02.local ceph-mon[110129]: fsmap cephfs:1 {0=cephfs.vm02.stcvsz=up:active} 2 up:standby 2026-03-10T10:24:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:46 vm05.local ceph-mon[103593]: pgmap v163: 65 pgs: 65 active+clean; 209 MiB data, 914 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 7 op/s 2026-03-10T10:24:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:46 vm05.local ceph-mon[103593]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/228 objects degraded (0.877%), 1 pg degraded) 2026-03-10T10:24:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:46 vm05.local ceph-mon[103593]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-10T10:24:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:46 vm05.local ceph-mon[103593]: mds.? [v2:192.168.123.102:6828/2194475647,v1:192.168.123.102:6829/2194475647] up:active 2026-03-10T10:24:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:46 vm05.local ceph-mon[103593]: fsmap cephfs:1 {0=cephfs.vm02.stcvsz=up:active} 2 up:standby 2026-03-10T10:24:48.181 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:48 vm02.local ceph-mon[110129]: pgmap v164: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 15 MiB/s rd, 127 B/s wr, 5 op/s 2026-03-10T10:24:48.181 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:48 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:48.181 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:48 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:48.181 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:48 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' 
entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:48.181 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:48 vm02.local ceph-mon[110129]: mds.? [v2:192.168.123.102:6826/965109167,v1:192.168.123.102:6827/965109167] up:boot 2026-03-10T10:24:48.181 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:48 vm02.local ceph-mon[110129]: fsmap cephfs:1 {0=cephfs.vm02.stcvsz=up:active} 3 up:standby 2026-03-10T10:24:48.181 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:48 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.zymcrs"}]: dispatch 2026-03-10T10:24:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:48 vm05.local ceph-mon[103593]: pgmap v164: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 15 MiB/s rd, 127 B/s wr, 5 op/s 2026-03-10T10:24:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:48 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:48 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:48 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:48 vm05.local ceph-mon[103593]: mds.? 
[v2:192.168.123.102:6826/965109167,v1:192.168.123.102:6827/965109167] up:boot 2026-03-10T10:24:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:48 vm05.local ceph-mon[103593]: fsmap cephfs:1 {0=cephfs.vm02.stcvsz=up:active} 3 up:standby 2026-03-10T10:24:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:48 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.zymcrs"}]: dispatch 2026-03-10T10:24:49.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:49.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:49.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:49 vm02.local ceph-mon[110129]: pgmap v165: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 12 MiB/s rd, 214 B/s wr, 6 op/s 2026-03-10T10:24:49.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:49.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:49.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:49.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:49.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:49 vm05.local ceph-mon[103593]: pgmap v165: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 
119 GiB / 120 GiB avail; 12 MiB/s rd, 214 B/s wr, 6 op/s 2026-03-10T10:24:49.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:49.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02[110125]: 2026-03-10T10:24:50.476+0000 7f2532449640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: Detected new or changed devices on vm02 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local 
ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm02.stcvsz"]}]: dispatch 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: Upgrade: It appears safe to stop mds.cephfs.vm02.stcvsz 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: Upgrade: Updating mds.cephfs.vm02.stcvsz 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth 
get-or-create", "entity": "mds.cephfs.vm02.stcvsz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: Deploying daemon mds.cephfs.vm02.stcvsz on vm02 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: osdmap e81: 6 total, 6 up, 6 in 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: Standby daemon mds.cephfs.vm05.liatdh assigned to filesystem cephfs as rank 0 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-10T10:24:50.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:50 vm02.local ceph-mon[110129]: fsmap cephfs:1/1 {0=cephfs.vm05.liatdh=up:replay} 2 up:standby 2026-03-10T10:24:51.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: Detected new or changed devices on vm02 2026-03-10T10:24:51.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:51.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:51.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:51.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:51.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:51.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:51.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm02.stcvsz"]}]: dispatch 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: Upgrade: It appears safe to stop mds.cephfs.vm02.stcvsz 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: Upgrade: Updating mds.cephfs.vm02.stcvsz 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm02.stcvsz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: Deploying daemon mds.cephfs.vm02.stcvsz on vm02 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: osdmap e81: 6 
total, 6 up, 6 in 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: Standby daemon mds.cephfs.vm05.liatdh assigned to filesystem cephfs as rank 0 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-10T10:24:51.038 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:50 vm05.local ceph-mon[103593]: fsmap cephfs:1/1 {0=cephfs.vm05.liatdh=up:replay} 2 up:standby 2026-03-10T10:24:52.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:51 vm02.local ceph-mon[110129]: pgmap v167: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 11 MiB/s rd, 204 B/s wr, 7 op/s 2026-03-10T10:24:52.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:51 vm05.local ceph-mon[103593]: pgmap v167: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 11 MiB/s rd, 204 B/s wr, 7 op/s 2026-03-10T10:24:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:24:53.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:53.037 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:24:53.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:53 vm02.local ceph-mon[110129]: pgmap v168: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 7.6 MiB/s rd, 5.0 KiB/s wr, 9 op/s 2026-03-10T10:24:54.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:54 vm05.local ceph-mon[103593]: pgmap v168: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 7.6 MiB/s rd, 5.0 KiB/s wr, 9 op/s 2026-03-10T10:24:56.103 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:55 vm02.local ceph-mon[110129]: pgmap v169: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 9.6 MiB/s rd, 5.0 KiB/s wr, 9 op/s 2026-03-10T10:24:56.103 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:55 vm02.local ceph-mon[110129]: mds.? 
[v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] up:reconnect 2026-03-10T10:24:56.103 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:55 vm02.local ceph-mon[110129]: fsmap cephfs:1/1 {0=cephfs.vm05.liatdh=up:reconnect} 2 up:standby 2026-03-10T10:24:56.103 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:55 vm02.local ceph-mon[110129]: reconnect by client.14516 192.168.144.1:0/2782098490 after 0 2026-03-10T10:24:56.103 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:55 vm02.local ceph-mon[110129]: reconnect by client.24319 192.168.144.1:0/2118398450 after 0.002 2026-03-10T10:24:56.103 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:55 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:56.103 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:55 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:56.103 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:55 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:55 vm05.local ceph-mon[103593]: pgmap v169: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 9.6 MiB/s rd, 5.0 KiB/s wr, 9 op/s 2026-03-10T10:24:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:55 vm05.local ceph-mon[103593]: mds.? 
[v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] up:reconnect 2026-03-10T10:24:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:55 vm05.local ceph-mon[103593]: fsmap cephfs:1/1 {0=cephfs.vm05.liatdh=up:reconnect} 2 up:standby 2026-03-10T10:24:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:55 vm05.local ceph-mon[103593]: reconnect by client.14516 192.168.144.1:0/2782098490 after 0 2026-03-10T10:24:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:55 vm05.local ceph-mon[103593]: reconnect by client.24319 192.168.144.1:0/2118398450 after 0.002 2026-03-10T10:24:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:55 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:55 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:55 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:57.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:56 vm02.local ceph-mon[110129]: mds.? [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] up:rejoin 2026-03-10T10:24:57.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:56 vm02.local ceph-mon[110129]: mds.? 
[v2:192.168.123.102:6828/3727526116,v1:192.168.123.102:6829/3727526116] up:boot 2026-03-10T10:24:57.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:56 vm02.local ceph-mon[110129]: fsmap cephfs:1/1 {0=cephfs.vm05.liatdh=up:rejoin} 3 up:standby 2026-03-10T10:24:57.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:56 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.stcvsz"}]: dispatch 2026-03-10T10:24:57.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:56 vm02.local ceph-mon[110129]: daemon mds.cephfs.vm05.liatdh is now active in filesystem cephfs as rank 0 2026-03-10T10:24:57.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:56 vm05.local ceph-mon[103593]: mds.? [v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] up:rejoin 2026-03-10T10:24:57.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:56 vm05.local ceph-mon[103593]: mds.? 
[v2:192.168.123.102:6828/3727526116,v1:192.168.123.102:6829/3727526116] up:boot 2026-03-10T10:24:57.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:56 vm05.local ceph-mon[103593]: fsmap cephfs:1/1 {0=cephfs.vm05.liatdh=up:rejoin} 3 up:standby 2026-03-10T10:24:57.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:56 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm02.stcvsz"}]: dispatch 2026-03-10T10:24:57.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:56 vm05.local ceph-mon[103593]: daemon mds.cephfs.vm05.liatdh is now active in filesystem cephfs as rank 0 2026-03-10T10:24:58.128 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:57 vm02.local ceph-mon[110129]: pgmap v170: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 17 MiB/s rd, 4.9 KiB/s wr, 10 op/s 2026-03-10T10:24:58.129 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:57 vm02.local ceph-mon[110129]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-10T10:24:58.129 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:57 vm02.local ceph-mon[110129]: Cluster is now healthy 2026-03-10T10:24:58.129 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:57 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:58.129 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:57 vm02.local ceph-mon[110129]: mds.? 
[v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] up:active 2026-03-10T10:24:58.129 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:57 vm02.local ceph-mon[110129]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 3 up:standby 2026-03-10T10:24:58.129 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:57 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:58.129 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:57 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:58.129 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:57 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:57 vm05.local ceph-mon[103593]: pgmap v170: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 17 MiB/s rd, 4.9 KiB/s wr, 10 op/s 2026-03-10T10:24:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:57 vm05.local ceph-mon[103593]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-10T10:24:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:57 vm05.local ceph-mon[103593]: Cluster is now healthy 2026-03-10T10:24:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:57 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:57 vm05.local ceph-mon[103593]: mds.? 
[v2:192.168.123.105:6824/3526415895,v1:192.168.123.105:6825/3526415895] up:active 2026-03-10T10:24:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:57 vm05.local ceph-mon[103593]: fsmap cephfs:1 {0=cephfs.vm05.liatdh=up:active} 3 up:standby 2026-03-10T10:24:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:57 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:57 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:57 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:59.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02[110125]: 2026-03-10T10:24:59.118+0000 7f2532449640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:59.530 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm05.liatdh"]}]: dispatch 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.liatdh", "caps": ["mon", "profile 
mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: osdmap e82: 6 total, 6 up, 6 in 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: Standby daemon mds.cephfs.vm05.sudjys assigned to filesystem cephfs as rank 0 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-10T10:24:59.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:24:59 vm02.local ceph-mon[110129]: fsmap cephfs:1/1 {0=cephfs.vm05.sudjys=up:replay} 2 up:standby 2026-03-10T10:24:59.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:59.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:59.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:59.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:59.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm05.liatdh"]}]: dispatch 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 
vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.liatdh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: osdmap e82: 6 total, 6 up, 6 in 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: Standby daemon mds.cephfs.vm05.sudjys assigned to filesystem cephfs as rank 0 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-10T10:24:59.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:24:59 vm05.local ceph-mon[103593]: fsmap cephfs:1/1 {0=cephfs.vm05.sudjys=up:replay} 2 up:standby 2026-03-10T10:25:00.567 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:00 vm02.local ceph-mon[110129]: Upgrade: It appears safe 
to stop mds.cephfs.vm05.liatdh 2026-03-10T10:25:00.567 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:00 vm02.local ceph-mon[110129]: Upgrade: Updating mds.cephfs.vm05.liatdh 2026-03-10T10:25:00.567 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:00 vm02.local ceph-mon[110129]: Deploying daemon mds.cephfs.vm05.liatdh on vm05 2026-03-10T10:25:00.567 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:00 vm02.local ceph-mon[110129]: pgmap v171: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.9 KiB/s wr, 11 op/s 2026-03-10T10:25:00.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:00 vm05.local ceph-mon[103593]: Upgrade: It appears safe to stop mds.cephfs.vm05.liatdh 2026-03-10T10:25:00.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:00 vm05.local ceph-mon[103593]: Upgrade: Updating mds.cephfs.vm05.liatdh 2026-03-10T10:25:00.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:00 vm05.local ceph-mon[103593]: Deploying daemon mds.cephfs.vm05.liatdh on vm05 2026-03-10T10:25:00.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:00 vm05.local ceph-mon[103593]: pgmap v171: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.9 KiB/s wr, 11 op/s 2026-03-10T10:25:02.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:01 vm02.local ceph-mon[110129]: pgmap v173: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.9 KiB/s wr, 11 op/s 2026-03-10T10:25:02.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:01 vm05.local ceph-mon[103593]: pgmap v173: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.9 KiB/s wr, 11 op/s 2026-03-10T10:25:03.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": 
"cephfs.vm05.liatdh"}]: dispatch 2026-03-10T10:25:03.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch 2026-03-10T10:25:04.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:04 vm02.local ceph-mon[110129]: pgmap v174: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 28 MiB/s rd, 204 B/s wr, 12 op/s 2026-03-10T10:25:04.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:04 vm05.local ceph-mon[103593]: pgmap v174: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 28 MiB/s rd, 204 B/s wr, 12 op/s 2026-03-10T10:25:06.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:05 vm02.local ceph-mon[110129]: mds.? [v2:192.168.123.105:6826/3693577687,v1:192.168.123.105:6827/3693577687] up:reconnect 2026-03-10T10:25:06.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:05 vm02.local ceph-mon[110129]: fsmap cephfs:1/1 {0=cephfs.vm05.sudjys=up:reconnect} 2 up:standby 2026-03-10T10:25:06.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:05 vm02.local ceph-mon[110129]: reconnect by client.24319 192.168.144.1:0/2118398450 after 0.003 2026-03-10T10:25:06.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:05 vm02.local ceph-mon[110129]: reconnect by client.14516 192.168.144.1:0/2782098490 after 0.00400001 2026-03-10T10:25:06.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:05 vm02.local ceph-mon[110129]: pgmap v175: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 29 MiB/s rd, 204 B/s wr, 12 op/s 2026-03-10T10:25:06.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:05 vm05.local ceph-mon[103593]: mds.? 
[v2:192.168.123.105:6826/3693577687,v1:192.168.123.105:6827/3693577687] up:reconnect 2026-03-10T10:25:06.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:05 vm05.local ceph-mon[103593]: fsmap cephfs:1/1 {0=cephfs.vm05.sudjys=up:reconnect} 2 up:standby 2026-03-10T10:25:06.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:05 vm05.local ceph-mon[103593]: reconnect by client.24319 192.168.144.1:0/2118398450 after 0.003 2026-03-10T10:25:06.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:05 vm05.local ceph-mon[103593]: reconnect by client.14516 192.168.144.1:0/2782098490 after 0.00400001 2026-03-10T10:25:06.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:05 vm05.local ceph-mon[103593]: pgmap v175: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 29 MiB/s rd, 204 B/s wr, 12 op/s 2026-03-10T10:25:06.981 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:06 vm05.local ceph-mon[103593]: mds.? [v2:192.168.123.105:6826/3693577687,v1:192.168.123.105:6827/3693577687] up:rejoin 2026-03-10T10:25:06.981 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:06 vm05.local ceph-mon[103593]: fsmap cephfs:1/1 {0=cephfs.vm05.sudjys=up:rejoin} 2 up:standby 2026-03-10T10:25:06.981 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:06 vm05.local ceph-mon[103593]: daemon mds.cephfs.vm05.sudjys is now active in filesystem cephfs as rank 0 2026-03-10T10:25:07.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:06 vm02.local ceph-mon[110129]: mds.? 
[v2:192.168.123.105:6826/3693577687,v1:192.168.123.105:6827/3693577687] up:rejoin 2026-03-10T10:25:07.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:06 vm02.local ceph-mon[110129]: fsmap cephfs:1/1 {0=cephfs.vm05.sudjys=up:rejoin} 2 up:standby 2026-03-10T10:25:07.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:06 vm02.local ceph-mon[110129]: daemon mds.cephfs.vm05.sudjys is now active in filesystem cephfs as rank 0 2026-03-10T10:25:07.582 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:07 vm05.local ceph-mon[103593]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-10T10:25:07.582 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:07 vm05.local ceph-mon[103593]: Cluster is now healthy 2026-03-10T10:25:07.582 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:07 vm05.local ceph-mon[103593]: mds.? [v2:192.168.123.105:6826/3693577687,v1:192.168.123.105:6827/3693577687] up:active 2026-03-10T10:25:07.582 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:07 vm05.local ceph-mon[103593]: fsmap cephfs:1 {0=cephfs.vm05.sudjys=up:active} 2 up:standby 2026-03-10T10:25:07.582 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:07 vm05.local ceph-mon[103593]: pgmap v176: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 23 MiB/s rd, 307 B/s wr, 11 op/s 2026-03-10T10:25:07.582 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:07.582 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:07.582 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:07.582 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:07.582 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:07.898 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:07 vm02.local ceph-mon[110129]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-10T10:25:07.898 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:07 vm02.local ceph-mon[110129]: Cluster is now healthy 2026-03-10T10:25:07.898 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:07 vm02.local ceph-mon[110129]: mds.? [v2:192.168.123.105:6826/3693577687,v1:192.168.123.105:6827/3693577687] up:active 2026-03-10T10:25:07.898 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:07 vm02.local ceph-mon[110129]: fsmap cephfs:1 {0=cephfs.vm05.sudjys=up:active} 2 up:standby 2026-03-10T10:25:07.898 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:07 vm02.local ceph-mon[110129]: pgmap v176: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 23 MiB/s rd, 307 B/s wr, 11 op/s 2026-03-10T10:25:07.898 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:07.898 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:07.898 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:07.898 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 
10:25:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:07.898 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:09.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:09.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:09.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:09 vm05.local ceph-mon[103593]: mds.? [v2:192.168.123.105:6824/462039658,v1:192.168.123.105:6825/462039658] up:boot 2026-03-10T10:25:09.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:09 vm05.local ceph-mon[103593]: fsmap cephfs:1 {0=cephfs.vm05.sudjys=up:active} 3 up:standby 2026-03-10T10:25:09.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch 2026-03-10T10:25:09.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:09 vm05.local ceph-mon[103593]: pgmap v177: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 307 B/s wr, 10 op/s 2026-03-10T10:25:09.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:09.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:09 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:09.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 
10:25:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:09.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:09.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:09 vm02.local ceph-mon[110129]: mds.? [v2:192.168.123.105:6824/462039658,v1:192.168.123.105:6825/462039658] up:boot 2026-03-10T10:25:09.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:09 vm02.local ceph-mon[110129]: fsmap cephfs:1 {0=cephfs.vm05.sudjys=up:active} 3 up:standby 2026-03-10T10:25:09.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.liatdh"}]: dispatch 2026-03-10T10:25:09.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:09 vm02.local ceph-mon[110129]: pgmap v177: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 307 B/s wr, 10 op/s 2026-03-10T10:25:09.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:09.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:09 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:10.833 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: Detected new or changed devices on vm05 2026-03-10T10:25:10.833 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:10.833 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' 
entity='mgr.vm02.zmavgl' 2026-03-10T10:25:10.833 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:10.834 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:25:10.834 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:10.834 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:10.834 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:10.834 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:10.834 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:10.834 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:10.834 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 
cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm05.sudjys"]}]: dispatch 2026-03-10T10:25:10.834 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: Upgrade: It appears safe to stop mds.cephfs.vm05.sudjys 2026-03-10T10:25:10.834 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:10.834 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.sudjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:25:10.834 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:10 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: Detected new or changed devices on vm05 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds ok-to-stop", "ids": ["cephfs.vm05.sudjys"]}]: dispatch 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: Upgrade: It appears safe to stop mds.cephfs.vm05.sudjys 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:10.902 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.sudjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T10:25:10.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:11.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:10 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-mon-vm02[110125]: 2026-03-10T10:25:10.901+0000 7f2532449640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-10T10:25:12.015 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.013+0000 7f060fb45700 1 -- 192.168.123.102:0/2801747034 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0608107d90 msgr2=0x7f060810a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.015 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.013+0000 7f060fb45700 1 --2- 192.168.123.102:0/2801747034 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0608107d90 0x7f060810a1c0 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7f0604009b00 tx=0x7f0604009e10 comp rx=0 tx=0).stop 2026-03-10T10:25:12.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.017+0000 7f060fb45700 1 -- 192.168.123.102:0/2801747034 shutdown_connections 2026-03-10T10:25:12.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.017+0000 7f060fb45700 1 --2- 192.168.123.102:0/2801747034 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f060810a700 0x7f060810cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.017 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.017+0000 7f060fb45700 1 --2- 192.168.123.102:0/2801747034 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0608107d90 0x7f060810a1c0 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.017+0000 7f060fb45700 1 -- 192.168.123.102:0/2801747034 >> 192.168.123.102:0/2801747034 conn(0x7f060806dae0 msgr2=0x7f060806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:12.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.017+0000 7f060fb45700 1 -- 192.168.123.102:0/2801747034 shutdown_connections 2026-03-10T10:25:12.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.017+0000 7f060fb45700 1 -- 192.168.123.102:0/2801747034 wait complete. 2026-03-10T10:25:12.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.017+0000 7f060fb45700 1 Processor -- start 2026-03-10T10:25:12.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.018+0000 7f060fb45700 1 -- start start 2026-03-10T10:25:12.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.018+0000 7f060fb45700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0608107d90 0x7f0608116cf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.018+0000 7f060fb45700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f060810a700 0x7f0608117230 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.018+0000 7f060fb45700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0608117850 con 0x7f0608107d90 2026-03-10T10:25:12.018 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.018+0000 7f060fb45700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0608117990 con 0x7f060810a700 2026-03-10T10:25:12.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.018+0000 7f060d0e0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f060810a700 0x7f0608117230 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.018+0000 7f060d0e0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f060810a700 0x7f0608117230 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:60584/0 (socket says 192.168.123.102:60584) 2026-03-10T10:25:12.018 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.018+0000 7f060d0e0700 1 -- 192.168.123.102:0/1671806340 learned_addr learned my addr 192.168.123.102:0/1671806340 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:12.019 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.019+0000 7f060d0e0700 1 -- 192.168.123.102:0/1671806340 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0608107d90 msgr2=0x7f0608116cf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.019 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.019+0000 7f060d0e0700 1 --2- 192.168.123.102:0/1671806340 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0608107d90 0x7f0608116cf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.019 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.019+0000 7f060d0e0700 1 -- 192.168.123.102:0/1671806340 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f06040097e0 con 0x7f060810a700 2026-03-10T10:25:12.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.020+0000 7f060d0e0700 1 --2- 192.168.123.102:0/1671806340 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f060810a700 0x7f0608117230 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f060000d350 tx=0x7f060000d710 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:12.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.020+0000 7f05feffd700 1 -- 192.168.123.102:0/1671806340 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f06000155b0 con 0x7f060810a700 2026-03-10T10:25:12.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.020+0000 7f060fb45700 1 -- 192.168.123.102:0/1671806340 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f06081b3560 con 0x7f060810a700 2026-03-10T10:25:12.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.020+0000 7f060fb45700 1 -- 192.168.123.102:0/1671806340 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f06081b3ab0 con 0x7f060810a700 2026-03-10T10:25:12.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.020+0000 7f05feffd700 1 -- 192.168.123.102:0/1671806340 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f060000f040 con 0x7f060810a700 2026-03-10T10:25:12.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.020+0000 7f05feffd700 1 -- 192.168.123.102:0/1671806340 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f06000149c0 con 0x7f060810a700 2026-03-10T10:25:12.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.021+0000 7f060fb45700 1 -- 
192.168.123.102:0/1671806340 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f05ec005320 con 0x7f060810a700 2026-03-10T10:25:12.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.022+0000 7f05feffd700 1 -- 192.168.123.102:0/1671806340 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f0600014c20 con 0x7f060810a700 2026-03-10T10:25:12.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.022+0000 7f05feffd700 1 --2- 192.168.123.102:0/1671806340 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f05f4077910 0x7f05f4079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.022 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.022+0000 7f060d8e1700 1 --2- 192.168.123.102:0/1671806340 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f05f4077910 0x7f05f4079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.023+0000 7f05feffd700 1 -- 192.168.123.102:0/1671806340 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f06000996d0 con 0x7f060810a700 2026-03-10T10:25:12.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.023+0000 7f060d8e1700 1 --2- 192.168.123.102:0/1671806340 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f05f4077910 0x7f05f4079dd0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f060400b5c0 tx=0x7f0604009f90 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:12.024 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.024+0000 7f05feffd700 1 
-- 192.168.123.102:0/1671806340 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f0600061e80 con 0x7f060810a700 2026-03-10T10:25:12.154 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:11 vm02.local ceph-mon[110129]: Upgrade: Updating mds.cephfs.vm05.sudjys 2026-03-10T10:25:12.154 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:11 vm02.local ceph-mon[110129]: Deploying daemon mds.cephfs.vm05.sudjys on vm05 2026-03-10T10:25:12.154 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:11 vm02.local ceph-mon[110129]: pgmap v178: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 18 MiB/s rd, 266 B/s wr, 9 op/s 2026-03-10T10:25:12.154 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:11 vm02.local ceph-mon[110129]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-10T10:25:12.154 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:11 vm02.local ceph-mon[110129]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-10T10:25:12.154 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:11 vm02.local ceph-mon[110129]: osdmap e83: 6 total, 6 up, 6 in 2026-03-10T10:25:12.154 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:11 vm02.local ceph-mon[110129]: Standby daemon mds.cephfs.vm02.zymcrs assigned to filesystem cephfs as rank 0 2026-03-10T10:25:12.154 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:11 vm02.local ceph-mon[110129]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-10T10:25:12.154 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:11 vm02.local ceph-mon[110129]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-10T10:25:12.154 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:11 vm02.local ceph-mon[110129]: fsmap cephfs:1/1 {0=cephfs.vm02.zymcrs=up:replay} 2 up:standby 2026-03-10T10:25:12.154 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 
10:25:11 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch 2026-03-10T10:25:12.154 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.153+0000 7f060fb45700 1 -- 192.168.123.102:0/1671806340 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f05ec000bf0 con 0x7f05f4077910 2026-03-10T10:25:12.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.162+0000 7f05feffd700 1 -- 192.168.123.102:0/1671806340 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 0x7f05ec000bf0 con 0x7f05f4077910 2026-03-10T10:25:12.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.164+0000 7f060fb45700 1 -- 192.168.123.102:0/1671806340 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f05f4077910 msgr2=0x7f05f4079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.164+0000 7f060fb45700 1 --2- 192.168.123.102:0/1671806340 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f05f4077910 0x7f05f4079dd0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f060400b5c0 tx=0x7f0604009f90 comp rx=0 tx=0).stop 2026-03-10T10:25:12.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.164+0000 7f060fb45700 1 -- 192.168.123.102:0/1671806340 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f060810a700 msgr2=0x7f0608117230 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.164+0000 7f060fb45700 1 --2- 192.168.123.102:0/1671806340 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7f060810a700 0x7f0608117230 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f060000d350 tx=0x7f060000d710 comp rx=0 tx=0).stop 2026-03-10T10:25:12.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.164+0000 7f060fb45700 1 -- 192.168.123.102:0/1671806340 shutdown_connections 2026-03-10T10:25:12.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.164+0000 7f060fb45700 1 --2- 192.168.123.102:0/1671806340 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0608107d90 0x7f0608116cf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.164+0000 7f060fb45700 1 --2- 192.168.123.102:0/1671806340 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f05f4077910 0x7f05f4079dd0 secure :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f060400b5c0 tx=0x7f0604009f90 comp rx=0 tx=0).stop 2026-03-10T10:25:12.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.164+0000 7f060fb45700 1 --2- 192.168.123.102:0/1671806340 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f060810a700 0x7f0608117230 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.164+0000 7f060fb45700 1 -- 192.168.123.102:0/1671806340 >> 192.168.123.102:0/1671806340 conn(0x7f060806dae0 msgr2=0x7f060806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:12.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.165+0000 7f060fb45700 1 -- 192.168.123.102:0/1671806340 shutdown_connections 2026-03-10T10:25:12.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.165+0000 7f060fb45700 1 -- 192.168.123.102:0/1671806340 wait complete. 
2026-03-10T10:25:12.182 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:25:12.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.240+0000 7fe189274700 1 -- 192.168.123.102:0/24554025 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe184075a40 msgr2=0x7fe184077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.240+0000 7fe189274700 1 --2- 192.168.123.102:0/24554025 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe184075a40 0x7fe184077ed0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fe17c00a390 tx=0x7fe17c00a6a0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.240+0000 7fe189274700 1 -- 192.168.123.102:0/24554025 shutdown_connections 2026-03-10T10:25:12.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.240+0000 7fe189274700 1 --2- 192.168.123.102:0/24554025 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe184075a40 0x7fe184077ed0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.240+0000 7fe189274700 1 --2- 192.168.123.102:0/24554025 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe184072b50 0x7fe184072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.240+0000 7fe189274700 1 -- 192.168.123.102:0/24554025 >> 192.168.123.102:0/24554025 conn(0x7fe18406dae0 msgr2=0x7fe18406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:12.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.240+0000 7fe189274700 1 -- 192.168.123.102:0/24554025 shutdown_connections 2026-03-10T10:25:12.241 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.240+0000 7fe189274700 1 -- 192.168.123.102:0/24554025 wait complete. 2026-03-10T10:25:12.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.241+0000 7fe189274700 1 Processor -- start 2026-03-10T10:25:12.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.241+0000 7fe189274700 1 -- start start 2026-03-10T10:25:12.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.241+0000 7fe189274700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe184072b50 0x7fe184083080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.241+0000 7fe189274700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe1840835c0 0x7fe1841b3090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.241+0000 7fe189274700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe184083b00 con 0x7fe1840835c0 2026-03-10T10:25:12.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.241+0000 7fe189274700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe184083c70 con 0x7fe184072b50 2026-03-10T10:25:12.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.241+0000 7fe182ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe184072b50 0x7fe184083080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.241+0000 7fe182ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe184072b50 0x7fe184083080 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:60588/0 (socket says 192.168.123.102:60588) 2026-03-10T10:25:12.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.241+0000 7fe182ffd700 1 -- 192.168.123.102:0/3007289297 learned_addr learned my addr 192.168.123.102:0/3007289297 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:12.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.242+0000 7fe182ffd700 1 -- 192.168.123.102:0/3007289297 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe1840835c0 msgr2=0x7fe1841b3090 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.242+0000 7fe182ffd700 1 --2- 192.168.123.102:0/3007289297 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe1840835c0 0x7fe1841b3090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.242 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.242+0000 7fe182ffd700 1 -- 192.168.123.102:0/3007289297 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe17c00a040 con 0x7fe184072b50 2026-03-10T10:25:12.243 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.242+0000 7fe182ffd700 1 --2- 192.168.123.102:0/3007289297 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe184072b50 0x7fe184083080 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7fe174013f80 tx=0x7fe17400e450 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:12.244 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.242+0000 7fe16bfff700 1 -- 192.168.123.102:0/3007289297 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe17401b3f0 con 
0x7fe184072b50 2026-03-10T10:25:12.244 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.242+0000 7fe189274700 1 -- 192.168.123.102:0/3007289297 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe1841b36f0 con 0x7fe184072b50 2026-03-10T10:25:12.244 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.243+0000 7fe189274700 1 -- 192.168.123.102:0/3007289297 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe1841b3bc0 con 0x7fe184072b50 2026-03-10T10:25:12.244 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.243+0000 7fe16bfff700 1 -- 192.168.123.102:0/3007289297 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fe1740098b0 con 0x7fe184072b50 2026-03-10T10:25:12.244 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.243+0000 7fe16bfff700 1 -- 192.168.123.102:0/3007289297 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe17401a900 con 0x7fe184072b50 2026-03-10T10:25:12.244 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.243+0000 7fe189274700 1 -- 192.168.123.102:0/3007289297 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe18404ea90 con 0x7fe184072b50 2026-03-10T10:25:12.246 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.244+0000 7fe16bfff700 1 -- 192.168.123.102:0/3007289297 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fe174009a20 con 0x7fe184072b50 2026-03-10T10:25:12.246 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.245+0000 7fe16bfff700 1 --2- 192.168.123.102:0/3007289297 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe16c0800d0 0x7fe16c082590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 
2026-03-10T10:25:12.246 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.245+0000 7fe16bfff700 1 -- 192.168.123.102:0/3007289297 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7fe1740999f0 con 0x7fe184072b50 2026-03-10T10:25:12.246 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.246+0000 7fe1827fc700 1 --2- 192.168.123.102:0/3007289297 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe16c0800d0 0x7fe16c082590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.248 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.247+0000 7fe16bfff700 1 -- 192.168.123.102:0/3007289297 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe1740620f0 con 0x7fe184072b50 2026-03-10T10:25:12.250 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.250+0000 7fe1827fc700 1 --2- 192.168.123.102:0/3007289297 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe16c0800d0 0x7fe16c082590 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fe17c00bc40 tx=0x7fe17c006040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:11 vm05.local ceph-mon[103593]: Upgrade: Updating mds.cephfs.vm05.sudjys 2026-03-10T10:25:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:11 vm05.local ceph-mon[103593]: Deploying daemon mds.cephfs.vm05.sudjys on vm05 2026-03-10T10:25:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:11 vm05.local ceph-mon[103593]: pgmap v178: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 18 MiB/s rd, 266 B/s wr, 9 op/s 2026-03-10T10:25:12.287 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:11 vm05.local ceph-mon[103593]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-10T10:25:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:11 vm05.local ceph-mon[103593]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-10T10:25:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:11 vm05.local ceph-mon[103593]: osdmap e83: 6 total, 6 up, 6 in 2026-03-10T10:25:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:11 vm05.local ceph-mon[103593]: Standby daemon mds.cephfs.vm02.zymcrs assigned to filesystem cephfs as rank 0 2026-03-10T10:25:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:11 vm05.local ceph-mon[103593]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-10T10:25:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:11 vm05.local ceph-mon[103593]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-10T10:25:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:11 vm05.local ceph-mon[103593]: fsmap cephfs:1/1 {0=cephfs.vm02.zymcrs=up:replay} 2 up:standby 2026-03-10T10:25:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:11 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch 2026-03-10T10:25:12.392 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.391+0000 7fe189274700 1 -- 192.168.123.102:0/3007289297 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fe1841b3ea0 con 0x7fe16c0800d0 2026-03-10T10:25:12.393 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.393+0000 7fe16bfff700 1 -- 192.168.123.102:0/3007289297 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 
0 0) 0x7fe1841b3ea0 con 0x7fe16c0800d0 2026-03-10T10:25:12.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.397+0000 7fe169ffb700 1 -- 192.168.123.102:0/3007289297 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe16c0800d0 msgr2=0x7fe16c082590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.397+0000 7fe169ffb700 1 --2- 192.168.123.102:0/3007289297 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe16c0800d0 0x7fe16c082590 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fe17c00bc40 tx=0x7fe17c006040 comp rx=0 tx=0).stop 2026-03-10T10:25:12.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.397+0000 7fe169ffb700 1 -- 192.168.123.102:0/3007289297 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe184072b50 msgr2=0x7fe184083080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.397+0000 7fe169ffb700 1 --2- 192.168.123.102:0/3007289297 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe184072b50 0x7fe184083080 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7fe174013f80 tx=0x7fe17400e450 comp rx=0 tx=0).stop 2026-03-10T10:25:12.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.397+0000 7fe169ffb700 1 -- 192.168.123.102:0/3007289297 shutdown_connections 2026-03-10T10:25:12.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.397+0000 7fe169ffb700 1 --2- 192.168.123.102:0/3007289297 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe16c0800d0 0x7fe16c082590 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.397+0000 7fe169ffb700 1 --2- 192.168.123.102:0/3007289297 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe184072b50 0x7fe184083080 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.397+0000 7fe169ffb700 1 --2- 192.168.123.102:0/3007289297 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe1840835c0 0x7fe1841b3090 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.399 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.397+0000 7fe169ffb700 1 -- 192.168.123.102:0/3007289297 >> 192.168.123.102:0/3007289297 conn(0x7fe18406dae0 msgr2=0x7fe18406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:12.400 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.400+0000 7fe169ffb700 1 -- 192.168.123.102:0/3007289297 shutdown_connections 2026-03-10T10:25:12.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.402+0000 7fe169ffb700 1 -- 192.168.123.102:0/3007289297 wait complete. 
2026-03-10T10:25:12.473 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.472+0000 7f2935d08700 1 -- 192.168.123.102:0/2087505662 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2930107d90 msgr2=0x7f293010a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.473 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.472+0000 7f2935d08700 1 --2- 192.168.123.102:0/2087505662 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2930107d90 0x7f293010a1c0 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f2920009b00 tx=0x7f2920009e10 comp rx=0 tx=0).stop 2026-03-10T10:25:12.475 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.474+0000 7f2935d08700 1 -- 192.168.123.102:0/2087505662 shutdown_connections 2026-03-10T10:25:12.475 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.474+0000 7f2935d08700 1 --2- 192.168.123.102:0/2087505662 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f293010a700 0x7f293010cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.475 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.474+0000 7f2935d08700 1 --2- 192.168.123.102:0/2087505662 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2930107d90 0x7f293010a1c0 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.475 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.474+0000 7f2935d08700 1 -- 192.168.123.102:0/2087505662 >> 192.168.123.102:0/2087505662 conn(0x7f293006dae0 msgr2=0x7f293006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.476+0000 7f2935d08700 1 -- 192.168.123.102:0/2087505662 shutdown_connections 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.476+0000 7f2935d08700 1 -- 192.168.123.102:0/2087505662 
wait complete. 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.476+0000 7f2935d08700 1 Processor -- start 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.476+0000 7f2935d08700 1 -- start start 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.476+0000 7f2935d08700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2930107d90 0x7f2930116cf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.476+0000 7f2935d08700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f293010a700 0x7f2930117230 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.476+0000 7f2935d08700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2930117850 con 0x7f2930107d90 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.476+0000 7f2935d08700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2930117990 con 0x7f293010a700 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.476+0000 7f292f7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2930107d90 0x7f2930116cf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.476+0000 7f292f7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2930107d90 0x7f2930116cf0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 
says I am v2:192.168.123.102:39782/0 (socket says 192.168.123.102:39782) 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.476+0000 7f292f7fe700 1 -- 192.168.123.102:0/64916028 learned_addr learned my addr 192.168.123.102:0/64916028 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.476+0000 7f292effd700 1 --2- 192.168.123.102:0/64916028 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f293010a700 0x7f2930117230 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.477+0000 7f292f7fe700 1 -- 192.168.123.102:0/64916028 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f293010a700 msgr2=0x7f2930117230 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.477+0000 7f292f7fe700 1 --2- 192.168.123.102:0/64916028 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f293010a700 0x7f2930117230 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.477+0000 7f292f7fe700 1 -- 192.168.123.102:0/64916028 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f29200097e0 con 0x7f2930107d90 2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.477+0000 7f292f7fe700 1 --2- 192.168.123.102:0/64916028 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2930107d90 0x7f2930116cf0 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f29200055d0 tx=0x7f2920005340 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:25:12.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.477+0000 7f292cff9700 1 -- 192.168.123.102:0/64916028 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f292001d070 con 0x7f2930107d90 2026-03-10T10:25:12.478 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.477+0000 7f2935d08700 1 -- 192.168.123.102:0/64916028 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f29301b3500 con 0x7f2930107d90 2026-03-10T10:25:12.478 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.477+0000 7f2935d08700 1 -- 192.168.123.102:0/64916028 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f29301b39f0 con 0x7f2930107d90 2026-03-10T10:25:12.478 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.478+0000 7f292cff9700 1 -- 192.168.123.102:0/64916028 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f2920003c20 con 0x7f2930107d90 2026-03-10T10:25:12.478 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.478+0000 7f292cff9700 1 -- 192.168.123.102:0/64916028 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f29200177b0 con 0x7f2930107d90 2026-03-10T10:25:12.480 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.479+0000 7f2935d08700 1 -- 192.168.123.102:0/64916028 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f291c005320 con 0x7f2930107d90 2026-03-10T10:25:12.480 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.480+0000 7f292cff9700 1 -- 192.168.123.102:0/64916028 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f2920017910 con 0x7f2930107d90 2026-03-10T10:25:12.480 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.480+0000 7f292cff9700 
1 --2- 192.168.123.102:0/64916028 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2918077910 0x7f2918079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.480 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.480+0000 7f292cff9700 1 -- 192.168.123.102:0/64916028 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f292009ae60 con 0x7f2930107d90 2026-03-10T10:25:12.480 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.480+0000 7f292effd700 1 --2- 192.168.123.102:0/64916028 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2918077910 0x7f2918079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.481 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.481+0000 7f292effd700 1 --2- 192.168.123.102:0/64916028 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2918077910 0x7f2918079dd0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f2928003eb0 tx=0x7f292800b040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:12.484 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.484+0000 7f292cff9700 1 -- 192.168.123.102:0/64916028 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f29200634e0 con 0x7f2930107d90 2026-03-10T10:25:12.610 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.609+0000 7f2935d08700 1 -- 192.168.123.102:0/64916028 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f291c000bf0 con 0x7f2918077910 2026-03-10T10:25:12.615 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.614+0000 7f292cff9700 1 -- 192.168.123.102:0/64916028 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f291c000bf0 con 0x7f2918077910 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (9m) 15s ago 10m 23.6M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (10m) 15s ago 10m 9810k - 18.2.1 5be31c24972a ff5c82740b39 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (9m) 4s ago 9m 12.0M - 18.2.1 5be31c24972a 456b3bd5efb4 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (4m) 15s ago 10m 7838k - 19.2.3-678-ge911bdeb 654f31e6858e c494730ab019 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (4m) 4s ago 9m 7852k - 19.2.3-678-ge911bdeb 654f31e6858e 1dc17b49fee4 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (9m) 15s ago 10m 90.3M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (17s) 15s ago 8m 12.3M - 19.2.3-678-ge911bdeb 654f31e6858e 5e606b7866f6 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (25s) 15s ago 8m 20.6M - 19.2.3-678-ge911bdeb 654f31e6858e f748fd699eac 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (5s) 4s ago 8m 15.0M - 19.2.3-678-ge911bdeb 654f31e6858e 3385525a533a 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (8m) 
4s ago 8m 156M - 18.2.1 5be31c24972a 0127a771956a 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (5m) 15s ago 11m 624M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (4m) 4s ago 9m 489M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (4m) 15s ago 11m 65.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a2a2cb182f4 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (4m) 4s ago 9m 56.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 3fb75dafefb6 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (10m) 15s ago 10m 16.3M - 1.5.0 0da6a335fe13 745b21ae6768 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (9m) 4s ago 9m 15.4M - 1.5.0 0da6a335fe13 2453c8484ba5 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (4m) 15s ago 9m 231M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 319155aac718 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (2m) 15s ago 9m 131M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 6b6be7f62bd3 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (104s) 15s ago 9m 124M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 745b9931485f 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (83s) 4s ago 9m 170M 4096M 19.2.3-678-ge911bdeb 654f31e6858e fe29904ecf52 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (62s) 4s ago 8m 149M 4096M 19.2.3-678-ge911bdeb 654f31e6858e fe0b3f802cec 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (41s) 4s ago 8m 129M 4096M 19.2.3-678-ge911bdeb 
654f31e6858e c60f7383494f 2026-03-10T10:25:12.615 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (4m) 15s ago 10m 66.6M - 2.43.0 a07b618ecd1d 5ebb885bd417 2026-03-10T10:25:12.618 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.617+0000 7f29167fc700 1 -- 192.168.123.102:0/64916028 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2918077910 msgr2=0x7f2918079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.618 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.617+0000 7f29167fc700 1 --2- 192.168.123.102:0/64916028 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2918077910 0x7f2918079dd0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f2928003eb0 tx=0x7f292800b040 comp rx=0 tx=0).stop 2026-03-10T10:25:12.618 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.617+0000 7f29167fc700 1 -- 192.168.123.102:0/64916028 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2930107d90 msgr2=0x7f2930116cf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.618 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.617+0000 7f29167fc700 1 --2- 192.168.123.102:0/64916028 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2930107d90 0x7f2930116cf0 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f29200055d0 tx=0x7f2920005340 comp rx=0 tx=0).stop 2026-03-10T10:25:12.618 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.617+0000 7f29167fc700 1 -- 192.168.123.102:0/64916028 shutdown_connections 2026-03-10T10:25:12.618 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.617+0000 7f29167fc700 1 --2- 192.168.123.102:0/64916028 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2930107d90 0x7f2930116cf0 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.618 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.618+0000 7f29167fc700 1 --2- 192.168.123.102:0/64916028 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2918077910 0x7f2918079dd0 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.618 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.618+0000 7f29167fc700 1 --2- 192.168.123.102:0/64916028 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f293010a700 0x7f2930117230 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.618 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.618+0000 7f29167fc700 1 -- 192.168.123.102:0/64916028 >> 192.168.123.102:0/64916028 conn(0x7f293006dae0 msgr2=0x7f293006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:12.618 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.618+0000 7f29167fc700 1 -- 192.168.123.102:0/64916028 shutdown_connections 2026-03-10T10:25:12.618 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.618+0000 7f29167fc700 1 -- 192.168.123.102:0/64916028 wait complete. 
2026-03-10T10:25:12.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.689+0000 7fc351e8e700 1 -- 192.168.123.102:0/1292997649 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc34c0fff10 msgr2=0x7fc34c100390 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.689+0000 7fc351e8e700 1 --2- 192.168.123.102:0/1292997649 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc34c0fff10 0x7fc34c100390 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7fc340009b00 tx=0x7fc340009e10 comp rx=0 tx=0).stop 2026-03-10T10:25:12.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.690+0000 7fc351e8e700 1 -- 192.168.123.102:0/1292997649 shutdown_connections 2026-03-10T10:25:12.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.690+0000 7fc351e8e700 1 --2- 192.168.123.102:0/1292997649 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc34c0fff10 0x7fc34c100390 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.690+0000 7fc351e8e700 1 --2- 192.168.123.102:0/1292997649 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc34c0ff5b0 0x7fc34c0ff9d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.690+0000 7fc351e8e700 1 -- 192.168.123.102:0/1292997649 >> 192.168.123.102:0/1292997649 conn(0x7fc34c0fb110 msgr2=0x7fc34c0fd590 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:12.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.690+0000 7fc351e8e700 1 -- 192.168.123.102:0/1292997649 shutdown_connections 2026-03-10T10:25:12.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.690+0000 7fc351e8e700 1 -- 192.168.123.102:0/1292997649 
wait complete. 2026-03-10T10:25:12.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.691+0000 7fc351e8e700 1 Processor -- start 2026-03-10T10:25:12.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.691+0000 7fc351e8e700 1 -- start start 2026-03-10T10:25:12.691 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.691+0000 7fc351e8e700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc34c0ff5b0 0x7fc34c198590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.691+0000 7fc34b7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc34c0ff5b0 0x7fc34c198590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.691+0000 7fc34b7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc34c0ff5b0 0x7fc34c198590 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:39790/0 (socket says 192.168.123.102:39790) 2026-03-10T10:25:12.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.691+0000 7fc34b7fe700 1 -- 192.168.123.102:0/1366730653 learned_addr learned my addr 192.168.123.102:0/1366730653 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:12.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.692+0000 7fc351e8e700 1 --2- 192.168.123.102:0/1366730653 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc34c0fff10 0x7fc34c198ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.692+0000 7fc351e8e700 1 -- 
192.168.123.102:0/1366730653 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc34c1990f0 con 0x7fc34c0ff5b0 2026-03-10T10:25:12.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.692+0000 7fc351e8e700 1 -- 192.168.123.102:0/1366730653 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc34c199230 con 0x7fc34c0fff10 2026-03-10T10:25:12.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.692+0000 7fc34affd700 1 --2- 192.168.123.102:0/1366730653 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc34c0fff10 0x7fc34c198ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.692+0000 7fc34b7fe700 1 -- 192.168.123.102:0/1366730653 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc34c0fff10 msgr2=0x7fc34c198ad0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.692+0000 7fc34b7fe700 1 --2- 192.168.123.102:0/1366730653 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc34c0fff10 0x7fc34c198ad0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.692+0000 7fc34b7fe700 1 -- 192.168.123.102:0/1366730653 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc3400097e0 con 0x7fc34c0ff5b0 2026-03-10T10:25:12.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.693+0000 7fc34b7fe700 1 --2- 192.168.123.102:0/1366730653 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc34c0ff5b0 0x7fc34c198590 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7fc33c00d8d0 
tx=0x7fc33c00dbe0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:12.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.693+0000 7fc348ff9700 1 -- 192.168.123.102:0/1366730653 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc33c009880 con 0x7fc34c0ff5b0 2026-03-10T10:25:12.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.693+0000 7fc348ff9700 1 -- 192.168.123.102:0/1366730653 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc33c010460 con 0x7fc34c0ff5b0 2026-03-10T10:25:12.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.693+0000 7fc348ff9700 1 -- 192.168.123.102:0/1366730653 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc33c00f5d0 con 0x7fc34c0ff5b0 2026-03-10T10:25:12.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.693+0000 7fc351e8e700 1 -- 192.168.123.102:0/1366730653 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc34c19dce0 con 0x7fc34c0ff5b0 2026-03-10T10:25:12.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.693+0000 7fc351e8e700 1 -- 192.168.123.102:0/1366730653 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc34c19e1d0 con 0x7fc34c0ff5b0 2026-03-10T10:25:12.694 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.694+0000 7fc351e8e700 1 -- 192.168.123.102:0/1366730653 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc34c04ea90 con 0x7fc34c0ff5b0 2026-03-10T10:25:12.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.696+0000 7fc348ff9700 1 -- 192.168.123.102:0/1366730653 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc33c0105d0 con 
0x7fc34c0ff5b0 2026-03-10T10:25:12.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.696+0000 7fc348ff9700 1 --2- 192.168.123.102:0/1366730653 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc3340779e0 0x7fc334079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.696+0000 7fc348ff9700 1 -- 192.168.123.102:0/1366730653 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7fc33c020030 con 0x7fc34c0ff5b0 2026-03-10T10:25:12.698 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.698+0000 7fc34affd700 1 --2- 192.168.123.102:0/1366730653 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc3340779e0 0x7fc334079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.698+0000 7fc348ff9700 1 -- 192.168.123.102:0/1366730653 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc33c0624d0 con 0x7fc34c0ff5b0 2026-03-10T10:25:12.704 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.704+0000 7fc34affd700 1 --2- 192.168.123.102:0/1366730653 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc3340779e0 0x7fc334079ea0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7fc34000b5c0 tx=0x7fc340009f90 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:12.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.881+0000 7fc351e8e700 1 -- 192.168.123.102:0/1366730653 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": 
"versions"} v 0) v1 -- 0x7fc34c19e570 con 0x7fc34c0ff5b0 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.885+0000 7fc348ff9700 1 -- 192.168.123.102:0/1366730653 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7fc33c061c20 con 0x7fc34c0ff5b0 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 13 2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout: } 
2026-03-10T10:25:12.885 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:25:12.891 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.891+0000 7fc3327fc700 1 -- 192.168.123.102:0/1366730653 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc3340779e0 msgr2=0x7fc334079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.891 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.891+0000 7fc3327fc700 1 --2- 192.168.123.102:0/1366730653 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc3340779e0 0x7fc334079ea0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7fc34000b5c0 tx=0x7fc340009f90 comp rx=0 tx=0).stop 2026-03-10T10:25:12.891 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.891+0000 7fc3327fc700 1 -- 192.168.123.102:0/1366730653 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc34c0ff5b0 msgr2=0x7fc34c198590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.891 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.891+0000 7fc3327fc700 1 --2- 192.168.123.102:0/1366730653 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc34c0ff5b0 0x7fc34c198590 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7fc33c00d8d0 tx=0x7fc33c00dbe0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.891 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.891+0000 7fc3327fc700 1 -- 192.168.123.102:0/1366730653 shutdown_connections 2026-03-10T10:25:12.891 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.891+0000 7fc3327fc700 1 --2- 192.168.123.102:0/1366730653 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc34c0ff5b0 0x7fc34c198590 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.891 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.891+0000 7fc3327fc700 1 --2- 
192.168.123.102:0/1366730653 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc3340779e0 0x7fc334079ea0 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.892 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.891+0000 7fc3327fc700 1 --2- 192.168.123.102:0/1366730653 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc34c0fff10 0x7fc34c198ad0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.892 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.891+0000 7fc3327fc700 1 -- 192.168.123.102:0/1366730653 >> 192.168.123.102:0/1366730653 conn(0x7fc34c0fb110 msgr2=0x7fc34c0fd3f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:12.892 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.891+0000 7fc3327fc700 1 -- 192.168.123.102:0/1366730653 shutdown_connections 2026-03-10T10:25:12.892 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.891+0000 7fc3327fc700 1 -- 192.168.123.102:0/1366730653 wait complete. 
2026-03-10T10:25:12.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.981+0000 7f9d46995700 1 -- 192.168.123.102:0/334593854 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d4010a700 msgr2=0x7f9d4010cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.981+0000 7f9d46995700 1 --2- 192.168.123.102:0/334593854 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d4010a700 0x7f9d4010cb90 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f9d3800b3a0 tx=0x7f9d3800b6b0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.981+0000 7f9d46995700 1 -- 192.168.123.102:0/334593854 shutdown_connections 2026-03-10T10:25:12.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.981+0000 7f9d46995700 1 --2- 192.168.123.102:0/334593854 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d4010a700 0x7f9d4010cb90 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.981+0000 7f9d46995700 1 --2- 192.168.123.102:0/334593854 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d40107d90 0x7f9d4010a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.981+0000 7f9d46995700 1 -- 192.168.123.102:0/334593854 >> 192.168.123.102:0/334593854 conn(0x7f9d4006dda0 msgr2=0x7f9d40070220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:12.983 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.983+0000 7f9d46995700 1 -- 192.168.123.102:0/334593854 shutdown_connections 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.983+0000 7f9d46995700 1 -- 192.168.123.102:0/334593854 wait 
complete. 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.983+0000 7f9d46995700 1 Processor -- start 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.983+0000 7f9d46995700 1 -- start start 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.983+0000 7f9d46995700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d40116b90 0x7f9d40116fb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.983+0000 7f9d46995700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d401174f0 0x7f9d401a20e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.983+0000 7f9d46995700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9d401a26b0 con 0x7f9d401174f0 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.983+0000 7f9d46995700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9d401a2820 con 0x7f9d40116b90 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.983+0000 7f9d3ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d40116b90 0x7f9d40116fb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.983+0000 7f9d3ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d40116b90 0x7f9d40116fb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I 
am v2:192.168.123.102:60640/0 (socket says 192.168.123.102:60640) 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.983+0000 7f9d3ffff700 1 -- 192.168.123.102:0/71465506 learned_addr learned my addr 192.168.123.102:0/71465506 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.984+0000 7f9d3f7fe700 1 --2- 192.168.123.102:0/71465506 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d401174f0 0x7f9d401a20e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.984+0000 7f9d3ffff700 1 -- 192.168.123.102:0/71465506 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d401174f0 msgr2=0x7f9d401a20e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.984+0000 7f9d3ffff700 1 --2- 192.168.123.102:0/71465506 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d401174f0 0x7f9d401a20e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.984+0000 7f9d3ffff700 1 -- 192.168.123.102:0/71465506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9d3800b050 con 0x7f9d40116b90 2026-03-10T10:25:12.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.984+0000 7f9d3ffff700 1 --2- 192.168.123.102:0/71465506 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d40116b90 0x7f9d40116fb0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f9d3400b6b0 tx=0x7f9d3400ba70 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:25:12.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.984+0000 7f9d3d7fa700 1 -- 192.168.123.102:0/71465506 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9d340107c0 con 0x7f9d40116b90 2026-03-10T10:25:12.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.984+0000 7f9d46995700 1 -- 192.168.123.102:0/71465506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9d401a2ab0 con 0x7f9d40116b90 2026-03-10T10:25:12.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.984+0000 7f9d46995700 1 -- 192.168.123.102:0/71465506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9d401a3000 con 0x7f9d40116b90 2026-03-10T10:25:12.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.984+0000 7f9d3d7fa700 1 -- 192.168.123.102:0/71465506 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9d34010e00 con 0x7f9d40116b90 2026-03-10T10:25:12.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.984+0000 7f9d3d7fa700 1 -- 192.168.123.102:0/71465506 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9d34010920 con 0x7f9d40116b90 2026-03-10T10:25:12.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.985+0000 7f9d3d7fa700 1 -- 192.168.123.102:0/71465506 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f9d340176d0 con 0x7f9d40116b90 2026-03-10T10:25:12.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.986+0000 7f9d3d7fa700 1 --2- 192.168.123.102:0/71465506 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9d28077910 0x7f9d28079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:12.986 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.986+0000 7f9d3d7fa700 1 -- 192.168.123.102:0/71465506 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f9d34099340 con 0x7f9d40116b90 2026-03-10T10:25:12.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.986+0000 7f9d46995700 1 -- 192.168.123.102:0/71465506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9d2c005320 con 0x7f9d40116b90 2026-03-10T10:25:12.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.986+0000 7f9d3f7fe700 1 --2- 192.168.123.102:0/71465506 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9d28077910 0x7f9d28079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:12.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.987+0000 7f9d3f7fe700 1 --2- 192.168.123.102:0/71465506 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9d28077910 0x7f9d28079dd0 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f9d38000f80 tx=0x7f9d3800bf90 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:12.991 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:12.990+0000 7f9d3d7fa700 1 -- 192.168.123.102:0/71465506 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9d34061940 con 0x7f9d40116b90 2026-03-10T10:25:13.150 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.150+0000 7f9d46995700 1 -- 192.168.123.102:0/71465506 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f9d2c006200 con 0x7f9d40116b90 2026-03-10T10:25:13.150 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:12 vm02.local ceph-mon[110129]: from='client.44299 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:25:13.150 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:12 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1366730653' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.153+0000 7f9d3d7fa700 1 -- 192.168.123.102:0/71465506 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 35 v35) v1 ==== 76+0+1786 (secure 0 0 0) 0x7f9d34061090 con 0x7f9d40116b90 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:e35 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:btime 2026-03-10T10:25:10:909790+0000 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:epoch 35 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:flags 12 joinable allow_snaps allow_multimds_snaps 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 
2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:25:10.909776+0000 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:max_xattr_size 65536 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 83 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:up {0=34360} 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:25:13.153 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:25:13.154 
INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout:qdb_cluster leader: 0 members: 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:34360} state up:replay seq 1 join_fscid=1 addr [v2:192.168.123.102:6826/965109167,v1:192.168.123.102:6827/965109167] compat {c=[1],r=[1],i=[1fff]}] 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{-1:34364} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/3727526116,v1:192.168.123.102:6829/3727526116] compat {c=[1],r=[1],i=[1fff]}] 2026-03-10T10:25:13.154 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:34368} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/462039658,v1:192.168.123.105:6825/462039658] compat {c=[1],r=[1],i=[1fff]}] 2026-03-10T10:25:13.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.156+0000 7f9d26ffd700 1 -- 192.168.123.102:0/71465506 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9d28077910 msgr2=0x7f9d28079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:13.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.156+0000 7f9d26ffd700 1 --2- 192.168.123.102:0/71465506 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9d28077910 0x7f9d28079dd0 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f9d38000f80 tx=0x7f9d3800bf90 comp rx=0 
tx=0).stop 2026-03-10T10:25:13.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.156+0000 7f9d26ffd700 1 -- 192.168.123.102:0/71465506 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d40116b90 msgr2=0x7f9d40116fb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:13.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.156+0000 7f9d26ffd700 1 --2- 192.168.123.102:0/71465506 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d40116b90 0x7f9d40116fb0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f9d3400b6b0 tx=0x7f9d3400ba70 comp rx=0 tx=0).stop 2026-03-10T10:25:13.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.156+0000 7f9d26ffd700 1 -- 192.168.123.102:0/71465506 shutdown_connections 2026-03-10T10:25:13.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.156+0000 7f9d26ffd700 1 --2- 192.168.123.102:0/71465506 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9d28077910 0x7f9d28079dd0 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:13.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.156+0000 7f9d26ffd700 1 --2- 192.168.123.102:0/71465506 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9d40116b90 0x7f9d40116fb0 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:13.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.156+0000 7f9d26ffd700 1 --2- 192.168.123.102:0/71465506 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9d401174f0 0x7f9d401a20e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:13.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.156+0000 7f9d26ffd700 1 -- 192.168.123.102:0/71465506 >> 192.168.123.102:0/71465506 conn(0x7f9d4006dda0 msgr2=0x7f9d40109960 unknown :-1 
s=STATE_NONE l=0).mark_down 2026-03-10T10:25:13.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.156+0000 7f9d26ffd700 1 -- 192.168.123.102:0/71465506 shutdown_connections 2026-03-10T10:25:13.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.156+0000 7f9d26ffd700 1 -- 192.168.123.102:0/71465506 wait complete. 2026-03-10T10:25:13.161 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 35 2026-03-10T10:25:13.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.233+0000 7fe2cd7b8700 1 -- 192.168.123.102:0/3413669715 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2c8107d90 msgr2=0x7fe2c810a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:13.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.233+0000 7fe2cd7b8700 1 --2- 192.168.123.102:0/3413669715 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2c8107d90 0x7fe2c810a1c0 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7fe2b8009b00 tx=0x7fe2b8009e10 comp rx=0 tx=0).stop 2026-03-10T10:25:13.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.233+0000 7fe2cd7b8700 1 -- 192.168.123.102:0/3413669715 shutdown_connections 2026-03-10T10:25:13.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.233+0000 7fe2cd7b8700 1 --2- 192.168.123.102:0/3413669715 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2c810a700 0x7fe2c810cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:13.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.233+0000 7fe2cd7b8700 1 --2- 192.168.123.102:0/3413669715 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2c8107d90 0x7fe2c810a1c0 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:13.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.233+0000 7fe2cd7b8700 1 -- 
192.168.123.102:0/3413669715 >> 192.168.123.102:0/3413669715 conn(0x7fe2c806dae0 msgr2=0x7fe2c806ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:13.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.234+0000 7fe2cd7b8700 1 -- 192.168.123.102:0/3413669715 shutdown_connections 2026-03-10T10:25:13.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.234+0000 7fe2cd7b8700 1 -- 192.168.123.102:0/3413669715 wait complete. 2026-03-10T10:25:13.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.234+0000 7fe2cd7b8700 1 Processor -- start 2026-03-10T10:25:13.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.234+0000 7fe2cd7b8700 1 -- start start 2026-03-10T10:25:13.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.234+0000 7fe2cd7b8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2c810a700 0x7fe2c8116a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:13.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.234+0000 7fe2cd7b8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2c8116fd0 0x7fe2c81b3160 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:13.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.234+0000 7fe2cd7b8700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe2c81174e0 con 0x7fe2c8116fd0 2026-03-10T10:25:13.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.234+0000 7fe2cd7b8700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe2c8117650 con 0x7fe2c810a700 2026-03-10T10:25:13.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.234+0000 7fe2c67fc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2c8116fd0 0x7fe2c81b3160 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:13.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.234+0000 7fe2c67fc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2c8116fd0 0x7fe2c81b3160 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:39834/0 (socket says 192.168.123.102:39834) 2026-03-10T10:25:13.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.234+0000 7fe2c67fc700 1 -- 192.168.123.102:0/626051056 learned_addr learned my addr 192.168.123.102:0/626051056 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:13.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.235+0000 7fe2c6ffd700 1 --2- 192.168.123.102:0/626051056 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2c810a700 0x7fe2c8116a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:13.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.235+0000 7fe2c6ffd700 1 -- 192.168.123.102:0/626051056 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2c8116fd0 msgr2=0x7fe2c81b3160 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:13.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.235+0000 7fe2c6ffd700 1 --2- 192.168.123.102:0/626051056 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2c8116fd0 0x7fe2c81b3160 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:13.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.235+0000 7fe2c6ffd700 1 -- 192.168.123.102:0/626051056 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe2b80097e0 con 0x7fe2c810a700 2026-03-10T10:25:13.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.235+0000 7fe2c6ffd700 1 --2- 192.168.123.102:0/626051056 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2c810a700 0x7fe2c8116a90 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fe2b8009ad0 tx=0x7fe2b800f710 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:13.236 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.236+0000 7fe2affff700 1 -- 192.168.123.102:0/626051056 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe2b801c070 con 0x7fe2c810a700 2026-03-10T10:25:13.236 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.236+0000 7fe2cd7b8700 1 -- 192.168.123.102:0/626051056 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe2c81b36a0 con 0x7fe2c810a700 2026-03-10T10:25:13.236 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.236+0000 7fe2cd7b8700 1 -- 192.168.123.102:0/626051056 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe2c81b3b90 con 0x7fe2c810a700 2026-03-10T10:25:13.237 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.236+0000 7fe2affff700 1 -- 192.168.123.102:0/626051056 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fe2b80056d0 con 0x7fe2c810a700 2026-03-10T10:25:13.237 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.236+0000 7fe2affff700 1 -- 192.168.123.102:0/626051056 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe2b8017400 con 0x7fe2c810a700 2026-03-10T10:25:13.237 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.237+0000 7fe2cd7b8700 1 -- 192.168.123.102:0/626051056 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe2b4005320 con 0x7fe2c810a700 2026-03-10T10:25:13.238 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.238+0000 7fe2affff700 1 -- 192.168.123.102:0/626051056 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fe2b8005350 con 0x7fe2c810a700 2026-03-10T10:25:13.238 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.238+0000 7fe2affff700 1 --2- 192.168.123.102:0/626051056 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe2b0077910 0x7fe2b0079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:13.238 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.238+0000 7fe2affff700 1 -- 192.168.123.102:0/626051056 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7fe2b809ace0 con 0x7fe2c810a700 2026-03-10T10:25:13.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.238+0000 7fe2c67fc700 1 --2- 192.168.123.102:0/626051056 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe2b0077910 0x7fe2b0079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:13.239 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.239+0000 7fe2c67fc700 1 --2- 192.168.123.102:0/626051056 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe2b0077910 0x7fe2b0079dd0 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7fe2c00060b0 tx=0x7fe2c0009040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:13.241 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.241+0000 7fe2affff700 1 -- 192.168.123.102:0/626051056 <== 
mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe2b80633e0 con 0x7fe2c810a700 2026-03-10T10:25:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:12 vm05.local ceph-mon[103593]: from='client.44299 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:25:13.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:12 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1366730653' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:13.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.369+0000 7fe2cd7b8700 1 -- 192.168.123.102:0/626051056 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fe2b4000bf0 con 0x7fe2b0077910 2026-03-10T10:25:13.370 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.370+0000 7fe2affff700 1 -- 192.168.123.102:0/626051056 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 0x7fe2b4000bf0 con 0x7fe2b0077910 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [ 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout: "mgr", 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout: "mon", 2026-03-10T10:25:13.371 
INFO:teuthology.orchestra.run.vm02.stdout: "osd", 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout: "crash" 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "15/23 daemons upgraded", 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading mds daemons", 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:25:13.371 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:25:13.377 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.377+0000 7fe2adffb700 1 -- 192.168.123.102:0/626051056 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe2b0077910 msgr2=0x7fe2b0079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:13.377 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.377+0000 7fe2adffb700 1 --2- 192.168.123.102:0/626051056 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe2b0077910 0x7fe2b0079dd0 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7fe2c00060b0 tx=0x7fe2c0009040 comp rx=0 tx=0).stop 2026-03-10T10:25:13.377 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.377+0000 7fe2adffb700 1 -- 192.168.123.102:0/626051056 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2c810a700 msgr2=0x7fe2c8116a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:13.377 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.377+0000 7fe2adffb700 1 --2- 192.168.123.102:0/626051056 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2c810a700 0x7fe2c8116a90 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fe2b8009ad0 tx=0x7fe2b800f710 comp rx=0 tx=0).stop 2026-03-10T10:25:13.377 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.377+0000 7fe2adffb700 1 -- 
192.168.123.102:0/626051056 shutdown_connections 2026-03-10T10:25:13.377 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.377+0000 7fe2adffb700 1 --2- 192.168.123.102:0/626051056 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe2b0077910 0x7fe2b0079dd0 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:13.377 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.377+0000 7fe2adffb700 1 --2- 192.168.123.102:0/626051056 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2c810a700 0x7fe2c8116a90 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:13.377 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.377+0000 7fe2adffb700 1 --2- 192.168.123.102:0/626051056 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe2c8116fd0 0x7fe2c81b3160 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:13.378 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.377+0000 7fe2adffb700 1 -- 192.168.123.102:0/626051056 >> 192.168.123.102:0/626051056 conn(0x7fe2c806dae0 msgr2=0x7fe2c8109fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:13.378 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.377+0000 7fe2adffb700 1 -- 192.168.123.102:0/626051056 shutdown_connections 2026-03-10T10:25:13.378 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.377+0000 7fe2adffb700 1 -- 192.168.123.102:0/626051056 wait complete. 
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.445+0000 7f8ea5a23700 1 -- 192.168.123.102:0/3338083166 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ea0107d90 msgr2=0x7f8ea010a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.445+0000 7f8ea5a23700 1 --2- 192.168.123.102:0/3338083166 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ea0107d90 0x7f8ea010a1c0 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f8e9800b3a0 tx=0x7f8e9800b6b0 comp rx=0 tx=0).stop
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.445+0000 7f8ea5a23700 1 -- 192.168.123.102:0/3338083166 shutdown_connections
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.445+0000 7f8ea5a23700 1 --2- 192.168.123.102:0/3338083166 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8ea010a700 0x7f8ea010cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.445+0000 7f8ea5a23700 1 --2- 192.168.123.102:0/3338083166 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ea0107d90 0x7f8ea010a1c0 unknown :-1 s=CLOSED pgs=124 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.445+0000 7f8ea5a23700 1 -- 192.168.123.102:0/3338083166 >> 192.168.123.102:0/3338083166 conn(0x7f8ea006dae0 msgr2=0x7f8ea006ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.445+0000 7f8ea5a23700 1 -- 192.168.123.102:0/3338083166 shutdown_connections
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.445+0000 7f8ea5a23700 1 -- 192.168.123.102:0/3338083166 wait complete.
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.446+0000 7f8ea5a23700 1 Processor -- start
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.446+0000 7f8ea5a23700 1 -- start start
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.446+0000 7f8ea5a23700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ea0107d90 0x7f8ea01169e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.446+0000 7f8ea5a23700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8ea010a700 0x7f8ea0116f40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.446+0000 7f8ea5a23700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8ea0117580 con 0x7f8ea0107d90
2026-03-10T10:25:13.446 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.446+0000 7f8ea5a23700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8ea01a1e80 con 0x7f8ea010a700
2026-03-10T10:25:13.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.446+0000 7f8e9effd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ea0107d90 0x7f8ea01169e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:25:13.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.446+0000 7f8e9effd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ea0107d90 0x7f8ea01169e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:39864/0 (socket says 192.168.123.102:39864)
2026-03-10T10:25:13.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.446+0000 7f8e9effd700 1 -- 192.168.123.102:0/1011075094 learned_addr learned my addr 192.168.123.102:0/1011075094 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:25:13.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.447+0000 7f8e9e7fc700 1 --2- 192.168.123.102:0/1011075094 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8ea010a700 0x7f8ea0116f40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:25:13.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.447+0000 7f8e9effd700 1 -- 192.168.123.102:0/1011075094 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8ea010a700 msgr2=0x7f8ea0116f40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:25:13.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.447+0000 7f8e9effd700 1 --2- 192.168.123.102:0/1011075094 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8ea010a700 0x7f8ea0116f40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:25:13.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.447+0000 7f8e9effd700 1 -- 192.168.123.102:0/1011075094 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8e9800b050 con 0x7f8ea0107d90
2026-03-10T10:25:13.447 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.447+0000 7f8e9effd700 1 --2- 192.168.123.102:0/1011075094 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ea0107d90 0x7f8ea01169e0 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f8e98007d70 tx=0x7f8e98003ce0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:25:13.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.450+0000 7f8ea4a21700 1 -- 192.168.123.102:0/1011075094 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8e9800e050 con 0x7f8ea0107d90
2026-03-10T10:25:13.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.450+0000 7f8ea4a21700 1 -- 192.168.123.102:0/1011075094 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8e9801d440 con 0x7f8ea0107d90
2026-03-10T10:25:13.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.450+0000 7f8ea4a21700 1 -- 192.168.123.102:0/1011075094 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8e98012620 con 0x7f8ea0107d90
2026-03-10T10:25:13.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.450+0000 7f8ea5a23700 1 -- 192.168.123.102:0/1011075094 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8ea01a2100 con 0x7f8ea0107d90
2026-03-10T10:25:13.450 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.450+0000 7f8ea5a23700 1 -- 192.168.123.102:0/1011075094 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8ea01a2650 con 0x7f8ea0107d90
2026-03-10T10:25:13.451 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.450+0000 7f8ea5a23700 1 -- 192.168.123.102:0/1011075094 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8ea0110c60 con 0x7f8ea0107d90
2026-03-10T10:25:13.455 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.454+0000 7f8ea4a21700 1 -- 192.168.123.102:0/1011075094 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f8e98019040 con 0x7f8ea0107d90
2026-03-10T10:25:13.455 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.454+0000 7f8ea4a21700 1 --2- 192.168.123.102:0/1011075094 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8e88077910 0x7f8e88079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:25:13.455 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.454+0000 7f8ea4a21700 1 -- 192.168.123.102:0/1011075094 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f8e9809b720 con 0x7f8ea0107d90
2026-03-10T10:25:13.455 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.454+0000 7f8e9e7fc700 1 --2- 192.168.123.102:0/1011075094 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8e88077910 0x7f8e88079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:25:13.455 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.455+0000 7f8e9e7fc700 1 --2- 192.168.123.102:0/1011075094 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8e88077910 0x7f8e88079dd0 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7f8e94006fd0 tx=0x7f8e94008040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:25:13.456 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.456+0000 7f8ea4a21700 1 -- 192.168.123.102:0/1011075094 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f8e98063ee0 con 0x7f8ea0107d90
2026-03-10T10:25:13.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.625+0000 7f8ea5a23700 1 -- 192.168.123.102:0/1011075094 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f8ea004ea90 con 0x7f8ea0107d90
2026-03-10T10:25:13.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.627+0000 7f8ea4a21700 1 -- 192.168.123.102:0/1011075094 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+107 (secure 0 0 0) 0x7f8e98063630 con 0x7f8ea0107d90
2026-03-10T10:25:13.627 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_WARN 1 filesystem is degraded
2026-03-10T10:25:13.627 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] FS_DEGRADED: 1 filesystem is degraded
2026-03-10T10:25:13.627 INFO:teuthology.orchestra.run.vm02.stdout: fs cephfs is degraded
2026-03-10T10:25:13.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.632+0000 7f8e867fc700 1 -- 192.168.123.102:0/1011075094 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8e88077910 msgr2=0x7f8e88079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:25:13.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.632+0000 7f8e867fc700 1 --2- 192.168.123.102:0/1011075094 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8e88077910 0x7f8e88079dd0 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7f8e94006fd0 tx=0x7f8e94008040 comp rx=0 tx=0).stop
2026-03-10T10:25:13.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.632+0000 7f8e867fc700 1 -- 192.168.123.102:0/1011075094 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ea0107d90 msgr2=0x7f8ea01169e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:25:13.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.632+0000 7f8e867fc700 1 --2- 192.168.123.102:0/1011075094 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ea0107d90 0x7f8ea01169e0 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f8e98007d70 tx=0x7f8e98003ce0 comp rx=0 tx=0).stop
2026-03-10T10:25:13.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.632+0000 7f8e867fc700 1 -- 192.168.123.102:0/1011075094 shutdown_connections
2026-03-10T10:25:13.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.632+0000 7f8e867fc700 1 --2- 192.168.123.102:0/1011075094 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f8ea0107d90 0x7f8ea01169e0 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:25:13.632 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.632+0000 7f8e867fc700 1 --2- 192.168.123.102:0/1011075094 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f8e88077910 0x7f8e88079dd0 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:25:13.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.632+0000 7f8e867fc700 1 --2- 192.168.123.102:0/1011075094 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8ea010a700 0x7f8ea0116f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:25:13.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.632+0000 7f8e867fc700 1 -- 192.168.123.102:0/1011075094 >> 192.168.123.102:0/1011075094 conn(0x7f8ea006dae0 msgr2=0x7f8ea006ff40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:25:13.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.632+0000 7f8e867fc700 1 -- 192.168.123.102:0/1011075094 shutdown_connections
2026-03-10T10:25:13.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:13.632+0000 7f8e867fc700 1 -- 192.168.123.102:0/1011075094 wait complete.
2026-03-10T10:25:14.173 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:13 vm05.local ceph-mon[103593]: from='client.44303 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:25:14.173 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:13 vm05.local ceph-mon[103593]: from='client.34376 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:25:14.173 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:13 vm05.local ceph-mon[103593]: pgmap v180: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 10 MiB/s rd, 5.0 KiB/s wr, 9 op/s
2026-03-10T10:25:14.173 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:13 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/71465506' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-10T10:25:14.173 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:13 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1011075094' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T10:25:14.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:13 vm02.local ceph-mon[110129]: from='client.44303 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:25:14.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:13 vm02.local ceph-mon[110129]: from='client.34376 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:25:14.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:13 vm02.local ceph-mon[110129]: pgmap v180: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 10 MiB/s rd, 5.0 KiB/s wr, 9 op/s
2026-03-10T10:25:14.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:13 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/71465506' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-10T10:25:14.285 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:13 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1011075094' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T10:25:15.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:15 vm02.local ceph-mon[110129]: from='client.44319 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:25:15.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:15 vm05.local ceph-mon[103593]: from='client.44319 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:25:16.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:16 vm02.local ceph-mon[110129]: pgmap v181: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 9.0 MiB/s rd, 5.0 KiB/s wr, 9 op/s
2026-03-10T10:25:16.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:16 vm05.local ceph-mon[103593]: pgmap v181: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 9.0 MiB/s rd, 5.0 KiB/s wr, 9 op/s
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: mds.? [v2:192.168.123.102:6826/965109167,v1:192.168.123.102:6827/965109167] up:reconnect
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: fsmap cephfs:1/1 {0=cephfs.vm02.zymcrs=up:reconnect} 2 up:standby
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: reconnect by client.14516 192.168.144.1:0/2782098490 after 0.001
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: reconnect by client.24319 192.168.144.1:0/2118398450 after 0.001
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: mds.? [v2:192.168.123.102:6826/965109167,v1:192.168.123.102:6827/965109167] up:rejoin
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: mds.? [v2:192.168.123.105:6826/682293963,v1:192.168.123.105:6827/682293963] up:boot
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: fsmap cephfs:1/1 {0=cephfs.vm02.zymcrs=up:rejoin} 3 up:standby
2026-03-10T10:25:17.517 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:17 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch
2026-03-10T10:25:17.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: mds.? [v2:192.168.123.102:6826/965109167,v1:192.168.123.102:6827/965109167] up:reconnect
2026-03-10T10:25:17.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: fsmap cephfs:1/1 {0=cephfs.vm02.zymcrs=up:reconnect} 2 up:standby
2026-03-10T10:25:17.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: reconnect by client.14516 192.168.144.1:0/2782098490 after 0.001
2026-03-10T10:25:17.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: reconnect by client.24319 192.168.144.1:0/2118398450 after 0.001
2026-03-10T10:25:17.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:17.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:17.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:25:17.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:17.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:17.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: mds.? [v2:192.168.123.102:6826/965109167,v1:192.168.123.102:6827/965109167] up:rejoin
2026-03-10T10:25:17.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: mds.? [v2:192.168.123.105:6826/682293963,v1:192.168.123.105:6827/682293963] up:boot
2026-03-10T10:25:17.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: fsmap cephfs:1/1 {0=cephfs.vm02.zymcrs=up:rejoin} 3 up:standby
2026-03-10T10:25:17.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:17 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.sudjys"}]: dispatch
2026-03-10T10:25:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:18 vm02.local ceph-mon[110129]: pgmap v182: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 17 MiB/s rd, 4.9 KiB/s wr, 11 op/s
2026-03-10T10:25:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:18 vm02.local ceph-mon[110129]: daemon mds.cephfs.vm02.zymcrs is now active in filesystem cephfs as rank 0
2026-03-10T10:25:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:18 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:18 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:18.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:18 vm05.local ceph-mon[103593]: pgmap v182: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 17 MiB/s rd, 4.9 KiB/s wr, 11 op/s
2026-03-10T10:25:18.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:18 vm05.local ceph-mon[103593]: daemon mds.cephfs.vm02.zymcrs is now active in filesystem cephfs as rank 0
2026-03-10T10:25:18.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:18 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:18.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:18 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:19.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:19 vm02.local ceph-mon[110129]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
2026-03-10T10:25:19.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:19 vm02.local ceph-mon[110129]: Cluster is now healthy
2026-03-10T10:25:19.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:19 vm02.local ceph-mon[110129]: mds.? [v2:192.168.123.102:6826/965109167,v1:192.168.123.102:6827/965109167] up:active
2026-03-10T10:25:19.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:19 vm02.local ceph-mon[110129]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 3 up:standby
2026-03-10T10:25:19.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:19 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:19.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:19 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:19.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:19 vm05.local ceph-mon[103593]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
2026-03-10T10:25:19.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:19 vm05.local ceph-mon[103593]: Cluster is now healthy
2026-03-10T10:25:19.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:19 vm05.local ceph-mon[103593]: mds.? [v2:192.168.123.102:6826/965109167,v1:192.168.123.102:6827/965109167] up:active
2026-03-10T10:25:19.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:19 vm05.local ceph-mon[103593]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 3 up:standby
2026-03-10T10:25:19.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:19 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:19.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:19 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: pgmap v183: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.8 KiB/s wr, 10 op/s
2026-03-10T10:25:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:25:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:25:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:25:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm02.stcvsz"}]: dispatch
2026-03-10T10:25:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm02.stcvsz"}]': finished
2026-03-10T10:25:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm02.zymcrs"}]: dispatch
2026-03-10T10:25:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm02.zymcrs"}]': finished
2026-03-10T10:25:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.liatdh"}]: dispatch
2026-03-10T10:25:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.liatdh"}]': finished
2026-03-10T10:25:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.sudjys"}]: dispatch
2026-03-10T10:25:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.sudjys"}]': finished
2026-03-10T10:25:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:20 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "1"}]: dispatch
2026-03-10T10:25:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: pgmap v183: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.8 KiB/s wr, 10 op/s
2026-03-10T10:25:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:25:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:25:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:25:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:20.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:20.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:20.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:20.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm02.stcvsz"}]: dispatch
2026-03-10T10:25:20.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm02.stcvsz"}]': finished
2026-03-10T10:25:20.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm02.zymcrs"}]: dispatch
2026-03-10T10:25:20.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm02.zymcrs"}]': finished
2026-03-10T10:25:20.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.liatdh"}]: dispatch
2026-03-10T10:25:20.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.liatdh"}]': finished
2026-03-10T10:25:20.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.sudjys"}]: dispatch
2026-03-10T10:25:20.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.sudjys"}]': finished
2026-03-10T10:25:20.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:20 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "1"}]: dispatch
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: Upgrade: Setting container_image for all mds
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: Upgrade: Enabling allow_standby_replay on filesystem cephfs
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "1"}]': finished
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 3 up:standby
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 1 up:standby-replay 2 up:standby
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r",
"mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:25:21.751 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: Upgrade: Setting container_image for all mds 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: Upgrade: Enabling allow_standby_replay on filesystem cephfs 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "1"}]': finished 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 3 up:standby 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: fsmap cephfs:1 {0=cephfs.vm02.zymcrs=up:active} 1 up:standby-replay 2 up:standby 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:25:21.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:22.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:22 vm02.local ceph-mon[110129]: Upgrade: Setting container_image for all rgw 2026-03-10T10:25:22.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:22 vm02.local ceph-mon[110129]: Upgrade: Setting container_image for all rbd-mirror 2026-03-10T10:25:22.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:22 vm02.local ceph-mon[110129]: pgmap v184: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.8 KiB/s wr, 10 op/s 2026-03-10T10:25:22.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:22 vm02.local ceph-mon[110129]: Upgrade: Updating ceph-exporter.vm02 (1/2) 
2026-03-10T10:25:22.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:22 vm02.local ceph-mon[110129]: Deploying daemon ceph-exporter.vm02 on vm02 2026-03-10T10:25:22.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:22 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:22.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:22 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:22.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:22 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:22.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:22 vm05.local ceph-mon[103593]: Upgrade: Setting container_image for all rgw 2026-03-10T10:25:22.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:22 vm05.local ceph-mon[103593]: Upgrade: Setting container_image for all rbd-mirror 2026-03-10T10:25:22.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:22 vm05.local ceph-mon[103593]: pgmap v184: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.8 KiB/s wr, 10 op/s 2026-03-10T10:25:22.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:22 vm05.local ceph-mon[103593]: Upgrade: Updating ceph-exporter.vm02 (1/2) 2026-03-10T10:25:22.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:22 vm05.local ceph-mon[103593]: Deploying daemon ceph-exporter.vm02 on vm02 2026-03-10T10:25:22.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:22 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:22.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:22 vm05.local ceph-mon[103593]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:22.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:22 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:23.935 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:23 vm05.local ceph-mon[103593]: pgmap v185: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 22 MiB/s rd, 4.3 KiB/s wr, 11 op/s 2026-03-10T10:25:23.935 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:23 vm05.local ceph-mon[103593]: Upgrade: Updating ceph-exporter.vm05 (2/2) 2026-03-10T10:25:23.935 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:23 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:23.935 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:23 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:25:23.935 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:23 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:23.935 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:23 vm05.local ceph-mon[103593]: Deploying daemon ceph-exporter.vm05 on vm05 2026-03-10T10:25:23.935 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:23 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:23.935 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:23 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:23.935 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 
10 10:25:23 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:23.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:23 vm02.local ceph-mon[110129]: pgmap v185: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 22 MiB/s rd, 4.3 KiB/s wr, 11 op/s 2026-03-10T10:25:23.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:23 vm02.local ceph-mon[110129]: Upgrade: Updating ceph-exporter.vm05 (2/2) 2026-03-10T10:25:23.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:23 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:23.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:23 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T10:25:23.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:23 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:23.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:23 vm02.local ceph-mon[110129]: Deploying daemon ceph-exporter.vm05 on vm05 2026-03-10T10:25:23.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:23 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:23.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:23 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:23.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:23 vm02.local ceph-mon[110129]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:25.656 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:25 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:25.656 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:25 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:25.656 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:25 vm05.local ceph-mon[103593]: pgmap v186: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 170 B/s wr, 9 op/s 2026-03-10T10:25:25.656 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:25 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:25.656 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:25 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:25.752 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:25 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:25.752 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:25 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:25.752 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:25 vm02.local ceph-mon[110129]: pgmap v186: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 170 B/s wr, 9 op/s 2026-03-10T10:25:25.752 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:25 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:25.752 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:25 vm02.local ceph-mon[110129]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:26.868 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:26 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:26.868 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:26 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:26.868 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:26 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:26.868 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:26 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:26.868 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:26 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:26.868 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:26 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:27.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:26 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:27.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:26 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:27.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:26 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:27.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:26 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:27.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:26 vm05.local ceph-mon[103593]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:27.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:26 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: pgmap v187: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 25 MiB/s rd, 170 B/s wr, 10 op/s 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.299 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.299 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 
vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm02"}]: dispatch 2026-03-10T10:25:28.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm02"}]': finished 2026-03-10T10:25:28.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm05"}]: dispatch 2026-03-10T10:25:28.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm05"}]': finished 2026-03-10T10:25:28.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.300 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:28 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: pgmap v187: 65 pgs: 65 active+clean; 209 MiB data, 918 MiB used, 119 GiB / 120 GiB avail; 25 MiB/s rd, 170 B/s wr, 10 op/s 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.537 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm02"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm02"}]': finished 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm05"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm05"}]': finished 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:28.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 
vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:28.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:28 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:29.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:29 vm02.local ceph-mon[110129]: Upgrade: Setting container_image for all ceph-exporter 2026-03-10T10:25:29.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:29 vm02.local ceph-mon[110129]: Upgrade: Setting container_image for all iscsi 2026-03-10T10:25:29.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:29 vm02.local ceph-mon[110129]: Upgrade: Setting container_image for all nfs 2026-03-10T10:25:29.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:29 vm02.local ceph-mon[110129]: Upgrade: Setting container_image for all nvmeof 2026-03-10T10:25:29.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:29 vm02.local ceph-mon[110129]: Upgrade: Updating node-exporter.vm02 (1/2) 2026-03-10T10:25:29.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:29 vm02.local ceph-mon[110129]: Deploying daemon node-exporter.vm02 on vm02 2026-03-10T10:25:29.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:29 vm05.local ceph-mon[103593]: Upgrade: Setting container_image for all ceph-exporter 2026-03-10T10:25:29.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:29 vm05.local ceph-mon[103593]: Upgrade: Setting container_image for all iscsi 2026-03-10T10:25:29.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:29 vm05.local ceph-mon[103593]: Upgrade: Setting container_image for all nfs 2026-03-10T10:25:29.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:29 vm05.local ceph-mon[103593]: Upgrade: Setting container_image for all nvmeof 2026-03-10T10:25:29.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:29 vm05.local 
ceph-mon[103593]: Upgrade: Updating node-exporter.vm02 (1/2) 2026-03-10T10:25:29.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:29 vm05.local ceph-mon[103593]: Deploying daemon node-exporter.vm02 on vm02 2026-03-10T10:25:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:30 vm02.local ceph-mon[110129]: pgmap v188: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.2 KiB/s wr, 11 op/s 2026-03-10T10:25:30.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:30 vm05.local ceph-mon[103593]: pgmap v188: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 4.2 KiB/s wr, 11 op/s 2026-03-10T10:25:32.484 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:32 vm02.local ceph-mon[110129]: pgmap v189: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 17 MiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-10T10:25:32.484 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:32 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:32.485 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:32 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:32.485 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:32 vm02.local ceph-mon[110129]: Upgrade: Updating node-exporter.vm05 (2/2) 2026-03-10T10:25:32.485 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:32 vm02.local ceph-mon[110129]: Deploying daemon node-exporter.vm05 on vm05 2026-03-10T10:25:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:32 vm05.local ceph-mon[103593]: pgmap v189: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 17 MiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-10T10:25:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:32 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' 
entity='mgr.vm02.zmavgl' 2026-03-10T10:25:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:32 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:32 vm05.local ceph-mon[103593]: Upgrade: Updating node-exporter.vm05 (2/2) 2026-03-10T10:25:32.538 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:32 vm05.local ceph-mon[103593]: Deploying daemon node-exporter.vm05 on vm05 2026-03-10T10:25:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:34 vm05.local ceph-mon[103593]: pgmap v190: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 17 MiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-10T10:25:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:34 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:34.456 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:34 vm02.local ceph-mon[110129]: pgmap v190: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 17 MiB/s rd, 4.2 KiB/s wr, 10 op/s 2026-03-10T10:25:34.456 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:34.456 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:34.456 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:34 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:36.209 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:35 vm02.local ceph-mon[110129]: pgmap v191: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 12 MiB/s rd, 4.0 KiB/s wr, 7 op/s 2026-03-10T10:25:36.209 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:35 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:36.209 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:35 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:36.209 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:35 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:36.209 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:35 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:36.209 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:35 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:36.209 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:35 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:36.269 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:35 vm05.local ceph-mon[103593]: pgmap v191: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 12 MiB/s rd, 4.0 KiB/s wr, 7 op/s 2026-03-10T10:25:36.269 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:35 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:36.270 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:35 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:36.270 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:35 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:36.270 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:35 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:36.270 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:35 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:36.270 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:35 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.267 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.267 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.267 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.267 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.267 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:37.267 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:36 vm02.local ceph-mon[110129]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.267 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.267 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:37.267 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:25:37.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:37.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:36 vm05.local ceph-mon[103593]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:37.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:25:37.984 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: pgmap v192: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 9.8 MiB/s rd, 4.0 KiB/s wr, 5 op/s 2026-03-10T10:25:37.984 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:37.984 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:37.984 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:37.984 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:37.984 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:37.984 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:37.985 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:37.985 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:37.985 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:37.985 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:37.985 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:37.985 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:37.985 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:37.985 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:37.985 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:37 vm02.local ceph-mon[110129]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: pgmap v192: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 9.8 MiB/s rd, 4.0 KiB/s wr, 5 op/s 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:39.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:39 vm02.local ceph-mon[110129]: Upgrade: Updating prometheus.vm02 2026-03-10T10:25:39.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:39 vm02.local ceph-mon[110129]: Deploying daemon prometheus.vm02 on vm02 2026-03-10T10:25:39.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 
10 10:25:39 vm05.local ceph-mon[103593]: Upgrade: Updating prometheus.vm02 2026-03-10T10:25:39.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:39 vm05.local ceph-mon[103593]: Deploying daemon prometheus.vm02 on vm02 2026-03-10T10:25:40.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:40 vm05.local ceph-mon[103593]: pgmap v193: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 3.1 MiB/s rd, 4.0 KiB/s wr, 4 op/s 2026-03-10T10:25:40.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:40 vm02.local ceph-mon[110129]: pgmap v193: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 3.1 MiB/s rd, 4.0 KiB/s wr, 4 op/s 2026-03-10T10:25:42.281 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:42 vm02.local ceph-mon[110129]: pgmap v194: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 4.1 KiB/s rd, 1 op/s 2026-03-10T10:25:42.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:42 vm05.local ceph-mon[103593]: pgmap v194: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 4.1 KiB/s rd, 1 op/s 2026-03-10T10:25:43.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.732+0000 7f325afcf700 1 -- 192.168.123.102:0/366831245 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f32540ff7e0 msgr2=0x7f3254101c10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:43.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.732+0000 7f325afcf700 1 --2- 192.168.123.102:0/366831245 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f32540ff7e0 0x7f3254101c10 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f3248009b00 tx=0x7f3248009e10 comp rx=0 tx=0).stop 2026-03-10T10:25:43.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.732+0000 7f325afcf700 1 -- 192.168.123.102:0/366831245 shutdown_connections 2026-03-10T10:25:43.734 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.733+0000 7f325afcf700 1 --2- 192.168.123.102:0/366831245 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3254102150 0x7f32541045e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:43.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.733+0000 7f325afcf700 1 --2- 192.168.123.102:0/366831245 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f32540ff7e0 0x7f3254101c10 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:43.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.733+0000 7f325afcf700 1 -- 192.168.123.102:0/366831245 >> 192.168.123.102:0/366831245 conn(0x7f32540fb3c0 msgr2=0x7f32540fd840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:43.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.734+0000 7f325afcf700 1 -- 192.168.123.102:0/366831245 shutdown_connections 2026-03-10T10:25:43.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.734+0000 7f325afcf700 1 -- 192.168.123.102:0/366831245 wait complete. 
2026-03-10T10:25:43.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.734+0000 7f325afcf700 1 Processor -- start 2026-03-10T10:25:43.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.734+0000 7f325afcf700 1 -- start start 2026-03-10T10:25:43.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.734+0000 7f325afcf700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3254102150 0x7f325410d3a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:43.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.734+0000 7f325afcf700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f325410d8e0 0x7f32541aac40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:43.734 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.734+0000 7f325afcf700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f325410dd60 con 0x7f325410d8e0 2026-03-10T10:25:43.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.734+0000 7f325afcf700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f325410ded0 con 0x7f3254102150 2026-03-10T10:25:43.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.734+0000 7f3253fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f325410d8e0 0x7f32541aac40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:43.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.734+0000 7f3253fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f325410d8e0 0x7f32541aac40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:36338/0 (socket says 192.168.123.102:36338) 2026-03-10T10:25:43.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.735+0000 7f3253fff700 1 -- 192.168.123.102:0/2725118687 learned_addr learned my addr 192.168.123.102:0/2725118687 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:43.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.735+0000 7f3258d6b700 1 --2- 192.168.123.102:0/2725118687 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3254102150 0x7f325410d3a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:43.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.735+0000 7f3253fff700 1 -- 192.168.123.102:0/2725118687 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3254102150 msgr2=0x7f325410d3a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:43.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.735+0000 7f3253fff700 1 --2- 192.168.123.102:0/2725118687 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3254102150 0x7f325410d3a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:43.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.735+0000 7f3253fff700 1 -- 192.168.123.102:0/2725118687 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f32480097e0 con 0x7f325410d8e0 2026-03-10T10:25:43.735 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.735+0000 7f3253fff700 1 --2- 192.168.123.102:0/2725118687 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f325410d8e0 0x7f32541aac40 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f324000d8d0 tx=0x7f324000dbe0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:25:43.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.735+0000 7f3251ffb700 1 -- 192.168.123.102:0/2725118687 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3240009880 con 0x7f325410d8e0 2026-03-10T10:25:43.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.736+0000 7f325afcf700 1 -- 192.168.123.102:0/2725118687 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f325406a7b0 con 0x7f325410d8e0 2026-03-10T10:25:43.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.736+0000 7f325afcf700 1 -- 192.168.123.102:0/2725118687 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f325406acb0 con 0x7f325410d8e0 2026-03-10T10:25:43.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.736+0000 7f3251ffb700 1 -- 192.168.123.102:0/2725118687 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3240010460 con 0x7f325410d8e0 2026-03-10T10:25:43.736 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.736+0000 7f3251ffb700 1 -- 192.168.123.102:0/2725118687 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f324000f5d0 con 0x7f325410d8e0 2026-03-10T10:25:43.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.738+0000 7f325afcf700 1 -- 192.168.123.102:0/2725118687 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3238005320 con 0x7f325410d8e0 2026-03-10T10:25:43.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.740+0000 7f3251ffb700 1 -- 192.168.123.102:0/2725118687 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f32400099e0 con 0x7f325410d8e0 2026-03-10T10:25:43.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.741+0000 
7f3251ffb700 1 --2- 192.168.123.102:0/2725118687 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f32440779e0 0x7f3244079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:43.741 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.741+0000 7f3251ffb700 1 -- 192.168.123.102:0/2725118687 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f32400997e0 con 0x7f325410d8e0 2026-03-10T10:25:43.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.741+0000 7f3258d6b700 1 --2- 192.168.123.102:0/2725118687 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f32440779e0 0x7f3244079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:43.742 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.742+0000 7f3258d6b700 1 --2- 192.168.123.102:0/2725118687 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f32440779e0 0x7f3244079ea0 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f3248000c00 tx=0x7f3248019040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:43.744 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.744+0000 7f3251ffb700 1 -- 192.168.123.102:0/2725118687 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3240061ee0 con 0x7f325410d8e0 2026-03-10T10:25:43.894 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.893+0000 7f325afcf700 1 -- 192.168.123.102:0/2725118687 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f3238000bf0 con 
0x7f32440779e0 2026-03-10T10:25:43.894 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.894+0000 7f3251ffb700 1 -- 192.168.123.102:0/2725118687 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+463 (secure 0 0 0) 0x7f3238000bf0 con 0x7f32440779e0 2026-03-10T10:25:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.896+0000 7f325afcf700 1 -- 192.168.123.102:0/2725118687 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f32440779e0 msgr2=0x7f3244079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.896+0000 7f325afcf700 1 --2- 192.168.123.102:0/2725118687 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f32440779e0 0x7f3244079ea0 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f3248000c00 tx=0x7f3248019040 comp rx=0 tx=0).stop 2026-03-10T10:25:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.897+0000 7f325afcf700 1 -- 192.168.123.102:0/2725118687 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f325410d8e0 msgr2=0x7f32541aac40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.897+0000 7f325afcf700 1 --2- 192.168.123.102:0/2725118687 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f325410d8e0 0x7f32541aac40 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f324000d8d0 tx=0x7f324000dbe0 comp rx=0 tx=0).stop 2026-03-10T10:25:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.897+0000 7f325afcf700 1 -- 192.168.123.102:0/2725118687 shutdown_connections 2026-03-10T10:25:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.897+0000 7f325afcf700 1 --2- 192.168.123.102:0/2725118687 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f32440779e0 0x7f3244079ea0 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.897+0000 7f325afcf700 1 --2- 192.168.123.102:0/2725118687 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3254102150 0x7f325410d3a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.897+0000 7f325afcf700 1 --2- 192.168.123.102:0/2725118687 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f325410d8e0 0x7f32541aac40 unknown :-1 s=CLOSED pgs=127 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.897+0000 7f325afcf700 1 -- 192.168.123.102:0/2725118687 >> 192.168.123.102:0/2725118687 conn(0x7f32540fb3c0 msgr2=0x7f3254101a00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:43.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.897+0000 7f325afcf700 1 -- 192.168.123.102:0/2725118687 shutdown_connections 2026-03-10T10:25:43.898 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.897+0000 7f325afcf700 1 -- 192.168.123.102:0/2725118687 wait complete. 
2026-03-10T10:25:43.912 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:25:43.983 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:43 vm02.local ceph-mon[110129]: pgmap v195: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 4.5 KiB/s rd, 2 op/s 2026-03-10T10:25:43.983 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:43 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:43.983 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:43 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:43.983 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:43 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.979+0000 7f9ebab63700 1 -- 192.168.123.102:0/2817261815 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9eac0a4830 msgr2=0x7f9eac0a4c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.979+0000 7f9ebab63700 1 --2- 192.168.123.102:0/2817261815 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9eac0a4830 0x7f9eac0a4c90 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f9eb4066a30 tx=0x7f9eb4069a50 comp rx=0 tx=0).stop 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.979+0000 7f9ebab63700 1 -- 192.168.123.102:0/2817261815 shutdown_connections 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.979+0000 7f9ebab63700 1 --2- 192.168.123.102:0/2817261815 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9eac0a4830 0x7f9eac0a4c90 unknown :-1 s=CLOSED pgs=128 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.979+0000 7f9ebab63700 1 --2- 192.168.123.102:0/2817261815 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac0a61f0 0x7f9eac0a6610 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.979+0000 7f9ebab63700 1 -- 192.168.123.102:0/2817261815 >> 192.168.123.102:0/2817261815 conn(0x7f9eac0a0190 msgr2=0x7f9eac0a25f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.979+0000 7f9ebab63700 1 -- 192.168.123.102:0/2817261815 shutdown_connections 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.979+0000 7f9ebab63700 1 -- 192.168.123.102:0/2817261815 wait complete. 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.982+0000 7f9ebab63700 1 Processor -- start 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.982+0000 7f9ebab63700 1 -- start start 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.982+0000 7f9ebab63700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9eac0a61f0 0x7f9eac0d0510 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.982+0000 7f9ebab63700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac0d0a50 0x7f9eac010e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.982+0000 7f9ebab63700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9eac0d0f60 con 0x7f9eac0a61f0 
2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.982+0000 7f9ebab63700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9eac0d10d0 con 0x7f9eac0d0a50 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.982+0000 7f9eb9360700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac0d0a50 0x7f9eac010e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.982+0000 7f9eb9360700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac0d0a50 0x7f9eac010e70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:54454/0 (socket says 192.168.123.102:54454) 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.982+0000 7f9eb9360700 1 -- 192.168.123.102:0/828921212 learned_addr learned my addr 192.168.123.102:0/828921212 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.983+0000 7f9eb9b61700 1 --2- 192.168.123.102:0/828921212 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9eac0a61f0 0x7f9eac0d0510 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.983+0000 7f9eb9360700 1 -- 192.168.123.102:0/828921212 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9eac0a61f0 msgr2=0x7f9eac0d0510 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.983+0000 
7f9eb9360700 1 --2- 192.168.123.102:0/828921212 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9eac0a61f0 0x7f9eac0d0510 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.983+0000 7f9eb9360700 1 -- 192.168.123.102:0/828921212 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9eb4067090 con 0x7f9eac0d0a50 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.983+0000 7f9eb9360700 1 --2- 192.168.123.102:0/828921212 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac0d0a50 0x7f9eac010e70 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f9eb4066b20 tx=0x7f9eb40688e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.983+0000 7f9eaaffd700 1 -- 192.168.123.102:0/828921212 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9eb4068d30 con 0x7f9eac0d0a50 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.983+0000 7f9eaaffd700 1 -- 192.168.123.102:0/828921212 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9eb407f070 con 0x7f9eac0d0a50 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.983+0000 7f9ebab63700 1 -- 192.168.123.102:0/828921212 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9eac0113b0 con 0x7f9eac0d0a50 2026-03-10T10:25:43.984 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.984+0000 7f9ebab63700 1 -- 192.168.123.102:0/828921212 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9eac011870 con 0x7f9eac0d0a50 
2026-03-10T10:25:43.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.984+0000 7f9eaaffd700 1 -- 192.168.123.102:0/828921212 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9eb407b630 con 0x7f9eac0d0a50 2026-03-10T10:25:43.985 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.985+0000 7f9ea8ff9700 1 -- 192.168.123.102:0/828921212 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9e98005320 con 0x7f9eac0d0a50 2026-03-10T10:25:43.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.986+0000 7f9eaaffd700 1 -- 192.168.123.102:0/828921212 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f9eb4081030 con 0x7f9eac0d0a50 2026-03-10T10:25:43.986 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.986+0000 7f9eaaffd700 1 --2- 192.168.123.102:0/828921212 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9ea00779e0 0x7f9ea0079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:43.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.986+0000 7f9eb9b61700 1 --2- 192.168.123.102:0/828921212 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9ea00779e0 0x7f9ea0079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:43.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.987+0000 7f9eb9b61700 1 --2- 192.168.123.102:0/828921212 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9ea00779e0 0x7f9ea0079ea0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f9eb0006fd0 tx=0x7f9eb0008040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:25:43.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.987+0000 7f9eaaffd700 1 -- 192.168.123.102:0/828921212 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f9eb40fe420 con 0x7f9eac0d0a50 2026-03-10T10:25:43.989 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:43.989+0000 7f9eaaffd700 1 -- 192.168.123.102:0/828921212 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9eb40c6b20 con 0x7f9eac0d0a50 2026-03-10T10:25:44.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:43 vm05.local ceph-mon[103593]: pgmap v195: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 4.5 KiB/s rd, 2 op/s 2026-03-10T10:25:44.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:43 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:44.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:43 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:44.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:43 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:44.134 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.132+0000 7f9ea8ff9700 1 -- 192.168.123.102:0/828921212 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f9e98000bf0 con 0x7f9ea00779e0 2026-03-10T10:25:44.136 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.133+0000 7f9eaaffd700 1 -- 192.168.123.102:0/828921212 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 
8+0+463 (secure 0 0 0) 0x7f9e98000bf0 con 0x7f9ea00779e0 2026-03-10T10:25:44.138 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.137+0000 7f9ea8ff9700 1 -- 192.168.123.102:0/828921212 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9ea00779e0 msgr2=0x7f9ea0079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.138 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.137+0000 7f9ea8ff9700 1 --2- 192.168.123.102:0/828921212 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9ea00779e0 0x7f9ea0079ea0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f9eb0006fd0 tx=0x7f9eb0008040 comp rx=0 tx=0).stop 2026-03-10T10:25:44.138 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.137+0000 7f9ea8ff9700 1 -- 192.168.123.102:0/828921212 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac0d0a50 msgr2=0x7f9eac010e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.138 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.137+0000 7f9ea8ff9700 1 --2- 192.168.123.102:0/828921212 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac0d0a50 0x7f9eac010e70 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f9eb4066b20 tx=0x7f9eb40688e0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.138 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.137+0000 7f9ea8ff9700 1 -- 192.168.123.102:0/828921212 shutdown_connections 2026-03-10T10:25:44.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.137+0000 7f9ea8ff9700 1 --2- 192.168.123.102:0/828921212 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9eac0a61f0 0x7f9eac0d0510 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.137+0000 7f9ea8ff9700 1 --2- 192.168.123.102:0/828921212 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9ea00779e0 0x7f9ea0079ea0 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.137+0000 7f9ea8ff9700 1 --2- 192.168.123.102:0/828921212 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac0d0a50 0x7f9eac010e70 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.139 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.137+0000 7f9ea8ff9700 1 -- 192.168.123.102:0/828921212 >> 192.168.123.102:0/828921212 conn(0x7f9eac0a0190 msgr2=0x7f9eac0a22b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:44.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.139+0000 7f9ea8ff9700 1 -- 192.168.123.102:0/828921212 shutdown_connections 2026-03-10T10:25:44.140 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.139+0000 7f9ea8ff9700 1 -- 192.168.123.102:0/828921212 wait complete. 
2026-03-10T10:25:44.225 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.224+0000 7fc2d5225700 1 -- 192.168.123.102:0/1187295828 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2d010a700 msgr2=0x7fc2d010cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.225 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.224+0000 7fc2d5225700 1 --2- 192.168.123.102:0/1187295828 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2d010a700 0x7fc2d010cb90 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7fc2c801c580 tx=0x7fc2c801c890 comp rx=0 tx=0).stop 2026-03-10T10:25:44.225 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.224+0000 7fc2d5225700 1 -- 192.168.123.102:0/1187295828 shutdown_connections 2026-03-10T10:25:44.225 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.224+0000 7fc2d5225700 1 --2- 192.168.123.102:0/1187295828 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2d010a700 0x7fc2d010cb90 unknown :-1 s=CLOSED pgs=129 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.225 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.224+0000 7fc2d5225700 1 --2- 192.168.123.102:0/1187295828 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2d0107d90 0x7fc2d010a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.225 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.224+0000 7fc2d5225700 1 -- 192.168.123.102:0/1187295828 >> 192.168.123.102:0/1187295828 conn(0x7fc2d006dae0 msgr2=0x7fc2d006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:44.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.226+0000 7fc2d5225700 1 -- 192.168.123.102:0/1187295828 shutdown_connections 2026-03-10T10:25:44.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.226+0000 7fc2d5225700 1 -- 192.168.123.102:0/1187295828 
wait complete. 2026-03-10T10:25:44.226 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.226+0000 7fc2d5225700 1 Processor -- start 2026-03-10T10:25:44.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.226+0000 7fc2d5225700 1 -- start start 2026-03-10T10:25:44.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.226+0000 7fc2d5225700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2d0107d90 0x7fc2d019cc80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:44.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.226+0000 7fc2d5225700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2d010a700 0x7fc2d019d1c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:44.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.226+0000 7fc2d5225700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc2d019d7c0 con 0x7fc2d0107d90 2026-03-10T10:25:44.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.226+0000 7fc2d5225700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc2d019d930 con 0x7fc2d010a700 2026-03-10T10:25:44.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.227+0000 7fc2ce59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2d010a700 0x7fc2d019d1c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:44.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.227+0000 7fc2ce59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2d010a700 0x7fc2d019d1c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.102:54460/0 (socket says 192.168.123.102:54460) 2026-03-10T10:25:44.227 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.227+0000 7fc2ce59c700 1 -- 192.168.123.102:0/837395763 learned_addr learned my addr 192.168.123.102:0/837395763 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:44.228 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.227+0000 7fc2ce59c700 1 -- 192.168.123.102:0/837395763 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2d0107d90 msgr2=0x7fc2d019cc80 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:25:44.228 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.227+0000 7fc2ce59c700 1 --2- 192.168.123.102:0/837395763 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2d0107d90 0x7fc2d019cc80 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.228 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.227+0000 7fc2ce59c700 1 -- 192.168.123.102:0/837395763 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc2c801c060 con 0x7fc2d010a700 2026-03-10T10:25:44.228 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.228+0000 7fc2ce59c700 1 --2- 192.168.123.102:0/837395763 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2d010a700 0x7fc2d019d1c0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fc2c8026040 tx=0x7fc2c8009580 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:44.228 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.228+0000 7fc2b7fff700 1 -- 192.168.123.102:0/837395763 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc2c8003ef0 con 0x7fc2d010a700 2026-03-10T10:25:44.229 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.228+0000 7fc2d5225700 1 -- 
192.168.123.102:0/837395763 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc2d01a2360 con 0x7fc2d010a700 2026-03-10T10:25:44.229 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.228+0000 7fc2d5225700 1 -- 192.168.123.102:0/837395763 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc2d01a2850 con 0x7fc2d010a700 2026-03-10T10:25:44.229 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.229+0000 7fc2b7fff700 1 -- 192.168.123.102:0/837395763 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc2c8004630 con 0x7fc2d010a700 2026-03-10T10:25:44.229 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.229+0000 7fc2b7fff700 1 -- 192.168.123.102:0/837395763 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc2c8023c80 con 0x7fc2d010a700 2026-03-10T10:25:44.229 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.229+0000 7fc2d5225700 1 -- 192.168.123.102:0/837395763 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc2bc005320 con 0x7fc2d010a700 2026-03-10T10:25:44.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.230+0000 7fc2b7fff700 1 -- 192.168.123.102:0/837395763 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc2c802a070 con 0x7fc2d010a700 2026-03-10T10:25:44.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.230+0000 7fc2b7fff700 1 --2- 192.168.123.102:0/837395763 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc2b80778e0 0x7fc2b8079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:44.234 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.230+0000 7fc2b7fff700 1 -- 192.168.123.102:0/837395763 <== 
mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7fc2c80ac670 con 0x7fc2d010a700 2026-03-10T10:25:44.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.234+0000 7fc2b7fff700 1 -- 192.168.123.102:0/837395763 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc2c8074d70 con 0x7fc2d010a700 2026-03-10T10:25:44.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.235+0000 7fc2ced9d700 1 --2- 192.168.123.102:0/837395763 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc2b80778e0 0x7fc2b8079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:44.235 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.235+0000 7fc2ced9d700 1 --2- 192.168.123.102:0/837395763 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc2b80778e0 0x7fc2b8079da0 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7fc2d019db60 tx=0x7fc2c4006c60 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:44.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.396+0000 7fc2d5225700 1 -- 192.168.123.102:0/837395763 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fc2bc000bf0 con 0x7fc2b80778e0 2026-03-10T10:25:44.402 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.401+0000 7fc2b7fff700 1 -- 192.168.123.102:0/837395763 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7fc2bc000bf0 con 0x7fc2b80778e0 2026-03-10T10:25:44.406 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM 
USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:25:44.406 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (10m) 0s ago 10m 23.6M - 0.25.0 c8568f914cd2 2b779430dfc4 2026-03-10T10:25:44.406 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (22s) 0s ago 11m 10.4M - 19.2.3-678-ge911bdeb 654f31e6858e 8087bcfa99e6 2026-03-10T10:25:44.406 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (20s) 9s ago 10m 10.3M - 19.2.3-678-ge911bdeb 654f31e6858e 1a5d21254a78 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (4m) 0s ago 11m 7838k - 19.2.3-678-ge911bdeb 654f31e6858e c494730ab019 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (4m) 9s ago 10m 7852k - 19.2.3-678-ge911bdeb 654f31e6858e 1dc17b49fee4 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (10m) 0s ago 10m 91.4M - 9.4.7 954c08fa6188 f310d22468b8 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (48s) 0s ago 8m 92.0M - 19.2.3-678-ge911bdeb 654f31e6858e 5e606b7866f6 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (57s) 0s ago 8m 101M - 19.2.3-678-ge911bdeb 654f31e6858e f748fd699eac 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (37s) 9s ago 8m 17.4M - 19.2.3-678-ge911bdeb 654f31e6858e 3385525a533a 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (27s) 9s ago 8m 13.7M - 19.2.3-678-ge911bdeb 654f31e6858e 16684b1d1299 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (5m) 0s ago 11m 630M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 
*:8443,9283,8765 running (5m) 9s ago 10m 490M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (5m) 0s ago 11m 66.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a2a2cb182f4 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (5m) 9s ago 10m 54.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 3fb75dafefb6 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (13s) 0s ago 10m 6098k - 1.7.0 72c9c2088986 d9fc4bda3e14 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (10s) 9s ago 10m 2911k - 1.7.0 72c9c2088986 7fe7006c8ad2 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (4m) 0s ago 10m 233M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 319155aac718 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (2m) 0s ago 9m 134M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 6b6be7f62bd3 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (2m) 0s ago 9m 127M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 745b9931485f 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (114s) 9s ago 9m 172M 4096M 19.2.3-678-ge911bdeb 654f31e6858e fe29904ecf52 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (93s) 9s ago 9m 149M 4096M 19.2.3-678-ge911bdeb 654f31e6858e fe0b3f802cec 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (73s) 9s ago 9m 133M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c60f7383494f 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (1s) 0s ago 10m 48.3M - 2.51.0 1d3b7f56885b 27a06ba517f6 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.406+0000 7fc2d5225700 1 -- 192.168.123.102:0/837395763 
>> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc2b80778e0 msgr2=0x7fc2b8079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.406+0000 7fc2d5225700 1 --2- 192.168.123.102:0/837395763 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc2b80778e0 0x7fc2b8079da0 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7fc2d019db60 tx=0x7fc2c4006c60 comp rx=0 tx=0).stop 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.406+0000 7fc2d5225700 1 -- 192.168.123.102:0/837395763 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2d010a700 msgr2=0x7fc2d019d1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.406+0000 7fc2d5225700 1 --2- 192.168.123.102:0/837395763 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2d010a700 0x7fc2d019d1c0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fc2c8026040 tx=0x7fc2c8009580 comp rx=0 tx=0).stop 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.407+0000 7fc2d5225700 1 -- 192.168.123.102:0/837395763 shutdown_connections 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.407+0000 7fc2d5225700 1 --2- 192.168.123.102:0/837395763 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc2d0107d90 0x7fc2d019cc80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.407+0000 7fc2d5225700 1 --2- 192.168.123.102:0/837395763 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc2b80778e0 0x7fc2b8079da0 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.407+0000 7fc2d5225700 1 --2- 192.168.123.102:0/837395763 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc2d010a700 0x7fc2d019d1c0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.407 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.407+0000 7fc2d5225700 1 -- 192.168.123.102:0/837395763 >> 192.168.123.102:0/837395763 conn(0x7fc2d006dae0 msgr2=0x7fc2d006ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:44.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.407+0000 7fc2d5225700 1 -- 192.168.123.102:0/837395763 shutdown_connections 2026-03-10T10:25:44.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.407+0000 7fc2d5225700 1 -- 192.168.123.102:0/837395763 wait complete. 2026-03-10T10:25:44.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.486+0000 7f321968c700 1 -- 192.168.123.102:0/1027074641 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f321410a700 msgr2=0x7f321410cb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.486+0000 7f321968c700 1 --2- 192.168.123.102:0/1027074641 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f321410a700 0x7f321410cb90 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f320c00b3a0 tx=0x7f320c00b6b0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.487+0000 7f321968c700 1 -- 192.168.123.102:0/1027074641 shutdown_connections 2026-03-10T10:25:44.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.487+0000 7f321968c700 1 --2- 192.168.123.102:0/1027074641 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f321410a700 0x7f321410cb90 unknown :-1 s=CLOSED pgs=130 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T10:25:44.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.487+0000 7f321968c700 1 --2- 192.168.123.102:0/1027074641 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3214107d90 0x7f321410a1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.489 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.487+0000 7f321968c700 1 -- 192.168.123.102:0/1027074641 >> 192.168.123.102:0/1027074641 conn(0x7f321406dae0 msgr2=0x7f321406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:44.490 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.488+0000 7f321968c700 1 -- 192.168.123.102:0/1027074641 shutdown_connections 2026-03-10T10:25:44.490 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.488+0000 7f321968c700 1 -- 192.168.123.102:0/1027074641 wait complete. 2026-03-10T10:25:44.490 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.489+0000 7f321968c700 1 Processor -- start 2026-03-10T10:25:44.491 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.489+0000 7f321968c700 1 -- start start 2026-03-10T10:25:44.491 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.489+0000 7f321968c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3214107d90 0x7f32141a5440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:44.491 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.489+0000 7f321968c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f321410a700 0x7f32141a5980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:44.491 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.489+0000 7f321968c700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f32141a5fa0 con 0x7f3214107d90 2026-03-10T10:25:44.491 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.489+0000 7f321968c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f32141a60e0 con 0x7f321410a700 2026-03-10T10:25:44.491 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.490+0000 7f32127fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f321410a700 0x7f32141a5980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:44.491 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.490+0000 7f3212ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3214107d90 0x7f32141a5440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:44.491 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.490+0000 7f3212ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3214107d90 0x7f32141a5440 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:36372/0 (socket says 192.168.123.102:36372) 2026-03-10T10:25:44.491 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.490+0000 7f3212ffd700 1 -- 192.168.123.102:0/1593252115 learned_addr learned my addr 192.168.123.102:0/1593252115 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:44.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.490+0000 7f32127fc700 1 -- 192.168.123.102:0/1593252115 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3214107d90 msgr2=0x7f32141a5440 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.490+0000 7f32127fc700 1 --2- 192.168.123.102:0/1593252115 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3214107d90 0x7f32141a5440 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.490+0000 7f32127fc700 1 -- 192.168.123.102:0/1593252115 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f320c00b050 con 0x7f321410a700 2026-03-10T10:25:44.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.491+0000 7f32127fc700 1 --2- 192.168.123.102:0/1593252115 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f321410a700 0x7f32141a5980 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f320c000f80 tx=0x7f320c008ef0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:44.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.492+0000 7f31fbfff700 1 -- 192.168.123.102:0/1593252115 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f320c00e050 con 0x7f321410a700 2026-03-10T10:25:44.493 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.492+0000 7f31fbfff700 1 -- 192.168.123.102:0/1593252115 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f320c0047d0 con 0x7f321410a700 2026-03-10T10:25:44.493 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.492+0000 7f31fbfff700 1 -- 192.168.123.102:0/1593252115 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f320c01daa0 con 0x7f321410a700 2026-03-10T10:25:44.493 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.492+0000 7f321968c700 1 -- 192.168.123.102:0/1593252115 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3214077120 con 0x7f321410a700 2026-03-10T10:25:44.493 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.492+0000 7f321968c700 1 -- 192.168.123.102:0/1593252115 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f32140776f0 con 0x7f321410a700 2026-03-10T10:25:44.493 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.493+0000 7f321968c700 1 -- 192.168.123.102:0/1593252115 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f321419f600 con 0x7f321410a700 2026-03-10T10:25:44.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.494+0000 7f31fbfff700 1 -- 192.168.123.102:0/1593252115 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f320c019040 con 0x7f321410a700 2026-03-10T10:25:44.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.494+0000 7f31fbfff700 1 --2- 192.168.123.102:0/1593252115 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f31fc0779e0 0x7f31fc079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:44.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.494+0000 7f31fbfff700 1 -- 192.168.123.102:0/1593252115 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f320c09b860 con 0x7f321410a700 2026-03-10T10:25:44.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.495+0000 7f3212ffd700 1 --2- 192.168.123.102:0/1593252115 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f31fc0779e0 0x7f31fc079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:44.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.495+0000 7f3212ffd700 1 --2- 192.168.123.102:0/1593252115 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f31fc0779e0 0x7f31fc079ea0 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f3208009cc0 tx=0x7f3208009480 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:44.496 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.496+0000 7f31fbfff700 1 -- 192.168.123.102:0/1593252115 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f320c063f60 con 0x7f321410a700 2026-03-10T10:25:44.667 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.666+0000 7f321968c700 1 -- 192.168.123.102:0/1593252115 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f321404ea90 con 0x7f321410a700 2026-03-10T10:25:44.667 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:25:44.667 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:25:44.668 
INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 14 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:25:44.668 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.667+0000 7f31fbfff700 1 -- 192.168.123.102:0/1593252115 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f320c0636b0 con 0x7f321410a700 2026-03-10T10:25:44.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.669+0000 7f31f9ffb700 1 -- 192.168.123.102:0/1593252115 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f31fc0779e0 msgr2=0x7f31fc079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.669+0000 7f31f9ffb700 1 --2- 192.168.123.102:0/1593252115 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f31fc0779e0 0x7f31fc079ea0 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f3208009cc0 tx=0x7f3208009480 comp rx=0 tx=0).stop 2026-03-10T10:25:44.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.669+0000 7f31f9ffb700 1 -- 192.168.123.102:0/1593252115 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f321410a700 msgr2=0x7f32141a5980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.669+0000 7f31f9ffb700 1 --2- 
192.168.123.102:0/1593252115 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f321410a700 0x7f32141a5980 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f320c000f80 tx=0x7f320c008ef0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.670+0000 7f31f9ffb700 1 -- 192.168.123.102:0/1593252115 shutdown_connections 2026-03-10T10:25:44.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.670+0000 7f31f9ffb700 1 --2- 192.168.123.102:0/1593252115 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3214107d90 0x7f32141a5440 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.670+0000 7f31f9ffb700 1 --2- 192.168.123.102:0/1593252115 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f31fc0779e0 0x7f31fc079ea0 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.670+0000 7f31f9ffb700 1 --2- 192.168.123.102:0/1593252115 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f321410a700 0x7f32141a5980 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.670+0000 7f31f9ffb700 1 -- 192.168.123.102:0/1593252115 >> 192.168.123.102:0/1593252115 conn(0x7f321406dae0 msgr2=0x7f321406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:44.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.670+0000 7f31f9ffb700 1 -- 192.168.123.102:0/1593252115 shutdown_connections 2026-03-10T10:25:44.670 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.670+0000 7f31f9ffb700 1 -- 192.168.123.102:0/1593252115 wait complete. 
2026-03-10T10:25:44.749 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.745+0000 7fefec2ef700 1 -- 192.168.123.102:0/4146960119 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fefe4075a40 msgr2=0x7fefe4077ed0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.749 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.745+0000 7fefec2ef700 1 --2- 192.168.123.102:0/4146960119 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fefe4075a40 0x7fefe4077ed0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7fefdc00b3a0 tx=0x7fefdc00b6b0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.749 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.745+0000 7fefec2ef700 1 -- 192.168.123.102:0/4146960119 shutdown_connections 2026-03-10T10:25:44.749 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.745+0000 7fefec2ef700 1 --2- 192.168.123.102:0/4146960119 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fefe4075a40 0x7fefe4077ed0 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.749 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.745+0000 7fefec2ef700 1 --2- 192.168.123.102:0/4146960119 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fefe4072b50 0x7fefe4072f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.749 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.745+0000 7fefec2ef700 1 -- 192.168.123.102:0/4146960119 >> 192.168.123.102:0/4146960119 conn(0x7fefe406dae0 msgr2=0x7fefe406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.748+0000 7fefec2ef700 1 -- 192.168.123.102:0/4146960119 shutdown_connections 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.748+0000 7fefec2ef700 1 -- 192.168.123.102:0/4146960119 
wait complete. 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.748+0000 7fefec2ef700 1 Processor -- start 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.748+0000 7fefec2ef700 1 -- start start 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.748+0000 7fefec2ef700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fefe4072b50 0x7fefe40830b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.748+0000 7fefec2ef700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fefe40835f0 0x7fefe412e490 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.748+0000 7fefec2ef700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fefe4083a70 con 0x7fefe4072b50 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.748+0000 7fefec2ef700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fefe4083be0 con 0x7fefe40835f0 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.749+0000 7fefe988a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fefe40835f0 0x7fefe412e490 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.749+0000 7fefe988a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fefe40835f0 0x7fefe412e490 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.102:54502/0 (socket says 192.168.123.102:54502) 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.749+0000 7fefe988a700 1 -- 192.168.123.102:0/3282857957 learned_addr learned my addr 192.168.123.102:0/3282857957 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.749+0000 7fefea08b700 1 --2- 192.168.123.102:0/3282857957 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fefe4072b50 0x7fefe40830b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.749+0000 7fefe988a700 1 -- 192.168.123.102:0/3282857957 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fefe4072b50 msgr2=0x7fefe40830b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.750+0000 7fefe988a700 1 --2- 192.168.123.102:0/3282857957 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fefe4072b50 0x7fefe40830b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.750+0000 7fefe988a700 1 -- 192.168.123.102:0/3282857957 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fefe0009710 con 0x7fefe40835f0 2026-03-10T10:25:44.750 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.750+0000 7fefea08b700 1 --2- 192.168.123.102:0/3282857957 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fefe4072b50 0x7fefe40830b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-10T10:25:44.751 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.751+0000 7fefe988a700 1 --2- 192.168.123.102:0/3282857957 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fefe40835f0 0x7fefe412e490 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fefdc00ba80 tx=0x7fefdc009580 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:44.751 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.751+0000 7fefdb7fe700 1 -- 192.168.123.102:0/3282857957 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fefdc00e050 con 0x7fefe40835f0 2026-03-10T10:25:44.751 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.751+0000 7fefdb7fe700 1 -- 192.168.123.102:0/3282857957 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fefdc003e80 con 0x7fefe40835f0 2026-03-10T10:25:44.751 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.751+0000 7fefdb7fe700 1 -- 192.168.123.102:0/3282857957 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fefdc01baf0 con 0x7fefe40835f0 2026-03-10T10:25:44.752 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.752+0000 7fefec2ef700 1 -- 192.168.123.102:0/3282857957 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fefdc00b050 con 0x7fefe40835f0 2026-03-10T10:25:44.752 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.752+0000 7fefec2ef700 1 -- 192.168.123.102:0/3282857957 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fefe412ece0 con 0x7fefe40835f0 2026-03-10T10:25:44.752 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.752+0000 7fefec2ef700 1 -- 192.168.123.102:0/3282857957 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7fefe407acc0 con 0x7fefe40835f0 2026-03-10T10:25:44.754 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.753+0000 7fefdb7fe700 1 -- 192.168.123.102:0/3282857957 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fefdc019040 con 0x7fefe40835f0 2026-03-10T10:25:44.754 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.754+0000 7fefdb7fe700 1 --2- 192.168.123.102:0/3282857957 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fefd0077910 0x7fefd0079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:44.754 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.754+0000 7fefdb7fe700 1 -- 192.168.123.102:0/3282857957 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7fefdc029030 con 0x7fefe40835f0 2026-03-10T10:25:44.756 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.755+0000 7fefea08b700 1 --2- 192.168.123.102:0/3282857957 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fefd0077910 0x7fefd0079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:44.756 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.756+0000 7fefdb7fe700 1 -- 192.168.123.102:0/3282857957 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fefdc063f70 con 0x7fefe40835f0 2026-03-10T10:25:44.756 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.756+0000 7fefea08b700 1 --2- 192.168.123.102:0/3282857957 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fefd0077910 0x7fefd0079dd0 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7fefe412a400 
tx=0x7fefe0009450 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:44.917 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.916+0000 7fefec2ef700 1 -- 192.168.123.102:0/3282857957 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fefe412f3e0 con 0x7fefe40835f0 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.917+0000 7fefdb7fe700 1 -- 192.168.123.102:0/3282857957 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 40 v40) v1 ==== 76+0+1999 (secure 0 0 0) 0x7fefdc004a10 con 0x7fefe40835f0 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:e40 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:btime 2026-03-10T10:25:20:367937+0000 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:epoch 40 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:25:44.918 
INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:25:20.367934+0000 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:max_xattr_size 65536 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 83 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:up {0=34360} 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:25:44.918 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:25:44.919 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:25:44.919 
INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:25:44.919 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:25:44.919 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:25:44.919 INFO:teuthology.orchestra.run.vm02.stdout:qdb_cluster leader: 34360 members: 34360 2026-03-10T10:25:44.919 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:34360} state up:active seq 11 join_fscid=1 addr [v2:192.168.123.102:6826/965109167,v1:192.168.123.102:6827/965109167] compat {c=[1],r=[1],i=[1fff]}] 2026-03-10T10:25:44.919 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{0:34364} state up:standby-replay seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/3727526116,v1:192.168.123.102:6829/3727526116] compat {c=[1],r=[1],i=[1fff]}] 2026-03-10T10:25:44.919 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:25:44.919 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:25:44.919 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:25:44.919 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:25:44.919 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:34368} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/462039658,v1:192.168.123.105:6825/462039658] compat {c=[1],r=[1],i=[1fff]}] 2026-03-10T10:25:44.919 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{-1:44325} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6826/682293963,v1:192.168.123.105:6827/682293963] compat {c=[1],r=[1],i=[1fff]}] 2026-03-10T10:25:44.920 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.920+0000 7fefd977a700 1 -- 192.168.123.102:0/3282857957 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fefd0077910 msgr2=0x7fefd0079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.920 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.920+0000 
7fefd977a700 1 --2- 192.168.123.102:0/3282857957 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fefd0077910 0x7fefd0079dd0 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7fefe412a400 tx=0x7fefe0009450 comp rx=0 tx=0).stop 2026-03-10T10:25:44.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.920+0000 7fefd977a700 1 -- 192.168.123.102:0/3282857957 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fefe40835f0 msgr2=0x7fefe412e490 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:44.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.921+0000 7fefd977a700 1 --2- 192.168.123.102:0/3282857957 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fefe40835f0 0x7fefe412e490 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fefdc00ba80 tx=0x7fefdc009580 comp rx=0 tx=0).stop 2026-03-10T10:25:44.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.921+0000 7fefd977a700 1 -- 192.168.123.102:0/3282857957 shutdown_connections 2026-03-10T10:25:44.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.921+0000 7fefd977a700 1 --2- 192.168.123.102:0/3282857957 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fefe4072b50 0x7fefe40830b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.921+0000 7fefd977a700 1 --2- 192.168.123.102:0/3282857957 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fefd0077910 0x7fefd0079dd0 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.921+0000 7fefd977a700 1 --2- 192.168.123.102:0/3282857957 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fefe40835f0 0x7fefe412e490 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:44.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.921+0000 7fefd977a700 1 -- 192.168.123.102:0/3282857957 >> 192.168.123.102:0/3282857957 conn(0x7fefe406dae0 msgr2=0x7fefe406ff40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:44.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.921+0000 7fefd977a700 1 -- 192.168.123.102:0/3282857957 shutdown_connections 2026-03-10T10:25:44.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:44.921+0000 7fefd977a700 1 -- 192.168.123.102:0/3282857957 wait complete. 2026-03-10T10:25:44.922 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 40 2026-03-10T10:25:45.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:44 vm02.local ceph-mon[110129]: from='client.34394 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:25:45.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:44 vm02.local ceph-mon[110129]: from='client.44335 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:25:45.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:44 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:45.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:44 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:45.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:44 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/1593252115' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:45.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:44 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:45.011 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:44 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:45.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.007+0000 7fbd779ca700 1 -- 192.168.123.102:0/3366626578 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd70107d90 msgr2=0x7fbd7010a1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:45.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.007+0000 7fbd779ca700 1 --2- 192.168.123.102:0/3366626578 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd70107d90 0x7fbd7010a1c0 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fbd6c009b00 tx=0x7fbd6c009e10 comp rx=0 tx=0).stop 2026-03-10T10:25:45.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.009+0000 7fbd779ca700 1 -- 192.168.123.102:0/3366626578 shutdown_connections 2026-03-10T10:25:45.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.009+0000 7fbd779ca700 1 --2- 192.168.123.102:0/3366626578 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd7010a700 0x7fbd7010cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:45.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.009+0000 7fbd779ca700 1 --2- 192.168.123.102:0/3366626578 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd70107d90 0x7fbd7010a1c0 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:45.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.009+0000 7fbd779ca700 1 
-- 192.168.123.102:0/3366626578 >> 192.168.123.102:0/3366626578 conn(0x7fbd7006dda0 msgr2=0x7fbd70070220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:45.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.010+0000 7fbd779ca700 1 -- 192.168.123.102:0/3366626578 shutdown_connections 2026-03-10T10:25:45.011 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.010+0000 7fbd779ca700 1 -- 192.168.123.102:0/3366626578 wait complete. 2026-03-10T10:25:45.012 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.011+0000 7fbd779ca700 1 Processor -- start 2026-03-10T10:25:45.012 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.011+0000 7fbd779ca700 1 -- start start 2026-03-10T10:25:45.012 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.012+0000 7fbd779ca700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd70107d90 0x7fbd701a58a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:45.012 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.012+0000 7fbd779ca700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd7010a700 0x7fbd701a5de0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:45.012 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.012+0000 7fbd779ca700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbd701a6370 con 0x7fbd7010a700 2026-03-10T10:25:45.012 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.012+0000 7fbd779ca700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbd701a64b0 con 0x7fbd70107d90 2026-03-10T10:25:45.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.012+0000 7fbd74f65700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd7010a700 0x7fbd701a5de0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:45.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.012+0000 7fbd74f65700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd7010a700 0x7fbd701a5de0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:36394/0 (socket says 192.168.123.102:36394) 2026-03-10T10:25:45.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.012+0000 7fbd74f65700 1 -- 192.168.123.102:0/3927860516 learned_addr learned my addr 192.168.123.102:0/3927860516 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:45.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.012+0000 7fbd75766700 1 --2- 192.168.123.102:0/3927860516 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd70107d90 0x7fbd701a58a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:45.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.012+0000 7fbd74f65700 1 -- 192.168.123.102:0/3927860516 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd70107d90 msgr2=0x7fbd701a58a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:45.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.012+0000 7fbd74f65700 1 --2- 192.168.123.102:0/3927860516 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd70107d90 0x7fbd701a58a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:45.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.012+0000 7fbd74f65700 1 -- 192.168.123.102:0/3927860516 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbd6c0097e0 con 0x7fbd7010a700 2026-03-10T10:25:45.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.012+0000 7fbd74f65700 1 --2- 192.168.123.102:0/3927860516 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd7010a700 0x7fbd701a5de0 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7fbd6000c930 tx=0x7fbd6000ccf0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:45.013 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.013+0000 7fbd667fc700 1 -- 192.168.123.102:0/3927860516 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbd60007ab0 con 0x7fbd7010a700 2026-03-10T10:25:45.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.013+0000 7fbd667fc700 1 -- 192.168.123.102:0/3927860516 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fbd60007c10 con 0x7fbd7010a700 2026-03-10T10:25:45.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.013+0000 7fbd779ca700 1 -- 192.168.123.102:0/3927860516 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbd701aaf70 con 0x7fbd7010a700 2026-03-10T10:25:45.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.013+0000 7fbd779ca700 1 -- 192.168.123.102:0/3927860516 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbd701ab490 con 0x7fbd7010a700 2026-03-10T10:25:45.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.014+0000 7fbd667fc700 1 -- 192.168.123.102:0/3927860516 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbd6000e690 con 0x7fbd7010a700 2026-03-10T10:25:45.017 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.014+0000 7fbd779ca700 1 -- 192.168.123.102:0/3927860516 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbd54005320 con 0x7fbd7010a700 2026-03-10T10:25:45.020 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.020+0000 7fbd667fc700 1 -- 192.168.123.102:0/3927860516 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fbd6000e8f0 con 0x7fbd7010a700 2026-03-10T10:25:45.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.020+0000 7fbd667fc700 1 --2- 192.168.123.102:0/3927860516 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbd5c077700 0x7fbd5c079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:45.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.021+0000 7fbd667fc700 1 -- 192.168.123.102:0/3927860516 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7fbd60099da0 con 0x7fbd7010a700 2026-03-10T10:25:45.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.021+0000 7fbd75766700 1 --2- 192.168.123.102:0/3927860516 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbd5c077700 0x7fbd5c079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:45.021 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.021+0000 7fbd667fc700 1 -- 192.168.123.102:0/3927860516 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fbd6009a220 con 0x7fbd7010a700 2026-03-10T10:25:45.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.021+0000 7fbd75766700 1 --2- 192.168.123.102:0/3927860516 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbd5c077700 0x7fbd5c079bc0 
secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7fbd6c005fd0 tx=0x7fbd6c009f90 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:45.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.187+0000 7fbd779ca700 1 -- 192.168.123.102:0/3927860516 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fbd54000bf0 con 0x7fbd5c077700 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.188+0000 7fbd667fc700 1 -- 192.168.123.102:0/3927860516 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+463 (secure 0 0 0) 0x7fbd54000bf0 con 0x7fbd5c077700 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [ 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "mgr", 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "mon", 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "mds", 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "ceph-exporter", 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "crash", 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "osd" 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "18/23 daemons upgraded", 
2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading prometheus daemons", 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:25:45.189 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:25:45.192 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.191+0000 7fbd779ca700 1 -- 192.168.123.102:0/3927860516 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbd5c077700 msgr2=0x7fbd5c079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:45.192 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.192+0000 7fbd779ca700 1 --2- 192.168.123.102:0/3927860516 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbd5c077700 0x7fbd5c079bc0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7fbd6c005fd0 tx=0x7fbd6c009f90 comp rx=0 tx=0).stop 2026-03-10T10:25:45.192 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.192+0000 7fbd779ca700 1 -- 192.168.123.102:0/3927860516 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd7010a700 msgr2=0x7fbd701a5de0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:45.192 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.192+0000 7fbd779ca700 1 --2- 192.168.123.102:0/3927860516 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd7010a700 0x7fbd701a5de0 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7fbd6000c930 tx=0x7fbd6000ccf0 comp rx=0 tx=0).stop 2026-03-10T10:25:45.192 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.192+0000 7fbd779ca700 1 -- 192.168.123.102:0/3927860516 shutdown_connections 2026-03-10T10:25:45.192 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.192+0000 7fbd779ca700 1 --2- 192.168.123.102:0/3927860516 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fbd5c077700 
0x7fbd5c079bc0 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:45.192 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.192+0000 7fbd779ca700 1 --2- 192.168.123.102:0/3927860516 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd70107d90 0x7fbd701a58a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:45.193 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.192+0000 7fbd779ca700 1 --2- 192.168.123.102:0/3927860516 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fbd7010a700 0x7fbd701a5de0 unknown :-1 s=CLOSED pgs=131 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:45.193 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.192+0000 7fbd779ca700 1 -- 192.168.123.102:0/3927860516 >> 192.168.123.102:0/3927860516 conn(0x7fbd7006dda0 msgr2=0x7fbd7010c2f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:45.193 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.193+0000 7fbd779ca700 1 -- 192.168.123.102:0/3927860516 shutdown_connections 2026-03-10T10:25:45.193 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.193+0000 7fbd779ca700 1 -- 192.168.123.102:0/3927860516 wait complete. 
2026-03-10T10:25:45.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.280+0000 7f60d833d700 1 -- 192.168.123.102:0/1414916561 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60d0102e80 msgr2=0x7f60d01032a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:45.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.280+0000 7f60d833d700 1 --2- 192.168.123.102:0/1414916561 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60d0102e80 0x7f60d01032a0 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7f60cc009b50 tx=0x7f60cc009e60 comp rx=0 tx=0).stop 2026-03-10T10:25:45.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.281+0000 7f60d833d700 1 -- 192.168.123.102:0/1414916561 shutdown_connections 2026-03-10T10:25:45.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.281+0000 7f60d833d700 1 --2- 192.168.123.102:0/1414916561 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f60d0104070 0x7f60d01044f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:45.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.281+0000 7f60d833d700 1 --2- 192.168.123.102:0/1414916561 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60d0102e80 0x7f60d01032a0 unknown :-1 s=CLOSED pgs=132 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:45.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.281+0000 7f60d833d700 1 -- 192.168.123.102:0/1414916561 >> 192.168.123.102:0/1414916561 conn(0x7f60d00fe470 msgr2=0x7f60d01008b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:45.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.281+0000 7f60d833d700 1 -- 192.168.123.102:0/1414916561 shutdown_connections 2026-03-10T10:25:45.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.281+0000 7f60d833d700 1 -- 192.168.123.102:0/1414916561 
wait complete. 2026-03-10T10:25:45.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.282+0000 7f60d833d700 1 Processor -- start 2026-03-10T10:25:45.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.282+0000 7f60d833d700 1 -- start start 2026-03-10T10:25:45.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.282+0000 7f60d833d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60d0104070 0x7f60d0072980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:45.282 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.282+0000 7f60d833d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f60d006f3e0 0x7f60d006f860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:45.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.282+0000 7f60d833d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f60d006fda0 con 0x7f60d0104070 2026-03-10T10:25:45.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.282+0000 7f60d833d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f60d006ff10 con 0x7f60d006f3e0 2026-03-10T10:25:45.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.282+0000 7f60d58d8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f60d006f3e0 0x7f60d006f860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:45.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.282+0000 7f60d60d9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60d0104070 0x7f60d0072980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-10T10:25:45.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.282+0000 7f60d60d9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60d0104070 0x7f60d0072980 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:36418/0 (socket says 192.168.123.102:36418) 2026-03-10T10:25:45.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.282+0000 7f60d60d9700 1 -- 192.168.123.102:0/2952830562 learned_addr learned my addr 192.168.123.102:0/2952830562 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:25:45.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.283+0000 7f60d60d9700 1 -- 192.168.123.102:0/2952830562 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f60d006f3e0 msgr2=0x7f60d006f860 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:45.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.283+0000 7f60d60d9700 1 --2- 192.168.123.102:0/2952830562 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f60d006f3e0 0x7f60d006f860 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:45.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.283+0000 7f60d60d9700 1 -- 192.168.123.102:0/2952830562 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f60cc0097e0 con 0x7f60d0104070 2026-03-10T10:25:45.283 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.283+0000 7f60d60d9700 1 --2- 192.168.123.102:0/2952830562 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60d0104070 0x7f60d0072980 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7f60cc000c00 tx=0x7f60cc004a20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:45.283 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.283+0000 7f60c77fe700 1 -- 192.168.123.102:0/2952830562 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f60cc01c070 con 0x7f60d0104070 2026-03-10T10:25:45.284 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.283+0000 7f60d833d700 1 -- 192.168.123.102:0/2952830562 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f60d01aaf70 con 0x7f60d0104070 2026-03-10T10:25:45.284 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.283+0000 7f60d833d700 1 -- 192.168.123.102:0/2952830562 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f60d01ab2d0 con 0x7f60d0104070 2026-03-10T10:25:45.286 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.285+0000 7f60c77fe700 1 -- 192.168.123.102:0/2952830562 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f60cc004da0 con 0x7f60d0104070 2026-03-10T10:25:45.286 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.285+0000 7f60c77fe700 1 -- 192.168.123.102:0/2952830562 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f60cc0208f0 con 0x7f60d0104070 2026-03-10T10:25:45.286 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.285+0000 7f60c77fe700 1 -- 192.168.123.102:0/2952830562 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f60cc020b90 con 0x7f60d0104070 2026-03-10T10:25:45.286 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.286+0000 7f60c77fe700 1 --2- 192.168.123.102:0/2952830562 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f60bc077ab0 0x7f60bc079f70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:25:45.286 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.286+0000 7f60c57fa700 1 -- 192.168.123.102:0/2952830562 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f60b4005320 con 0x7f60d0104070 2026-03-10T10:25:45.286 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.286+0000 7f60d58d8700 1 --2- 192.168.123.102:0/2952830562 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f60bc077ab0 0x7f60bc079f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:25:45.287 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.286+0000 7f60d58d8700 1 --2- 192.168.123.102:0/2952830562 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f60bc077ab0 0x7f60bc079f70 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f60d0102bb0 tx=0x7f60c0008040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:25:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:44 vm05.local ceph-mon[103593]: from='client.34394 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:25:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:44 vm05.local ceph-mon[103593]: from='client.44335 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:25:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:44 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:44 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:25:44 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1593252115' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:25:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:44 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:45.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:44 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:25:45.289 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.287+0000 7f60c77fe700 1 -- 192.168.123.102:0/2952830562 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f60cc09bf40 con 0x7f60d0104070 2026-03-10T10:25:45.296 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.294+0000 7f60c77fe700 1 -- 192.168.123.102:0/2952830562 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f60cc0646f0 con 0x7f60d0104070 2026-03-10T10:25:45.474 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.473+0000 7f60c57fa700 1 -- 192.168.123.102:0/2952830562 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f60b40059f0 con 0x7f60d0104070 2026-03-10T10:25:45.474 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.473+0000 7f60c77fe700 1 -- 192.168.123.102:0/2952830562 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f60cc05ee60 con 0x7f60d0104070 2026-03-10T10:25:45.474 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK 2026-03-10T10:25:45.476 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.476+0000 7f60c57fa700 1 -- 192.168.123.102:0/2952830562 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f60bc077ab0 msgr2=0x7f60bc079f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:45.476 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.476+0000 7f60c57fa700 1 --2- 192.168.123.102:0/2952830562 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f60bc077ab0 0x7f60bc079f70 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f60d0102bb0 tx=0x7f60c0008040 comp rx=0 tx=0).stop 2026-03-10T10:25:45.476 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.476+0000 7f60c57fa700 1 -- 192.168.123.102:0/2952830562 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60d0104070 msgr2=0x7f60d0072980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:25:45.476 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.476+0000 7f60c57fa700 1 --2- 192.168.123.102:0/2952830562 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60d0104070 0x7f60d0072980 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7f60cc000c00 tx=0x7f60cc004a20 comp rx=0 tx=0).stop 2026-03-10T10:25:45.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.476+0000 7f60c57fa700 1 -- 192.168.123.102:0/2952830562 shutdown_connections 2026-03-10T10:25:45.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.476+0000 7f60c57fa700 1 --2- 192.168.123.102:0/2952830562 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f60d0104070 0x7f60d0072980 unknown :-1 s=CLOSED pgs=133 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:45.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.476+0000 7f60c57fa700 1 --2- 192.168.123.102:0/2952830562 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f60bc077ab0 0x7f60bc079f70 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:25:45.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.476+0000 7f60c57fa700 1 --2- 192.168.123.102:0/2952830562 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f60d006f3e0 0x7f60d006f860 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:25:45.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.476+0000 7f60c57fa700 1 -- 192.168.123.102:0/2952830562 >> 192.168.123.102:0/2952830562 conn(0x7f60d00fe470 msgr2=0x7f60d01006a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:25:45.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.476+0000 7f60c57fa700 1 -- 192.168.123.102:0/2952830562 shutdown_connections 2026-03-10T10:25:45.477 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:25:45.476+0000 7f60c57fa700 1 -- 192.168.123.102:0/2952830562 wait complete. 2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='client.44337 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: pgmap v196: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/3282857957' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='client.34410 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/2952830562' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:45.999 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:45 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='client.44337 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:25:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: pgmap v196: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:25:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/3282857957' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-10T10:25:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='client.34410 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:25:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/2952830562' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T10:25:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:25:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:25:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:45 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:46.903 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:46 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T10:25:47.223 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:46 vm02.local ceph-mon[110129]: Upgrade: Updating alertmanager.vm02
2026-03-10T10:25:47.223 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:46 vm02.local ceph-mon[110129]: Deploying daemon alertmanager.vm02 on vm02
2026-03-10T10:25:47.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:46 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T10:25:47.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:46 vm05.local ceph-mon[103593]: Upgrade: Updating alertmanager.vm02
2026-03-10T10:25:47.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:46 vm05.local ceph-mon[103593]: Deploying daemon alertmanager.vm02 on vm02
2026-03-10T10:25:47.906 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:47 vm02.local ceph-mon[110129]: pgmap v197: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:25:47.906 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:47 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:47.906 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:47 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:47.906 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:47 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:25:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:47 vm05.local ceph-mon[103593]: pgmap v197: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:25:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:47 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:47 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:47 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:25:49.701 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:49 vm05.local ceph-mon[103593]: pgmap v198: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-10T10:25:49.965 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:49 vm02.local ceph-mon[110129]: pgmap v198: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-10T10:25:49.965 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:49.965 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:49.965 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:49.965 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:49 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:50.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:50.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:50.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:50.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:49 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:51.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:25:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:52 vm05.local ceph-mon[103593]: Upgrade: Updating grafana.vm02
2026-03-10T10:25:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:52 vm05.local ceph-mon[103593]: Deploying daemon grafana.vm02 on vm02
2026-03-10T10:25:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:52 vm05.local ceph-mon[103593]: pgmap v199: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:25:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:25:52.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:52 vm02.local ceph-mon[110129]: Upgrade: Updating grafana.vm02
2026-03-10T10:25:52.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:52 vm02.local ceph-mon[110129]: Deploying daemon grafana.vm02 on vm02
2026-03-10T10:25:52.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:52 vm02.local ceph-mon[110129]: pgmap v199: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:25:52.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:25:54.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:54 vm05.local ceph-mon[103593]: pgmap v200: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:25:54.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:54 vm02.local ceph-mon[110129]: pgmap v200: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:25:56.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:56 vm02.local ceph-mon[110129]: pgmap v201: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:25:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:56 vm05.local ceph-mon[103593]: pgmap v201: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:25:58.282 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:58 vm02.local ceph-mon[110129]: pgmap v202: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:25:58.282 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:58 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:58.282 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:58 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:58.282 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:25:58 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:25:58.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:58 vm05.local ceph-mon[103593]: pgmap v202: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:25:58.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:58 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:58.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:58 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:25:58.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:25:58 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:26:00.006 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:00 vm02.local ceph-mon[110129]: pgmap v203: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-10T10:26:00.006 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:00 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:26:00.006 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:00 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:26:00.006 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:00 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:26:00.006 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:00 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:26:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:00 vm05.local ceph-mon[103593]: pgmap v203: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-10T10:26:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:00 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:26:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:00 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:26:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:00 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:26:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:00 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:26:01.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:26:01.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:26:01.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: Upgrade: Finalizing container_image settings
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl'
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished
2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368'
entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "client.nfs"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 
vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: Upgrade: Complete! 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T10:26:01.780 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T10:26:01.781 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:26:01.781 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:26:01.781 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:26:01.781 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 
2026-03-10T10:26:01.781 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:26:01.781 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:26:01.781 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:26:01.781 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:26:01.781 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:01 vm02.local ceph-mon[110129]: pgmap v204: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:26:01.787 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.787 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.788 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: Upgrade: Finalizing container_image settings 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 
2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 
192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", 
"name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.788 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: Upgrade: Complete! 
2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 
vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:26:01.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:01 vm05.local ceph-mon[103593]: pgmap v204: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:03.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:03 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:26:03.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:03 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:26:04.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:04 vm02.local ceph-mon[110129]: pgmap v205: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:04.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:04 vm05.local ceph-mon[103593]: pgmap v205: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:05.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:05 vm02.local ceph-mon[110129]: pgmap v206: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:05.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:05 vm05.local ceph-mon[103593]: pgmap v206: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:07.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:08.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:07 vm02.local ceph-mon[110129]: pgmap v207: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:08.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:07 vm05.local ceph-mon[103593]: pgmap v207: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:09.732 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:09 vm05.local ceph-mon[103593]: pgmap v208: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:26:10.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:09 vm02.local ceph-mon[110129]: pgmap v208: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:26:12.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:11 vm05.local ceph-mon[103593]: pgmap v209: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:12.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:11 vm02.local ceph-mon[110129]: pgmap v209: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:13 vm02.local ceph-mon[110129]: pgmap v210: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:13 vm05.local ceph-mon[103593]: pgmap v210: 65 pgs: 65 
active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:15.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.549+0000 7f730622d700 1 -- 192.168.123.102:0/2878876290 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73000ff860 msgr2=0x7f73000ffc80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:15.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.549+0000 7f730622d700 1 --2- 192.168.123.102:0/2878876290 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73000ff860 0x7f73000ffc80 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f72e8009b00 tx=0x7f72e8009e10 comp rx=0 tx=0).stop 2026-03-10T10:26:15.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.550+0000 7f730622d700 1 -- 192.168.123.102:0/2878876290 shutdown_connections 2026-03-10T10:26:15.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.550+0000 7f730622d700 1 --2- 192.168.123.102:0/2878876290 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f73001001c0 0x7f7300100640 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:15.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.550+0000 7f730622d700 1 --2- 192.168.123.102:0/2878876290 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73000ff860 0x7f73000ffc80 unknown :-1 s=CLOSED pgs=134 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:15.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.550+0000 7f730622d700 1 -- 192.168.123.102:0/2878876290 >> 192.168.123.102:0/2878876290 conn(0x7f73000fb3c0 msgr2=0x7f73000fd840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:26:15.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.551+0000 7f730622d700 1 -- 192.168.123.102:0/2878876290 shutdown_connections 2026-03-10T10:26:15.552 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.551+0000 7f730622d700 1 -- 192.168.123.102:0/2878876290 wait complete. 2026-03-10T10:26:15.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.551+0000 7f730622d700 1 Processor -- start 2026-03-10T10:26:15.552 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.551+0000 7f730622d700 1 -- start start 2026-03-10T10:26:15.553 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.552+0000 7f730622d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73000ff860 0x7f7300198ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:15.553 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.552+0000 7f730622d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f73001001c0 0x7f7300199010 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:15.553 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.552+0000 7f730622d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7300199630 con 0x7f73000ff860 2026-03-10T10:26:15.553 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.552+0000 7f730622d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7300199770 con 0x7f73001001c0 2026-03-10T10:26:15.553 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.552+0000 7f72feffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f73001001c0 0x7f7300199010 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:15.553 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.552+0000 7f72ff7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73000ff860 0x7f7300198ad0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:15.553 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.552+0000 7f72ff7fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73000ff860 0x7f7300198ad0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:35144/0 (socket says 192.168.123.102:35144) 2026-03-10T10:26:15.553 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.552+0000 7f72ff7fe700 1 -- 192.168.123.102:0/2459565139 learned_addr learned my addr 192.168.123.102:0/2459565139 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:26:15.553 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.552+0000 7f72ff7fe700 1 -- 192.168.123.102:0/2459565139 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f73001001c0 msgr2=0x7f7300199010 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:15.553 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.552+0000 7f72ff7fe700 1 --2- 192.168.123.102:0/2459565139 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f73001001c0 0x7f7300199010 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:15.554 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.552+0000 7f72ff7fe700 1 -- 192.168.123.102:0/2459565139 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f72e80097e0 con 0x7f73000ff860 2026-03-10T10:26:15.554 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.552+0000 7f72feffd700 1 --2- 192.168.123.102:0/2459565139 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f73001001c0 0x7f7300199010 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).send_auth_request state changed! 2026-03-10T10:26:15.554 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.553+0000 7f72ff7fe700 1 --2- 192.168.123.102:0/2459565139 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73000ff860 0x7f7300198ad0 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f72e8005230 tx=0x7f72e8005790 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:26:15.555 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.553+0000 7f72fcff9700 1 -- 192.168.123.102:0/2459565139 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f72e801d070 con 0x7f73000ff860 2026-03-10T10:26:15.555 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.553+0000 7f72fcff9700 1 -- 192.168.123.102:0/2459565139 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f72e800be00 con 0x7f73000ff860 2026-03-10T10:26:15.555 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.553+0000 7f72fcff9700 1 -- 192.168.123.102:0/2459565139 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f72e800f460 con 0x7f73000ff860 2026-03-10T10:26:15.555 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.553+0000 7f730622d700 1 -- 192.168.123.102:0/2459565139 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f730019e1c0 con 0x7f73000ff860 2026-03-10T10:26:15.555 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.553+0000 7f730622d700 1 -- 192.168.123.102:0/2459565139 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7300101cf0 con 0x7f73000ff860 2026-03-10T10:26:15.556 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.554+0000 7f72fcff9700 1 -- 192.168.123.102:0/2459565139 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 
100115+0+0 (secure 0 0 0) 0x7f72e8003780 con 0x7f73000ff860 2026-03-10T10:26:15.556 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.555+0000 7f730622d700 1 -- 192.168.123.102:0/2459565139 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7300066e80 con 0x7f73000ff860 2026-03-10T10:26:15.559 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.555+0000 7f72fcff9700 1 --2- 192.168.123.102:0/2459565139 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f72ec0778c0 0x7f72ec079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:15.559 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.555+0000 7f72fcff9700 1 -- 192.168.123.102:0/2459565139 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f72e809b420 con 0x7f73000ff860 2026-03-10T10:26:15.559 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.555+0000 7f72feffd700 1 --2- 192.168.123.102:0/2459565139 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f72ec0778c0 0x7f72ec079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:15.559 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.556+0000 7f72feffd700 1 --2- 192.168.123.102:0/2459565139 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f72ec0778c0 0x7f72ec079d80 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f72f0006fd0 tx=0x7f72f0008040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:26:15.559 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.558+0000 7f72fcff9700 1 -- 192.168.123.102:0/2459565139 <== mon.0 v2:192.168.123.102:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f72e8063aa0 con 0x7f73000ff860 2026-03-10T10:26:15.689 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.687+0000 7f730622d700 1 -- 192.168.123.102:0/2459565139 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f73001095b0 con 0x7f72ec0778c0 2026-03-10T10:26:15.690 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.689+0000 7f72fcff9700 1 -- 192.168.123.102:0/2459565139 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+175 (secure 0 0 0) 0x7f73001095b0 con 0x7f72ec0778c0 2026-03-10T10:26:15.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.691+0000 7f730622d700 1 -- 192.168.123.102:0/2459565139 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f72ec0778c0 msgr2=0x7f72ec079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:15.692 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.691+0000 7f730622d700 1 --2- 192.168.123.102:0/2459565139 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f72ec0778c0 0x7f72ec079d80 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f72f0006fd0 tx=0x7f72f0008040 comp rx=0 tx=0).stop 2026-03-10T10:26:15.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.691+0000 7f730622d700 1 -- 192.168.123.102:0/2459565139 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73000ff860 msgr2=0x7f7300198ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:15.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.691+0000 7f730622d700 1 --2- 192.168.123.102:0/2459565139 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73000ff860 0x7f7300198ad0 secure :-1 s=READY 
pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f72e8005230 tx=0x7f72e8005790 comp rx=0 tx=0).stop 2026-03-10T10:26:15.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.691+0000 7f730622d700 1 -- 192.168.123.102:0/2459565139 shutdown_connections 2026-03-10T10:26:15.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.691+0000 7f730622d700 1 --2- 192.168.123.102:0/2459565139 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f73000ff860 0x7f7300198ad0 unknown :-1 s=CLOSED pgs=135 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:15.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.691+0000 7f730622d700 1 --2- 192.168.123.102:0/2459565139 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f72ec0778c0 0x7f72ec079d80 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:15.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.691+0000 7f730622d700 1 --2- 192.168.123.102:0/2459565139 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f73001001c0 0x7f7300199010 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:15.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.691+0000 7f730622d700 1 -- 192.168.123.102:0/2459565139 >> 192.168.123.102:0/2459565139 conn(0x7f73000fb3c0 msgr2=0x7f7300107e90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:26:15.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.692+0000 7f730622d700 1 -- 192.168.123.102:0/2459565139 shutdown_connections 2026-03-10T10:26:15.693 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:15.692+0000 7f730622d700 1 -- 192.168.123.102:0/2459565139 wait complete. 
2026-03-10T10:26:15.737 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-10T10:26:15.942 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:26:16.092 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:15 vm02.local ceph-mon[110129]: pgmap v211: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:16.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.210+0000 7f96c9878700 1 -- 192.168.123.102:0/128601102 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c4103340 msgr2=0x7f96c4103720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:16.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.210+0000 7f96c9878700 1 --2- 192.168.123.102:0/128601102 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c4103340 0x7f96c4103720 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7f96ac009b00 tx=0x7f96ac009e10 comp rx=0 tx=0).stop 2026-03-10T10:26:16.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.211+0000 7f96c9878700 1 -- 192.168.123.102:0/128601102 shutdown_connections 2026-03-10T10:26:16.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.211+0000 7f96c9878700 1 --2- 192.168.123.102:0/128601102 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96c4103cf0 0x7f96c4107d40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.211+0000 7f96c9878700 1 --2- 192.168.123.102:0/128601102 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c4103340 0x7f96c4103720 unknown :-1 s=CLOSED pgs=136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.211+0000 7f96c9878700 1 -- 192.168.123.102:0/128601102 >> 192.168.123.102:0/128601102 conn(0x7f96c40feb90 msgr2=0x7f96c4100fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:26:16.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.211+0000 7f96c9878700 1 -- 192.168.123.102:0/128601102 shutdown_connections 2026-03-10T10:26:16.212 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.211+0000 7f96c9878700 1 -- 192.168.123.102:0/128601102 wait complete. 2026-03-10T10:26:16.213 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.211+0000 7f96c9878700 1 Processor -- start 2026-03-10T10:26:16.213 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.212+0000 7f96c9878700 1 -- start start 2026-03-10T10:26:16.213 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.212+0000 7f96c9878700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96c4103340 0x7f96c4198df0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:16.213 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.212+0000 7f96c9878700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c4103cf0 0x7f96c4199330 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:16.213 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.212+0000 7f96c9878700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f96c4199a10 con 0x7f96c4103cf0 2026-03-10T10:26:16.213 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.212+0000 7f96c9878700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f96c419d7a0 con 0x7f96c4103340 2026-03-10T10:26:16.213 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.212+0000 7f96c2ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96c4103340 0x7f96c4198df0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:16.213 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.212+0000 7f96c2ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96c4103340 0x7f96c4198df0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:52136/0 (socket says 192.168.123.102:52136) 2026-03-10T10:26:16.214 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.212+0000 7f96c2ffd700 1 -- 192.168.123.102:0/667649301 learned_addr learned my addr 192.168.123.102:0/667649301 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:26:16.214 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.212+0000 7f96c2ffd700 1 -- 192.168.123.102:0/667649301 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c4103cf0 msgr2=0x7f96c4199330 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:26:16.214 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.213+0000 7f96c27fc700 1 --2- 192.168.123.102:0/667649301 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c4103cf0 0x7f96c4199330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:16.214 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.213+0000 7f96c2ffd700 1 --2- 192.168.123.102:0/667649301 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c4103cf0 0x7f96c4199330 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.214 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.213+0000 7f96c2ffd700 1 -- 192.168.123.102:0/667649301 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f96ac0097e0 con 0x7f96c4103340 2026-03-10T10:26:16.214 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.213+0000 7f96c2ffd700 1 --2- 192.168.123.102:0/667649301 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96c4103340 0x7f96c4198df0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f96ac00b5c0 tx=0x7f96ac004970 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:26:16.215 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.213+0000 7f96c27fc700 1 --2- 192.168.123.102:0/667649301 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c4103cf0 0x7f96c4199330 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:26:16.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.213+0000 7f96c8876700 1 -- 192.168.123.102:0/667649301 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f96ac01d070 con 0x7f96c4103340 2026-03-10T10:26:16.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.213+0000 7f96c8876700 1 -- 192.168.123.102:0/667649301 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f96ac004b90 con 0x7f96c4103340 2026-03-10T10:26:16.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.213+0000 7f96c9878700 1 -- 192.168.123.102:0/667649301 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f96c419da20 con 0x7f96c4103340 2026-03-10T10:26:16.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.214+0000 7f96c9878700 1 -- 192.168.123.102:0/667649301 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f96c419df10 con 0x7f96c4103340 2026-03-10T10:26:16.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.214+0000 7f96c8876700 1 -- 192.168.123.102:0/667649301 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f96ac00f790 con 0x7f96c4103340 2026-03-10T10:26:16.216 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.215+0000 7f96c8876700 1 -- 192.168.123.102:0/667649301 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f96ac00f930 con 0x7f96c4103340 2026-03-10T10:26:16.217 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.216+0000 7f96c9878700 1 -- 192.168.123.102:0/667649301 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f96a4005320 con 0x7f96c4103340 2026-03-10T10:26:16.218 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.216+0000 
7f96c8876700 1 --2- 192.168.123.102:0/667649301 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f96b00800d0 0x7f96b0082590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:16.218 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.216+0000 7f96c8876700 1 -- 192.168.123.102:0/667649301 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f96ac09b180 con 0x7f96c4103340 2026-03-10T10:26:16.218 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.217+0000 7f96c27fc700 1 --2- 192.168.123.102:0/667649301 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f96b00800d0 0x7f96b0082590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:16.218 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.217+0000 7f96c27fc700 1 --2- 192.168.123.102:0/667649301 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f96b00800d0 0x7f96b0082590 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f96b4005fd0 tx=0x7f96b4005ee0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:26:16.221 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.219+0000 7f96c8876700 1 -- 192.168.123.102:0/667649301 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f96ac063930 con 0x7f96c4103340 2026-03-10T10:26:16.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:15 vm05.local ceph-mon[103593]: pgmap v211: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:16.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.350+0000 7f96c9878700 1 -- 
192.168.123.102:0/667649301 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f96a4000bf0 con 0x7f96b00800d0 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.356+0000 7f96c8876700 1 -- 192.168.123.102:0/667649301 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f96a4000bf0 con 0x7f96b00800d0 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (29s) 17s ago 11m 15.7M - 0.25.0 c8568f914cd2 c99a3d6ff714 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (54s) 17s ago 11m 10.4M - 19.2.3-678-ge911bdeb 654f31e6858e 8087bcfa99e6 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (52s) 41s ago 10m 10.3M - 19.2.3-678-ge911bdeb 654f31e6858e 1a5d21254a78 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (5m) 17s ago 11m 7838k - 19.2.3-678-ge911bdeb 654f31e6858e c494730ab019 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (5m) 41s ago 10m 7852k - 19.2.3-678-ge911bdeb 654f31e6858e 1dc17b49fee4 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (18s) 17s ago 11m 42.0M - 10.4.0 c8b91775d855 8096f78367f5 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (80s) 17s ago 9m 92.0M - 19.2.3-678-ge911bdeb 654f31e6858e 5e606b7866f6 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (89s) 17s ago 9m 101M - 19.2.3-678-ge911bdeb 654f31e6858e f748fd699eac 
2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (69s) 41s ago 9m 17.4M - 19.2.3-678-ge911bdeb 654f31e6858e 3385525a533a 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (59s) 41s ago 9m 13.7M - 19.2.3-678-ge911bdeb 654f31e6858e 16684b1d1299 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (6m) 17s ago 12m 638M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (5m) 41s ago 10m 490M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (5m) 17s ago 12m 67.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a2a2cb182f4 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (5m) 41s ago 10m 54.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 3fb75dafefb6 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (45s) 17s ago 11m 6723k - 1.7.0 72c9c2088986 d9fc4bda3e14 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (42s) 41s ago 10m 2911k - 1.7.0 72c9c2088986 7fe7006c8ad2 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (5m) 17s ago 10m 233M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 319155aac718 2026-03-10T10:26:16.357 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (3m) 17s ago 10m 134M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 6b6be7f62bd3 2026-03-10T10:26:16.358 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (2m) 17s ago 10m 127M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 745b9931485f 2026-03-10T10:26:16.358 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (2m) 41s ago 10m 172M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 
fe29904ecf52 2026-03-10T10:26:16.358 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (2m) 41s ago 9m 149M 4096M 19.2.3-678-ge911bdeb 654f31e6858e fe0b3f802cec 2026-03-10T10:26:16.358 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (105s) 41s ago 9m 133M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c60f7383494f 2026-03-10T10:26:16.358 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (33s) 17s ago 11m 49.0M - 2.51.0 1d3b7f56885b 27a06ba517f6 2026-03-10T10:26:16.360 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.358+0000 7f96c9878700 1 -- 192.168.123.102:0/667649301 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f96b00800d0 msgr2=0x7f96b0082590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:16.360 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.358+0000 7f96c9878700 1 --2- 192.168.123.102:0/667649301 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f96b00800d0 0x7f96b0082590 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f96b4005fd0 tx=0x7f96b4005ee0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.360 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.358+0000 7f96c9878700 1 -- 192.168.123.102:0/667649301 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96c4103340 msgr2=0x7f96c4198df0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:16.360 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.358+0000 7f96c9878700 1 --2- 192.168.123.102:0/667649301 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96c4103340 0x7f96c4198df0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f96ac00b5c0 tx=0x7f96ac004970 comp rx=0 tx=0).stop 2026-03-10T10:26:16.360 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.359+0000 7f96c9878700 1 -- 192.168.123.102:0/667649301 shutdown_connections 2026-03-10T10:26:16.360 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.359+0000 7f96c9878700 1 --2- 192.168.123.102:0/667649301 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f96b00800d0 0x7f96b0082590 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.360 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.359+0000 7f96c9878700 1 --2- 192.168.123.102:0/667649301 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96c4103340 0x7f96c4198df0 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.360 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.359+0000 7f96c9878700 1 --2- 192.168.123.102:0/667649301 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f96c4103cf0 0x7f96c4199330 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.360 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.359+0000 7f96c9878700 1 -- 192.168.123.102:0/667649301 >> 192.168.123.102:0/667649301 conn(0x7f96c40feb90 msgr2=0x7f96c4100f30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:26:16.360 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.359+0000 7f96c9878700 1 -- 192.168.123.102:0/667649301 shutdown_connections 2026-03-10T10:26:16.360 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.359+0000 7f96c9878700 1 -- 192.168.123.102:0/667649301 wait complete. 
2026-03-10T10:26:16.403 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status' 2026-03-10T10:26:16.571 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:26:16.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.817+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/127566965 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3d781033c0 msgr2=0x7f3d781037a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:16.819 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.817+0000 7f3d7fb6a700 1 --2- 192.168.123.102:0/127566965 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3d781033c0 0x7f3d781037a0 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7f3d68009b50 tx=0x7f3d68009e60 comp rx=0 tx=0).stop 2026-03-10T10:26:16.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.818+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/127566965 shutdown_connections 2026-03-10T10:26:16.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.818+0000 7f3d7fb6a700 1 --2- 192.168.123.102:0/127566965 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d78103d70 0x7f3d78107dc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.818+0000 7f3d7fb6a700 1 --2- 192.168.123.102:0/127566965 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3d781033c0 0x7f3d781037a0 unknown :-1 s=CLOSED pgs=137 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.820 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.818+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/127566965 >> 192.168.123.102:0/127566965 conn(0x7f3d780fec30 msgr2=0x7f3d78101050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:26:16.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.818+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/127566965 shutdown_connections 2026-03-10T10:26:16.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.818+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/127566965 wait complete. 2026-03-10T10:26:16.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.819+0000 7f3d7fb6a700 1 Processor -- start 2026-03-10T10:26:16.820 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.819+0000 7f3d7fb6a700 1 -- start start 2026-03-10T10:26:16.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.819+0000 7f3d7fb6a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3d781033c0 0x7f3d78198f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:16.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.820+0000 7f3d7fb6a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d78103d70 0x7f3d78199440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:16.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.820+0000 7f3d7fb6a700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3d78199b20 con 0x7f3d781033c0 2026-03-10T10:26:16.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.820+0000 7f3d7fb6a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3d7819d8b0 con 0x7f3d78103d70 2026-03-10T10:26:16.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.820+0000 7f3d7d105700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d78103d70 0x7f3d78199440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:16.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.820+0000 7f3d7d105700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d78103d70 0x7f3d78199440 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:52546/0 (socket says 192.168.123.102:52546) 2026-03-10T10:26:16.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.820+0000 7f3d7d105700 1 -- 192.168.123.102:0/2387557327 learned_addr learned my addr 192.168.123.102:0/2387557327 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:26:16.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.820+0000 7f3d7d105700 1 -- 192.168.123.102:0/2387557327 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3d781033c0 msgr2=0x7f3d78198f00 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:26:16.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.820+0000 7f3d7d105700 1 --2- 192.168.123.102:0/2387557327 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3d781033c0 0x7f3d78198f00 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.820+0000 7f3d7d105700 1 -- 192.168.123.102:0/2387557327 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3d680097e0 con 0x7f3d78103d70 2026-03-10T10:26:16.821 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.820+0000 7f3d7d105700 1 --2- 192.168.123.102:0/2387557327 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d78103d70 
0x7f3d78199440 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f3d74009fd0 tx=0x7f3d7400eea0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:26:16.822 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.820+0000 7f3d6effd700 1 -- 192.168.123.102:0/2387557327 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3d7400cca0 con 0x7f3d78103d70 2026-03-10T10:26:16.822 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.821+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/2387557327 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3d7819db90 con 0x7f3d78103d70 2026-03-10T10:26:16.822 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.821+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/2387557327 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3d7819e0e0 con 0x7f3d78103d70 2026-03-10T10:26:16.823 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.821+0000 7f3d6effd700 1 -- 192.168.123.102:0/2387557327 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f3d7400ce00 con 0x7f3d78103d70 2026-03-10T10:26:16.823 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.821+0000 7f3d6effd700 1 -- 192.168.123.102:0/2387557327 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3d74010490 con 0x7f3d78103d70 2026-03-10T10:26:16.823 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.822+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/2387557327 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3d7810b760 con 0x7f3d78103d70 2026-03-10T10:26:16.824 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.823+0000 7f3d6effd700 1 -- 192.168.123.102:0/2387557327 <== mon.1 
v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f3d74004750 con 0x7f3d78103d70 2026-03-10T10:26:16.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.823+0000 7f3d6effd700 1 --2- 192.168.123.102:0/2387557327 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f3d640778c0 0x7f3d64079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:16.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.823+0000 7f3d6effd700 1 -- 192.168.123.102:0/2387557327 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f3d74014070 con 0x7f3d78103d70 2026-03-10T10:26:16.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.823+0000 7f3d7d906700 1 --2- 192.168.123.102:0/2387557327 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f3d640778c0 0x7f3d64079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:16.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.824+0000 7f3d7d906700 1 --2- 192.168.123.102:0/2387557327 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f3d640778c0 0x7f3d64079d80 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f3d68000c00 tx=0x7f3d68005fb0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:26:16.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.825+0000 7f3d6effd700 1 -- 192.168.123.102:0/2387557327 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3d740629c0 con 0x7f3d78103d70 2026-03-10T10:26:16.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.962+0000 7f3d7fb6a700 1 -- 
192.168.123.102:0/2387557327 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f3d7819a370 con 0x7f3d640778c0 2026-03-10T10:26:16.964 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:16 vm02.local ceph-mon[110129]: from='client.34418 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:26:16.967 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.966+0000 7f3d6effd700 1 -- 192.168.123.102:0/2387557327 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+175 (secure 0 0 0) 0x7f3d7819a370 con 0x7f3d640778c0 2026-03-10T10:26:16.967 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:26:16.967 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": null, 2026-03-10T10:26:16.967 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": false, 2026-03-10T10:26:16.967 INFO:teuthology.orchestra.run.vm02.stdout: "which": "", 2026-03-10T10:26:16.967 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [], 2026-03-10T10:26:16.967 INFO:teuthology.orchestra.run.vm02.stdout: "progress": null, 2026-03-10T10:26:16.968 INFO:teuthology.orchestra.run.vm02.stdout: "message": "", 2026-03-10T10:26:16.968 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T10:26:16.968 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:26:16.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.968+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/2387557327 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f3d640778c0 msgr2=0x7f3d64079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:16.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.968+0000 7f3d7fb6a700 1 --2- 192.168.123.102:0/2387557327 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f3d640778c0 0x7f3d64079d80 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f3d68000c00 tx=0x7f3d68005fb0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.968+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/2387557327 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d78103d70 msgr2=0x7f3d78199440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:16.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.968+0000 7f3d7fb6a700 1 --2- 192.168.123.102:0/2387557327 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d78103d70 0x7f3d78199440 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f3d74009fd0 tx=0x7f3d7400eea0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.969+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/2387557327 shutdown_connections 2026-03-10T10:26:16.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.969+0000 7f3d7fb6a700 1 --2- 192.168.123.102:0/2387557327 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3d781033c0 0x7f3d78198f00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.969+0000 7f3d7fb6a700 1 --2- 192.168.123.102:0/2387557327 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f3d640778c0 0x7f3d64079d80 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:16.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.969+0000 7f3d7fb6a700 1 --2- 192.168.123.102:0/2387557327 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d78103d70 0x7f3d78199440 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:26:16.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.969+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/2387557327 >> 192.168.123.102:0/2387557327 conn(0x7f3d780fec30 msgr2=0x7f3d78101030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:26:16.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.969+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/2387557327 shutdown_connections 2026-03-10T10:26:16.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:16.969+0000 7f3d7fb6a700 1 -- 192.168.123.102:0/2387557327 wait complete. 2026-03-10T10:26:17.051 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail' 2026-03-10T10:26:17.208 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:26:17.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:16 vm05.local ceph-mon[103593]: from='client.34418 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:26:17.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.468+0000 7f4c40f22700 1 -- 192.168.123.102:0/259982267 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c3c101ad0 msgr2=0x7f4c3c105b20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:17.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.468+0000 7f4c40f22700 1 --2- 192.168.123.102:0/259982267 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c3c101ad0 0x7f4c3c105b20 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f4c30009b50 tx=0x7f4c30009e60 comp rx=0 tx=0).stop 2026-03-10T10:26:17.471 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.469+0000 7f4c40f22700 1 -- 192.168.123.102:0/259982267 shutdown_connections 2026-03-10T10:26:17.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.469+0000 7f4c40f22700 1 --2- 192.168.123.102:0/259982267 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c3c101ad0 0x7f4c3c105b20 unknown :-1 s=CLOSED pgs=138 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:17.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.469+0000 7f4c40f22700 1 --2- 192.168.123.102:0/259982267 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4c3c101120 0x7f4c3c101500 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:17.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.469+0000 7f4c40f22700 1 -- 192.168.123.102:0/259982267 >> 192.168.123.102:0/259982267 conn(0x7f4c3c0fc9b0 msgr2=0x7f4c3c0fedd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:26:17.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.470+0000 7f4c40f22700 1 -- 192.168.123.102:0/259982267 shutdown_connections 2026-03-10T10:26:17.471 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.470+0000 7f4c40f22700 1 -- 192.168.123.102:0/259982267 wait complete. 
2026-03-10T10:26:17.472 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.470+0000 7f4c40f22700 1 Processor -- start 2026-03-10T10:26:17.472 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c40f22700 1 -- start start 2026-03-10T10:26:17.472 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c40f22700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c3c101120 0x7f4c3c073090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:17.472 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c40f22700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4c3c101ad0 0x7f4c3c0735d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:17.472 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c40f22700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4c3c074e40 con 0x7f4c3c101120 2026-03-10T10:26:17.473 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c40f22700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4c3c074fb0 con 0x7f4c3c101ad0 2026-03-10T10:26:17.473 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c39d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4c3c101ad0 0x7f4c3c0735d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:17.473 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c39d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4c3c101ad0 0x7f4c3c0735d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:52564/0 (socket says 192.168.123.102:52564) 2026-03-10T10:26:17.473 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c39d9b700 1 -- 192.168.123.102:0/3947226545 learned_addr learned my addr 192.168.123.102:0/3947226545 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:26:17.474 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c3a59c700 1 --2- 192.168.123.102:0/3947226545 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c3c101120 0x7f4c3c073090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:17.474 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c39d9b700 1 -- 192.168.123.102:0/3947226545 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c3c101120 msgr2=0x7f4c3c073090 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:17.474 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c39d9b700 1 --2- 192.168.123.102:0/3947226545 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c3c101120 0x7f4c3c073090 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:17.475 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c39d9b700 1 -- 192.168.123.102:0/3947226545 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4c300097e0 con 0x7f4c3c101ad0 2026-03-10T10:26:17.475 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.471+0000 7f4c39d9b700 1 --2- 192.168.123.102:0/3947226545 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4c3c101ad0 0x7f4c3c0735d0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f4c300048f0 tx=0x7f4c300049d0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:26:17.475 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.472+0000 7f4c2b7fe700 1 -- 192.168.123.102:0/3947226545 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4c3001d070 con 0x7f4c3c101ad0 2026-03-10T10:26:17.475 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.472+0000 7f4c40f22700 1 -- 192.168.123.102:0/3947226545 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4c3c073b70 con 0x7f4c3c101ad0 2026-03-10T10:26:17.475 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.472+0000 7f4c40f22700 1 -- 192.168.123.102:0/3947226545 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4c3c1a7640 con 0x7f4c3c101ad0 2026-03-10T10:26:17.475 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.472+0000 7f4c2b7fe700 1 -- 192.168.123.102:0/3947226545 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f4c3000bc50 con 0x7f4c3c101ad0 2026-03-10T10:26:17.475 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.472+0000 7f4c2b7fe700 1 -- 192.168.123.102:0/3947226545 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4c3000f670 con 0x7f4c3c101ad0 2026-03-10T10:26:17.476 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.473+0000 7f4c2b7fe700 1 -- 192.168.123.102:0/3947226545 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f4c3000f7d0 con 0x7f4c3c101ad0 2026-03-10T10:26:17.476 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.474+0000 7f4c2b7fe700 1 --2- 192.168.123.102:0/3947226545 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f4c24077870 0x7f4c24079d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:17.476 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.474+0000 7f4c2b7fe700 1 -- 192.168.123.102:0/3947226545 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f4c3009b120 con 0x7f4c3c101ad0 2026-03-10T10:26:17.476 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.474+0000 7f4c40f22700 1 -- 192.168.123.102:0/3947226545 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4c3c109520 con 0x7f4c3c101ad0 2026-03-10T10:26:17.476 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.474+0000 7f4c3a59c700 1 --2- 192.168.123.102:0/3947226545 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f4c24077870 0x7f4c24079d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:17.476 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.475+0000 7f4c3a59c700 1 --2- 192.168.123.102:0/3947226545 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f4c24077870 0x7f4c24079d30 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7f4c2c009de0 tx=0x7f4c2c009450 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:26:17.479 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.478+0000 7f4c2b7fe700 1 -- 192.168.123.102:0/3947226545 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f4c30063850 con 0x7f4c3c101ad0 2026-03-10T10:26:17.650 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.648+0000 7f4c40f22700 1 -- 192.168.123.102:0/3947226545 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f4c3c066e80 con 0x7f4c3c101ad0 
2026-03-10T10:26:17.650 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.649+0000 7f4c2b7fe700 1 -- 192.168.123.102:0/3947226545 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+10 (secure 0 0 0) 0x7f4c30027020 con 0x7f4c3c101ad0 2026-03-10T10:26:17.651 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK 2026-03-10T10:26:17.653 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.652+0000 7f4c40f22700 1 -- 192.168.123.102:0/3947226545 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f4c24077870 msgr2=0x7f4c24079d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:17.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.652+0000 7f4c40f22700 1 --2- 192.168.123.102:0/3947226545 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f4c24077870 0x7f4c24079d30 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7f4c2c009de0 tx=0x7f4c2c009450 comp rx=0 tx=0).stop 2026-03-10T10:26:17.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.652+0000 7f4c40f22700 1 -- 192.168.123.102:0/3947226545 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4c3c101ad0 msgr2=0x7f4c3c0735d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:17.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.652+0000 7f4c40f22700 1 --2- 192.168.123.102:0/3947226545 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4c3c101ad0 0x7f4c3c0735d0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f4c300048f0 tx=0x7f4c300049d0 comp rx=0 tx=0).stop 2026-03-10T10:26:17.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.653+0000 7f4c40f22700 1 -- 192.168.123.102:0/3947226545 shutdown_connections 2026-03-10T10:26:17.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.653+0000 7f4c40f22700 1 --2- 
192.168.123.102:0/3947226545 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f4c3c101120 0x7f4c3c073090 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:17.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.653+0000 7f4c40f22700 1 --2- 192.168.123.102:0/3947226545 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f4c24077870 0x7f4c24079d30 unknown :-1 s=CLOSED pgs=104 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:17.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.653+0000 7f4c40f22700 1 --2- 192.168.123.102:0/3947226545 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4c3c101ad0 0x7f4c3c0735d0 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:17.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.653+0000 7f4c40f22700 1 -- 192.168.123.102:0/3947226545 >> 192.168.123.102:0/3947226545 conn(0x7f4c3c0fc9b0 msgr2=0x7f4c3c0fedb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:26:17.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.653+0000 7f4c40f22700 1 -- 192.168.123.102:0/3947226545 shutdown_connections 2026-03-10T10:26:17.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:17.653+0000 7f4c40f22700 1 -- 192.168.123.102:0/3947226545 wait complete. 
2026-03-10T10:26:17.699 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions' 2026-03-10T10:26:17.857 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:26:17.904 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:17 vm02.local ceph-mon[110129]: from='client.44351 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:26:17.904 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:17 vm02.local ceph-mon[110129]: pgmap v212: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:17.904 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:17 vm02.local ceph-mon[110129]: from='client.44355 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:26:17.904 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:17 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/3947226545' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:26:18.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.145+0000 7f616758c700 1 -- 192.168.123.102:0/4115736194 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6160103cf0 msgr2=0x7f6160107d40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:18.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.145+0000 7f616758c700 1 --2- 192.168.123.102:0/4115736194 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6160103cf0 0x7f6160107d40 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7f615c009b50 tx=0x7f615c009e60 comp rx=0 tx=0).stop 2026-03-10T10:26:18.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.146+0000 7f616758c700 1 -- 192.168.123.102:0/4115736194 shutdown_connections 2026-03-10T10:26:18.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.146+0000 7f616758c700 1 --2- 192.168.123.102:0/4115736194 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6160103cf0 0x7f6160107d40 unknown :-1 s=CLOSED pgs=139 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:18.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.146+0000 7f616758c700 1 --2- 192.168.123.102:0/4115736194 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6160103340 0x7f6160103720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:18.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.146+0000 7f616758c700 1 -- 192.168.123.102:0/4115736194 >> 192.168.123.102:0/4115736194 conn(0x7f61600feb90 msgr2=0x7f6160100fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:26:18.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.146+0000 7f616758c700 1 -- 192.168.123.102:0/4115736194 shutdown_connections 2026-03-10T10:26:18.148 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.146+0000 7f616758c700 1 -- 192.168.123.102:0/4115736194 wait complete. 2026-03-10T10:26:18.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.147+0000 7f616758c700 1 Processor -- start 2026-03-10T10:26:18.148 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.147+0000 7f616758c700 1 -- start start 2026-03-10T10:26:18.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.148+0000 7f616758c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6160103340 0x7f6160198e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:18.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.148+0000 7f616758c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6160103cf0 0x7f6160199390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:18.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.148+0000 7f616758c700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6160199a70 con 0x7f6160103cf0 2026-03-10T10:26:18.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.148+0000 7f616758c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f616019d800 con 0x7f6160103340 2026-03-10T10:26:18.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.148+0000 7f6164b27700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6160103cf0 0x7f6160199390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:18.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.148+0000 7f6164b27700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6160103cf0 0x7f6160199390 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:54306/0 (socket says 192.168.123.102:54306) 2026-03-10T10:26:18.149 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.148+0000 7f6164b27700 1 -- 192.168.123.102:0/1391338605 learned_addr learned my addr 192.168.123.102:0/1391338605 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:26:18.150 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.148+0000 7f6164b27700 1 -- 192.168.123.102:0/1391338605 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6160103340 msgr2=0x7f6160198e50 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:26:18.150 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.148+0000 7f6164b27700 1 --2- 192.168.123.102:0/1391338605 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6160103340 0x7f6160198e50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:18.150 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.148+0000 7f6164b27700 1 -- 192.168.123.102:0/1391338605 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f615c0097e0 con 0x7f6160103cf0 2026-03-10T10:26:18.150 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.148+0000 7f6164b27700 1 --2- 192.168.123.102:0/1391338605 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6160103cf0 0x7f6160199390 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7f615c004cb0 tx=0x7f615c005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:26:18.151 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.149+0000 7f61567fc700 1 -- 192.168.123.102:0/1391338605 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f615c01d070 con 0x7f6160103cf0 
2026-03-10T10:26:18.151 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.149+0000 7f61567fc700 1 -- 192.168.123.102:0/1391338605 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f615c022470 con 0x7f6160103cf0 2026-03-10T10:26:18.151 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.149+0000 7f61567fc700 1 -- 192.168.123.102:0/1391338605 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f615c00f670 con 0x7f6160103cf0 2026-03-10T10:26:18.151 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.149+0000 7f616758c700 1 -- 192.168.123.102:0/1391338605 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f616019da80 con 0x7f6160103cf0 2026-03-10T10:26:18.151 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.149+0000 7f616758c700 1 -- 192.168.123.102:0/1391338605 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f616019de90 con 0x7f6160103cf0 2026-03-10T10:26:18.152 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.150+0000 7f616758c700 1 -- 192.168.123.102:0/1391338605 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f616010b740 con 0x7f6160103cf0 2026-03-10T10:26:18.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.154+0000 7f61567fc700 1 -- 192.168.123.102:0/1391338605 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f615c00bac0 con 0x7f6160103cf0 2026-03-10T10:26:18.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.154+0000 7f61567fc700 1 --2- 192.168.123.102:0/1391338605 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f614c077990 0x7f614c079e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:26:18.156 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.154+0000 7f61567fc700 1 -- 192.168.123.102:0/1391338605 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f615c09bcd0 con 0x7f6160103cf0 2026-03-10T10:26:18.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.155+0000 7f61567fc700 1 -- 192.168.123.102:0/1391338605 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f615c09c150 con 0x7f6160103cf0 2026-03-10T10:26:18.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.157+0000 7f6165328700 1 --2- 192.168.123.102:0/1391338605 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f614c077990 0x7f614c079e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:26:18.159 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.158+0000 7f6165328700 1 --2- 192.168.123.102:0/1391338605 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f614c077990 0x7f614c079e50 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7f6150009ce0 tx=0x7f6150009450 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:26:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:17 vm05.local ceph-mon[103593]: from='client.44351 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:26:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:17 vm05.local ceph-mon[103593]: pgmap v212: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:17 vm05.local ceph-mon[103593]: from='client.44355 -' entity='client.admin' 
cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:26:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:17 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/3947226545' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T10:26:18.319 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.318+0000 7f616758c700 1 -- 192.168.123.102:0/1391338605 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f616019a250 con 0x7f6160103cf0 2026-03-10T10:26:18.320 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.318+0000 7f61567fc700 1 -- 192.168.123.102:0/1391338605 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f615c0643d0 con 0x7f6160103cf0 2026-03-10T10:26:18.320 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:26:18.320 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:26:18.320 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:26:18.320 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:26:18.320 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:26:18.321 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:26:18.321 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:26:18.321 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:26:18.321 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-10T10:26:18.321 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:26:18.321 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:26:18.321 
INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-10T10:26:18.321 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:26:18.321 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:26:18.321 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 14 2026-03-10T10:26:18.321 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:26:18.321 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:26:18.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.321+0000 7f616758c700 1 -- 192.168.123.102:0/1391338605 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f614c077990 msgr2=0x7f614c079e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:18.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.321+0000 7f616758c700 1 --2- 192.168.123.102:0/1391338605 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f614c077990 0x7f614c079e50 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7f6150009ce0 tx=0x7f6150009450 comp rx=0 tx=0).stop 2026-03-10T10:26:18.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.321+0000 7f616758c700 1 -- 192.168.123.102:0/1391338605 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6160103cf0 msgr2=0x7f6160199390 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:26:18.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.321+0000 7f616758c700 1 --2- 192.168.123.102:0/1391338605 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6160103cf0 0x7f6160199390 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7f615c004cb0 tx=0x7f615c005dc0 comp rx=0 tx=0).stop 2026-03-10T10:26:18.323 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.321+0000 7f616758c700 1 -- 192.168.123.102:0/1391338605 shutdown_connections 2026-03-10T10:26:18.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.321+0000 7f616758c700 1 --2- 192.168.123.102:0/1391338605 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f614c077990 0x7f614c079e50 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:18.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.322+0000 7f616758c700 1 --2- 192.168.123.102:0/1391338605 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6160103340 0x7f6160198e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:18.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.322+0000 7f616758c700 1 --2- 192.168.123.102:0/1391338605 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6160103cf0 0x7f6160199390 unknown :-1 s=CLOSED pgs=140 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:26:18.323 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.322+0000 7f616758c700 1 -- 192.168.123.102:0/1391338605 >> 192.168.123.102:0/1391338605 conn(0x7f61600feb90 msgr2=0x7f6160100fa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:26:18.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.322+0000 7f616758c700 1 -- 192.168.123.102:0/1391338605 shutdown_connections 2026-03-10T10:26:18.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:26:18.322+0000 7f616758c700 1 -- 192.168.123.102:0/1391338605 wait complete. 
2026-03-10T10:26:18.372 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'echo "wait for servicemap items w/ changing names to refresh"' 2026-03-10T10:26:18.518 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:26:18.706 INFO:teuthology.orchestra.run.vm02.stdout:wait for servicemap items w/ changing names to refresh 2026-03-10T10:26:18.744 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 60' 2026-03-10T10:26:18.844 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:18 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1391338605' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:18.913 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:26:19.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:18 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/1391338605' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:26:20.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:19 vm02.local ceph-mon[110129]: pgmap v213: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:26:20.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:19 vm05.local ceph-mon[103593]: pgmap v213: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:26:22.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:21 vm02.local ceph-mon[110129]: pgmap v214: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:22.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:21 vm05.local ceph-mon[103593]: pgmap v214: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:24.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:23 vm02.local ceph-mon[110129]: pgmap v215: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:24.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:23 vm05.local ceph-mon[103593]: pgmap v215: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:26.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:25 vm02.local 
ceph-mon[110129]: pgmap v216: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:26.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:25 vm05.local ceph-mon[103593]: pgmap v216: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:28.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:28 vm02.local ceph-mon[110129]: pgmap v217: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:28 vm05.local ceph-mon[103593]: pgmap v217: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:30 vm02.local ceph-mon[110129]: pgmap v218: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:26:30.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:30 vm05.local ceph-mon[103593]: pgmap v218: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:26:32.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:32 vm02.local ceph-mon[110129]: pgmap v219: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:32 vm05.local ceph-mon[103593]: pgmap v219: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:34.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:34 vm02.local ceph-mon[110129]: pgmap v220: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:34 vm05.local 
ceph-mon[103593]: pgmap v220: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:36.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:36 vm02.local ceph-mon[110129]: pgmap v221: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:36.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:36 vm05.local ceph-mon[103593]: pgmap v221: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:37.431 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:37.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:38.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:38 vm02.local ceph-mon[110129]: pgmap v222: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:38.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:38 vm05.local ceph-mon[103593]: pgmap v222: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:40.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:40 vm02.local ceph-mon[110129]: pgmap v223: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:26:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:40 vm05.local ceph-mon[103593]: pgmap v223: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:26:42.529 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:42 vm02.local ceph-mon[110129]: pgmap v224: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:42.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:42 vm05.local ceph-mon[103593]: pgmap v224: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:44.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:44 vm02.local ceph-mon[110129]: pgmap v225: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:44.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:44 vm05.local ceph-mon[103593]: pgmap v225: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:46.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:46 vm02.local ceph-mon[110129]: pgmap v226: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:46 vm05.local ceph-mon[103593]: pgmap v226: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:48.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:48 vm02.local ceph-mon[110129]: pgmap v227: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:48 vm05.local ceph-mon[103593]: pgmap v227: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:50.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:50 vm02.local ceph-mon[110129]: pgmap v228: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:26:50.537 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:50 vm05.local ceph-mon[103593]: pgmap v228: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:26:52.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:52 vm02.local ceph-mon[110129]: pgmap v229: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:52.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:52 vm05.local ceph-mon[103593]: pgmap v229: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:52.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:54.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:54 vm02.local ceph-mon[110129]: pgmap v230: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:54.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:54 vm05.local ceph-mon[103593]: pgmap v230: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:56.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:56 vm02.local ceph-mon[110129]: pgmap v231: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:26:56.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:56 vm05.local ceph-mon[103593]: pgmap v231: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 
B/s rd, 1 op/s 2026-03-10T10:26:58.484 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:26:58 vm02.local ceph-mon[110129]: pgmap v232: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:26:58.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:26:58 vm05.local ceph-mon[103593]: pgmap v232: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:00.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:00 vm02.local ceph-mon[110129]: pgmap v233: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:27:00.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:00 vm05.local ceph-mon[103593]: pgmap v233: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:27:01.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:27:01.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:27:01.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:27:01.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:27:01.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:27:01.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:27:01.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:27:01.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:27:02.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:02 vm02.local ceph-mon[110129]: pgmap v234: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:02.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:02 vm05.local ceph-mon[103593]: pgmap v234: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:04.500 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:04 vm05.local ceph-mon[103593]: pgmap v235: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:04.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:04 vm02.local ceph-mon[110129]: pgmap v235: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:06.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:06 vm02.local ceph-mon[110129]: pgmap v236: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:06.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:06 vm05.local ceph-mon[103593]: pgmap v236: 65 pgs: 65 active+clean; 209 MiB data, 922 
MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:07.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:27:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:27:08.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:08 vm02.local ceph-mon[110129]: pgmap v237: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:27:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:08 vm05.local ceph-mon[103593]: pgmap v237: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:27:10.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:10 vm02.local ceph-mon[110129]: pgmap v238: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T10:27:10.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:10 vm05.local ceph-mon[103593]: pgmap v238: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T10:27:12.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:12 vm02.local ceph-mon[110129]: pgmap v239: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T10:27:12.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:12 vm05.local ceph-mon[103593]: pgmap v239: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T10:27:13.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:13 vm02.local ceph-mon[110129]: pgmap v240: 65 
pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:13.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:13 vm05.local ceph-mon[103593]: pgmap v240: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:16.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:15 vm02.local ceph-mon[110129]: pgmap v241: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T10:27:16.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:15 vm05.local ceph-mon[103593]: pgmap v241: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T10:27:18.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:17 vm02.local ceph-mon[110129]: pgmap v242: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:27:18.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:17 vm05.local ceph-mon[103593]: pgmap v242: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:27:19.186 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-10T10:27:19.361 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.651+0000 7f7293bdd700 1 -- 192.168.123.102:0/4028102046 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f728c103d70 msgr2=0x7f728c107dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:19.665 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.651+0000 7f7293bdd700 1 --2- 192.168.123.102:0/4028102046 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f728c103d70 0x7f728c107dc0 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7f727c009b00 tx=0x7f727c009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.654+0000 7f7293bdd700 1 -- 192.168.123.102:0/4028102046 shutdown_connections 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.654+0000 7f7293bdd700 1 --2- 192.168.123.102:0/4028102046 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f728c103d70 0x7f728c107dc0 unknown :-1 s=CLOSED pgs=141 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.654+0000 7f7293bdd700 1 --2- 192.168.123.102:0/4028102046 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f728c1033c0 0x7f728c1037a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.654+0000 7f7293bdd700 1 -- 192.168.123.102:0/4028102046 >> 192.168.123.102:0/4028102046 conn(0x7f728c0fec30 msgr2=0x7f728c101050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.655+0000 7f7293bdd700 1 -- 192.168.123.102:0/4028102046 shutdown_connections 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.655+0000 7f7293bdd700 1 -- 192.168.123.102:0/4028102046 wait complete. 
2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.655+0000 7f7293bdd700 1 Processor -- start 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.655+0000 7f7293bdd700 1 -- start start 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.656+0000 7f7293bdd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f728c1033c0 0x7f728c198e30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.656+0000 7f7291979700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f728c1033c0 0x7f728c198e30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.656+0000 7f7291979700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f728c1033c0 0x7f728c198e30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:57746/0 (socket says 192.168.123.102:57746) 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.656+0000 7f7291979700 1 -- 192.168.123.102:0/713728915 learned_addr learned my addr 192.168.123.102:0/713728915 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.656+0000 7f7293bdd700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f728c103d70 0x7f728c199370 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.656+0000 7f7293bdd700 1 -- 192.168.123.102:0/713728915 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f728c1999c0 con 0x7f728c1033c0 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.656+0000 7f7293bdd700 1 -- 192.168.123.102:0/713728915 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f728c199b00 con 0x7f728c103d70 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.656+0000 7f7291178700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f728c103d70 0x7f728c199370 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.657+0000 7f7291178700 1 -- 192.168.123.102:0/713728915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f728c1033c0 msgr2=0x7f728c198e30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.657+0000 7f7291178700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f728c1033c0 0x7f728c198e30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.657+0000 7f7291178700 1 -- 192.168.123.102:0/713728915 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f727c0097e0 con 0x7f728c103d70 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.657+0000 7f7291979700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f728c1033c0 0x7f728c198e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.657+0000 7f7291178700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f728c103d70 0x7f728c199370 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f727c009fd0 tx=0x7f727c004930 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.657+0000 7f7282ffd700 1 -- 192.168.123.102:0/713728915 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f727c01d070 con 0x7f728c103d70 2026-03-10T10:27:19.665 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.657+0000 7f7293bdd700 1 -- 192.168.123.102:0/713728915 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f728c19d8f0 con 0x7f728c103d70 2026-03-10T10:27:19.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.658+0000 7f7293bdd700 1 -- 192.168.123.102:0/713728915 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f728c19dde0 con 0x7f728c103d70 2026-03-10T10:27:19.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.658+0000 7f7282ffd700 1 -- 192.168.123.102:0/713728915 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f727c004b90 con 0x7f728c103d70 2026-03-10T10:27:19.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.658+0000 7f7282ffd700 1 -- 192.168.123.102:0/713728915 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f727c00f670 con 0x7f728c103d70 2026-03-10T10:27:19.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.659+0000 7f7293bdd700 1 -- 192.168.123.102:0/713728915 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) 
v1 -- 0x7f728c04ea90 con 0x7f728c103d70 2026-03-10T10:27:19.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.659+0000 7f7282ffd700 1 -- 192.168.123.102:0/713728915 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f727c00bc50 con 0x7f728c103d70 2026-03-10T10:27:19.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.660+0000 7f7282ffd700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7278077910 0x7f7278079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:19.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.660+0000 7f7282ffd700 1 -- 192.168.123.102:0/713728915 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f727c09b280 con 0x7f728c103d70 2026-03-10T10:27:19.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.660+0000 7f7291979700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7278077910 0x7f7278079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:19.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.661+0000 7f7291979700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7278077910 0x7f7278079dd0 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f728800f4d0 tx=0x7f7288005f90 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:19.666 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.663+0000 7f7282ffd700 1 -- 192.168.123.102:0/713728915 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 
72+0+195034 (secure 0 0 0) 0x7f727c0639b0 con 0x7f728c103d70 2026-03-10T10:27:19.789 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.788+0000 7f7293bdd700 1 -- 192.168.123.102:0/713728915 --> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f728c19e080 con 0x7f7278077910 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.794+0000 7f7282ffd700 1 -- 192.168.123.102:0/713728915 <== mgr.24549 v2:192.168.123.102:6800/4215644163 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f728c19e080 con 0x7f7278077910 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (92s) 80s ago 12m 15.7M - 0.25.0 c8568f914cd2 c99a3d6ff714 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (117s) 80s ago 12m 10.4M - 19.2.3-678-ge911bdeb 654f31e6858e 8087bcfa99e6 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm05 vm05 running (116s) 104s ago 12m 10.3M - 19.2.3-678-ge911bdeb 654f31e6858e 1a5d21254a78 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (6m) 80s ago 12m 7838k - 19.2.3-678-ge911bdeb 654f31e6858e c494730ab019 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm05 vm05 running (6m) 104s ago 11m 7852k - 19.2.3-678-ge911bdeb 654f31e6858e 1dc17b49fee4 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (82s) 80s ago 12m 42.0M - 10.4.0 c8b91775d855 8096f78367f5 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.stcvsz vm02 running (2m) 80s ago 10m 92.0M - 19.2.3-678-ge911bdeb 654f31e6858e 
5e606b7866f6 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm02.zymcrs vm02 running (2m) 80s ago 10m 101M - 19.2.3-678-ge911bdeb 654f31e6858e f748fd699eac 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.liatdh vm05 running (2m) 104s ago 10m 17.4M - 19.2.3-678-ge911bdeb 654f31e6858e 3385525a533a 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:mds.cephfs.vm05.sudjys vm05 running (2m) 104s ago 10m 13.7M - 19.2.3-678-ge911bdeb 654f31e6858e 16684b1d1299 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.zmavgl vm02 *:8443,9283,8765 running (7m) 80s ago 13m 638M - 19.2.3-678-ge911bdeb 654f31e6858e 68cc2cd6b2d7 2026-03-10T10:27:19.795 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm05.coparq vm05 *:8443,9283,8765 running (6m) 104s ago 11m 490M - 19.2.3-678-ge911bdeb 654f31e6858e 6bb8d736ce66 2026-03-10T10:27:19.796 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (6m) 80s ago 13m 67.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1a2a2cb182f4 2026-03-10T10:27:19.796 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm05 vm05 running (6m) 104s ago 11m 54.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 3fb75dafefb6 2026-03-10T10:27:19.796 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (109s) 80s ago 12m 6723k - 1.7.0 72c9c2088986 d9fc4bda3e14 2026-03-10T10:27:19.796 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm05 vm05 *:9100 running (105s) 104s ago 11m 2911k - 1.7.0 72c9c2088986 7fe7006c8ad2 2026-03-10T10:27:19.796 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (6m) 80s ago 11m 233M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 319155aac718 2026-03-10T10:27:19.796 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (4m) 80s ago 11m 134M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 6b6be7f62bd3 2026-03-10T10:27:19.796 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (3m) 80s ago 11m 127M 4096M 
19.2.3-678-ge911bdeb 654f31e6858e 745b9931485f 2026-03-10T10:27:19.796 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm05 running (3m) 104s ago 11m 172M 4096M 19.2.3-678-ge911bdeb 654f31e6858e fe29904ecf52 2026-03-10T10:27:19.796 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (3m) 104s ago 10m 149M 4096M 19.2.3-678-ge911bdeb 654f31e6858e fe0b3f802cec 2026-03-10T10:27:19.796 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (2m) 104s ago 10m 133M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c60f7383494f 2026-03-10T10:27:19.796 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (97s) 80s ago 12m 49.0M - 2.51.0 1d3b7f56885b 27a06ba517f6 2026-03-10T10:27:19.798 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.797+0000 7f7293bdd700 1 -- 192.168.123.102:0/713728915 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7278077910 msgr2=0x7f7278079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:19.798 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.797+0000 7f7293bdd700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7278077910 0x7f7278079dd0 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f728800f4d0 tx=0x7f7288005f90 comp rx=0 tx=0).stop 2026-03-10T10:27:19.798 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.797+0000 7f7293bdd700 1 -- 192.168.123.102:0/713728915 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f728c103d70 msgr2=0x7f728c199370 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:19.798 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.797+0000 7f7293bdd700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f728c103d70 0x7f728c199370 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f727c009fd0 tx=0x7f727c004930 comp rx=0 
tx=0).stop 2026-03-10T10:27:19.798 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.797+0000 7f7293bdd700 1 -- 192.168.123.102:0/713728915 shutdown_connections 2026-03-10T10:27:19.799 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.797+0000 7f7293bdd700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f728c1033c0 0x7f728c198e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:19.799 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.797+0000 7f7293bdd700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7278077910 0x7f7278079dd0 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:19.799 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.797+0000 7f7293bdd700 1 --2- 192.168.123.102:0/713728915 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f728c103d70 0x7f728c199370 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:19.799 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.797+0000 7f7293bdd700 1 -- 192.168.123.102:0/713728915 >> 192.168.123.102:0/713728915 conn(0x7f728c0fec30 msgr2=0x7f728c1001e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:19.799 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.797+0000 7f7293bdd700 1 -- 192.168.123.102:0/713728915 shutdown_connections 2026-03-10T10:27:19.799 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:19.797+0000 7f7293bdd700 1 -- 192.168.123.102:0/713728915 wait complete. 
2026-03-10T10:27:19.924 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:19 vm02.local ceph-mon[110129]: pgmap v243: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:27:19.927 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions' 2026-03-10T10:27:20.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:19 vm05.local ceph-mon[103593]: pgmap v243: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:27:20.100 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:20.391 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.389+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/2563631978 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7cdc103cf0 msgr2=0x7f7cdc107d40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:20.391 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.389+0000 7f7ce1ac8700 1 --2- 192.168.123.102:0/2563631978 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7cdc103cf0 0x7f7cdc107d40 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7f7ccc009b00 tx=0x7f7ccc009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:20.391 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.390+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/2563631978 shutdown_connections 2026-03-10T10:27:20.391 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.390+0000 7f7ce1ac8700 1 --2- 192.168.123.102:0/2563631978 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7cdc103cf0 0x7f7cdc107d40 unknown :-1 s=CLOSED pgs=142 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:20.391 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.390+0000 7f7ce1ac8700 1 --2- 192.168.123.102:0/2563631978 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7cdc103340 0x7f7cdc103720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:20.391 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.390+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/2563631978 >> 192.168.123.102:0/2563631978 conn(0x7f7cdc0febd0 msgr2=0x7f7cdc100ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:20.392 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.390+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/2563631978 shutdown_connections 2026-03-10T10:27:20.392 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.390+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/2563631978 wait complete. 2026-03-10T10:27:20.392 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.391+0000 7f7ce1ac8700 1 Processor -- start 2026-03-10T10:27:20.392 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.391+0000 7f7ce1ac8700 1 -- start start 2026-03-10T10:27:20.392 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.391+0000 7f7ce1ac8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7cdc103340 0x7f7cdc198e20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:20.393 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.391+0000 7f7ce1ac8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7cdc103cf0 0x7f7cdc199360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:20.393 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.391+0000 7f7ce1ac8700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7cdc1999b0 con 0x7f7cdc103cf0 
2026-03-10T10:27:20.393 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.391+0000 7f7ce1ac8700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7cdc199af0 con 0x7f7cdc103340 2026-03-10T10:27:20.393 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.392+0000 7f7cdaffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7cdc103cf0 0x7f7cdc199360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:20.393 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.392+0000 7f7cdaffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7cdc103cf0 0x7f7cdc199360 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:57768/0 (socket says 192.168.123.102:57768) 2026-03-10T10:27:20.393 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.392+0000 7f7cdaffd700 1 -- 192.168.123.102:0/842029415 learned_addr learned my addr 192.168.123.102:0/842029415 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:20.393 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.392+0000 7f7cdb7fe700 1 --2- 192.168.123.102:0/842029415 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7cdc103340 0x7f7cdc198e20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:20.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.392+0000 7f7cdb7fe700 1 -- 192.168.123.102:0/842029415 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7cdc103cf0 msgr2=0x7f7cdc199360 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:20.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.393+0000 
7f7cdb7fe700 1 --2- 192.168.123.102:0/842029415 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7cdc103cf0 0x7f7cdc199360 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:20.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.393+0000 7f7cdb7fe700 1 -- 192.168.123.102:0/842029415 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7ccc0097e0 con 0x7f7cdc103340 2026-03-10T10:27:20.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.393+0000 7f7cdaffd700 1 --2- 192.168.123.102:0/842029415 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7cdc103cf0 0x7f7cdc199360 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-10T10:27:20.394 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.393+0000 7f7cdb7fe700 1 --2- 192.168.123.102:0/842029415 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7cdc103340 0x7f7cdc198e20 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f7cc400c8f0 tx=0x7f7cc400ccb0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:20.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.394+0000 7f7cd8ff9700 1 -- 192.168.123.102:0/842029415 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7cc40043f0 con 0x7f7cdc103340 2026-03-10T10:27:20.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.394+0000 7f7cd8ff9700 1 -- 192.168.123.102:0/842029415 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f7cc400ce90 con 0x7f7cdc103340 2026-03-10T10:27:20.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.394+0000 7f7cd8ff9700 1 -- 192.168.123.102:0/842029415 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 
==== 327+0+0 (secure 0 0 0) 0x7f7cc4003de0 con 0x7f7cdc103340 2026-03-10T10:27:20.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.396+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/842029415 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7cdc19d940 con 0x7f7cdc103340 2026-03-10T10:27:20.397 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.396+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/842029415 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7cdc19de60 con 0x7f7cdc103340 2026-03-10T10:27:20.398 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.396+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/842029415 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7cdc04ea90 con 0x7f7cdc103340 2026-03-10T10:27:20.399 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.398+0000 7f7cd8ff9700 1 -- 192.168.123.102:0/842029415 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f7cc4007690 con 0x7f7cdc103340 2026-03-10T10:27:20.399 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.398+0000 7f7cd8ff9700 1 --2- 192.168.123.102:0/842029415 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7cc8077910 0x7f7cc8079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:20.399 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.398+0000 7f7cd8ff9700 1 -- 192.168.123.102:0/842029415 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f7cc4099ee0 con 0x7f7cdc103340 2026-03-10T10:27:20.399 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.398+0000 7f7cdaffd700 1 --2- 192.168.123.102:0/842029415 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] 
conn(0x7f7cc8077910 0x7f7cc8079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:20.400 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.399+0000 7f7cdaffd700 1 --2- 192.168.123.102:0/842029415 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7cc8077910 0x7f7cc8079dd0 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f7ccc00b5c0 tx=0x7f7ccc005fb0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:20.403 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.402+0000 7f7cd8ff9700 1 -- 192.168.123.102:0/842029415 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f7cc40625e0 con 0x7f7cdc103340 2026-03-10T10:27:20.565 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.563+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/842029415 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f7cdc19e140 con 0x7f7cdc103340 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.564+0000 7f7cd8ff9700 1 -- 192.168.123.102:0/842029415 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f7cc4004550 con 0x7f7cdc103340 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T10:27:20.566 
INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: "mds": { 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 14 2026-03-10T10:27:20.566 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T10:27:20.567 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T10:27:20.569 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.568+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/842029415 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7cc8077910 msgr2=0x7f7cc8079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:20.569 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.568+0000 7f7ce1ac8700 1 --2- 192.168.123.102:0/842029415 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7cc8077910 0x7f7cc8079dd0 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f7ccc00b5c0 tx=0x7f7ccc005fb0 comp rx=0 tx=0).stop 2026-03-10T10:27:20.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.568+0000 7f7ce1ac8700 1 -- 
192.168.123.102:0/842029415 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7cdc103340 msgr2=0x7f7cdc198e20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:20.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.568+0000 7f7ce1ac8700 1 --2- 192.168.123.102:0/842029415 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7cdc103340 0x7f7cdc198e20 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f7cc400c8f0 tx=0x7f7cc400ccb0 comp rx=0 tx=0).stop 2026-03-10T10:27:20.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.569+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/842029415 shutdown_connections 2026-03-10T10:27:20.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.569+0000 7f7ce1ac8700 1 --2- 192.168.123.102:0/842029415 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7cc8077910 0x7f7cc8079dd0 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:20.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.569+0000 7f7ce1ac8700 1 --2- 192.168.123.102:0/842029415 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7cdc103340 0x7f7cdc198e20 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:20.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.569+0000 7f7ce1ac8700 1 --2- 192.168.123.102:0/842029415 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7cdc103cf0 0x7f7cdc199360 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:20.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.569+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/842029415 >> 192.168.123.102:0/842029415 conn(0x7f7cdc0febd0 msgr2=0x7f7cdc100180 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:20.570 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.569+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/842029415 shutdown_connections 2026-03-10T10:27:20.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:20.569+0000 7f7ce1ac8700 1 -- 192.168.123.102:0/842029415 wait complete. 2026-03-10T10:27:20.653 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | length == 1'"'"'' 2026-03-10T10:27:20.840 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:20.877 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:20 vm02.local ceph-mon[110129]: from='client.44363 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:27:20.877 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:20 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/842029415' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:27:21.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:20 vm05.local ceph-mon[103593]: from='client.44363 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:27:21.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:20 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/842029415' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.164+0000 7f2d633ea700 1 -- 192.168.123.102:0/1257367441 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2d5c10d0f0 msgr2=0x7f2d5c10d570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.164+0000 7f2d633ea700 1 --2- 192.168.123.102:0/1257367441 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2d5c10d0f0 0x7f2d5c10d570 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f2d4c009b00 tx=0x7f2d4c009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.164+0000 7f2d633ea700 1 -- 192.168.123.102:0/1257367441 shutdown_connections 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.164+0000 7f2d633ea700 1 --2- 192.168.123.102:0/1257367441 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2d5c10d0f0 0x7f2d5c10d570 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.164+0000 7f2d633ea700 1 --2- 192.168.123.102:0/1257367441 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d5c10f340 0x7f2d5c10f720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.164+0000 7f2d633ea700 1 -- 192.168.123.102:0/1257367441 >> 192.168.123.102:0/1257367441 conn(0x7f2d5c06ce20 msgr2=0x7f2d5c06d230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.165+0000 7f2d633ea700 1 -- 192.168.123.102:0/1257367441 shutdown_connections 2026-03-10T10:27:21.170 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.165+0000 7f2d633ea700 1 -- 192.168.123.102:0/1257367441 wait complete. 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.165+0000 7f2d633ea700 1 Processor -- start 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.165+0000 7f2d633ea700 1 -- start start 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.165+0000 7f2d633ea700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d5c10d0f0 0x7f2d5c11bfa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.165+0000 7f2d633ea700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2d5c10f340 0x7f2d5c116fa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.165+0000 7f2d633ea700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2d5c117570 con 0x7f2d5c10d0f0 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.165+0000 7f2d633ea700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2d5c1176e0 con 0x7f2d5c10f340 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.166+0000 7f2d61be7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2d5c10f340 0x7f2d5c116fa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.166+0000 7f2d61be7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2d5c10f340 0x7f2d5c116fa0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:41268/0 (socket says 192.168.123.102:41268) 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.166+0000 7f2d61be7700 1 -- 192.168.123.102:0/230926371 learned_addr learned my addr 192.168.123.102:0/230926371 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.166+0000 7f2d623e8700 1 --2- 192.168.123.102:0/230926371 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d5c10d0f0 0x7f2d5c11bfa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.166+0000 7f2d61be7700 1 -- 192.168.123.102:0/230926371 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d5c10d0f0 msgr2=0x7f2d5c11bfa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.166+0000 7f2d61be7700 1 --2- 192.168.123.102:0/230926371 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d5c10d0f0 0x7f2d5c11bfa0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.166+0000 7f2d61be7700 1 -- 192.168.123.102:0/230926371 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2d4c0097e0 con 0x7f2d5c10f340 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.167+0000 7f2d61be7700 1 --2- 192.168.123.102:0/230926371 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2d5c10f340 0x7f2d5c116fa0 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto 
rx=0x7f2d4c0048c0 tx=0x7f2d4c0048f0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.167+0000 7f2d537fe700 1 -- 192.168.123.102:0/230926371 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2d4c01d070 con 0x7f2d5c10f340 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.167+0000 7f2d633ea700 1 -- 192.168.123.102:0/230926371 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2d5c117960 con 0x7f2d5c10f340 2026-03-10T10:27:21.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.167+0000 7f2d633ea700 1 -- 192.168.123.102:0/230926371 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2d5c077ff0 con 0x7f2d5c10f340 2026-03-10T10:27:21.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.167+0000 7f2d537fe700 1 -- 192.168.123.102:0/230926371 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f2d4c022470 con 0x7f2d5c10f340 2026-03-10T10:27:21.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.168+0000 7f2d537fe700 1 -- 192.168.123.102:0/230926371 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2d4c00f670 con 0x7f2d5c10f340 2026-03-10T10:27:21.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.170+0000 7f2d537fe700 1 -- 192.168.123.102:0/230926371 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f2d4c00f7d0 con 0x7f2d5c10f340 2026-03-10T10:27:21.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.170+0000 7f2d537fe700 1 --2- 192.168.123.102:0/230926371 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2d48077990 0x7f2d48079e50 unknown :-1 s=NONE pgs=0 
cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:21.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.170+0000 7f2d537fe700 1 -- 192.168.123.102:0/230926371 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f2d4c09c120 con 0x7f2d5c10f340 2026-03-10T10:27:21.174 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.172+0000 7f2d623e8700 1 --2- 192.168.123.102:0/230926371 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2d48077990 0x7f2d48079e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:21.174 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.172+0000 7f2d633ea700 1 -- 192.168.123.102:0/230926371 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2d40005320 con 0x7f2d5c10f340 2026-03-10T10:27:21.174 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.173+0000 7f2d623e8700 1 --2- 192.168.123.102:0/230926371 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2d48077990 0x7f2d48079e50 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f2d58009510 tx=0x7f2d580093a0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:21.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.177+0000 7f2d537fe700 1 -- 192.168.123.102:0/230926371 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f2d4c0648d0 con 0x7f2d5c10f340 2026-03-10T10:27:21.351 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.349+0000 7f2d633ea700 1 -- 192.168.123.102:0/230926371 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": 
"versions"} v 0) v1 -- 0x7f2d40006200 con 0x7f2d5c10f340 2026-03-10T10:27:21.353 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.351+0000 7f2d537fe700 1 -- 192.168.123.102:0/230926371 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f2d4c027070 con 0x7f2d5c10f340 2026-03-10T10:27:21.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.354+0000 7f2d517fa700 1 -- 192.168.123.102:0/230926371 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2d48077990 msgr2=0x7f2d48079e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:21.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.354+0000 7f2d517fa700 1 --2- 192.168.123.102:0/230926371 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2d48077990 0x7f2d48079e50 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f2d58009510 tx=0x7f2d580093a0 comp rx=0 tx=0).stop 2026-03-10T10:27:21.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.355+0000 7f2d517fa700 1 -- 192.168.123.102:0/230926371 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2d5c10f340 msgr2=0x7f2d5c116fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:21.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.355+0000 7f2d517fa700 1 --2- 192.168.123.102:0/230926371 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2d5c10f340 0x7f2d5c116fa0 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f2d4c0048c0 tx=0x7f2d4c0048f0 comp rx=0 tx=0).stop 2026-03-10T10:27:21.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.355+0000 7f2d517fa700 1 -- 192.168.123.102:0/230926371 shutdown_connections 2026-03-10T10:27:21.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.355+0000 7f2d517fa700 1 --2- 192.168.123.102:0/230926371 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2d5c10d0f0 0x7f2d5c11bfa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:21.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.355+0000 7f2d517fa700 1 --2- 192.168.123.102:0/230926371 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2d48077990 0x7f2d48079e50 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:21.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.355+0000 7f2d517fa700 1 --2- 192.168.123.102:0/230926371 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2d5c10f340 0x7f2d5c116fa0 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:21.356 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.355+0000 7f2d517fa700 1 -- 192.168.123.102:0/230926371 >> 192.168.123.102:0/230926371 conn(0x7f2d5c06ce20 msgr2=0x7f2d5c070510 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:21.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.355+0000 7f2d517fa700 1 -- 192.168.123.102:0/230926371 shutdown_connections 2026-03-10T10:27:21.357 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.356+0000 7f2d517fa700 1 -- 192.168.123.102:0/230926371 wait complete. 
2026-03-10T10:27:21.373 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T10:27:21.418 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | keys'"'"' | grep $sha1' 2026-03-10T10:27:21.622 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:21.962 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:21 vm02.local ceph-mon[110129]: pgmap v244: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:21.962 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:21 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/230926371' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:27:21.962 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:27:21.962 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.960+0000 7fd3886af700 1 -- 192.168.123.102:0/1594240429 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd380100b50 msgr2=0x7fd380104a40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:21.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.960+0000 7fd3886af700 1 --2- 192.168.123.102:0/1594240429 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd380100b50 0x7fd380104a40 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7fd370009b50 tx=0x7fd370009e60 comp rx=0 tx=0).stop 2026-03-10T10:27:21.963 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.960+0000 7fd3886af700 1 -- 192.168.123.102:0/1594240429 shutdown_connections 2026-03-10T10:27:21.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.960+0000 7fd3886af700 1 --2- 192.168.123.102:0/1594240429 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd380100b50 0x7fd380104a40 secure :-1 s=CLOSED pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7fd370009b50 tx=0x7fd370009e60 comp rx=0 tx=0).stop 2026-03-10T10:27:21.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.960+0000 7fd3886af700 1 --2- 192.168.123.102:0/1594240429 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd3801001a0 0x7fd380100580 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:21.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.960+0000 7fd3886af700 1 -- 192.168.123.102:0/1594240429 >> 192.168.123.102:0/1594240429 conn(0x7fd380075960 msgr2=0x7fd380075d70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:21.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.962+0000 7fd3886af700 1 -- 192.168.123.102:0/1594240429 shutdown_connections 2026-03-10T10:27:21.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.962+0000 7fd3886af700 1 -- 192.168.123.102:0/1594240429 wait complete. 
2026-03-10T10:27:21.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.963+0000 7fd3886af700 1 Processor -- start 2026-03-10T10:27:21.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.963+0000 7fd3886af700 1 -- start start 2026-03-10T10:27:21.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.963+0000 7fd3886af700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd3801001a0 0x7fd380199100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:21.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.963+0000 7fd3886af700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd380199640 0x7fd38019dab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:21.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.963+0000 7fd3886af700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd380199c60 con 0x7fd3801001a0 2026-03-10T10:27:21.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.963+0000 7fd3886af700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd380199dd0 con 0x7fd380199640 2026-03-10T10:27:21.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.963+0000 7fd38644b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd3801001a0 0x7fd380199100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:21.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.963+0000 7fd38644b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd3801001a0 0x7fd380199100 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:57804/0 (socket says 192.168.123.102:57804) 2026-03-10T10:27:21.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.963+0000 7fd38644b700 1 -- 192.168.123.102:0/3076095282 learned_addr learned my addr 192.168.123.102:0/3076095282 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:21.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.963+0000 7fd385c4a700 1 --2- 192.168.123.102:0/3076095282 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd380199640 0x7fd38019dab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:21.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.964+0000 7fd38644b700 1 -- 192.168.123.102:0/3076095282 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd380199640 msgr2=0x7fd38019dab0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:21.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.964+0000 7fd38644b700 1 --2- 192.168.123.102:0/3076095282 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd380199640 0x7fd38019dab0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:21.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.964+0000 7fd38644b700 1 -- 192.168.123.102:0/3076095282 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd3700097e0 con 0x7fd3801001a0 2026-03-10T10:27:21.965 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.964+0000 7fd38644b700 1 --2- 192.168.123.102:0/3076095282 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd3801001a0 0x7fd380199100 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7fd37c00ed70 tx=0x7fd37c00c5b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-10T10:27:21.967 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.964+0000 7fd3777fe700 1 -- 192.168.123.102:0/3076095282 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd37c00cd70 con 0x7fd3801001a0 2026-03-10T10:27:21.967 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.964+0000 7fd3777fe700 1 -- 192.168.123.102:0/3076095282 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fd37c00eec0 con 0x7fd3801001a0 2026-03-10T10:27:21.967 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.964+0000 7fd3777fe700 1 -- 192.168.123.102:0/3076095282 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd37c0188b0 con 0x7fd3801001a0 2026-03-10T10:27:21.967 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.964+0000 7fd3886af700 1 -- 192.168.123.102:0/3076095282 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd38019e0b0 con 0x7fd3801001a0 2026-03-10T10:27:21.967 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.964+0000 7fd3886af700 1 -- 192.168.123.102:0/3076095282 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd38019e520 con 0x7fd3801001a0 2026-03-10T10:27:21.968 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.965+0000 7fd3886af700 1 -- 192.168.123.102:0/3076095282 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd38010a120 con 0x7fd3801001a0 2026-03-10T10:27:21.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.969+0000 7fd3777fe700 1 -- 192.168.123.102:0/3076095282 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fd37c018a10 con 0x7fd3801001a0 2026-03-10T10:27:21.970 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.969+0000 
7fd3777fe700 1 --2- 192.168.123.102:0/3076095282 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fd36c0779e0 0x7fd36c079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:21.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.969+0000 7fd385c4a700 1 --2- 192.168.123.102:0/3076095282 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fd36c0779e0 0x7fd36c079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:21.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.970+0000 7fd3777fe700 1 -- 192.168.123.102:0/3076095282 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7fd37c014070 con 0x7fd3801001a0 2026-03-10T10:27:21.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.970+0000 7fd3777fe700 1 -- 192.168.123.102:0/3076095282 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fd37c062df0 con 0x7fd3801001a0 2026-03-10T10:27:21.971 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:21.970+0000 7fd385c4a700 1 --2- 192.168.123.102:0/3076095282 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fd36c0779e0 0x7fd36c079ea0 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fd370000c00 tx=0x7fd370005fd0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:22.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:21 vm05.local ceph-mon[103593]: pgmap v244: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:22.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:21 vm05.local ceph-mon[103593]: 
from='client.? 192.168.123.102:0/230926371' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:27:22.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:27:22.141 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.140+0000 7fd3886af700 1 -- 192.168.123.102:0/3076095282 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fd38004ea90 con 0x7fd3801001a0 2026-03-10T10:27:22.142 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.141+0000 7fd3777fe700 1 -- 192.168.123.102:0/3076095282 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7fd37c062540 con 0x7fd3801001a0 2026-03-10T10:27:22.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.144+0000 7fd3886af700 1 -- 192.168.123.102:0/3076095282 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fd36c0779e0 msgr2=0x7fd36c079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:22.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.144+0000 7fd3886af700 1 --2- 192.168.123.102:0/3076095282 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fd36c0779e0 0x7fd36c079ea0 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fd370000c00 tx=0x7fd370005fd0 comp rx=0 tx=0).stop 2026-03-10T10:27:22.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.144+0000 7fd3886af700 1 -- 192.168.123.102:0/3076095282 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd3801001a0 msgr2=0x7fd380199100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:22.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.144+0000 
7fd3886af700 1 --2- 192.168.123.102:0/3076095282 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd3801001a0 0x7fd380199100 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7fd37c00ed70 tx=0x7fd37c00c5b0 comp rx=0 tx=0).stop 2026-03-10T10:27:22.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.145+0000 7fd3886af700 1 -- 192.168.123.102:0/3076095282 shutdown_connections 2026-03-10T10:27:22.146 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.145+0000 7fd3886af700 1 --2- 192.168.123.102:0/3076095282 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fd3801001a0 0x7fd380199100 unknown :-1 s=CLOSED pgs=144 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:22.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.145+0000 7fd3886af700 1 --2- 192.168.123.102:0/3076095282 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fd36c0779e0 0x7fd36c079ea0 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:22.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.145+0000 7fd3886af700 1 --2- 192.168.123.102:0/3076095282 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd380199640 0x7fd38019dab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:22.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.145+0000 7fd3886af700 1 -- 192.168.123.102:0/3076095282 >> 192.168.123.102:0/3076095282 conn(0x7fd380075960 msgr2=0x7fd3800feab0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:22.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.145+0000 7fd3886af700 1 -- 192.168.123.102:0/3076095282 shutdown_connections 2026-03-10T10:27:22.147 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.146+0000 7fd3886af700 1 -- 192.168.123.102:0/3076095282 wait complete. 
2026-03-10T10:27:22.160 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)" 2026-03-10T10:27:22.221 DEBUG:teuthology.parallel:result is None 2026-03-10T10:27:22.221 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-10T10:27:22.224 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local 2026-03-10T10:27:22.224 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- bash -c 'ceph fs dump' 2026-03-10T10:27:22.401 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:22.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.698+0000 7f0b37fff700 1 -- 192.168.123.102:0/273543398 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b38101d40 msgr2=0x7f0b38102120 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:22.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.698+0000 7f0b37fff700 1 --2- 192.168.123.102:0/273543398 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b38101d40 0x7f0b38102120 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7f0b20009b00 tx=0x7f0b20009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:22.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.699+0000 7f0b37fff700 1 -- 192.168.123.102:0/273543398 shutdown_connections 2026-03-10T10:27:22.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.699+0000 7f0b37fff700 1 --2- 192.168.123.102:0/273543398 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0b381026f0 0x7f0b3810abe0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:22.701 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.699+0000 7f0b37fff700 1 --2- 192.168.123.102:0/273543398 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b38101d40 0x7f0b38102120 unknown :-1 s=CLOSED pgs=145 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:22.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.699+0000 7f0b37fff700 1 -- 192.168.123.102:0/273543398 >> 192.168.123.102:0/273543398 conn(0x7f0b380fb640 msgr2=0x7f0b380fda60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:22.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.699+0000 7f0b37fff700 1 -- 192.168.123.102:0/273543398 shutdown_connections 2026-03-10T10:27:22.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.700+0000 7f0b37fff700 1 -- 192.168.123.102:0/273543398 wait complete. 2026-03-10T10:27:22.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.700+0000 7f0b37fff700 1 Processor -- start 2026-03-10T10:27:22.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.700+0000 7f0b37fff700 1 -- start start 2026-03-10T10:27:22.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.700+0000 7f0b37fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b38101d40 0x7f0b380ffa30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:22.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.700+0000 7f0b37fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0b381026f0 0x7f0b380fff70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:22.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.700+0000 7f0b37fff700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0b381004b0 con 0x7f0b38101d40 2026-03-10T10:27:22.703 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.700+0000 7f0b37fff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0b381005f0 con 0x7f0b381026f0 2026-03-10T10:27:22.705 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.701+0000 7f0b36ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b38101d40 0x7f0b380ffa30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:22.705 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.701+0000 7f0b36ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b38101d40 0x7f0b380ffa30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:57812/0 (socket says 192.168.123.102:57812) 2026-03-10T10:27:22.705 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.701+0000 7f0b36ffd700 1 -- 192.168.123.102:0/98858069 learned_addr learned my addr 192.168.123.102:0/98858069 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:22.706 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.701+0000 7f0b36ffd700 1 -- 192.168.123.102:0/98858069 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0b381026f0 msgr2=0x7f0b380fff70 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:27:22.706 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.701+0000 7f0b36ffd700 1 --2- 192.168.123.102:0/98858069 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0b381026f0 0x7f0b380fff70 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:22.706 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.701+0000 7f0b36ffd700 1 -- 192.168.123.102:0/98858069 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0b200097e0 con 0x7f0b38101d40 2026-03-10T10:27:22.706 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.701+0000 7f0b36ffd700 1 --2- 192.168.123.102:0/98858069 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b38101d40 0x7f0b380ffa30 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7f0b20005950 tx=0x7f0b20004990 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:22.706 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.702+0000 7f0b2ffff700 1 -- 192.168.123.102:0/98858069 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0b2001d070 con 0x7f0b38101d40 2026-03-10T10:27:22.706 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.702+0000 7f0b2ffff700 1 -- 192.168.123.102:0/98858069 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f0b2000bc50 con 0x7f0b38101d40 2026-03-10T10:27:22.706 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.702+0000 7f0b2ffff700 1 -- 192.168.123.102:0/98858069 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0b2000f740 con 0x7f0b38101d40 2026-03-10T10:27:22.706 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.702+0000 7f0b37fff700 1 -- 192.168.123.102:0/98858069 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0b38100870 con 0x7f0b38101d40 2026-03-10T10:27:22.707 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.702+0000 7f0b37fff700 1 -- 192.168.123.102:0/98858069 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0b381a2ea0 con 0x7f0b38101d40 2026-03-10T10:27:22.707 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.704+0000 7f0b2ffff700 1 -- 
192.168.123.102:0/98858069 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f0b2000f8a0 con 0x7f0b38101d40 2026-03-10T10:27:22.707 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.704+0000 7f0b2ffff700 1 --2- 192.168.123.102:0/98858069 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0b24077990 0x7f0b24079e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:22.707 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.704+0000 7f0b2ffff700 1 -- 192.168.123.102:0/98858069 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f0b2009c360 con 0x7f0b38101d40 2026-03-10T10:27:22.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.705+0000 7f0b367fc700 1 --2- 192.168.123.102:0/98858069 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0b24077990 0x7f0b24079e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:22.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.706+0000 7f0b37fff700 1 -- 192.168.123.102:0/98858069 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0b381082e0 con 0x7f0b38101d40 2026-03-10T10:27:22.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.709+0000 7f0b2ffff700 1 -- 192.168.123.102:0/98858069 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f0b20064b10 con 0x7f0b38101d40 2026-03-10T10:27:22.710 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.709+0000 7f0b367fc700 1 --2- 192.168.123.102:0/98858069 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] 
conn(0x7f0b24077990 0x7f0b24079e50 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7f0b381011d0 tx=0x7f0b28006cb0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:22.750 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:22 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/3076095282' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:27:22.863 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.861+0000 7f0b37fff700 1 -- 192.168.123.102:0/98858069 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f0b3804f2e0 con 0x7f0b38101d40 2026-03-10T10:27:22.863 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.862+0000 7f0b2ffff700 1 -- 192.168.123.102:0/98858069 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 40 v40) v1 ==== 76+0+1999 (secure 0 0 0) 0x7f0b20027020 con 0x7f0b38101d40 2026-03-10T10:27:22.864 INFO:teuthology.orchestra.run.vm02.stdout:e40 2026-03-10T10:27:22.864 INFO:teuthology.orchestra.run.vm02.stdout:btime 2026-03-10T10:25:20:367937+0000 2026-03-10T10:27:22.864 INFO:teuthology.orchestra.run.vm02.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:27:22.864 INFO:teuthology.orchestra.run.vm02.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:legacy client fscid: 1 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:Filesystem 'cephfs' (1) 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:fs_name cephfs 2026-03-10T10:27:22.865 
INFO:teuthology.orchestra.run.vm02.stdout:epoch 40 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:created 2026-03-10T10:16:53.248683+0000 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:modified 2026-03-10T10:25:20.367934+0000 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:tableserver 0 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:root 0 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:session_timeout 60 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:session_autoclose 300 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:max_file_size 1099511627776 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:max_xattr_size 65536 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:required_client_features {} 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:last_failure 0 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:last_failure_osd_epoch 83 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:max_mds 1 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:in 0 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:up {0=34360} 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:failed 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:damaged 2026-03-10T10:27:22.865 
INFO:teuthology.orchestra.run.vm02.stdout:stopped 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:data_pools [3] 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:metadata_pool 2 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:inline_data disabled 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:balancer 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:bal_rank_mask -1 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:standby_count_wanted 1 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:qdb_cluster leader: 34360 members: 34360 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.zymcrs{0:34360} state up:active seq 11 join_fscid=1 addr [v2:192.168.123.102:6826/965109167,v1:192.168.123.102:6827/965109167] compat {c=[1],r=[1],i=[1fff]}] 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm02.stcvsz{0:34364} state up:standby-replay seq 1 join_fscid=1 addr [v2:192.168.123.102:6828/3727526116,v1:192.168.123.102:6829/3727526116] compat {c=[1],r=[1],i=[1fff]}] 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout:Standby daemons: 2026-03-10T10:27:22.865 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:22.866 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.liatdh{-1:34368} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6824/462039658,v1:192.168.123.105:6825/462039658] compat {c=[1],r=[1],i=[1fff]}] 2026-03-10T10:27:22.866 INFO:teuthology.orchestra.run.vm02.stdout:[mds.cephfs.vm05.sudjys{-1:44325} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6826/682293963,v1:192.168.123.105:6827/682293963] compat {c=[1],r=[1],i=[1fff]}] 2026-03-10T10:27:22.869 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.867+0000 7f0b37fff700 1 -- 192.168.123.102:0/98858069 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0b24077990 msgr2=0x7f0b24079e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:22.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.867+0000 7f0b37fff700 1 --2- 192.168.123.102:0/98858069 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0b24077990 0x7f0b24079e50 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7f0b381011d0 tx=0x7f0b28006cb0 comp rx=0 tx=0).stop 2026-03-10T10:27:22.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.867+0000 7f0b37fff700 1 -- 192.168.123.102:0/98858069 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b38101d40 msgr2=0x7f0b380ffa30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:22.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.867+0000 7f0b37fff700 1 --2- 192.168.123.102:0/98858069 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b38101d40 0x7f0b380ffa30 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7f0b20005950 tx=0x7f0b20004990 comp rx=0 tx=0).stop 2026-03-10T10:27:22.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.867+0000 7f0b37fff700 1 -- 192.168.123.102:0/98858069 shutdown_connections 2026-03-10T10:27:22.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.867+0000 7f0b37fff700 1 --2- 192.168.123.102:0/98858069 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0b38101d40 0x7f0b380ffa30 unknown :-1 s=CLOSED pgs=146 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:22.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.867+0000 7f0b37fff700 1 --2- 192.168.123.102:0/98858069 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0b24077990 
0x7f0b24079e50 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:22.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.867+0000 7f0b37fff700 1 --2- 192.168.123.102:0/98858069 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0b381026f0 0x7f0b380fff70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:22.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.867+0000 7f0b37fff700 1 -- 192.168.123.102:0/98858069 >> 192.168.123.102:0/98858069 conn(0x7f0b380fb640 msgr2=0x7f0b38105420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:22.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.867+0000 7f0b37fff700 1 -- 192.168.123.102:0/98858069 shutdown_connections 2026-03-10T10:27:22.869 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:22.868+0000 7f0b37fff700 1 -- 192.168.123.102:0/98858069 wait complete. 2026-03-10T10:27:22.870 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 40 2026-03-10T10:27:22.922 INFO:teuthology.run_tasks:Running task fs.post_upgrade_checks... 2026-03-10T10:27:22.927 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 2026-03-10T10:27:23.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:22 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/3076095282' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T10:27:23.100 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:23.373 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.371+0000 7f2b04031700 1 -- 192.168.123.102:0/1516401083 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2afc103cf0 msgr2=0x7f2afc107d40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:23.373 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.371+0000 7f2b04031700 1 --2- 192.168.123.102:0/1516401083 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2afc103cf0 0x7f2afc107d40 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7f2af8009b00 tx=0x7f2af8009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:23.374 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.372+0000 7f2b04031700 1 -- 192.168.123.102:0/1516401083 shutdown_connections 2026-03-10T10:27:23.374 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.372+0000 7f2b04031700 1 --2- 192.168.123.102:0/1516401083 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2afc103cf0 0x7f2afc107d40 unknown :-1 s=CLOSED pgs=147 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:23.374 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.372+0000 7f2b04031700 1 --2- 192.168.123.102:0/1516401083 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2afc103340 0x7f2afc103720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:23.374 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.372+0000 7f2b04031700 1 -- 192.168.123.102:0/1516401083 >> 192.168.123.102:0/1516401083 conn(0x7f2afc0feb90 msgr2=0x7f2afc100fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:23.374 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.373+0000 7f2b04031700 1 -- 192.168.123.102:0/1516401083 shutdown_connections 2026-03-10T10:27:23.374 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.373+0000 7f2b04031700 1 -- 192.168.123.102:0/1516401083 wait complete. 2026-03-10T10:27:23.375 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.373+0000 7f2b04031700 1 Processor -- start 2026-03-10T10:27:23.375 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 7f2b04031700 1 -- start start 2026-03-10T10:27:23.375 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 7f2b04031700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2afc103340 0x7f2afc198e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:23.375 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 7f2b04031700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2afc103cf0 0x7f2afc199380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:23.375 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 7f2b04031700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2afc199a60 con 0x7f2afc103340 2026-03-10T10:27:23.375 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 7f2b04031700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2afc19d7f0 con 0x7f2afc103cf0 2026-03-10T10:27:23.376 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 7f2b01dcd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2afc103340 0x7f2afc198e40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:23.377 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 7f2b01dcd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2afc103340 0x7f2afc198e40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:57824/0 (socket says 192.168.123.102:57824) 2026-03-10T10:27:23.377 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 7f2b01dcd700 1 -- 192.168.123.102:0/2976201194 learned_addr learned my addr 192.168.123.102:0/2976201194 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:23.377 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 7f2b01dcd700 1 -- 192.168.123.102:0/2976201194 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2afc103cf0 msgr2=0x7f2afc199380 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:27:23.378 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 7f2b015cc700 1 --2- 192.168.123.102:0/2976201194 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2afc103cf0 0x7f2afc199380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:23.378 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 7f2b01dcd700 1 --2- 192.168.123.102:0/2976201194 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2afc103cf0 0x7f2afc199380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:23.378 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 7f2b01dcd700 1 -- 192.168.123.102:0/2976201194 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2af80097e0 con 0x7f2afc103340 2026-03-10T10:27:23.378 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.374+0000 
7f2b01dcd700 1 --2- 192.168.123.102:0/2976201194 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2afc103340 0x7f2afc198e40 secure :-1 s=READY pgs=148 cs=0 l=1 rev1=1 crypto rx=0x7f2aec00ba70 tx=0x7f2aec00be30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:23.378 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.375+0000 7f2af2ffd700 1 -- 192.168.123.102:0/2976201194 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2aec00c780 con 0x7f2afc103340 2026-03-10T10:27:23.378 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.375+0000 7f2af2ffd700 1 -- 192.168.123.102:0/2976201194 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f2aec00cdc0 con 0x7f2afc103340 2026-03-10T10:27:23.379 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.375+0000 7f2af2ffd700 1 -- 192.168.123.102:0/2976201194 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2aec012550 con 0x7f2afc103340 2026-03-10T10:27:23.379 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.375+0000 7f2b04031700 1 -- 192.168.123.102:0/2976201194 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2afc19dad0 con 0x7f2afc103340 2026-03-10T10:27:23.379 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.375+0000 7f2b04031700 1 -- 192.168.123.102:0/2976201194 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2afc19df40 con 0x7f2afc103340 2026-03-10T10:27:23.379 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.376+0000 7f2af2ffd700 1 -- 192.168.123.102:0/2976201194 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f2aec0126b0 con 0x7f2afc103340 2026-03-10T10:27:23.379 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.376+0000 7f2af2ffd700 1 --2- 192.168.123.102:0/2976201194 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2ae80779e0 0x7f2ae8079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:23.380 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.377+0000 7f2af2ffd700 1 -- 192.168.123.102:0/2976201194 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f2aec099b60 con 0x7f2afc103340 2026-03-10T10:27:23.380 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.378+0000 7f2b015cc700 1 --2- 192.168.123.102:0/2976201194 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2ae80779e0 0x7f2ae8079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:23.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.378+0000 7f2b04031700 1 -- 192.168.123.102:0/2976201194 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2afc10b690 con 0x7f2afc103340 2026-03-10T10:27:23.381 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.380+0000 7f2b015cc700 1 --2- 192.168.123.102:0/2976201194 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2ae80779e0 0x7f2ae8079ea0 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f2af8006010 tx=0x7f2af8005c00 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:23.385 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.384+0000 7f2af2ffd700 1 -- 192.168.123.102:0/2976201194 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 
0x7f2aec062420 con 0x7f2afc103340 2026-03-10T10:27:23.531 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.529+0000 7f2b04031700 1 -- 192.168.123.102:0/2976201194 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "format": "json"} v 0) v1 -- 0x7f2afc04ea90 con 0x7f2afc103340 2026-03-10T10:27:23.531 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.530+0000 7f2af2ffd700 1 -- 192.168.123.102:0/2976201194 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "format": "json"}]=0 dumped fsmap epoch 40 v40) v1 ==== 94+0+5264 (secure 0 0 0) 0x7f2aec061b70 con 0x7f2afc103340 2026-03-10T10:27:23.531 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:23.532 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":40,"btime":"2026-03-10T10:25:20:367937+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34368,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/462039658","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":462039658},{"type":"v1","addr":"192.168.123.105:6825","nonce":462039658}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses 
inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":33},{"gid":44325,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/682293963","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":682293963},{"type":"v1","addr":"192.168.123.105:6827","nonce":682293963}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":37}],"filesystems":[{"mdsmap":{"epoch":40,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:25:20.367934+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":83,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout 
v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34360},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34360":{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":0,"incarnation":35,"state":"up:active","state_seq":11,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}},"gid_34364":{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":0,"incarnation":0,"state":"up:standby-replay","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34360,"qdb_cluster":[34360]},"id":1}]} 2026-03-10T10:27:23.534 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.533+0000 7f2b04031700 1 -- 192.168.123.102:0/2976201194 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2ae80779e0 msgr2=0x7f2ae8079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:23.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.533+0000 7f2b04031700 1 --2- 192.168.123.102:0/2976201194 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2ae80779e0 0x7f2ae8079ea0 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f2af8006010 tx=0x7f2af8005c00 comp rx=0 tx=0).stop 2026-03-10T10:27:23.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.533+0000 7f2b04031700 1 -- 192.168.123.102:0/2976201194 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2afc103340 msgr2=0x7f2afc198e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:23.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.533+0000 7f2b04031700 1 --2- 192.168.123.102:0/2976201194 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2afc103340 0x7f2afc198e40 secure :-1 s=READY pgs=148 cs=0 l=1 rev1=1 crypto rx=0x7f2aec00ba70 tx=0x7f2aec00be30 comp rx=0 tx=0).stop 2026-03-10T10:27:23.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.533+0000 7f2b04031700 1 -- 192.168.123.102:0/2976201194 shutdown_connections 2026-03-10T10:27:23.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.533+0000 7f2b04031700 1 --2- 192.168.123.102:0/2976201194 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2afc103340 0x7f2afc198e40 unknown :-1 s=CLOSED pgs=148 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:27:23.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.533+0000 7f2b04031700 1 --2- 192.168.123.102:0/2976201194 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2ae80779e0 0x7f2ae8079ea0 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:27:23.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.533+0000 7f2b04031700 1 --2- 192.168.123.102:0/2976201194 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2afc103cf0 0x7f2afc199380 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:27:23.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.533+0000 7f2b04031700 1 -- 192.168.123.102:0/2976201194 >> 192.168.123.102:0/2976201194 conn(0x7f2afc0feb90 msgr2=0x7f2afc100fb0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:27:23.535 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.534+0000 7f2b04031700 1 -- 192.168.123.102:0/2976201194 shutdown_connections
2026-03-10T10:27:23.536 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:23.534+0000 7f2b04031700 1 -- 192.168.123.102:0/2976201194 wait complete.
2026-03-10T10:27:23.536 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 40
2026-03-10T10:27:23.602 DEBUG:tasks.fs:checking fs fscid=1,name=cephfs state = {'epoch': 13, 'max_mds': 1, 'flags': 50}
2026-03-10T10:27:23.602 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 14
2026-03-10T10:27:23.753 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:23 vm02.local ceph-mon[110129]: pgmap v245: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:27:23.753 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:23 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/98858069' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-10T10:27:23.753 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:23 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/2976201194' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch
2026-03-10T10:27:23.760 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config
2026-03-10T10:27:24.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:23 vm05.local ceph-mon[103593]: pgmap v245: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:27:24.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:23 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/98858069' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-10T10:27:24.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:23 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/2976201194' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch
2026-03-10T10:27:24.065 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.064+0000 7ff6b676a700 1 -- 192.168.123.102:0/55882709 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff6b0103d70 msgr2=0x7ff6b0107dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:27:24.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.064+0000 7ff6b676a700 1 --2- 192.168.123.102:0/55882709 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff6b0103d70 0x7ff6b0107dc0 secure :-1 s=READY pgs=149 cs=0 l=1 rev1=1 crypto rx=0x7ff6a0009b00 tx=0x7ff6a0009e10 comp rx=0 tx=0).stop
2026-03-10T10:27:24.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.064+0000 7ff6b676a700 1 -- 192.168.123.102:0/55882709 shutdown_connections
2026-03-10T10:27:24.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.064+0000 7ff6b676a700 1 --2- 192.168.123.102:0/55882709 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff6b0103d70 0x7ff6b0107dc0 unknown :-1 s=CLOSED pgs=149 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:27:24.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.064+0000 7ff6b676a700 1 --2- 192.168.123.102:0/55882709 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6b01033c0 0x7ff6b01037a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:27:24.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.064+0000 7ff6b676a700 1 -- 192.168.123.102:0/55882709 >> 192.168.123.102:0/55882709 conn(0x7ff6b00fec30 msgr2=0x7ff6b0101050 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:27:24.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.064+0000 7ff6b676a700 1 -- 192.168.123.102:0/55882709 shutdown_connections
2026-03-10T10:27:24.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.064+0000 7ff6b676a700 1 -- 192.168.123.102:0/55882709 wait complete.
2026-03-10T10:27:24.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.065+0000 7ff6b676a700 1 Processor -- start
2026-03-10T10:27:24.066 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.065+0000 7ff6b676a700 1 -- start start
2026-03-10T10:27:24.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.065+0000 7ff6b676a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff6b01033c0 0x7ff6b019da70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:27:24.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.065+0000 7ff6b676a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6b0103d70 0x7ff6b0078380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:27:24.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.065+0000 7ff6b676a700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff6b019e100 con 0x7ff6b01033c0
2026-03-10T10:27:24.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.065+0000 7ff6b676a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff6b019e270 con 0x7ff6b0103d70
2026-03-10T10:27:24.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.065+0000 7ff6af7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6b0103d70 0x7ff6b0078380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:27:24.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.065+0000 7ff6af7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6b0103d70 0x7ff6b0078380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:41342/0 (socket says 192.168.123.102:41342)
2026-03-10T10:27:24.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.065+0000 7ff6af7fe700 1 -- 192.168.123.102:0/3877162228 learned_addr learned my addr 192.168.123.102:0/3877162228 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:27:24.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.066+0000 7ff6af7fe700 1 -- 192.168.123.102:0/3877162228 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff6b01033c0 msgr2=0x7ff6b019da70 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:27:24.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.066+0000 7ff6affff700 1 --2- 192.168.123.102:0/3877162228 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff6b01033c0 0x7ff6b019da70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:27:24.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.066+0000 7ff6af7fe700 1 --2- 192.168.123.102:0/3877162228 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff6b01033c0 0x7ff6b019da70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:27:24.067 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.066+0000 7ff6af7fe700 1 -- 192.168.123.102:0/3877162228 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff6a00097e0 con 0x7ff6b0103d70
2026-03-10T10:27:24.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.066+0000 7ff6affff700 1 --2- 192.168.123.102:0/3877162228 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff6b01033c0 0x7ff6b019da70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:27:24.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.066+0000 7ff6af7fe700 1 --2- 192.168.123.102:0/3877162228 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6b0103d70 0x7ff6b0078380 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7ff6a0005f50 tx=0x7ff6a0004990 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:27:24.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.067+0000 7ff6ad7fa700 1 -- 192.168.123.102:0/3877162228 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff6a001d070 con 0x7ff6b0103d70
2026-03-10T10:27:24.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.067+0000 7ff6b676a700 1 -- 192.168.123.102:0/3877162228 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff6b0078980 con 0x7ff6b0103d70
2026-03-10T10:27:24.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.067+0000 7ff6ad7fa700 1 -- 192.168.123.102:0/3877162228 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7ff6a000bc50 con 0x7ff6b0103d70
2026-03-10T10:27:24.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.067+0000 7ff6ad7fa700 1 -- 192.168.123.102:0/3877162228 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff6a000f7c0 con 0x7ff6b0103d70
2026-03-10T10:27:24.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.067+0000 7ff6b676a700 1 -- 192.168.123.102:0/3877162228 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff6b0078e70 con 0x7ff6b0103d70
2026-03-10T10:27:24.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.068+0000 7ff6a6ffd700 1 -- 192.168.123.102:0/3877162228 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff6b00791a0 con 0x7ff6b0103d70
2026-03-10T10:27:24.071 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.069+0000 7ff6ad7fa700 1 -- 192.168.123.102:0/3877162228 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ff6a0022470 con 0x7ff6b0103d70
2026-03-10T10:27:24.071 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.069+0000 7ff6ad7fa700 1 --2- 192.168.123.102:0/3877162228 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff69c0778c0 0x7ff69c079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:27:24.071 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.069+0000 7ff6ad7fa700 1 -- 192.168.123.102:0/3877162228 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7ff6a009b6e0 con 0x7ff6b0103d70
2026-03-10T10:27:24.071 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.070+0000 7ff6affff700 1 --2- 192.168.123.102:0/3877162228 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff69c0778c0 0x7ff69c079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:27:24.072 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.070+0000 7ff6affff700 1 --2- 192.168.123.102:0/3877162228 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff69c0778c0 0x7ff69c079d80 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7ff698006fd0 tx=0x7ff698008040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:27:24.073 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.072+0000 7ff6ad7fa700 1 -- 192.168.123.102:0/3877162228 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ff6a0063d60 con 0x7ff6b0103d70
2026-03-10T10:27:24.236 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.235+0000 7ff6a6ffd700 1 -- 192.168.123.102:0/3877162228 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 14, "format": "json"} v 0) v1 -- 0x7ff6b004ea90 con 0x7ff6b0103d70
2026-03-10T10:27:24.237 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.236+0000 7ff6ad7fa700 1 -- 192.168.123.102:0/3877162228 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 14, "format": "json"}]=0 dumped fsmap epoch 14 v40) v1 ==== 107+0+4913 (secure 0 0 0) 0x7ff6a0005c00 con 0x7ff6b0103d70
2026-03-10T10:27:24.237 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:27:24.237 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":14,"btime":"1970-01-01T00:00:00:000000+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":14484,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.105:6827/4269439469","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":4269439469},{"type":"v1","addr":"192.168.123.105:6827","nonce":4269439469}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":14},{"gid":14494,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/2194475647","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":2194475647},{"type":"v1","addr":"192.168.123.102:6829","nonce":2194475647}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12},{"gid":24299,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12}],"filesystems":[{"mdsmap":{"epoch":14,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:17:02.427935+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":39,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14464},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14464":{"gid":14464,"name":"cephfs.vm02.zymcrs","rank":0,"incarnation":11,"state":"up:active","state_seq":5,"addr":"192.168.123.102:6827/658252295","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":658252295},{"type":"v1","addr":"192.168.123.102:6827","nonce":658252295}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]}
2026-03-10T10:27:24.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.238+0000 7ff6a6ffd700 1 -- 192.168.123.102:0/3877162228 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff69c0778c0 msgr2=0x7ff69c079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:27:24.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.239+0000 7ff6a6ffd700 1 --2- 192.168.123.102:0/3877162228 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff69c0778c0 0x7ff69c079d80 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7ff698006fd0 tx=0x7ff698008040 comp rx=0 tx=0).stop
2026-03-10T10:27:24.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.239+0000 7ff6a6ffd700 1 -- 192.168.123.102:0/3877162228 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6b0103d70 msgr2=0x7ff6b0078380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:27:24.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.239+0000 7ff6a6ffd700 1 --2- 192.168.123.102:0/3877162228 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6b0103d70 0x7ff6b0078380 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7ff6a0005f50 tx=0x7ff6a0004990 comp rx=0 tx=0).stop
2026-03-10T10:27:24.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.239+0000 7ff6a6ffd700 1 -- 192.168.123.102:0/3877162228 shutdown_connections
2026-03-10T10:27:24.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.239+0000 7ff6a6ffd700 1 --2- 192.168.123.102:0/3877162228 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff6b01033c0 0x7ff6b019da70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:27:24.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.239+0000 7ff6a6ffd700 1 --2- 192.168.123.102:0/3877162228 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff69c0778c0 0x7ff69c079d80 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:27:24.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.239+0000 7ff6a6ffd700 1 --2- 192.168.123.102:0/3877162228 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6b0103d70 0x7ff6b0078380 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:27:24.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.239+0000 7ff6a6ffd700 1 -- 192.168.123.102:0/3877162228 >> 192.168.123.102:0/3877162228 conn(0x7ff6b00fec30 msgr2=0x7ff6b01001d0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:27:24.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.239+0000 7ff6a6ffd700 1 -- 192.168.123.102:0/3877162228 shutdown_connections
2026-03-10T10:27:24.240 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.239+0000 7ff6a6ffd700 1 -- 192.168.123.102:0/3877162228 wait complete.
2026-03-10T10:27:24.241 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 14
2026-03-10T10:27:24.307 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 15
2026-03-10T10:27:24.469 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config
2026-03-10T10:27:24.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.760+0000 7ff754b31700 1 -- 192.168.123.102:0/2316611409 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff750068df0 msgr2=0x7ff75010d5b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:27:24.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.760+0000 7ff754b31700 1 --2- 192.168.123.102:0/2316611409 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff750068df0 0x7ff75010d5b0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7ff740009a60 tx=0x7ff740009d70 comp rx=0 tx=0).stop
2026-03-10T10:27:24.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.761+0000 7ff754b31700 1 -- 192.168.123.102:0/2316611409 shutdown_connections
2026-03-10T10:27:24.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.761+0000 7ff754b31700 1 --2- 192.168.123.102:0/2316611409 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff750068df0 0x7ff75010d5b0 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:27:24.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.761+0000 7ff754b31700 1 --2- 192.168.123.102:0/2316611409 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7500684d0 0x7ff7500688b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:27:24.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.761+0000 7ff754b31700 1 -- 192.168.123.102:0/2316611409 >> 192.168.123.102:0/2316611409 conn(0x7ff750075960 msgr2=0x7ff750075d70 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:27:24.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.761+0000 7ff754b31700 1 -- 192.168.123.102:0/2316611409 shutdown_connections
2026-03-10T10:27:24.762 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.761+0000 7ff754b31700 1 -- 192.168.123.102:0/2316611409 wait complete.
2026-03-10T10:27:24.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.762+0000 7ff754b31700 1 Processor -- start
2026-03-10T10:27:24.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.762+0000 7ff754b31700 1 -- start start
2026-03-10T10:27:24.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.762+0000 7ff754b31700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7500684d0 0x7ff750198e30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:27:24.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.762+0000 7ff754b31700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff750068df0 0x7ff750199370 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:27:24.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.762+0000 7ff754b31700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff7501999c0 con 0x7ff7500684d0
2026-03-10T10:27:24.763 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.762+0000 7ff754b31700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff750199b00 con 0x7ff750068df0
2026-03-10T10:27:24.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.762+0000 7ff74e59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7500684d0 0x7ff750198e30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:27:24.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.762+0000 7ff74e59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7500684d0 0x7ff750198e30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:57854/0 (socket says 192.168.123.102:57854)
2026-03-10T10:27:24.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.762+0000 7ff74e59c700 1 -- 192.168.123.102:0/3867773091 learned_addr learned my addr 192.168.123.102:0/3867773091 (peer_addr_for_me v2:192.168.123.102:0/0)
2026-03-10T10:27:24.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.762+0000 7ff74dd9b700 1 --2- 192.168.123.102:0/3867773091 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff750068df0 0x7ff750199370 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:27:24.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.763+0000 7ff74e59c700 1 -- 192.168.123.102:0/3867773091 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff750068df0 msgr2=0x7ff750199370 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:27:24.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.763+0000 7ff74e59c700 1 --2- 192.168.123.102:0/3867773091 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff750068df0 0x7ff750199370 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:27:24.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.763+0000 7ff74e59c700 1 -- 192.168.123.102:0/3867773091 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff740009710 con 0x7ff7500684d0
2026-03-10T10:27:24.764 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.763+0000 7ff74e59c700 1 --2- 192.168.123.102:0/3867773091 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7500684d0 0x7ff750198e30 secure :-1 s=READY pgs=150 cs=0 l=1 rev1=1 crypto rx=0x7ff73800cc60 tx=0x7ff73800cf70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:27:24.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.763+0000 7ff7477fe700 1 -- 192.168.123.102:0/3867773091 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff738007940 con 0x7ff7500684d0
2026-03-10T10:27:24.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.763+0000 7ff7477fe700 1 -- 192.168.123.102:0/3867773091 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7ff73800f450 con 0x7ff7500684d0
2026-03-10T10:27:24.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.763+0000 7ff7477fe700 1 -- 192.168.123.102:0/3867773091 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff738018610 con 0x7ff7500684d0
2026-03-10T10:27:24.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.763+0000 7ff754b31700 1 -- 192.168.123.102:0/3867773091 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff75019d950 con 0x7ff7500684d0
2026-03-10T10:27:24.765 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.763+0000 7ff754b31700 1 -- 192.168.123.102:0/3867773091 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff75019dea0 con 0x7ff7500684d0
2026-03-10T10:27:24.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.765+0000 7ff7477fe700 1 -- 192.168.123.102:0/3867773091 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ff738018770 con 0x7ff7500684d0
2026-03-10T10:27:24.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.765+0000 7ff754b31700 1 -- 192.168.123.102:0/3867773091 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff75010ad20 con 0x7ff7500684d0
2026-03-10T10:27:24.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.765+0000 7ff7477fe700 1 --2- 192.168.123.102:0/3867773091 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff73c077920 0x7ff73c079de0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:27:24.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.765+0000 7ff7477fe700 1 -- 192.168.123.102:0/3867773091 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7ff73809af70 con 0x7ff7500684d0
2026-03-10T10:27:24.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.765+0000 7ff74dd9b700 1 --2- 192.168.123.102:0/3867773091 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff73c077920 0x7ff73c079de0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:27:24.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.766+0000 7ff74dd9b700 1 --2- 192.168.123.102:0/3867773091 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff73c077920 0x7ff73c079de0 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7ff74000c010 tx=0x7ff74000bf00 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:27:24.769 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.768+0000 7ff7477fe700 1 -- 192.168.123.102:0/3867773091 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ff7380635f0 con 0x7ff7500684d0
2026-03-10T10:27:24.916 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:24 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/3877162228' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 14, "format": "json"}]: dispatch
2026-03-10T10:27:24.916 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.914+0000 7ff754b31700 1 -- 192.168.123.102:0/3867773091 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 15, "format": "json"} v 0) v1 -- 0x7ff75019a190 con 0x7ff7500684d0
2026-03-10T10:27:24.916 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.915+0000 7ff7477fe700 1 -- 192.168.123.102:0/3867773091 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 15, "format": "json"}]=0 dumped fsmap epoch 15 v40) v1 ==== 107+0+4920 (secure 0 0 0) 0x7ff7380635f0 con 0x7ff7500684d0
2026-03-10T10:27:24.918 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:27:24.918 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":15,"btime":"1970-01-01T00:00:00:000000+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":14494,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/2194475647","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":2194475647},{"type":"v1","addr":"192.168.123.102:6829","nonce":2194475647}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12},{"gid":24299,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12}],"filesystems":[{"mdsmap":{"epoch":15,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:17:02.433444+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":39,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14464},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14464":{"gid":14464,"name":"cephfs.vm02.zymcrs","rank":0,"incarnation":11,"state":"up:active","state_seq":5,"addr":"192.168.123.102:6827/658252295","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":658252295},{"type":"v1","addr":"192.168.123.102:6827","nonce":658252295}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}},"gid_14484":{"gid":14484,"name":"cephfs.vm05.sudjys","rank":0,"incarnation":0,"state":"up:standby-replay","state_seq":2,"addr":"192.168.123.105:6827/4269439469","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":4269439469},{"type":"v1","addr":"192.168.123.105:6827","nonce":4269439469}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]}
2026-03-10T10:27:24.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.919+0000 7ff754b31700 1 -- 192.168.123.102:0/3867773091 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff73c077920 msgr2=0x7ff73c079de0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:27:24.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.920+0000 7ff754b31700 1 --2- 192.168.123.102:0/3867773091 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff73c077920 0x7ff73c079de0 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7ff74000c010 tx=0x7ff74000bf00 comp rx=0 tx=0).stop
2026-03-10T10:27:24.921 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.920+0000 7ff754b31700 1 -- 192.168.123.102:0/3867773091 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7500684d0 msgr2=0x7ff750198e30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:27:24.921
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.920+0000 7ff754b31700 1 --2- 192.168.123.102:0/3867773091 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7500684d0 0x7ff750198e30 secure :-1 s=READY pgs=150 cs=0 l=1 rev1=1 crypto rx=0x7ff73800cc60 tx=0x7ff73800cf70 comp rx=0 tx=0).stop 2026-03-10T10:27:24.922 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.920+0000 7ff754b31700 1 -- 192.168.123.102:0/3867773091 shutdown_connections 2026-03-10T10:27:24.922 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.921+0000 7ff754b31700 1 --2- 192.168.123.102:0/3867773091 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff7500684d0 0x7ff750198e30 unknown :-1 s=CLOSED pgs=150 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:24.922 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.921+0000 7ff754b31700 1 --2- 192.168.123.102:0/3867773091 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff73c077920 0x7ff73c079de0 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:24.922 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.921+0000 7ff754b31700 1 --2- 192.168.123.102:0/3867773091 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff750068df0 0x7ff750199370 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:24.922 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.921+0000 7ff754b31700 1 -- 192.168.123.102:0/3867773091 >> 192.168.123.102:0/3867773091 conn(0x7ff750075960 msgr2=0x7ff7500fe9b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:24.922 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.921+0000 7ff754b31700 1 -- 192.168.123.102:0/3867773091 shutdown_connections 2026-03-10T10:27:24.922 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:24.921+0000 7ff754b31700 1 -- 
192.168.123.102:0/3867773091 wait complete. 2026-03-10T10:27:24.923 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 15 2026-03-10T10:27:24.994 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 16 2026-03-10T10:27:25.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:24 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/3877162228' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 14, "format": "json"}]: dispatch 2026-03-10T10:27:25.160 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:25.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.427+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3626407192 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa1580684d0 msgr2=0x7fa1580688b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:25.429 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.427+0000 7fa15cdd9700 1 --2- 192.168.123.102:0/3626407192 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa1580684d0 0x7fa1580688b0 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7fa140009b30 tx=0x7fa140009e40 comp rx=0 tx=0).stop 2026-03-10T10:27:25.430 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.429+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3626407192 shutdown_connections 2026-03-10T10:27:25.430 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.429+0000 7fa15cdd9700 1 --2- 192.168.123.102:0/3626407192 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa158068df0 0x7fa15810d5b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:25.430 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.429+0000 7fa15cdd9700 1 --2- 
192.168.123.102:0/3626407192 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa1580684d0 0x7fa1580688b0 unknown :-1 s=CLOSED pgs=151 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:25.430 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.429+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3626407192 >> 192.168.123.102:0/3626407192 conn(0x7fa158075960 msgr2=0x7fa158075d70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:25.430 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.429+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3626407192 shutdown_connections 2026-03-10T10:27:25.430 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.429+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3626407192 wait complete. 2026-03-10T10:27:25.431 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15cdd9700 1 Processor -- start 2026-03-10T10:27:25.431 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15cdd9700 1 -- start start 2026-03-10T10:27:25.431 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15cdd9700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa1580684d0 0x7fa158198e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:25.432 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15cdd9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa158068df0 0x7fa1581993d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:25.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15cdd9700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa158199ab0 con 0x7fa1580684d0 2026-03-10T10:27:25.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15cdd9700 1 -- --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa15819d840 con 0x7fa158068df0 2026-03-10T10:27:25.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15659c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa1580684d0 0x7fa158198e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:25.433 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15659c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa1580684d0 0x7fa158198e90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:57864/0 (socket says 192.168.123.102:57864) 2026-03-10T10:27:25.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15659c700 1 -- 192.168.123.102:0/3027709666 learned_addr learned my addr 192.168.123.102:0/3027709666 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:25.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15659c700 1 -- 192.168.123.102:0/3027709666 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa158068df0 msgr2=0x7fa1581993d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:27:25.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15659c700 1 --2- 192.168.123.102:0/3027709666 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa158068df0 0x7fa1581993d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:25.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15659c700 1 -- 192.168.123.102:0/3027709666 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa1400097e0 con 
0x7fa1580684d0 2026-03-10T10:27:25.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.430+0000 7fa15659c700 1 --2- 192.168.123.102:0/3027709666 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa1580684d0 0x7fa158198e90 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7fa1400049d0 tx=0x7fa140004ab0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:25.434 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.431+0000 7fa14f7fe700 1 -- 192.168.123.102:0/3027709666 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa14001d070 con 0x7fa1580684d0 2026-03-10T10:27:25.435 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.431+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3027709666 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa15819db20 con 0x7fa1580684d0 2026-03-10T10:27:25.435 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.431+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3027709666 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa15819e070 con 0x7fa1580684d0 2026-03-10T10:27:25.435 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.431+0000 7fa14f7fe700 1 -- 192.168.123.102:0/3027709666 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fa14000bc10 con 0x7fa1580684d0 2026-03-10T10:27:25.435 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.431+0000 7fa14f7fe700 1 -- 192.168.123.102:0/3027709666 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa14000f650 con 0x7fa1580684d0 2026-03-10T10:27:25.435 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.432+0000 7fa14f7fe700 1 -- 192.168.123.102:0/3027709666 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 
0x7fa14000f7b0 con 0x7fa1580684d0 2026-03-10T10:27:25.435 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.432+0000 7fa14f7fe700 1 --2- 192.168.123.102:0/3027709666 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fa144080130 0x7fa1440825f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:25.435 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.432+0000 7fa14f7fe700 1 -- 192.168.123.102:0/3027709666 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7fa14009c3b0 con 0x7fa1580684d0 2026-03-10T10:27:25.435 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.433+0000 7fa155d9b700 1 --2- 192.168.123.102:0/3027709666 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fa144080130 0x7fa1440825f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:25.436 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.435+0000 7fa155d9b700 1 --2- 192.168.123.102:0/3027709666 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fa144080130 0x7fa1440825f0 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7fa15819a4b0 tx=0x7fa148006c60 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:25.439 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.435+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3027709666 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa138005320 con 0x7fa1580684d0 2026-03-10T10:27:25.439 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.438+0000 7fa14f7fe700 1 -- 192.168.123.102:0/3027709666 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fa140064ba0 con 0x7fa1580684d0 2026-03-10T10:27:25.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.584+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3027709666 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 16, "format": "json"} v 0) v1 -- 0x7fa138005190 con 0x7fa1580684d0 2026-03-10T10:27:25.586 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.585+0000 7fa14f7fe700 1 -- 192.168.123.102:0/3027709666 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 16, "format": "json"}]=0 dumped fsmap epoch 16 v40) v1 ==== 107+0+4139 (secure 0 0 0) 0x7fa140027020 con 0x7fa1580684d0 2026-03-10T10:27:25.586 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:25.587 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":16,"btime":"2026-03-10T10:24:35:695041+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":14494,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/2194475647","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":2194475647},{"type":"v1","addr":"192.168.123.102:6829","nonce":2194475647}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate 
object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12},{"gid":24299,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12}],"filesystems":[{"mdsmap":{"epoch":16,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:35.695039+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout 
v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14464},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14464":{"gid":14464,"name":"cephfs.vm02.zymcrs","rank":0,"incarnation":11,"state":"up:active","state_seq":5,"addr":"192.168.123.102:6827/658252295","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":658252295},{"type":"v1","addr":"192.168.123.102:6827","nonce":658252295}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":14464,"qdb_cluster":[14464]},"id":1}]} 2026-03-10T10:27:25.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.588+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3027709666 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fa144080130 msgr2=0x7fa1440825f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:25.590 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.588+0000 7fa15cdd9700 1 --2- 192.168.123.102:0/3027709666 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fa144080130 0x7fa1440825f0 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7fa15819a4b0 tx=0x7fa148006c60 comp rx=0 tx=0).stop 2026-03-10T10:27:25.590 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.588+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3027709666 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa1580684d0 msgr2=0x7fa158198e90 
secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:25.590 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.588+0000 7fa15cdd9700 1 --2- 192.168.123.102:0/3027709666 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa1580684d0 0x7fa158198e90 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7fa1400049d0 tx=0x7fa140004ab0 comp rx=0 tx=0).stop 2026-03-10T10:27:25.590 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.588+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3027709666 shutdown_connections 2026-03-10T10:27:25.590 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.588+0000 7fa15cdd9700 1 --2- 192.168.123.102:0/3027709666 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa1580684d0 0x7fa158198e90 unknown :-1 s=CLOSED pgs=152 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:25.590 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.588+0000 7fa15cdd9700 1 --2- 192.168.123.102:0/3027709666 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fa144080130 0x7fa1440825f0 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:25.590 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.588+0000 7fa15cdd9700 1 --2- 192.168.123.102:0/3027709666 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa158068df0 0x7fa1581993d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:25.590 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.588+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3027709666 >> 192.168.123.102:0/3027709666 conn(0x7fa158075960 msgr2=0x7fa1580fe950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:25.590 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.589+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3027709666 shutdown_connections 2026-03-10T10:27:25.590 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:25.589+0000 7fa15cdd9700 1 -- 192.168.123.102:0/3027709666 wait complete. 2026-03-10T10:27:25.591 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 16 2026-03-10T10:27:25.662 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 16 2026-03-10T10:27:25.662 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 17 2026-03-10T10:27:25.832 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:25.936 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:25 vm02.local ceph-mon[110129]: pgmap v246: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:25.936 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:25 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/3867773091' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 15, "format": "json"}]: dispatch 2026-03-10T10:27:25.936 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:25 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/3027709666' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 16, "format": "json"}]: dispatch 2026-03-10T10:27:26.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:25 vm05.local ceph-mon[103593]: pgmap v246: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:26.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:25 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/3867773091' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 15, "format": "json"}]: dispatch 2026-03-10T10:27:26.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:25 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/3027709666' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 16, "format": "json"}]: dispatch 2026-03-10T10:27:26.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.096+0000 7f20368a6700 1 -- 192.168.123.102:0/4219713334 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20301020e0 msgr2=0x7f20301024c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:26.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.096+0000 7f20368a6700 1 --2- 192.168.123.102:0/4219713334 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20301020e0 0x7f20301024c0 secure :-1 s=READY pgs=153 cs=0 l=1 rev1=1 crypto rx=0x7f2018009b00 tx=0x7f2018009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:26.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.097+0000 7f20368a6700 1 -- 192.168.123.102:0/4219713334 shutdown_connections 2026-03-10T10:27:26.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.097+0000 7f20368a6700 1 --2- 192.168.123.102:0/4219713334 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2030102a00 0x7f203010aef0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:26.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.097+0000 7f20368a6700 1 --2- 192.168.123.102:0/4219713334 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20301020e0 0x7f20301024c0 unknown :-1 s=CLOSED pgs=153 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:26.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.097+0000 7f20368a6700 1 -- 192.168.123.102:0/4219713334 >> 192.168.123.102:0/4219713334 conn(0x7f20300fb830 msgr2=0x7f20300fdc50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:26.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.097+0000 7f20368a6700 1 -- 192.168.123.102:0/4219713334 shutdown_connections 
2026-03-10T10:27:26.098 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.097+0000 7f20368a6700 1 -- 192.168.123.102:0/4219713334 wait complete. 2026-03-10T10:27:26.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.097+0000 7f20368a6700 1 Processor -- start 2026-03-10T10:27:26.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.098+0000 7f20368a6700 1 -- start start 2026-03-10T10:27:26.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.098+0000 7f20368a6700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20301020e0 0x7f203019a810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:26.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.098+0000 7f20368a6700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2030102a00 0x7f203019ad50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:26.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.098+0000 7f20368a6700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f203019b3e0 con 0x7f20301020e0 2026-03-10T10:27:26.099 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.098+0000 7f20368a6700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2030194890 con 0x7f2030102a00 2026-03-10T10:27:26.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.098+0000 7f202ffff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20301020e0 0x7f203019a810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:26.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.098+0000 7f202f7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2030102a00 
0x7f203019ad50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:26.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.098+0000 7f202f7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2030102a00 0x7f203019ad50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:41370/0 (socket says 192.168.123.102:41370) 2026-03-10T10:27:26.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.098+0000 7f202ffff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20301020e0 0x7f203019a810 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:57876/0 (socket says 192.168.123.102:57876) 2026-03-10T10:27:26.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.098+0000 7f202f7fe700 1 -- 192.168.123.102:0/1363578143 learned_addr learned my addr 192.168.123.102:0/1363578143 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:26.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.099+0000 7f202ffff700 1 -- 192.168.123.102:0/1363578143 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2030102a00 msgr2=0x7f203019ad50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:26.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.099+0000 7f202ffff700 1 --2- 192.168.123.102:0/1363578143 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2030102a00 0x7f203019ad50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:26.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.099+0000 7f202ffff700 1 -- 192.168.123.102:0/1363578143 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f20180097e0 con 0x7f20301020e0 2026-03-10T10:27:26.100 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.099+0000 7f202ffff700 1 --2- 192.168.123.102:0/1363578143 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20301020e0 0x7f203019a810 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7f2018009fd0 tx=0x7f2018004c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:26.102 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.099+0000 7f202d7fa700 1 -- 192.168.123.102:0/1363578143 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f201801d070 con 0x7f20301020e0 2026-03-10T10:27:26.102 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.099+0000 7f202d7fa700 1 -- 192.168.123.102:0/1363578143 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f2018022470 con 0x7f20301020e0 2026-03-10T10:27:26.102 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.099+0000 7f202d7fa700 1 -- 192.168.123.102:0/1363578143 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f201800f670 con 0x7f20301020e0 2026-03-10T10:27:26.102 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.099+0000 7f20368a6700 1 -- 192.168.123.102:0/1363578143 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2030194b10 con 0x7f20301020e0 2026-03-10T10:27:26.102 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.099+0000 7f20368a6700 1 -- 192.168.123.102:0/1363578143 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2030194f80 con 0x7f20301020e0 2026-03-10T10:27:26.103 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.101+0000 7f202d7fa700 1 -- 
192.168.123.102:0/1363578143 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f201800f7d0 con 0x7f20301020e0 2026-03-10T10:27:26.103 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.101+0000 7f20368a6700 1 -- 192.168.123.102:0/1363578143 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f20301085f0 con 0x7f20301020e0 2026-03-10T10:27:26.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.104+0000 7f202d7fa700 1 --2- 192.168.123.102:0/1363578143 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f201c077870 0x7f201c079d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:26.105 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.104+0000 7f202d7fa700 1 -- 192.168.123.102:0/1363578143 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f201809b1f0 con 0x7f20301020e0 2026-03-10T10:27:26.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.104+0000 7f202f7fe700 1 --2- 192.168.123.102:0/1363578143 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f201c077870 0x7f201c079d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:26.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.105+0000 7f202f7fe700 1 --2- 192.168.123.102:0/1363578143 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f201c077870 0x7f201c079d30 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7f20300fcf70 tx=0x7f2020009430 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:26.106 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.105+0000 7f202d7fa700 
1 -- 192.168.123.102:0/1363578143 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f2018063050 con 0x7f20301020e0 2026-03-10T10:27:26.248 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.247+0000 7f20368a6700 1 -- 192.168.123.102:0/1363578143 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 17, "format": "json"} v 0) v1 -- 0x7f2030068a10 con 0x7f20301020e0 2026-03-10T10:27:26.250 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.248+0000 7f202d7fa700 1 -- 192.168.123.102:0/1363578143 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 17, "format": "json"}]=0 dumped fsmap epoch 17 v40) v1 ==== 107+0+4123 (secure 0 0 0) 0x7f2018063050 con 0x7f20301020e0 2026-03-10T10:27:26.251 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:26.251 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":17,"btime":"2026-03-10T10:24:36:595148+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":14494,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/2194475647","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":2194475647},{"type":"v1","addr":"192.168.123.102:6829","nonce":2194475647}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client 
writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12},{"gid":24299,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12},{"gid":34328,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":17}],"filesystems":[{"mdsmap":{"epoch":17,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:36.595147+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":78,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{},"failed":[0],"damaged":[],"stopped":[],"info":{},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:26.254 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.252+0000 7f20368a6700 1 -- 192.168.123.102:0/1363578143 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f201c077870 msgr2=0x7f201c079d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:26.254 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.253+0000 7f20368a6700 1 --2- 192.168.123.102:0/1363578143 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f201c077870 0x7f201c079d30 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7f20300fcf70 tx=0x7f2020009430 comp rx=0 tx=0).stop 2026-03-10T10:27:26.254 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.253+0000 7f20368a6700 1 -- 192.168.123.102:0/1363578143 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20301020e0 msgr2=0x7f203019a810 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:26.254 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.253+0000 7f20368a6700 1 --2- 192.168.123.102:0/1363578143 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20301020e0 0x7f203019a810 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7f2018009fd0 tx=0x7f2018004c80 comp rx=0 tx=0).stop 2026-03-10T10:27:26.254 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.253+0000 7f20368a6700 1 -- 192.168.123.102:0/1363578143 shutdown_connections 2026-03-10T10:27:26.254 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.253+0000 7f20368a6700 1 --2- 192.168.123.102:0/1363578143 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f20301020e0 0x7f203019a810 unknown :-1 s=CLOSED pgs=154 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:26.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.253+0000 7f20368a6700 1 --2- 192.168.123.102:0/1363578143 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f201c077870 0x7f201c079d30 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:26.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.253+0000 7f20368a6700 1 --2- 192.168.123.102:0/1363578143 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2030102a00 0x7f203019ad50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:26.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.253+0000 7f20368a6700 1 -- 192.168.123.102:0/1363578143 >> 192.168.123.102:0/1363578143 conn(0x7f20300fb830 msgr2=0x7f2030105730 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:27:26.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.254+0000 7f20368a6700 1 -- 192.168.123.102:0/1363578143 shutdown_connections 2026-03-10T10:27:26.255 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.254+0000 7f20368a6700 1 -- 192.168.123.102:0/1363578143 wait complete. 2026-03-10T10:27:26.256 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 17 2026-03-10T10:27:26.320 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 17 2026-03-10T10:27:26.320 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 18 2026-03-10T10:27:26.477 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:26.846 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.844+0000 7f6c89563700 1 -- 192.168.123.102:0/3251359898 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6c84103340 msgr2=0x7f6c84103720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:26.846 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.844+0000 7f6c89563700 1 --2- 192.168.123.102:0/3251359898 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6c84103340 0x7f6c84103720 secure :-1 s=READY pgs=155 cs=0 l=1 rev1=1 crypto rx=0x7f6c6c009ab0 tx=0x7f6c6c009dc0 comp rx=0 tx=0).stop 2026-03-10T10:27:26.846 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.845+0000 7f6c89563700 1 -- 192.168.123.102:0/3251359898 shutdown_connections 2026-03-10T10:27:26.846 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.845+0000 7f6c89563700 1 --2- 192.168.123.102:0/3251359898 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6c84103cf0 0x7f6c84107d40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:27:26.846 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.845+0000 7f6c89563700 1 --2- 192.168.123.102:0/3251359898 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6c84103340 0x7f6c84103720 unknown :-1 s=CLOSED pgs=155 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:26.846 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.845+0000 7f6c89563700 1 -- 192.168.123.102:0/3251359898 >> 192.168.123.102:0/3251359898 conn(0x7f6c840febd0 msgr2=0x7f6c84100ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:26.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.845+0000 7f6c89563700 1 -- 192.168.123.102:0/3251359898 shutdown_connections 2026-03-10T10:27:26.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.845+0000 7f6c89563700 1 -- 192.168.123.102:0/3251359898 wait complete. 2026-03-10T10:27:26.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.846+0000 7f6c89563700 1 Processor -- start 2026-03-10T10:27:26.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.846+0000 7f6c89563700 1 -- start start 2026-03-10T10:27:26.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.846+0000 7f6c89563700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6c84103340 0x7f6c84198e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:26.847 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.846+0000 7f6c89563700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6c84103cf0 0x7f6c841993d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:26.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.846+0000 7f6c89563700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6c84199ab0 con 0x7f6c84103340 2026-03-10T10:27:26.848 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.846+0000 7f6c89563700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6c8419d840 con 0x7f6c84103cf0 2026-03-10T10:27:26.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.847+0000 7f6c82ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6c84103340 0x7f6c84198e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:26.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.847+0000 7f6c82ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6c84103340 0x7f6c84198e90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:56358/0 (socket says 192.168.123.102:56358) 2026-03-10T10:27:26.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.847+0000 7f6c82ffd700 1 -- 192.168.123.102:0/4222426460 learned_addr learned my addr 192.168.123.102:0/4222426460 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:26.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.847+0000 7f6c827fc700 1 --2- 192.168.123.102:0/4222426460 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6c84103cf0 0x7f6c841993d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:26.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.847+0000 7f6c82ffd700 1 -- 192.168.123.102:0/4222426460 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6c84103cf0 msgr2=0x7f6c841993d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:26.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.847+0000 7f6c82ffd700 1 --2- 
192.168.123.102:0/4222426460 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6c84103cf0 0x7f6c841993d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:26.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.847+0000 7f6c82ffd700 1 -- 192.168.123.102:0/4222426460 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6c6c009710 con 0x7f6c84103340 2026-03-10T10:27:26.848 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.847+0000 7f6c82ffd700 1 --2- 192.168.123.102:0/4222426460 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6c84103340 0x7f6c84198e90 secure :-1 s=READY pgs=156 cs=0 l=1 rev1=1 crypto rx=0x7f6c6c000c00 tx=0x7f6c6c00f800 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:26.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.848+0000 7f6c7bfff700 1 -- 192.168.123.102:0/4222426460 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6c6c01d070 con 0x7f6c84103340 2026-03-10T10:27:26.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.848+0000 7f6c7bfff700 1 -- 192.168.123.102:0/4222426460 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f6c6c00fb80 con 0x7f6c84103340 2026-03-10T10:27:26.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.848+0000 7f6c7bfff700 1 -- 192.168.123.102:0/4222426460 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6c6c0175e0 con 0x7f6c84103340 2026-03-10T10:27:26.850 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.848+0000 7f6c89563700 1 -- 192.168.123.102:0/4222426460 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6c8419dac0 con 0x7f6c84103340 2026-03-10T10:27:26.850 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.848+0000 7f6c89563700 1 -- 192.168.123.102:0/4222426460 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6c8419ded0 con 0x7f6c84103340 2026-03-10T10:27:26.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.849+0000 7f6c7bfff700 1 -- 192.168.123.102:0/4222426460 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f6c6c00fcf0 con 0x7f6c84103340 2026-03-10T10:27:26.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.849+0000 7f6c89563700 1 -- 192.168.123.102:0/4222426460 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6c8410b6e0 con 0x7f6c84103340 2026-03-10T10:27:26.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.850+0000 7f6c7bfff700 1 --2- 192.168.123.102:0/4222426460 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f6c70077870 0x7f6c70079d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:26.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.850+0000 7f6c7bfff700 1 -- 192.168.123.102:0/4222426460 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f6c6c09ae20 con 0x7f6c84103340 2026-03-10T10:27:26.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.850+0000 7f6c827fc700 1 --2- 192.168.123.102:0/4222426460 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f6c70077870 0x7f6c70079d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:26.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.851+0000 7f6c827fc700 1 --2- 192.168.123.102:0/4222426460 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f6c70077870 0x7f6c70079d30 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7f6c8419a4b0 tx=0x7f6c74009500 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:26.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:26.853+0000 7f6c7bfff700 1 -- 192.168.123.102:0/4222426460 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f6c6c063550 con 0x7f6c84103340 2026-03-10T10:27:27.003 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.001+0000 7f6c89563700 1 -- 192.168.123.102:0/4222426460 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 18, "format": "json"} v 0) v1 -- 0x7f6c8419a1f0 con 0x7f6c84103340 2026-03-10T10:27:27.004 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.002+0000 7f6c7bfff700 1 -- 192.168.123.102:0/4222426460 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 18, "format": "json"}]=0 dumped fsmap epoch 18 v40) v1 ==== 107+0+4134 (secure 0 0 0) 0x7f6c6c063550 con 0x7f6c84103340 2026-03-10T10:27:27.004 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:27.004 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":18,"btime":"2026-03-10T10:24:36:615911+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":24299,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12},{"gid":34328,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":17}],"filesystems":[{"mdsmap":{"epoch":18,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:36.615892+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":78,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14494},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14494":{"gid":14494,"name":"cephfs.vm02.stcvsz","rank":0,"incarnation":18,"state":"up:replay","state_seq":1,"addr":"192.168.123.102:6829/2194475647","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":2194475647},{"type":"v1","addr":"192.168.123.102:6829","nonce":2194475647}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:27.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.005+0000 7f6c89563700 1 -- 192.168.123.102:0/4222426460 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f6c70077870 msgr2=0x7f6c70079d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:27.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.005+0000 7f6c89563700 1 --2- 192.168.123.102:0/4222426460 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f6c70077870 0x7f6c70079d30 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7f6c8419a4b0 tx=0x7f6c74009500 comp rx=0 tx=0).stop 2026-03-10T10:27:27.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.005+0000 7f6c89563700 1 -- 192.168.123.102:0/4222426460 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6c84103340 msgr2=0x7f6c84198e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:27.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.005+0000 7f6c89563700 1 --2- 192.168.123.102:0/4222426460 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6c84103340 0x7f6c84198e90 secure :-1 s=READY pgs=156 cs=0 l=1 rev1=1 crypto rx=0x7f6c6c000c00 tx=0x7f6c6c00f800 comp rx=0 tx=0).stop 2026-03-10T10:27:27.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.006+0000 7f6c89563700 1 -- 192.168.123.102:0/4222426460 shutdown_connections 2026-03-10T10:27:27.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.006+0000 7f6c89563700 1 --2- 192.168.123.102:0/4222426460 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f6c84103340 0x7f6c84198e90 unknown :-1 s=CLOSED pgs=156 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:27:27.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.006+0000 7f6c89563700 1 --2- 192.168.123.102:0/4222426460 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f6c70077870 0x7f6c70079d30 unknown :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:27.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.006+0000 7f6c89563700 1 --2- 192.168.123.102:0/4222426460 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6c84103cf0 0x7f6c841993d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:27.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.006+0000 7f6c89563700 1 -- 192.168.123.102:0/4222426460 >> 192.168.123.102:0/4222426460 conn(0x7f6c840febd0 msgr2=0x7f6c84100fc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:27.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.006+0000 7f6c89563700 1 -- 192.168.123.102:0/4222426460 shutdown_connections 2026-03-10T10:27:27.008 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.006+0000 7f6c89563700 1 -- 192.168.123.102:0/4222426460 wait complete. 2026-03-10T10:27:27.008 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 18 2026-03-10T10:27:27.031 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:26 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1363578143' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 17, "format": "json"}]: dispatch 2026-03-10T10:27:27.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:26 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/1363578143' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 17, "format": "json"}]: dispatch 2026-03-10T10:27:27.053 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 18 2026-03-10T10:27:27.053 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 19 2026-03-10T10:27:27.223 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:27.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.489+0000 7f111e513700 1 -- 192.168.123.102:0/580186155 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1118101a80 msgr2=0x7f1118105ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:27.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.489+0000 7f111e513700 1 --2- 192.168.123.102:0/580186155 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1118101a80 0x7f1118105ad0 secure :-1 s=READY pgs=157 cs=0 l=1 rev1=1 crypto rx=0x7f1108009b50 tx=0x7f1108009e60 comp rx=0 tx=0).stop 2026-03-10T10:27:27.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.491+0000 7f111e513700 1 -- 192.168.123.102:0/580186155 shutdown_connections 2026-03-10T10:27:27.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.491+0000 7f111e513700 1 --2- 192.168.123.102:0/580186155 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f1118101a80 0x7f1118105ad0 unknown :-1 s=CLOSED pgs=157 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:27.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.491+0000 7f111e513700 1 --2- 192.168.123.102:0/580186155 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f11181010d0 0x7f11181014b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:27:27.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.491+0000 7f111e513700 1 -- 192.168.123.102:0/580186155 >> 192.168.123.102:0/580186155 conn(0x7f11180fc920 msgr2=0x7f11180fed40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:27.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.491+0000 7f111e513700 1 -- 192.168.123.102:0/580186155 shutdown_connections 2026-03-10T10:27:27.492 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.491+0000 7f111e513700 1 -- 192.168.123.102:0/580186155 wait complete. 2026-03-10T10:27:27.493 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.492+0000 7f111e513700 1 Processor -- start 2026-03-10T10:27:27.493 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.492+0000 7f111e513700 1 -- start start 2026-03-10T10:27:27.493 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.492+0000 7f111e513700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f11181010d0 0x7f111819ca20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:27.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.492+0000 7f111e513700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1118101a80 0x7f111819cf60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:27.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.492+0000 7f111e513700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f111819d5f0 con 0x7f11181010d0 2026-03-10T10:27:27.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.492+0000 7f111e513700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1118196aa0 con 0x7f1118101a80 2026-03-10T10:27:27.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.493+0000 7f11177fe700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1118101a80 0x7f111819cf60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:27.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.493+0000 7f11177fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1118101a80 0x7f111819cf60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:34830/0 (socket says 192.168.123.102:34830) 2026-03-10T10:27:27.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.493+0000 7f11177fe700 1 -- 192.168.123.102:0/2299228803 learned_addr learned my addr 192.168.123.102:0/2299228803 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:27.494 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.493+0000 7f11177fe700 1 -- 192.168.123.102:0/2299228803 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f11181010d0 msgr2=0x7f111819ca20 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:27:27.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.493+0000 7f1117fff700 1 --2- 192.168.123.102:0/2299228803 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f11181010d0 0x7f111819ca20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:27.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.493+0000 7f11177fe700 1 --2- 192.168.123.102:0/2299228803 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f11181010d0 0x7f111819ca20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:27.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.493+0000 7f11177fe700 1 -- 
192.168.123.102:0/2299228803 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1100009710 con 0x7f1118101a80 2026-03-10T10:27:27.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.494+0000 7f11177fe700 1 --2- 192.168.123.102:0/2299228803 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1118101a80 0x7f111819cf60 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f1108005950 tx=0x7f1108004970 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:27.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.494+0000 7f11157fa700 1 -- 192.168.123.102:0/2299228803 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f110801d070 con 0x7f1118101a80 2026-03-10T10:27:27.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.494+0000 7f11157fa700 1 -- 192.168.123.102:0/2299228803 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f1108022470 con 0x7f1118101a80 2026-03-10T10:27:27.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.494+0000 7f111e513700 1 -- 192.168.123.102:0/2299228803 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f11080097e0 con 0x7f1118101a80 2026-03-10T10:27:27.495 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.494+0000 7f1117fff700 1 --2- 192.168.123.102:0/2299228803 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f11181010d0 0x7f111819ca20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:27:27.496 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.494+0000 7f11157fa700 1 -- 192.168.123.102:0/2299228803 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f110800f650 con 0x7f1118101a80 2026-03-10T10:27:27.496 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.494+0000 7f111e513700 1 -- 192.168.123.102:0/2299228803 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1118197080 con 0x7f1118101a80 2026-03-10T10:27:27.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.496+0000 7f11157fa700 1 -- 192.168.123.102:0/2299228803 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f110800f7b0 con 0x7f1118101a80 2026-03-10T10:27:27.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.496+0000 7f11157fa700 1 --2- 192.168.123.102:0/2299228803 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1104077920 0x7f1104079de0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:27.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.496+0000 7f11157fa700 1 -- 192.168.123.102:0/2299228803 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f110809c2a0 con 0x7f1118101a80 2026-03-10T10:27:27.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.496+0000 7f1117fff700 1 --2- 192.168.123.102:0/2299228803 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1104077920 0x7f1104079de0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:27.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.496+0000 7f111e513700 1 -- 192.168.123.102:0/2299228803 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1118196cc0 con 0x7f1118101a80 2026-03-10T10:27:27.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.497+0000 7f1117fff700 1 --2- 192.168.123.102:0/2299228803 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1104077920 0x7f1104079de0 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f1100009f60 tx=0x7f1100009450 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:27.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.500+0000 7f11157fa700 1 -- 192.168.123.102:0/2299228803 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f1118196cc0 con 0x7f1118101a80 2026-03-10T10:27:27.655 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.651+0000 7f111e513700 1 -- 192.168.123.102:0/2299228803 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 19, "format": "json"} v 0) v1 -- 0x7f111804eaf0 con 0x7f1118101a80 2026-03-10T10:27:27.655 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.652+0000 7f11157fa700 1 -- 192.168.123.102:0/2299228803 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 19, "format": "json"}]=0 dumped fsmap epoch 19 v40) v1 ==== 107+0+4139 (secure 0 0 0) 0x7f1108027070 con 0x7f1118101a80 2026-03-10T10:27:27.655 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:27.655 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":19,"btime":"2026-03-10T10:24:43:134762+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses 
versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":24299,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12},{"gid":34328,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":17}],"filesystems":[{"mdsmap":{"epoch":19,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:42.214434+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":78,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14494},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14494":{"gid":14494,"name":"cephfs.vm02.stcvsz","rank":0,"incarnation":18,"state":"up:reconnect","state_seq":117,"addr":"192.168.123.102:6829/2194475647","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":2194475647},{"type":"v1","addr":"192.168.123.102:6829","nonce":2194475647}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:27.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.656+0000 7f111e513700 1 -- 192.168.123.102:0/2299228803 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1104077920 msgr2=0x7f1104079de0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:27.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.656+0000 7f111e513700 1 --2- 192.168.123.102:0/2299228803 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1104077920 0x7f1104079de0 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f1100009f60 tx=0x7f1100009450 comp rx=0 tx=0).stop 2026-03-10T10:27:27.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.657+0000 7f111e513700 1 -- 192.168.123.102:0/2299228803 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1118101a80 msgr2=0x7f111819cf60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:27.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.657+0000 7f111e513700 1 --2- 192.168.123.102:0/2299228803 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1118101a80 0x7f111819cf60 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f1108005950 tx=0x7f1108004970 comp rx=0 tx=0).stop 2026-03-10T10:27:27.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.657+0000 7f111e513700 1 -- 192.168.123.102:0/2299228803 shutdown_connections 2026-03-10T10:27:27.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.657+0000 7f111e513700 1 --2- 192.168.123.102:0/2299228803 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f11181010d0 0x7f111819ca20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:27.658 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.657+0000 7f111e513700 1 --2- 192.168.123.102:0/2299228803 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f1104077920 0x7f1104079de0 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:27.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.657+0000 7f111e513700 1 --2- 192.168.123.102:0/2299228803 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1118101a80 0x7f111819cf60 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:27.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.657+0000 7f111e513700 1 -- 192.168.123.102:0/2299228803 >> 192.168.123.102:0/2299228803 conn(0x7f11180fc920 msgr2=0x7f11180fed20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:27.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.657+0000 7f111e513700 1 -- 192.168.123.102:0/2299228803 shutdown_connections 2026-03-10T10:27:27.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:27.658+0000 7f111e513700 1 -- 192.168.123.102:0/2299228803 wait complete. 
2026-03-10T10:27:27.660 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 19 2026-03-10T10:27:27.709 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 19 2026-03-10T10:27:27.710 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 20 2026-03-10T10:27:27.875 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:27.911 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:27 vm02.local ceph-mon[110129]: pgmap v247: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:27.911 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:27 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/4222426460' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 18, "format": "json"}]: dispatch 2026-03-10T10:27:27.911 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:27 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/2299228803' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 19, "format": "json"}]: dispatch 2026-03-10T10:27:28.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:27 vm05.local ceph-mon[103593]: pgmap v247: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:28.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:27 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/4222426460' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 18, "format": "json"}]: dispatch 2026-03-10T10:27:28.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:27 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/2299228803' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 19, "format": "json"}]: dispatch 2026-03-10T10:27:28.154 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.152+0000 7ff151a97700 1 -- 192.168.123.102:0/3755991236 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff14c1030a0 msgr2=0x7ff14c103480 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:28.154 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.152+0000 7ff151a97700 1 --2- 192.168.123.102:0/3755991236 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff14c1030a0 0x7ff14c103480 secure :-1 s=READY pgs=158 cs=0 l=1 rev1=1 crypto rx=0x7ff13c009b00 tx=0x7ff13c009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:28.154 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.152+0000 7ff151a97700 1 -- 192.168.123.102:0/3755991236 shutdown_connections 2026-03-10T10:27:28.154 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.152+0000 7ff151a97700 1 --2- 192.168.123.102:0/3755991236 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff14c103a50 0x7ff14c107aa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:28.154 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.152+0000 7ff151a97700 1 --2- 192.168.123.102:0/3755991236 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff14c1030a0 0x7ff14c103480 unknown :-1 s=CLOSED pgs=158 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:28.154 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.152+0000 7ff151a97700 1 -- 192.168.123.102:0/3755991236 >> 192.168.123.102:0/3755991236 conn(0x7ff14c0fe930 msgr2=0x7ff14c100d50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:28.154 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.153+0000 7ff151a97700 1 -- 192.168.123.102:0/3755991236 shutdown_connections 
2026-03-10T10:27:28.154 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.153+0000 7ff151a97700 1 -- 192.168.123.102:0/3755991236 wait complete. 2026-03-10T10:27:28.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.153+0000 7ff151a97700 1 Processor -- start 2026-03-10T10:27:28.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.154+0000 7ff151a97700 1 -- start start 2026-03-10T10:27:28.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.154+0000 7ff151a97700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff14c103a50 0x7ff14c198de0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:28.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.154+0000 7ff151a97700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff14c199320 0x7ff14c19d790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:28.155 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.154+0000 7ff151a97700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff14c199940 con 0x7ff14c199320 2026-03-10T10:27:28.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.154+0000 7ff14bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff14c199320 0x7ff14c19d790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:28.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.154+0000 7ff14bfff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff14c199320 0x7ff14c19d790 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:56374/0 (socket says 192.168.123.102:56374) 
2026-03-10T10:27:28.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.154+0000 7ff14bfff700 1 -- 192.168.123.102:0/1134495094 learned_addr learned my addr 192.168.123.102:0/1134495094 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:28.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.155+0000 7ff151a97700 1 -- 192.168.123.102:0/1134495094 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff14c199ab0 con 0x7ff14c103a50 2026-03-10T10:27:28.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.155+0000 7ff150a95700 1 --2- 192.168.123.102:0/1134495094 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff14c103a50 0x7ff14c198de0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:28.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.155+0000 7ff14bfff700 1 -- 192.168.123.102:0/1134495094 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff14c103a50 msgr2=0x7ff14c198de0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:28.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.155+0000 7ff14bfff700 1 --2- 192.168.123.102:0/1134495094 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff14c103a50 0x7ff14c198de0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:28.156 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.155+0000 7ff14bfff700 1 -- 192.168.123.102:0/1134495094 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff13c0097e0 con 0x7ff14c199320 2026-03-10T10:27:28.157 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.155+0000 7ff14bfff700 1 --2- 192.168.123.102:0/1134495094 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7ff14c199320 0x7ff14c19d790 secure :-1 s=READY pgs=159 cs=0 l=1 rev1=1 crypto rx=0x7ff14000da40 tx=0x7ff14000de00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:28.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.155+0000 7ff149ffb700 1 -- 192.168.123.102:0/1134495094 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff140009940 con 0x7ff14c199320 2026-03-10T10:27:28.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.155+0000 7ff149ffb700 1 -- 192.168.123.102:0/1134495094 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7ff140010460 con 0x7ff14c199320 2026-03-10T10:27:28.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.155+0000 7ff149ffb700 1 -- 192.168.123.102:0/1134495094 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff14000f5f0 con 0x7ff14c199320 2026-03-10T10:27:28.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.156+0000 7ff151a97700 1 -- 192.168.123.102:0/1134495094 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff14c19dd90 con 0x7ff14c199320 2026-03-10T10:27:28.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.156+0000 7ff151a97700 1 -- 192.168.123.102:0/1134495094 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff14c19e290 con 0x7ff14c199320 2026-03-10T10:27:28.158 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.157+0000 7ff151a97700 1 -- 192.168.123.102:0/1134495094 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff14c10b3d0 con 0x7ff14c199320 2026-03-10T10:27:28.164 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.160+0000 7ff149ffb700 1 -- 192.168.123.102:0/1134495094 <== mon.0 
v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ff1400105d0 con 0x7ff14c199320 2026-03-10T10:27:28.164 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.160+0000 7ff149ffb700 1 --2- 192.168.123.102:0/1134495094 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff1340779e0 0x7ff134079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:28.164 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.160+0000 7ff149ffb700 1 -- 192.168.123.102:0/1134495094 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7ff140099cb0 con 0x7ff14c199320 2026-03-10T10:27:28.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.163+0000 7ff150a95700 1 --2- 192.168.123.102:0/1134495094 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff1340779e0 0x7ff134079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:28.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.164+0000 7ff150a95700 1 --2- 192.168.123.102:0/1134495094 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff1340779e0 0x7ff134079ea0 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7ff13c009fd0 tx=0x7ff13c005fd0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:28.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.164+0000 7ff149ffb700 1 -- 192.168.123.102:0/1134495094 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ff1400623b0 con 0x7ff14c199320 2026-03-10T10:27:28.317 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.315+0000 7ff151a97700 1 -- 
192.168.123.102:0/1134495094 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 20, "format": "json"} v 0) v1 -- 0x7ff14c04f2e0 con 0x7ff14c199320 2026-03-10T10:27:28.317 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.316+0000 7ff149ffb700 1 -- 192.168.123.102:0/1134495094 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 20, "format": "json"}]=0 dumped fsmap epoch 20 v40) v1 ==== 107+0+4136 (secure 0 0 0) 0x7ff140061b00 con 0x7ff14c199320 2026-03-10T10:27:28.317 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:28.318 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":20,"btime":"2026-03-10T10:24:44:141530+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":24299,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":12},{"gid":34328,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":17}],"filesystems":[{"mdsmap":{"epoch":20,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:43.145943+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":78,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14494},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14494":{"gid":14494,"name":"cephfs.vm02.stcvsz","rank":0,"incarnation":18,"state":"up:rejoin","state_seq":118,"addr":"192.168.123.102:6829/2194475647","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":2194475647},{"type":"v1","addr":"192.168.123.102:6829","nonce":2194475647}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:28.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.319+0000 7ff151a97700 1 -- 192.168.123.102:0/1134495094 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff1340779e0 msgr2=0x7ff134079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:28.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.319+0000 7ff151a97700 1 --2- 192.168.123.102:0/1134495094 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff1340779e0 0x7ff134079ea0 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7ff13c009fd0 tx=0x7ff13c005fd0 comp rx=0 tx=0).stop 2026-03-10T10:27:28.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.319+0000 7ff151a97700 1 -- 192.168.123.102:0/1134495094 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff14c199320 msgr2=0x7ff14c19d790 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:28.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.319+0000 7ff151a97700 1 --2- 192.168.123.102:0/1134495094 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff14c199320 0x7ff14c19d790 secure :-1 s=READY pgs=159 cs=0 l=1 rev1=1 crypto rx=0x7ff14000da40 tx=0x7ff14000de00 comp rx=0 tx=0).stop 2026-03-10T10:27:28.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.320+0000 7ff151a97700 1 -- 192.168.123.102:0/1134495094 shutdown_connections 2026-03-10T10:27:28.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.320+0000 7ff151a97700 1 --2- 192.168.123.102:0/1134495094 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff1340779e0 0x7ff134079ea0 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:28.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.320+0000 7ff151a97700 1 --2- 192.168.123.102:0/1134495094 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff14c103a50 0x7ff14c198de0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:28.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.320+0000 7ff151a97700 1 --2- 192.168.123.102:0/1134495094 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff14c199320 0x7ff14c19d790 unknown :-1 s=CLOSED pgs=159 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:28.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.320+0000 7ff151a97700 1 -- 192.168.123.102:0/1134495094 >> 192.168.123.102:0/1134495094 conn(0x7ff14c0fe930 msgr2=0x7ff14c100090 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:28.322 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.320+0000 7ff151a97700 1 -- 192.168.123.102:0/1134495094 shutdown_connections 2026-03-10T10:27:28.322 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.320+0000 7ff151a97700 1 -- 192.168.123.102:0/1134495094 wait complete. 2026-03-10T10:27:28.322 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 20 2026-03-10T10:27:28.396 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 20 2026-03-10T10:27:28.397 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 21 2026-03-10T10:27:28.556 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:28.851 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.850+0000 7fa9f0ed5700 1 -- 192.168.123.102:0/396743928 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9ec073130 msgr2=0x7fa9ec073510 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:28.851 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.850+0000 7fa9f0ed5700 1 --2- 192.168.123.102:0/396743928 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9ec073130 0x7fa9ec073510 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7fa9dc009b50 tx=0x7fa9dc009e60 comp rx=0 tx=0).stop 2026-03-10T10:27:28.852 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.850+0000 7fa9f0ed5700 1 -- 192.168.123.102:0/396743928 shutdown_connections 2026-03-10T10:27:28.852 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.850+0000 7fa9f0ed5700 1 --2- 192.168.123.102:0/396743928 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa9ec073a50 0x7fa9ec111720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:28.852 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.850+0000 7fa9f0ed5700 1 --2- 192.168.123.102:0/396743928 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7fa9ec073130 0x7fa9ec073510 unknown :-1 s=CLOSED pgs=160 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:28.852 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.850+0000 7fa9f0ed5700 1 -- 192.168.123.102:0/396743928 >> 192.168.123.102:0/396743928 conn(0x7fa9ec0fc720 msgr2=0x7fa9ec0feb40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:28.852 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.851+0000 7fa9f0ed5700 1 -- 192.168.123.102:0/396743928 shutdown_connections 2026-03-10T10:27:28.852 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.851+0000 7fa9f0ed5700 1 -- 192.168.123.102:0/396743928 wait complete. 2026-03-10T10:27:28.852 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.851+0000 7fa9f0ed5700 1 Processor -- start 2026-03-10T10:27:28.852 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.851+0000 7fa9f0ed5700 1 -- start start 2026-03-10T10:27:28.852 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.851+0000 7fa9f0ed5700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9ec073130 0x7fa9ec19d1a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:28.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.851+0000 7fa9f0ed5700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa9ec073a50 0x7fa9ec19d6e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:28.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9ea59c700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9ec073130 0x7fa9ec19d1a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:28.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9ea59c700 1 --2- >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9ec073130 0x7fa9ec19d1a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:56394/0 (socket says 192.168.123.102:56394) 2026-03-10T10:27:28.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9f0ed5700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa9ec19ddc0 con 0x7fa9ec073130 2026-03-10T10:27:28.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9f0ed5700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa9ec1a1b50 con 0x7fa9ec073a50 2026-03-10T10:27:28.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9ea59c700 1 -- 192.168.123.102:0/1599693150 learned_addr learned my addr 192.168.123.102:0/1599693150 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:28.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9e1bff700 1 --2- 192.168.123.102:0/1599693150 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa9ec073a50 0x7fa9ec19d6e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:28.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9ea59c700 1 -- 192.168.123.102:0/1599693150 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa9ec073a50 msgr2=0x7fa9ec19d6e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:28.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9ea59c700 1 --2- 192.168.123.102:0/1599693150 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa9ec073a50 0x7fa9ec19d6e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:27:28.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9ea59c700 1 -- 192.168.123.102:0/1599693150 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa9dc0097e0 con 0x7fa9ec073130 2026-03-10T10:27:28.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9e1bff700 1 --2- 192.168.123.102:0/1599693150 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa9ec073a50 0x7fa9ec19d6e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:27:28.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9ea59c700 1 --2- 192.168.123.102:0/1599693150 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9ec073130 0x7fa9ec19d1a0 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7fa9dc004970 tx=0x7fa9dc004a50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:28.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9e3fff700 1 -- 192.168.123.102:0/1599693150 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa9dc01d070 con 0x7fa9ec073130 2026-03-10T10:27:28.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.852+0000 7fa9f0ed5700 1 -- 192.168.123.102:0/1599693150 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa9ec1a1e30 con 0x7fa9ec073130 2026-03-10T10:27:28.855 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.854+0000 7fa9f0ed5700 1 -- 192.168.123.102:0/1599693150 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa9ec1a2380 con 0x7fa9ec073130 2026-03-10T10:27:28.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.854+0000 7fa9e3fff700 1 -- 192.168.123.102:0/1599693150 <== mon.0 
v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fa9dc004dd0 con 0x7fa9ec073130 2026-03-10T10:27:28.856 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.854+0000 7fa9e13fe700 1 -- 192.168.123.102:0/1599693150 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa9ec04f2e0 con 0x7fa9ec073130 2026-03-10T10:27:28.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.858+0000 7fa9e3fff700 1 -- 192.168.123.102:0/1599693150 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa9dc022860 con 0x7fa9ec073130 2026-03-10T10:27:28.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.858+0000 7fa9e3fff700 1 -- 192.168.123.102:0/1599693150 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fa9dc022a80 con 0x7fa9ec073130 2026-03-10T10:27:28.859 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.858+0000 7fa9e3fff700 1 --2- 192.168.123.102:0/1599693150 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fa9d8077ab0 0x7fa9d8079f70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:28.860 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.858+0000 7fa9e1bff700 1 --2- 192.168.123.102:0/1599693150 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fa9d8077ab0 0x7fa9d8079f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:28.860 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.858+0000 7fa9e3fff700 1 -- 192.168.123.102:0/1599693150 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7fa9dc0a0170 con 0x7fa9ec073130 2026-03-10T10:27:28.860 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.859+0000 7fa9e3fff700 1 -- 192.168.123.102:0/1599693150 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fa9dc0a1050 con 0x7fa9ec073130 2026-03-10T10:27:28.860 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:28.859+0000 7fa9e1bff700 1 --2- 192.168.123.102:0/1599693150 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fa9d8077ab0 0x7fa9d8079f70 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7fa9ec19e7c0 tx=0x7fa9d4009450 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:29.005 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.003+0000 7fa9e13fe700 1 -- 192.168.123.102:0/1599693150 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 21, "format": "json"} v 0) v1 -- 0x7fa9ec04ea90 con 0x7fa9ec073130 2026-03-10T10:27:29.005 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:28 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/1134495094' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 20, "format": "json"}]: dispatch 2026-03-10T10:27:29.007 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.005+0000 7fa9e3fff700 1 -- 192.168.123.102:0/1599693150 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 21, "format": "json"}]=0 dumped fsmap epoch 21 v40) v1 ==== 107+0+4145 (secure 0 0 0) 0x7fa9dc064a70 con 0x7fa9ec073130 2026-03-10T10:27:29.007 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:29.007 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":21,"btime":"2026-03-10T10:24:45:148445+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":24299,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":12},{"gid":34328,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":17}],"filesystems":[{"mdsmap":{"epoch":21,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:45.148444+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":78,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14494},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14494":{"gid":14494,"name":"cephfs.vm02.stcvsz","rank":0,"incarnation":18,"state":"up:active","state_seq":119,"addr":"192.168.123.102:6829/2194475647","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":2194475647},{"type":"v1","addr":"192.168.123.102:6829","nonce":2194475647}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":14494,"qdb_cluster":[14494]},"id":1}]} 2026-03-10T10:27:29.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.008+0000 7fa9e13fe700 1 -- 192.168.123.102:0/1599693150 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fa9d8077ab0 msgr2=0x7fa9d8079f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:29.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.008+0000 7fa9e13fe700 1 --2- 192.168.123.102:0/1599693150 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fa9d8077ab0 0x7fa9d8079f70 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7fa9ec19e7c0 tx=0x7fa9d4009450 comp rx=0 tx=0).stop 2026-03-10T10:27:29.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.008+0000 7fa9e13fe700 1 -- 192.168.123.102:0/1599693150 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9ec073130 msgr2=0x7fa9ec19d1a0 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:29.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.008+0000 7fa9e13fe700 1 --2- 192.168.123.102:0/1599693150 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9ec073130 0x7fa9ec19d1a0 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7fa9dc004970 tx=0x7fa9dc004a50 comp rx=0 tx=0).stop 2026-03-10T10:27:29.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.009+0000 7fa9e13fe700 1 -- 192.168.123.102:0/1599693150 shutdown_connections 2026-03-10T10:27:29.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.009+0000 7fa9e13fe700 1 --2- 192.168.123.102:0/1599693150 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fa9ec073130 0x7fa9ec19d1a0 unknown :-1 s=CLOSED pgs=161 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:29.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.009+0000 7fa9e13fe700 1 --2- 192.168.123.102:0/1599693150 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fa9d8077ab0 0x7fa9d8079f70 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:29.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.009+0000 7fa9e13fe700 1 --2- 192.168.123.102:0/1599693150 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa9ec073a50 0x7fa9ec19d6e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:29.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.009+0000 7fa9e13fe700 1 -- 192.168.123.102:0/1599693150 >> 192.168.123.102:0/1599693150 conn(0x7fa9ec0fc720 msgr2=0x7fa9ec103230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:29.010 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.009+0000 7fa9e13fe700 1 -- 192.168.123.102:0/1599693150 shutdown_connections 2026-03-10T10:27:29.010 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.009+0000 7fa9e13fe700 1 -- 192.168.123.102:0/1599693150 wait complete. 2026-03-10T10:27:29.011 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 21 2026-03-10T10:27:29.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:28 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1134495094' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 20, "format": "json"}]: dispatch 2026-03-10T10:27:29.060 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 21 2026-03-10T10:27:29.060 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 22 2026-03-10T10:27:29.225 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:29.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.502+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1952644281 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13b4073500 msgr2=0x7f13b4073980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:29.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.502+0000 7f13b9cfb700 1 --2- 192.168.123.102:0/1952644281 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13b4073500 0x7f13b4073980 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7f13a4009b50 tx=0x7f13a4009e60 comp rx=0 tx=0).stop 2026-03-10T10:27:29.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.503+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1952644281 shutdown_connections 2026-03-10T10:27:29.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.503+0000 7f13b9cfb700 1 --2- 192.168.123.102:0/1952644281 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13b4073500 0x7f13b4073980 unknown :-1 s=CLOSED pgs=162 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:29.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.503+0000 7f13b9cfb700 1 --2- 192.168.123.102:0/1952644281 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b4074d70 0x7f13b4072fc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:29.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.503+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1952644281 >> 192.168.123.102:0/1952644281 conn(0x7f13b4078ea0 msgr2=0x7f13b40792b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:29.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.503+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1952644281 shutdown_connections 2026-03-10T10:27:29.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.503+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1952644281 wait complete. 2026-03-10T10:27:29.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.504+0000 7f13b9cfb700 1 Processor -- start 2026-03-10T10:27:29.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.504+0000 7f13b9cfb700 1 -- start start 2026-03-10T10:27:29.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.504+0000 7f13b9cfb700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13b4073500 0x7f13b419d210 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:29.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.504+0000 7f13b9cfb700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b4074d70 0x7f13b419d750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:29.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.504+0000 7f13b9cfb700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f13b419de30 con 0x7f13b4073500 
2026-03-10T10:27:29.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.504+0000 7f13b9cfb700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f13b41a1bc0 con 0x7f13b4074d70 2026-03-10T10:27:29.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.504+0000 7f13b37fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13b4073500 0x7f13b419d210 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:29.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.504+0000 7f13b37fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13b4073500 0x7f13b419d210 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:56416/0 (socket says 192.168.123.102:56416) 2026-03-10T10:27:29.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.504+0000 7f13b37fe700 1 -- 192.168.123.102:0/1644189631 learned_addr learned my addr 192.168.123.102:0/1644189631 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:29.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.504+0000 7f13b37fe700 1 -- 192.168.123.102:0/1644189631 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b4074d70 msgr2=0x7f13b419d750 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:27:29.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.504+0000 7f13b37fe700 1 --2- 192.168.123.102:0/1644189631 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b4074d70 0x7f13b419d750 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:29.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.504+0000 7f13b37fe700 1 -- 192.168.123.102:0/1644189631 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f13a40097e0 con 0x7f13b4073500 2026-03-10T10:27:29.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.505+0000 7f13b37fe700 1 --2- 192.168.123.102:0/1644189631 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13b4073500 0x7f13b419d210 secure :-1 s=READY pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7f139c00eb10 tx=0x7f139c00eed0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:29.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.505+0000 7f13b0ff9700 1 -- 192.168.123.102:0/1644189631 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f139c00cca0 con 0x7f13b4073500 2026-03-10T10:27:29.507 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.505+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1644189631 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f13b41a1ea0 con 0x7f13b4073500 2026-03-10T10:27:29.507 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.505+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1644189631 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f13b41a23f0 con 0x7f13b4073500 2026-03-10T10:27:29.507 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.505+0000 7f13b0ff9700 1 -- 192.168.123.102:0/1644189631 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f139c00ce00 con 0x7f13b4073500 2026-03-10T10:27:29.507 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.505+0000 7f13b0ff9700 1 -- 192.168.123.102:0/1644189631 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f139c0189c0 con 0x7f13b4073500 2026-03-10T10:27:29.507 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.506+0000 7f13b0ff9700 1 -- 
192.168.123.102:0/1644189631 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f139c018b20 con 0x7f13b4073500 2026-03-10T10:27:29.508 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.506+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1644189631 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1394005320 con 0x7f13b4073500 2026-03-10T10:27:29.511 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.507+0000 7f13b0ff9700 1 --2- 192.168.123.102:0/1644189631 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f13a00779e0 0x7f13a0079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:29.511 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.507+0000 7f13b2ffd700 1 --2- 192.168.123.102:0/1644189631 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f13a00779e0 0x7f13a0079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:29.511 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.508+0000 7f13b2ffd700 1 --2- 192.168.123.102:0/1644189631 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f13a00779e0 0x7f13a0079ea0 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f13a4005e50 tx=0x7f13a4005dc0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:29.511 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.508+0000 7f13b0ff9700 1 -- 192.168.123.102:0/1644189631 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f139c014070 con 0x7f13b4073500 2026-03-10T10:27:29.511 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.510+0000 7f13b0ff9700 
1 -- 192.168.123.102:0/1644189631 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f139c062b80 con 0x7f13b4073500 2026-03-10T10:27:29.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.650+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1644189631 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 22, "format": "json"} v 0) v1 -- 0x7f13940059f0 con 0x7f13b4073500 2026-03-10T10:27:29.652 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.650+0000 7f13b0ff9700 1 -- 192.168.123.102:0/1644189631 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 22, "format": "json"}]=0 dumped fsmap epoch 22 v40) v1 ==== 107+0+4993 (secure 0 0 0) 0x7f139c0622d0 con 0x7f13b4073500 2026-03-10T10:27:29.652 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:29.652 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":22,"btime":"2026-03-10T10:24:47:166965+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":24299,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client 
writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12},{"gid":34328,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":17},{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":22}],"filesystems":[{"mdsmap":{"epoch":21,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:45.148444+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":78,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14494},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14494":{"gid":14494,"name":"cephfs.vm02.stcvsz","rank":0,"incarnation":18,"state":"up:active","state_seq":119,"addr":"192.168.123.102:6829/2194475647","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":2194475647},{"type":"v1","addr":"192.168.123.102:6829","nonce":2194475647}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":14494,"qdb_cluster":[14494]},"id":1}]} 2026-03-10T10:27:29.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.653+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1644189631 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f13a00779e0 msgr2=0x7f13a0079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:29.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.653+0000 7f13b9cfb700 1 --2- 192.168.123.102:0/1644189631 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f13a00779e0 0x7f13a0079ea0 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f13a4005e50 tx=0x7f13a4005dc0 comp rx=0 tx=0).stop 2026-03-10T10:27:29.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.653+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1644189631 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13b4073500 msgr2=0x7f13b419d210 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:29.655 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.653+0000 7f13b9cfb700 1 --2- 192.168.123.102:0/1644189631 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13b4073500 0x7f13b419d210 secure :-1 s=READY pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7f139c00eb10 tx=0x7f139c00eed0 comp rx=0 tx=0).stop 2026-03-10T10:27:29.655 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.653+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1644189631 shutdown_connections 2026-03-10T10:27:29.655 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.653+0000 7f13b9cfb700 1 --2- 192.168.123.102:0/1644189631 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f13b4073500 0x7f13b419d210 unknown :-1 s=CLOSED pgs=163 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:27:29.655 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.653+0000 7f13b9cfb700 1 --2- 192.168.123.102:0/1644189631 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f13a00779e0 0x7f13a0079ea0 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:29.655 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.653+0000 7f13b9cfb700 1 --2- 192.168.123.102:0/1644189631 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b4074d70 0x7f13b419d750 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:29.655 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.654+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1644189631 >> 192.168.123.102:0/1644189631 conn(0x7f13b4078ea0 msgr2=0x7f13b410f9f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:29.655 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.654+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1644189631 shutdown_connections 2026-03-10T10:27:29.655 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:29.654+0000 7f13b9cfb700 1 -- 192.168.123.102:0/1644189631 wait complete. 
2026-03-10T10:27:29.656 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 22 2026-03-10T10:27:29.720 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 22 2026-03-10T10:27:29.720 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 23 2026-03-10T10:27:29.884 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:29.910 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:29 vm02.local ceph-mon[110129]: pgmap v248: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:27:29.911 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:29 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1599693150' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 21, "format": "json"}]: dispatch 2026-03-10T10:27:29.911 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:29 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1644189631' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 22, "format": "json"}]: dispatch 2026-03-10T10:27:30.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:29 vm05.local ceph-mon[103593]: pgmap v248: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:27:30.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:29 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1599693150' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 21, "format": "json"}]: dispatch 2026-03-10T10:27:30.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:29 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/1644189631' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 22, "format": "json"}]: dispatch 2026-03-10T10:27:30.169 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.167+0000 7f62a4972700 1 -- 192.168.123.102:0/3704866554 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f629c103d70 msgr2=0x7f629c107dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:30.169 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.167+0000 7f62a4972700 1 --2- 192.168.123.102:0/3704866554 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f629c103d70 0x7f629c107dc0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f6290009a60 tx=0x7f6290009d70 comp rx=0 tx=0).stop 2026-03-10T10:27:30.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.168+0000 7f62a4972700 1 -- 192.168.123.102:0/3704866554 shutdown_connections 2026-03-10T10:27:30.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.168+0000 7f62a4972700 1 --2- 192.168.123.102:0/3704866554 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f629c103d70 0x7f629c107dc0 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:30.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.168+0000 7f62a4972700 1 --2- 192.168.123.102:0/3704866554 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f629c1033c0 0x7f629c1037a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:30.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.168+0000 7f62a4972700 1 -- 192.168.123.102:0/3704866554 >> 192.168.123.102:0/3704866554 conn(0x7f629c0fec30 msgr2=0x7f629c101050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:30.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.169+0000 7f62a4972700 1 -- 192.168.123.102:0/3704866554 shutdown_connections 2026-03-10T10:27:30.170 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.169+0000 7f62a4972700 1 -- 192.168.123.102:0/3704866554 wait complete. 2026-03-10T10:27:30.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.170+0000 7f62a4972700 1 Processor -- start 2026-03-10T10:27:30.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.170+0000 7f62a4972700 1 -- start start 2026-03-10T10:27:30.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.170+0000 7f62a4972700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f629c1033c0 0x7f629c198e20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:30.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.170+0000 7f62a4972700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f629c103d70 0x7f629c199360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:30.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.170+0000 7f62a4972700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f629c199a40 con 0x7f629c103d70 2026-03-10T10:27:30.176 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.170+0000 7f62a4972700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f629c19d7d0 con 0x7f629c1033c0 2026-03-10T10:27:30.176 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.171+0000 7f62a1f0d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f629c103d70 0x7f629c199360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:30.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.171+0000 7f62a1f0d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f629c103d70 0x7f629c199360 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:56426/0 (socket says 192.168.123.102:56426) 2026-03-10T10:27:30.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.171+0000 7f62a1f0d700 1 -- 192.168.123.102:0/4204679573 learned_addr learned my addr 192.168.123.102:0/4204679573 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:30.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.171+0000 7f62a1f0d700 1 -- 192.168.123.102:0/4204679573 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f629c1033c0 msgr2=0x7f629c198e20 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:27:30.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.171+0000 7f62a1f0d700 1 --2- 192.168.123.102:0/4204679573 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f629c1033c0 0x7f629c198e20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:30.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.171+0000 7f62a1f0d700 1 -- 192.168.123.102:0/4204679573 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f62880097e0 con 0x7f629c103d70 2026-03-10T10:27:30.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.171+0000 7f62a1f0d700 1 --2- 192.168.123.102:0/4204679573 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f629c103d70 0x7f629c199360 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7f629000f700 tx=0x7f629000f7e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:30.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.172+0000 7f62977fe700 1 -- 192.168.123.102:0/4204679573 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f629001d070 con 0x7f629c103d70 
2026-03-10T10:27:30.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.172+0000 7f62977fe700 1 -- 192.168.123.102:0/4204679573 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f629000fca0 con 0x7f629c103d70 2026-03-10T10:27:30.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.172+0000 7f62977fe700 1 -- 192.168.123.102:0/4204679573 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f62900177c0 con 0x7f629c103d70 2026-03-10T10:27:30.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.172+0000 7f62a4972700 1 -- 192.168.123.102:0/4204679573 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6290009710 con 0x7f629c103d70 2026-03-10T10:27:30.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.172+0000 7f62a4972700 1 -- 192.168.123.102:0/4204679573 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f629c19ddb0 con 0x7f629c103d70 2026-03-10T10:27:30.179 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.173+0000 7f62a4972700 1 -- 192.168.123.102:0/4204679573 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6280005320 con 0x7f629c103d70 2026-03-10T10:27:30.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.178+0000 7f62977fe700 1 -- 192.168.123.102:0/4204679573 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f6290017920 con 0x7f629c103d70 2026-03-10T10:27:30.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.179+0000 7f62977fe700 1 --2- 192.168.123.102:0/4204679573 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f628c077990 0x7f628c079e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:30.180 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.179+0000 7f62977fe700 1 -- 192.168.123.102:0/4204679573 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f629009b0d0 con 0x7f629c103d70 2026-03-10T10:27:30.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.179+0000 7f62977fe700 1 -- 192.168.123.102:0/4204679573 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f629009b550 con 0x7f629c103d70 2026-03-10T10:27:30.180 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.179+0000 7f62a270e700 1 --2- 192.168.123.102:0/4204679573 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f628c077990 0x7f628c079e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:30.181 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.180+0000 7f62a270e700 1 --2- 192.168.123.102:0/4204679573 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f628c077990 0x7f628c079e50 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f6288005fd0 tx=0x7f6288009500 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:30.320 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.319+0000 7f62a4972700 1 -- 192.168.123.102:0/4204679573 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 23, "format": "json"} v 0) v1 -- 0x7f6280005190 con 0x7f629c103d70 2026-03-10T10:27:30.321 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.320+0000 7f62977fe700 1 -- 192.168.123.102:0/4204679573 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 23, "format": "json"}]=0 dumped fsmap epoch 23 v40) v1 
==== 107+0+4188 (secure 0 0 0) 0x7f6290063880 con 0x7f629c103d70 2026-03-10T10:27:30.321 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:30.321 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":23,"btime":"2026-03-10T10:24:50:477484+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":24299,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":12},{"gid":34328,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable 
ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":17},{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":22}],"filesystems":[{"mdsmap":{"epoch":23,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:50.477482+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":81,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned 
encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{},"failed":[0],"damaged":[],"stopped":[],"info":{},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:30.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.323+0000 7f62a4972700 1 -- 192.168.123.102:0/4204679573 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f628c077990 msgr2=0x7f628c079e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:30.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.323+0000 7f62a4972700 1 --2- 192.168.123.102:0/4204679573 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f628c077990 0x7f628c079e50 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f6288005fd0 tx=0x7f6288009500 comp rx=0 tx=0).stop 2026-03-10T10:27:30.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.323+0000 7f62a4972700 1 -- 192.168.123.102:0/4204679573 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f629c103d70 msgr2=0x7f629c199360 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:30.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.323+0000 7f62a4972700 1 --2- 192.168.123.102:0/4204679573 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f629c103d70 0x7f629c199360 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7f629000f700 tx=0x7f629000f7e0 comp rx=0 tx=0).stop 2026-03-10T10:27:30.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.323+0000 7f62a4972700 1 -- 192.168.123.102:0/4204679573 shutdown_connections 2026-03-10T10:27:30.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.323+0000 
7f62a4972700 1 --2- 192.168.123.102:0/4204679573 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f628c077990 0x7f628c079e50 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:30.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.323+0000 7f62a4972700 1 --2- 192.168.123.102:0/4204679573 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f629c1033c0 0x7f629c198e20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:30.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.323+0000 7f62a4972700 1 --2- 192.168.123.102:0/4204679573 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f629c103d70 0x7f629c199360 unknown :-1 s=CLOSED pgs=164 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:30.325 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.323+0000 7f62a4972700 1 -- 192.168.123.102:0/4204679573 >> 192.168.123.102:0/4204679573 conn(0x7f629c0fec30 msgr2=0x7f629c107630 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:30.325 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.324+0000 7f62a4972700 1 -- 192.168.123.102:0/4204679573 shutdown_connections 2026-03-10T10:27:30.325 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.324+0000 7f62a4972700 1 -- 192.168.123.102:0/4204679573 wait complete. 
2026-03-10T10:27:30.326 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 23 2026-03-10T10:27:30.403 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 23 2026-03-10T10:27:30.403 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 24 2026-03-10T10:27:30.569 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:30.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.869+0000 7f791e8b1700 1 -- 192.168.123.102:0/2267753635 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f79180739d0 msgr2=0x7f791810d1f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:30.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.869+0000 7f791e8b1700 1 --2- 192.168.123.102:0/2267753635 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f79180739d0 0x7f791810d1f0 secure :-1 s=READY pgs=165 cs=0 l=1 rev1=1 crypto rx=0x7f7908009b00 tx=0x7f7908009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:30.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.870+0000 7f791e8b1700 1 -- 192.168.123.102:0/2267753635 shutdown_connections 2026-03-10T10:27:30.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.870+0000 7f791e8b1700 1 --2- 192.168.123.102:0/2267753635 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f79180739d0 0x7f791810d1f0 unknown :-1 s=CLOSED pgs=165 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:30.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.870+0000 7f791e8b1700 1 --2- 192.168.123.102:0/2267753635 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f79180730b0 0x7f7918073490 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:30.872 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.870+0000 7f791e8b1700 1 -- 192.168.123.102:0/2267753635 >> 192.168.123.102:0/2267753635 conn(0x7f79180fc920 msgr2=0x7f79180fed40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:30.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.871+0000 7f791e8b1700 1 -- 192.168.123.102:0/2267753635 shutdown_connections 2026-03-10T10:27:30.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.871+0000 7f791e8b1700 1 -- 192.168.123.102:0/2267753635 wait complete. 2026-03-10T10:27:30.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.871+0000 7f791e8b1700 1 Processor -- start 2026-03-10T10:27:30.872 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.871+0000 7f791e8b1700 1 -- start start 2026-03-10T10:27:30.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.872+0000 7f791e8b1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f79180730b0 0x7f7918198d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:30.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.872+0000 7f791e8b1700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f79180739d0 0x7f79181992c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:30.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.872+0000 7f791e8b1700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f79181999a0 con 0x7f79180739d0 2026-03-10T10:27:30.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.872+0000 7f791e8b1700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f791819d730 con 0x7f79180730b0 2026-03-10T10:27:30.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.872+0000 7f79177fe700 1 --2- >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f79180739d0 0x7f79181992c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:30.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.872+0000 7f79177fe700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f79180739d0 0x7f79181992c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:56446/0 (socket says 192.168.123.102:56446) 2026-03-10T10:27:30.873 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.872+0000 7f79177fe700 1 -- 192.168.123.102:0/1072669910 learned_addr learned my addr 192.168.123.102:0/1072669910 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:30.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.872+0000 7f79177fe700 1 -- 192.168.123.102:0/1072669910 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f79180730b0 msgr2=0x7f7918198d80 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:27:30.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.872+0000 7f79177fe700 1 --2- 192.168.123.102:0/1072669910 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f79180730b0 0x7f7918198d80 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:30.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.872+0000 7f79177fe700 1 -- 192.168.123.102:0/1072669910 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f79080097e0 con 0x7f79180739d0 2026-03-10T10:27:30.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.872+0000 7f79177fe700 1 --2- 192.168.123.102:0/1072669910 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f79180739d0 
0x7f79181992c0 secure :-1 s=READY pgs=166 cs=0 l=1 rev1=1 crypto rx=0x7f790800b5c0 tx=0x7f7908004950 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:30.874 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.873+0000 7f79157fa700 1 -- 192.168.123.102:0/1072669910 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f790801d070 con 0x7f79180739d0 2026-03-10T10:27:30.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.873+0000 7f79157fa700 1 -- 192.168.123.102:0/1072669910 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f7908022470 con 0x7f79180739d0 2026-03-10T10:27:30.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.873+0000 7f79157fa700 1 -- 192.168.123.102:0/1072669910 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f790800f650 con 0x7f79180739d0 2026-03-10T10:27:30.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.873+0000 7f791e8b1700 1 -- 192.168.123.102:0/1072669910 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f791819d9b0 con 0x7f79180739d0 2026-03-10T10:27:30.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.873+0000 7f791e8b1700 1 -- 192.168.123.102:0/1072669910 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f791819dea0 con 0x7f79180739d0 2026-03-10T10:27:30.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.874+0000 7f791e8b1700 1 -- 192.168.123.102:0/1072669910 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f791810a8f0 con 0x7f79180739d0 2026-03-10T10:27:30.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.876+0000 7f79157fa700 1 -- 192.168.123.102:0/1072669910 <== mon.0 
v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f79080225e0 con 0x7f79180739d0 2026-03-10T10:27:30.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.876+0000 7f79157fa700 1 --2- 192.168.123.102:0/1072669910 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7904077910 0x7f7904079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:30.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.876+0000 7f79157fa700 1 -- 192.168.123.102:0/1072669910 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f790809bfe0 con 0x7f79180739d0 2026-03-10T10:27:30.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.877+0000 7f7917fff700 1 --2- 192.168.123.102:0/1072669910 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7904077910 0x7f7904079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:30.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.877+0000 7f7917fff700 1 --2- 192.168.123.102:0/1072669910 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7904077910 0x7f7904079dd0 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f790000ba10 tx=0x7f790000b3f0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:30.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:30.878+0000 7f79157fa700 1 -- 192.168.123.102:0/1072669910 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f79080646e0 con 0x7f79180739d0 2026-03-10T10:27:31.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:30 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/4204679573' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 23, "format": "json"}]: dispatch 2026-03-10T10:27:31.031 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.030+0000 7f791e8b1700 1 -- 192.168.123.102:0/1072669910 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 24, "format": "json"} v 0) v1 -- 0x7f791804ea90 con 0x7f79180739d0 2026-03-10T10:27:31.033 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.031+0000 7f79157fa700 1 -- 192.168.123.102:0/1072669910 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 24, "format": "json"}]=0 dumped fsmap epoch 24 v40) v1 ==== 107+0+4199 (secure 0 0 0) 0x7f7908063e30 con 0x7f79180739d0 2026-03-10T10:27:31.033 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:31.033 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":24,"btime":"2026-03-10T10:24:50:483096+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34328,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate 
object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":17},{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":22}],"filesystems":[{"mdsmap":{"epoch":24,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:50.483093+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":81,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline 
data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":24299},"failed":[],"damaged":[],"stopped":[],"info":{"gid_24299":{"gid":24299,"name":"cephfs.vm05.liatdh","rank":0,"incarnation":24,"state":"up:replay","state_seq":1,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:31.035 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.034+0000 7f791e8b1700 1 -- 192.168.123.102:0/1072669910 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7904077910 msgr2=0x7f7904079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:31.035 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.034+0000 7f791e8b1700 1 --2- 192.168.123.102:0/1072669910 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7904077910 0x7f7904079dd0 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f790000ba10 tx=0x7f790000b3f0 comp rx=0 tx=0).stop 2026-03-10T10:27:31.035 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.034+0000 7f791e8b1700 1 -- 192.168.123.102:0/1072669910 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f79180739d0 msgr2=0x7f79181992c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:31.036 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.034+0000 7f791e8b1700 1 --2- 192.168.123.102:0/1072669910 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f79180739d0 0x7f79181992c0 secure :-1 s=READY pgs=166 cs=0 l=1 rev1=1 crypto rx=0x7f790800b5c0 tx=0x7f7908004950 comp rx=0 tx=0).stop 2026-03-10T10:27:31.036 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.034+0000 7f791e8b1700 1 -- 192.168.123.102:0/1072669910 shutdown_connections 2026-03-10T10:27:31.036 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.034+0000 7f791e8b1700 1 --2- 192.168.123.102:0/1072669910 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7904077910 0x7f7904079dd0 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:31.036 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.034+0000 7f791e8b1700 1 --2- 192.168.123.102:0/1072669910 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f79180730b0 0x7f7918198d80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:31.036 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.034+0000 7f791e8b1700 1 --2- 192.168.123.102:0/1072669910 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f79180739d0 0x7f79181992c0 unknown :-1 s=CLOSED pgs=166 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:31.036 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.034+0000 7f791e8b1700 1 -- 192.168.123.102:0/1072669910 >> 192.168.123.102:0/1072669910 conn(0x7f79180fc920 msgr2=0x7f7918107a30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:31.036 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.034+0000 7f791e8b1700 1 -- 
192.168.123.102:0/1072669910 shutdown_connections 2026-03-10T10:27:31.036 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.034+0000 7f791e8b1700 1 -- 192.168.123.102:0/1072669910 wait complete. 2026-03-10T10:27:31.037 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 24 2026-03-10T10:27:31.100 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 24 2026-03-10T10:27:31.101 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 25 2026-03-10T10:27:31.263 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:31.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:30 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/4204679573' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 23, "format": "json"}]: dispatch 2026-03-10T10:27:31.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.558+0000 7fc73d8ab700 1 -- 192.168.123.102:0/548352626 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc73810d0f0 msgr2=0x7fc73810d570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:31.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.558+0000 7fc73d8ab700 1 --2- 192.168.123.102:0/548352626 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc73810d0f0 0x7fc73810d570 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7fc728009b00 tx=0x7fc728009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:31.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.558+0000 7fc73d8ab700 1 -- 192.168.123.102:0/548352626 shutdown_connections 2026-03-10T10:27:31.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.558+0000 7fc73d8ab700 1 --2- 192.168.123.102:0/548352626 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7fc73810d0f0 0x7fc73810d570 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:31.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.558+0000 7fc73d8ab700 1 --2- 192.168.123.102:0/548352626 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc73810f340 0x7fc73810f720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:31.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.558+0000 7fc73d8ab700 1 -- 192.168.123.102:0/548352626 >> 192.168.123.102:0/548352626 conn(0x7fc73806ce20 msgr2=0x7fc73806d230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:31.561 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.558+0000 7fc73d8ab700 1 -- 192.168.123.102:0/548352626 shutdown_connections 2026-03-10T10:27:31.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.558+0000 7fc73d8ab700 1 -- 192.168.123.102:0/548352626 wait complete. 
2026-03-10T10:27:31.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.560+0000 7fc73d8ab700 1 Processor -- start 2026-03-10T10:27:31.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.560+0000 7fc73d8ab700 1 -- start start 2026-03-10T10:27:31.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.560+0000 7fc73d8ab700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc73810f340 0x7fc73811bf00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:31.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.560+0000 7fc73d8ab700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc738116f50 0x7fc7381173d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:31.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.560+0000 7fc73d8ab700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc7381179a0 con 0x7fc73810f340 2026-03-10T10:27:31.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.560+0000 7fc73d8ab700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc738117b10 con 0x7fc738116f50 2026-03-10T10:27:31.562 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.560+0000 7fc737fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc738116f50 0x7fc7381173d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:31.563 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.561+0000 7fc737fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc738116f50 0x7fc7381173d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:34926/0 (socket says 192.168.123.102:34926) 2026-03-10T10:27:31.563 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.561+0000 7fc737fff700 1 -- 192.168.123.102:0/1862289113 learned_addr learned my addr 192.168.123.102:0/1862289113 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:31.563 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.561+0000 7fc737fff700 1 -- 192.168.123.102:0/1862289113 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc73810f340 msgr2=0x7fc73811bf00 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:27:31.563 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.561+0000 7fc73c8a9700 1 --2- 192.168.123.102:0/1862289113 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc73810f340 0x7fc73811bf00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:31.563 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.561+0000 7fc737fff700 1 --2- 192.168.123.102:0/1862289113 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc73810f340 0x7fc73811bf00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:31.563 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.561+0000 7fc737fff700 1 -- 192.168.123.102:0/1862289113 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc72c009710 con 0x7fc738116f50 2026-03-10T10:27:31.563 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.561+0000 7fc73c8a9700 1 --2- 192.168.123.102:0/1862289113 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc73810f340 0x7fc73811bf00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:27:31.563 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.562+0000 7fc737fff700 1 --2- 192.168.123.102:0/1862289113 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc738116f50 0x7fc7381173d0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7fc728009fd0 tx=0x7fc72800f740 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:31.563 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.562+0000 7fc735ffb700 1 -- 192.168.123.102:0/1862289113 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc72801c070 con 0x7fc738116f50 2026-03-10T10:27:31.564 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.562+0000 7fc73d8ab700 1 -- 192.168.123.102:0/1862289113 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc7280097e0 con 0x7fc738116f50 2026-03-10T10:27:31.564 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.562+0000 7fc73d8ab700 1 -- 192.168.123.102:0/1862289113 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc7380771b0 con 0x7fc738116f50 2026-03-10T10:27:31.564 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.563+0000 7fc735ffb700 1 -- 192.168.123.102:0/1862289113 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fc728003680 con 0x7fc738116f50 2026-03-10T10:27:31.564 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.563+0000 7fc73d8ab700 1 -- 192.168.123.102:0/1862289113 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc73804f2e0 con 0x7fc738116f50 2026-03-10T10:27:31.564 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.563+0000 7fc735ffb700 1 -- 192.168.123.102:0/1862289113 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 
(secure 0 0 0) 0x7fc7280176d0 con 0x7fc738116f50 2026-03-10T10:27:31.565 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.564+0000 7fc735ffb700 1 -- 192.168.123.102:0/1862289113 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc728017830 con 0x7fc738116f50 2026-03-10T10:27:31.566 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.565+0000 7fc735ffb700 1 --2- 192.168.123.102:0/1862289113 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc720077910 0x7fc720079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:31.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.565+0000 7fc73c8a9700 1 --2- 192.168.123.102:0/1862289113 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc720077910 0x7fc720079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:31.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.566+0000 7fc735ffb700 1 -- 192.168.123.102:0/1862289113 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7fc72802f080 con 0x7fc738116f50 2026-03-10T10:27:31.567 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.566+0000 7fc73c8a9700 1 --2- 192.168.123.102:0/1862289113 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc720077910 0x7fc720079dd0 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7fc72c009f60 tx=0x7fc72c009450 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:31.568 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.566+0000 7fc735ffb700 1 -- 192.168.123.102:0/1862289113 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 
v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc728064990 con 0x7fc738116f50 2026-03-10T10:27:31.720 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.719+0000 7fc73d8ab700 1 -- 192.168.123.102:0/1862289113 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 25, "format": "json"} v 0) v1 -- 0x7fc73804ea90 con 0x7fc738116f50 2026-03-10T10:27:31.721 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.720+0000 7fc735ffb700 1 -- 192.168.123.102:0/1862289113 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 25, "format": "json"}]=0 dumped fsmap epoch 25 v40) v1 ==== 107+0+4204 (secure 0 0 0) 0x7fc72801f370 con 0x7fc738116f50 2026-03-10T10:27:31.721 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:31.722 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":25,"btime":"2026-03-10T10:24:54:866624+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34328,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned 
encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":17},{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":22}],"filesystems":[{"mdsmap":{"epoch":25,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:54.622578+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":81,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file 
layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":24299},"failed":[],"damaged":[],"stopped":[],"info":{"gid_24299":{"gid":24299,"name":"cephfs.vm05.liatdh","rank":0,"incarnation":24,"state":"up:reconnect","state_seq":120,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:31.724 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.723+0000 7fc73d8ab700 1 -- 192.168.123.102:0/1862289113 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc720077910 msgr2=0x7fc720079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:31.724 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.723+0000 7fc73d8ab700 1 --2- 192.168.123.102:0/1862289113 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc720077910 0x7fc720079dd0 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7fc72c009f60 tx=0x7fc72c009450 comp rx=0 tx=0).stop 2026-03-10T10:27:31.724 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.723+0000 7fc73d8ab700 1 -- 192.168.123.102:0/1862289113 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc738116f50 
msgr2=0x7fc7381173d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:31.724 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.723+0000 7fc73d8ab700 1 --2- 192.168.123.102:0/1862289113 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc738116f50 0x7fc7381173d0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7fc728009fd0 tx=0x7fc72800f740 comp rx=0 tx=0).stop 2026-03-10T10:27:31.724 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.723+0000 7fc73d8ab700 1 -- 192.168.123.102:0/1862289113 shutdown_connections 2026-03-10T10:27:31.725 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.723+0000 7fc73d8ab700 1 --2- 192.168.123.102:0/1862289113 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fc73810f340 0x7fc73811bf00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:31.725 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.723+0000 7fc73d8ab700 1 --2- 192.168.123.102:0/1862289113 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fc720077910 0x7fc720079dd0 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:31.725 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.723+0000 7fc73d8ab700 1 --2- 192.168.123.102:0/1862289113 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc738116f50 0x7fc7381173d0 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:31.725 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.724+0000 7fc73d8ab700 1 -- 192.168.123.102:0/1862289113 >> 192.168.123.102:0/1862289113 conn(0x7fc73806ce20 msgr2=0x7fc738070070 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:31.725 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.724+0000 7fc73d8ab700 1 -- 192.168.123.102:0/1862289113 shutdown_connections 2026-03-10T10:27:31.725 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:31.724+0000 7fc73d8ab700 1 -- 192.168.123.102:0/1862289113 wait complete. 2026-03-10T10:27:31.726 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 25 2026-03-10T10:27:31.811 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:31 vm02.local ceph-mon[110129]: pgmap v249: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:31.811 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:31 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1072669910' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 24, "format": "json"}]: dispatch 2026-03-10T10:27:31.811 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:31 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1862289113' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 25, "format": "json"}]: dispatch 2026-03-10T10:27:31.813 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 25 2026-03-10T10:27:31.814 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 26 2026-03-10T10:27:31.981 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:32.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.257+0000 7f3028db4700 1 -- 192.168.123.102:0/2640853875 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3024102da0 msgr2=0x7f3024103180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:32.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.257+0000 7f3028db4700 1 --2- 192.168.123.102:0/2640853875 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3024102da0 0x7f3024103180 secure :-1 s=READY pgs=167 cs=0 l=1 rev1=1 crypto rx=0x7f3014009b50 tx=0x7f3014009e60 
comp rx=0 tx=0).stop 2026-03-10T10:27:32.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.258+0000 7f3028db4700 1 -- 192.168.123.102:0/2640853875 shutdown_connections 2026-03-10T10:27:32.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.258+0000 7f3028db4700 1 --2- 192.168.123.102:0/2640853875 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3024069180 0x7f3024069600 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:32.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.258+0000 7f3028db4700 1 --2- 192.168.123.102:0/2640853875 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3024102da0 0x7f3024103180 unknown :-1 s=CLOSED pgs=167 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:32.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.258+0000 7f3028db4700 1 -- 192.168.123.102:0/2640853875 >> 192.168.123.102:0/2640853875 conn(0x7f3024076b70 msgr2=0x7f3024076f80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:32.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.258+0000 7f3028db4700 1 -- 192.168.123.102:0/2640853875 shutdown_connections 2026-03-10T10:27:32.259 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.258+0000 7f3028db4700 1 -- 192.168.123.102:0/2640853875 wait complete. 
2026-03-10T10:27:32.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.259+0000 7f3028db4700 1 Processor -- start 2026-03-10T10:27:32.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.259+0000 7f3028db4700 1 -- start start 2026-03-10T10:27:32.260 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.259+0000 7f3028db4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3024069180 0x7f302419a820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:32.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.259+0000 7f3028db4700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3024102da0 0x7f302419ad60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:32.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.259+0000 7f3028db4700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f302419b3f0 con 0x7f3024102da0 2026-03-10T10:27:32.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.259+0000 7f3028db4700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f30241948a0 con 0x7f3024069180 2026-03-10T10:27:32.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.259+0000 7f302259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3024069180 0x7f302419a820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:32.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.259+0000 7f302259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3024069180 0x7f302419a820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:34954/0 (socket says 192.168.123.102:34954) 2026-03-10T10:27:32.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.259+0000 7f302259c700 1 -- 192.168.123.102:0/1145304107 learned_addr learned my addr 192.168.123.102:0/1145304107 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:32.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.260+0000 7f302259c700 1 -- 192.168.123.102:0/1145304107 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3024102da0 msgr2=0x7f302419ad60 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:27:32.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.260+0000 7f3021d9b700 1 --2- 192.168.123.102:0/1145304107 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3024102da0 0x7f302419ad60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:32.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.260+0000 7f302259c700 1 --2- 192.168.123.102:0/1145304107 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3024102da0 0x7f302419ad60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:32.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.260+0000 7f302259c700 1 -- 192.168.123.102:0/1145304107 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f30140097e0 con 0x7f3024069180 2026-03-10T10:27:32.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.260+0000 7f3021d9b700 1 --2- 192.168.123.102:0/1145304107 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3024102da0 0x7f302419ad60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:27:32.261 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.260+0000 7f302259c700 1 --2- 192.168.123.102:0/1145304107 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3024069180 0x7f302419a820 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f3014005f50 tx=0x7f30140049e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:32.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.260+0000 7f30137fe700 1 -- 192.168.123.102:0/1145304107 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f301401d070 con 0x7f3024069180 2026-03-10T10:27:32.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.260+0000 7f30137fe700 1 -- 192.168.123.102:0/1145304107 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f3014022470 con 0x7f3024069180 2026-03-10T10:27:32.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.261+0000 7f3028db4700 1 -- 192.168.123.102:0/1145304107 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3024194b20 con 0x7f3024069180 2026-03-10T10:27:32.263 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.261+0000 7f30137fe700 1 -- 192.168.123.102:0/1145304107 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f301400f670 con 0x7f3024069180 2026-03-10T10:27:32.263 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.261+0000 7f3028db4700 1 -- 192.168.123.102:0/1145304107 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3024195010 con 0x7f3024069180 2026-03-10T10:27:32.263 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.262+0000 7f3028db4700 1 -- 192.168.123.102:0/1145304107 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f302404ea90 con 0x7f3024069180 2026-03-10T10:27:32.265 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.263+0000 7f30137fe700 1 -- 192.168.123.102:0/1145304107 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f3014022ac0 con 0x7f3024069180 2026-03-10T10:27:32.265 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.263+0000 7f30137fe700 1 --2- 192.168.123.102:0/1145304107 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f300c077920 0x7f300c079de0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:32.265 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.263+0000 7f30137fe700 1 -- 192.168.123.102:0/1145304107 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f301409bbc0 con 0x7f3024069180 2026-03-10T10:27:32.265 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.264+0000 7f3021d9b700 1 --2- 192.168.123.102:0/1145304107 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f300c077920 0x7f300c079de0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:32.265 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.264+0000 7f3021d9b700 1 --2- 192.168.123.102:0/1145304107 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f300c077920 0x7f300c079de0 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f30240ffc20 tx=0x7f3018011040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:32.266 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.265+0000 7f30137fe700 1 -- 192.168.123.102:0/1145304107 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f30140642c0 con 0x7f3024069180 2026-03-10T10:27:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:31 vm05.local ceph-mon[103593]: pgmap v249: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:31 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1072669910' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 24, "format": "json"}]: dispatch 2026-03-10T10:27:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:31 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1862289113' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 25, "format": "json"}]: dispatch 2026-03-10T10:27:32.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.408+0000 7f3028db4700 1 -- 192.168.123.102:0/1145304107 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 26, "format": "json"} v 0) v1 -- 0x7f3024066e80 con 0x7f3024069180 2026-03-10T10:27:32.413 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.411+0000 7f30137fe700 1 -- 192.168.123.102:0/1145304107 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 26, "format": "json"}]=0 dumped fsmap epoch 26 v40) v1 ==== 107+0+5052 (secure 0 0 0) 0x7f3014063a10 con 0x7f3024069180 2026-03-10T10:27:32.414 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:32.414 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":26,"btime":"2026-03-10T10:24:55:871764+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor 
table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34328,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":17},{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":22},{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":26}],"filesystems":[{"mdsmap":{"epoch":26,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:54.876457+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":81,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{"mds_0":24299},"failed":[],"damaged":[],"stopped":[],"info":{"gid_24299":{"gid":24299,"name":"cephfs.vm05.liatdh","rank":0,"incarnation":24,"state":"up:rejoin","state_seq":121,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:32.416 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.415+0000 7f3028db4700 1 -- 192.168.123.102:0/1145304107 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f300c077920 msgr2=0x7f300c079de0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:32.416 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.415+0000 7f3028db4700 1 --2- 192.168.123.102:0/1145304107 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f300c077920 0x7f300c079de0 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f30240ffc20 tx=0x7f3018011040 comp rx=0 tx=0).stop 2026-03-10T10:27:32.416 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.415+0000 7f3028db4700 1 -- 192.168.123.102:0/1145304107 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3024069180 msgr2=0x7f302419a820 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:32.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.415+0000 7f3028db4700 1 --2- 192.168.123.102:0/1145304107 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3024069180 0x7f302419a820 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f3014005f50 tx=0x7f30140049e0 comp rx=0 tx=0).stop 2026-03-10T10:27:32.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.415+0000 7f3028db4700 1 -- 192.168.123.102:0/1145304107 shutdown_connections 2026-03-10T10:27:32.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.415+0000 7f3028db4700 1 --2- 192.168.123.102:0/1145304107 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f300c077920 0x7f300c079de0 unknown :-1 s=CLOSED pgs=124 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:32.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.415+0000 7f3028db4700 1 --2- 192.168.123.102:0/1145304107 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3024069180 0x7f302419a820 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:32.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.416+0000 7f3028db4700 1 --2- 192.168.123.102:0/1145304107 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3024102da0 0x7f302419ad60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:32.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.416+0000 7f3028db4700 1 -- 192.168.123.102:0/1145304107 >> 192.168.123.102:0/1145304107 conn(0x7f3024076b70 msgr2=0x7f30240fde10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:32.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.416+0000 7f3028db4700 1 -- 192.168.123.102:0/1145304107 shutdown_connections 2026-03-10T10:27:32.417 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.416+0000 7f3028db4700 1 -- 192.168.123.102:0/1145304107 wait complete. 2026-03-10T10:27:32.418 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 26 2026-03-10T10:27:32.465 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 26 2026-03-10T10:27:32.465 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 27 2026-03-10T10:27:32.631 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:32.902 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.899+0000 7f9616ded700 1 -- 192.168.123.102:0/46541 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9610068df0 msgr2=0x7f961010d5b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:32.902 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.899+0000 7f9616ded700 1 --2- 192.168.123.102:0/46541 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9610068df0 0x7f961010d5b0 secure :-1 s=READY pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7f9604009b30 tx=0x7f9604009e40 comp rx=0 tx=0).stop 2026-03-10T10:27:32.902 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.900+0000 7f9616ded700 1 -- 192.168.123.102:0/46541 shutdown_connections 2026-03-10T10:27:32.902 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.900+0000 7f9616ded700 1 --2- 192.168.123.102:0/46541 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9610068df0 0x7f961010d5b0 unknown :-1 s=CLOSED pgs=168 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:32.902 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.900+0000 7f9616ded700 1 --2- 192.168.123.102:0/46541 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96100684d0 0x7f96100688b0 
unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:32.902 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.901+0000 7f9616ded700 1 -- 192.168.123.102:0/46541 >> 192.168.123.102:0/46541 conn(0x7f9610075960 msgr2=0x7f9610075d70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:32.902 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.901+0000 7f9616ded700 1 -- 192.168.123.102:0/46541 shutdown_connections 2026-03-10T10:27:32.902 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.901+0000 7f9616ded700 1 -- 192.168.123.102:0/46541 wait complete. 2026-03-10T10:27:32.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.901+0000 7f9616ded700 1 Processor -- start 2026-03-10T10:27:32.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.902+0000 7f9616ded700 1 -- start start 2026-03-10T10:27:32.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.902+0000 7f9616ded700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96100684d0 0x7f9610198de0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:32.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.902+0000 7f9616ded700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9610068df0 0x7f9610199320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:32.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.902+0000 7f9616ded700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9610199a00 con 0x7f9610068df0 2026-03-10T10:27:32.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.902+0000 7f9616ded700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f961019d790 con 0x7f96100684d0 2026-03-10T10:27:32.904 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.902+0000 7f960ffff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9610068df0 0x7f9610199320 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:32.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.902+0000 7f960ffff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9610068df0 0x7f9610199320 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:56496/0 (socket says 192.168.123.102:56496) 2026-03-10T10:27:32.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.902+0000 7f960ffff700 1 -- 192.168.123.102:0/2396784243 learned_addr learned my addr 192.168.123.102:0/2396784243 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:32.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.903+0000 7f960ffff700 1 -- 192.168.123.102:0/2396784243 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96100684d0 msgr2=0x7f9610198de0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:27:32.904 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.903+0000 7f9614b89700 1 --2- 192.168.123.102:0/2396784243 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96100684d0 0x7f9610198de0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:32.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.903+0000 7f960ffff700 1 --2- 192.168.123.102:0/2396784243 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96100684d0 0x7f9610198de0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:32.905 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.903+0000 7f960ffff700 1 -- 192.168.123.102:0/2396784243 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f96040097e0 con 0x7f9610068df0 2026-03-10T10:27:32.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.903+0000 7f9614b89700 1 --2- 192.168.123.102:0/2396784243 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96100684d0 0x7f9610198de0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:27:32.905 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.903+0000 7f960ffff700 1 --2- 192.168.123.102:0/2396784243 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9610068df0 0x7f9610199320 secure :-1 s=READY pgs=169 cs=0 l=1 rev1=1 crypto rx=0x7f9604005850 tx=0x7f96040049e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:32.906 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.904+0000 7f960dffb700 1 -- 192.168.123.102:0/2396784243 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f960401d070 con 0x7f9610068df0 2026-03-10T10:27:32.906 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.904+0000 7f960dffb700 1 -- 192.168.123.102:0/2396784243 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f960400bc10 con 0x7f9610068df0 2026-03-10T10:27:32.906 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.904+0000 7f960dffb700 1 -- 192.168.123.102:0/2396784243 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f960400f790 con 0x7f9610068df0 2026-03-10T10:27:32.906 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.904+0000 7f9616ded700 1 -- 192.168.123.102:0/2396784243 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f961019da70 con 0x7f9610068df0 2026-03-10T10:27:32.906 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.904+0000 7f9616ded700 1 -- 192.168.123.102:0/2396784243 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f961019dfc0 con 0x7f9610068df0 2026-03-10T10:27:32.907 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.905+0000 7f9616ded700 1 -- 192.168.123.102:0/2396784243 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f961010ad20 con 0x7f9610068df0 2026-03-10T10:27:32.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.907+0000 7f960dffb700 1 -- 192.168.123.102:0/2396784243 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f960400bd80 con 0x7f9610068df0 2026-03-10T10:27:32.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.907+0000 7f960dffb700 1 --2- 192.168.123.102:0/2396784243 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f95f80779e0 0x7f95f8079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:32.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.907+0000 7f960dffb700 1 -- 192.168.123.102:0/2396784243 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f960409b350 con 0x7f9610068df0 2026-03-10T10:27:32.910 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.907+0000 7f9614b89700 1 --2- 192.168.123.102:0/2396784243 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f95f80779e0 0x7f95f8079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:32.910 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.908+0000 7f9614b89700 1 --2- 192.168.123.102:0/2396784243 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f95f80779e0 0x7f95f8079ea0 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f9600007900 tx=0x7f9600008040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:32.911 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:32.910+0000 7f960dffb700 1 -- 192.168.123.102:0/2396784243 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9604063a50 con 0x7f9610068df0 2026-03-10T10:27:33.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:32 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1145304107' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 26, "format": "json"}]: dispatch 2026-03-10T10:27:33.064 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.063+0000 7f9616ded700 1 -- 192.168.123.102:0/2396784243 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 27, "format": "json"} v 0) v1 -- 0x7f961004ea90 con 0x7f9610068df0 2026-03-10T10:27:33.065 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.064+0000 7f960dffb700 1 -- 192.168.123.102:0/2396784243 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 27, "format": "json"}]=0 dumped fsmap epoch 27 v40) v1 ==== 107+0+5061 (secure 0 0 0) 0x7f96040631a0 con 0x7f9610068df0 2026-03-10T10:27:33.066 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:33.066 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":27,"btime":"2026-03-10T10:24:56:879494+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on 
dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34328,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":17},{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":22},{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":26}],"filesystems":[{"mdsmap":{"epoch":27,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:56.879493+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":81,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{"mds_0":24299},"failed":[],"damaged":[],"stopped":[],"info":{"gid_24299":{"gid":24299,"name":"cephfs.vm05.liatdh","rank":0,"incarnation":24,"state":"up:active","state_seq":122,"addr":"192.168.123.105:6825/3526415895","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3526415895},{"type":"v1","addr":"192.168.123.105:6825","nonce":3526415895}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":24299,"qdb_cluster":[24299]},"id":1}]} 2026-03-10T10:27:33.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.067+0000 7f9616ded700 1 -- 192.168.123.102:0/2396784243 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f95f80779e0 msgr2=0x7f95f8079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:33.068 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.067+0000 7f9616ded700 1 --2- 192.168.123.102:0/2396784243 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f95f80779e0 0x7f95f8079ea0 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f9600007900 tx=0x7f9600008040 comp rx=0 tx=0).stop 2026-03-10T10:27:33.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.068+0000 7f9616ded700 1 -- 192.168.123.102:0/2396784243 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9610068df0 msgr2=0x7f9610199320 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:33.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.068+0000 7f9616ded700 1 --2- 192.168.123.102:0/2396784243 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9610068df0 0x7f9610199320 secure :-1 s=READY pgs=169 cs=0 l=1 rev1=1 crypto rx=0x7f9604005850 tx=0x7f96040049e0 comp rx=0 tx=0).stop 2026-03-10T10:27:33.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.068+0000 7f9616ded700 1 -- 192.168.123.102:0/2396784243 shutdown_connections 2026-03-10T10:27:33.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.068+0000 7f9616ded700 1 --2- 192.168.123.102:0/2396784243 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f95f80779e0 0x7f95f8079ea0 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:33.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.068+0000 7f9616ded700 1 --2- 192.168.123.102:0/2396784243 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f96100684d0 0x7f9610198de0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:33.070 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.068+0000 7f9616ded700 1 --2- 192.168.123.102:0/2396784243 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f9610068df0 0x7f9610199320 unknown :-1 s=CLOSED pgs=169 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:33.070 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.068+0000 7f9616ded700 1 -- 192.168.123.102:0/2396784243 >> 192.168.123.102:0/2396784243 conn(0x7f9610075960 msgr2=0x7f96100fe970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:33.070 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.069+0000 7f9616ded700 1 -- 192.168.123.102:0/2396784243 shutdown_connections 2026-03-10T10:27:33.070 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.069+0000 7f9616ded700 1 -- 192.168.123.102:0/2396784243 wait complete. 2026-03-10T10:27:33.071 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 27 2026-03-10T10:27:33.144 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 27 2026-03-10T10:27:33.145 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 28 2026-03-10T10:27:33.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:32 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1145304107' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 26, "format": "json"}]: dispatch 2026-03-10T10:27:33.308 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:33.579 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.577+0000 7f64d34b6700 1 -- 192.168.123.102:0/1688965559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64cc102970 msgr2=0x7f64cc10ae60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:33.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.577+0000 7f64d34b6700 1 --2- 192.168.123.102:0/1688965559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64cc102970 0x7f64cc10ae60 secure :-1 s=READY pgs=170 cs=0 l=1 rev1=1 crypto rx=0x7f64c8009b00 tx=0x7f64c8009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:33.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.578+0000 7f64d34b6700 1 -- 192.168.123.102:0/1688965559 shutdown_connections 2026-03-10T10:27:33.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.578+0000 7f64d34b6700 1 --2- 192.168.123.102:0/1688965559 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64cc102970 0x7f64cc10ae60 unknown :-1 s=CLOSED pgs=170 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:33.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.578+0000 7f64d34b6700 1 --2- 192.168.123.102:0/1688965559 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64cc102050 0x7f64cc102430 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:33.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.578+0000 7f64d34b6700 1 -- 192.168.123.102:0/1688965559 >> 192.168.123.102:0/1688965559 conn(0x7f64cc0fb820 msgr2=0x7f64cc0fdc40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:33.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.579+0000 7f64d34b6700 1 -- 192.168.123.102:0/1688965559 shutdown_connections 2026-03-10T10:27:33.580 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.579+0000 7f64d34b6700 1 -- 192.168.123.102:0/1688965559 wait complete. 2026-03-10T10:27:33.581 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.579+0000 7f64d34b6700 1 Processor -- start 2026-03-10T10:27:33.581 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.580+0000 7f64d34b6700 1 -- start start 2026-03-10T10:27:33.581 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.580+0000 7f64d34b6700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64cc102050 0x7f64cc198e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:33.581 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.580+0000 7f64d34b6700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64cc102970 0x7f64cc1993d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:33.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.580+0000 7f64d34b6700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f64cc199ab0 con 0x7f64cc102050 
2026-03-10T10:27:33.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.580+0000 7f64d34b6700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f64cc19d840 con 0x7f64cc102970 2026-03-10T10:27:33.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.580+0000 7f64d0a51700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64cc102970 0x7f64cc1993d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:33.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.580+0000 7f64d0a51700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64cc102970 0x7f64cc1993d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:34986/0 (socket says 192.168.123.102:34986) 2026-03-10T10:27:33.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.580+0000 7f64d0a51700 1 -- 192.168.123.102:0/1342301260 learned_addr learned my addr 192.168.123.102:0/1342301260 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:33.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.581+0000 7f64d1252700 1 --2- 192.168.123.102:0/1342301260 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64cc102050 0x7f64cc198e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:33.582 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.581+0000 7f64d0a51700 1 -- 192.168.123.102:0/1342301260 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64cc102050 msgr2=0x7f64cc198e90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:33.583 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.581+0000 
7f64d0a51700 1 --2- 192.168.123.102:0/1342301260 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64cc102050 0x7f64cc198e90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:33.583 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.581+0000 7f64d0a51700 1 -- 192.168.123.102:0/1342301260 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f64c80097e0 con 0x7f64cc102970 2026-03-10T10:27:33.583 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.581+0000 7f64d1252700 1 --2- 192.168.123.102:0/1342301260 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64cc102050 0x7f64cc198e90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:27:33.583 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.581+0000 7f64d0a51700 1 --2- 192.168.123.102:0/1342301260 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64cc102970 0x7f64cc1993d0 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f64c800b5c0 tx=0x7f64c80049b0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:33.583 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.582+0000 7f64c27fc700 1 -- 192.168.123.102:0/1342301260 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f64c801d070 con 0x7f64cc102970 2026-03-10T10:27:33.584 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.582+0000 7f64c27fc700 1 -- 192.168.123.102:0/1342301260 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f64c8004500 con 0x7f64cc102970 2026-03-10T10:27:33.584 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.582+0000 7f64c27fc700 1 -- 192.168.123.102:0/1342301260 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 
v1 ==== 327+0+0 (secure 0 0 0) 0x7f64c8022470 con 0x7f64cc102970 2026-03-10T10:27:33.584 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.582+0000 7f64d34b6700 1 -- 192.168.123.102:0/1342301260 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f64cc19dac0 con 0x7f64cc102970 2026-03-10T10:27:33.584 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.582+0000 7f64d34b6700 1 -- 192.168.123.102:0/1342301260 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f64cc19df50 con 0x7f64cc102970 2026-03-10T10:27:33.584 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.583+0000 7f64d34b6700 1 -- 192.168.123.102:0/1342301260 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f64cc10a9c0 con 0x7f64cc102970 2026-03-10T10:27:33.588 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.586+0000 7f64c27fc700 1 -- 192.168.123.102:0/1342301260 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f64c8003680 con 0x7f64cc102970 2026-03-10T10:27:33.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.587+0000 7f64c27fc700 1 --2- 192.168.123.102:0/1342301260 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f64b8077910 0x7f64b8079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:33.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.587+0000 7f64d1252700 1 --2- 192.168.123.102:0/1342301260 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f64b8077910 0x7f64b8079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:33.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.587+0000 
7f64c27fc700 1 -- 192.168.123.102:0/1342301260 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f64c809bf10 con 0x7f64cc102970 2026-03-10T10:27:33.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.588+0000 7f64c27fc700 1 -- 192.168.123.102:0/1342301260 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f64c8064610 con 0x7f64cc102970 2026-03-10T10:27:33.589 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.588+0000 7f64d1252700 1 --2- 192.168.123.102:0/1342301260 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f64b8077910 0x7f64b8079dd0 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f64bc006fd0 tx=0x7f64bc008040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:33.728 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.726+0000 7f64d34b6700 1 -- 192.168.123.102:0/1342301260 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 28, "format": "json"} v 0) v1 -- 0x7f64cc068a10 con 0x7f64cc102970 2026-03-10T10:27:33.729 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.728+0000 7f64c27fc700 1 -- 192.168.123.102:0/1342301260 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 28, "format": "json"}]=0 dumped fsmap epoch 28 v40) v1 ==== 107+0+4256 (secure 0 0 0) 0x7f64c8063d60 con 0x7f64cc102970 2026-03-10T10:27:33.730 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:33.730 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":28,"btime":"2026-03-10T10:24:59:119446+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate 
object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34328,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":17},{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":22},{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":26}],"filesystems":[{"mdsmap":{"epoch":28,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:59.119445+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":82,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{},"failed":[0],"damaged":[],"stopped":[],"info":{},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:33.732 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.731+0000 7f64d34b6700 1 -- 192.168.123.102:0/1342301260 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f64b8077910 msgr2=0x7f64b8079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:33.732 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.731+0000 7f64d34b6700 1 --2- 192.168.123.102:0/1342301260 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f64b8077910 0x7f64b8079dd0 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f64bc006fd0 tx=0x7f64bc008040 comp rx=0 tx=0).stop 2026-03-10T10:27:33.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.731+0000 7f64d34b6700 1 -- 192.168.123.102:0/1342301260 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64cc102970 msgr2=0x7f64cc1993d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:33.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.731+0000 7f64d34b6700 1 --2- 192.168.123.102:0/1342301260 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64cc102970 0x7f64cc1993d0 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f64c800b5c0 tx=0x7f64c80049b0 comp rx=0 tx=0).stop 2026-03-10T10:27:33.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.732+0000 7f64d34b6700 1 -- 192.168.123.102:0/1342301260 shutdown_connections 2026-03-10T10:27:33.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.732+0000 7f64d34b6700 1 --2- 192.168.123.102:0/1342301260 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f64cc102050 0x7f64cc198e90 unknown :-1 s=CLOSED pgs=0 
cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:33.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.732+0000 7f64d34b6700 1 --2- 192.168.123.102:0/1342301260 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f64b8077910 0x7f64b8079dd0 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:33.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.732+0000 7f64d34b6700 1 --2- 192.168.123.102:0/1342301260 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f64cc102970 0x7f64cc1993d0 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:33.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.732+0000 7f64d34b6700 1 -- 192.168.123.102:0/1342301260 >> 192.168.123.102:0/1342301260 conn(0x7f64cc0fb820 msgr2=0x7f64cc100450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:33.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.732+0000 7f64d34b6700 1 -- 192.168.123.102:0/1342301260 shutdown_connections 2026-03-10T10:27:33.733 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:33.732+0000 7f64d34b6700 1 -- 192.168.123.102:0/1342301260 wait complete. 
2026-03-10T10:27:33.734 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 28 2026-03-10T10:27:33.802 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 28 2026-03-10T10:27:33.802 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 29 2026-03-10T10:27:33.899 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:33 vm02.local ceph-mon[110129]: pgmap v250: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:33.899 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:33 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/2396784243' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 27, "format": "json"}]: dispatch 2026-03-10T10:27:33.899 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:33 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1342301260' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 28, "format": "json"}]: dispatch 2026-03-10T10:27:33.989 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.259+0000 7f0ff883a700 1 -- 192.168.123.102:0/2937828897 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0ff0100b50 msgr2=0x7f0ff0104a40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.259+0000 7f0ff883a700 1 --2- 192.168.123.102:0/2937828897 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0ff0100b50 0x7f0ff0104a40 secure :-1 s=READY pgs=171 cs=0 l=1 rev1=1 crypto rx=0x7f0fe4009b00 tx=0x7f0fe4009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.261+0000 7f0ff883a700 1 
-- 192.168.123.102:0/2937828897 shutdown_connections 2026-03-10T10:27:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.261+0000 7f0ff883a700 1 --2- 192.168.123.102:0/2937828897 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0ff0100b50 0x7f0ff0104a40 unknown :-1 s=CLOSED pgs=171 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.261+0000 7f0ff883a700 1 --2- 192.168.123.102:0/2937828897 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0ff01001a0 0x7f0ff0100580 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.261+0000 7f0ff883a700 1 -- 192.168.123.102:0/2937828897 >> 192.168.123.102:0/2937828897 conn(0x7f0ff0075960 msgr2=0x7f0ff0075d70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.261+0000 7f0ff883a700 1 -- 192.168.123.102:0/2937828897 shutdown_connections 2026-03-10T10:27:34.262 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.261+0000 7f0ff883a700 1 -- 192.168.123.102:0/2937828897 wait complete. 
2026-03-10T10:27:34.263 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.262+0000 7f0ff883a700 1 Processor -- start 2026-03-10T10:27:34.263 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.262+0000 7f0ff883a700 1 -- start start 2026-03-10T10:27:34.263 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.262+0000 7f0ff883a700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0ff01001a0 0x7f0ff0198e00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:34.263 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.262+0000 7f0ff883a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0ff0100b50 0x7f0ff0199340 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:34.264 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.262+0000 7f0ff883a700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0ff0199a20 con 0x7f0ff01001a0 2026-03-10T10:27:34.264 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.262+0000 7f0ff883a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0ff019d7b0 con 0x7f0ff0100b50 2026-03-10T10:27:34.264 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.262+0000 7f0ff5dd5700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0ff0100b50 0x7f0ff0199340 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:34.264 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.262+0000 7f0ff5dd5700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0ff0100b50 0x7f0ff0199340 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:35010/0 (socket says 192.168.123.102:35010) 2026-03-10T10:27:34.264 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.262+0000 7f0ff5dd5700 1 -- 192.168.123.102:0/4123285520 learned_addr learned my addr 192.168.123.102:0/4123285520 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:34.264 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.263+0000 7f0ff5dd5700 1 -- 192.168.123.102:0/4123285520 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0ff01001a0 msgr2=0x7f0ff0198e00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:34.264 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.263+0000 7f0ff65d6700 1 --2- 192.168.123.102:0/4123285520 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0ff01001a0 0x7f0ff0198e00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:34.264 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.263+0000 7f0ff5dd5700 1 --2- 192.168.123.102:0/4123285520 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0ff01001a0 0x7f0ff0198e00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:34.264 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.263+0000 7f0ff5dd5700 1 -- 192.168.123.102:0/4123285520 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0fe40097e0 con 0x7f0ff0100b50 2026-03-10T10:27:34.264 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.263+0000 7f0ff65d6700 1 --2- 192.168.123.102:0/4123285520 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0ff01001a0 0x7f0ff0198e00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:27:34.264 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.263+0000 7f0ff5dd5700 1 --2- 192.168.123.102:0/4123285520 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0ff0100b50 0x7f0ff0199340 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f0fe4009fd0 tx=0x7f0fe4004970 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:34.264 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.263+0000 7f0fe37fe700 1 -- 192.168.123.102:0/4123285520 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0fe401d070 con 0x7f0ff0100b50 2026-03-10T10:27:34.265 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.263+0000 7f0ff883a700 1 -- 192.168.123.102:0/4123285520 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0ff019da30 con 0x7f0ff0100b50 2026-03-10T10:27:34.266 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.264+0000 7f0ff883a700 1 -- 192.168.123.102:0/4123285520 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0ff019df20 con 0x7f0ff0100b50 2026-03-10T10:27:34.266 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.264+0000 7f0fe37fe700 1 -- 192.168.123.102:0/4123285520 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f0fe400bc50 con 0x7f0ff0100b50 2026-03-10T10:27:34.266 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.264+0000 7f0fe37fe700 1 -- 192.168.123.102:0/4123285520 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0fe40175f0 con 0x7f0ff0100b50 2026-03-10T10:27:34.267 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.265+0000 7f0ff883a700 1 -- 192.168.123.102:0/4123285520 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f0ff010c760 con 0x7f0ff0100b50 2026-03-10T10:27:34.267 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.265+0000 7f0fe37fe700 1 -- 192.168.123.102:0/4123285520 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f0fe400f9d0 con 0x7f0ff0100b50 2026-03-10T10:27:34.267 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.265+0000 7f0fe37fe700 1 --2- 192.168.123.102:0/4123285520 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0fdc0778c0 0x7f0fdc079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:34.267 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.265+0000 7f0fe37fe700 1 -- 192.168.123.102:0/4123285520 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f0fe409b520 con 0x7f0ff0100b50 2026-03-10T10:27:34.267 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.266+0000 7f0ff65d6700 1 --2- 192.168.123.102:0/4123285520 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0fdc0778c0 0x7f0fdc079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:34.268 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.267+0000 7f0ff65d6700 1 --2- 192.168.123.102:0/4123285520 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0fdc0778c0 0x7f0fdc079d80 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f0fe8005fd0 tx=0x7f0fe8005dc0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:34.270 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.269+0000 7f0fe37fe700 1 -- 192.168.123.102:0/4123285520 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f0fe4063ba0 con 0x7f0ff0100b50 2026-03-10T10:27:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:33 vm05.local ceph-mon[103593]: pgmap v250: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:33 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/2396784243' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 27, "format": "json"}]: dispatch 2026-03-10T10:27:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:33 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1342301260' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 28, "format": "json"}]: dispatch 2026-03-10T10:27:34.422 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.420+0000 7f0ff883a700 1 -- 192.168.123.102:0/4123285520 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 29, "format": "json"} v 0) v1 -- 0x7f0ff019a160 con 0x7f0ff0100b50 2026-03-10T10:27:34.424 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.422+0000 7f0fe37fe700 1 -- 192.168.123.102:0/4123285520 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 29, "format": "json"}]=0 dumped fsmap epoch 29 v40) v1 ==== 107+0+4267 (secure 0 0 0) 0x7f0fe40632f0 con 0x7f0ff0100b50 2026-03-10T10:27:34.424 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:34.424 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":29,"btime":"2026-03-10T10:24:59:124651+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor 
table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":22},{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":26}],"filesystems":[{"mdsmap":{"epoch":29,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:24:59.124648+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":82,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":34328},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34328":{"gid":34328,"name":"cephfs.vm05.sudjys","rank":0,"incarnation":29,"state":"up:replay","state_seq":1,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:34.426 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.425+0000 7f0ff883a700 1 -- 192.168.123.102:0/4123285520 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0fdc0778c0 msgr2=0x7f0fdc079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:34.426 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.425+0000 7f0ff883a700 1 --2- 192.168.123.102:0/4123285520 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0fdc0778c0 0x7f0fdc079d80 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f0fe8005fd0 tx=0x7f0fe8005dc0 comp rx=0 tx=0).stop 2026-03-10T10:27:34.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.425+0000 7f0ff883a700 1 -- 192.168.123.102:0/4123285520 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0ff0100b50 msgr2=0x7f0ff0199340 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:34.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.425+0000 7f0ff883a700 1 --2- 192.168.123.102:0/4123285520 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0ff0100b50 0x7f0ff0199340 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f0fe4009fd0 tx=0x7f0fe4004970 comp rx=0 tx=0).stop 2026-03-10T10:27:34.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.425+0000 7f0ff883a700 1 -- 192.168.123.102:0/4123285520 shutdown_connections 2026-03-10T10:27:34.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.425+0000 7f0ff883a700 1 --2- 192.168.123.102:0/4123285520 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0ff01001a0 0x7f0ff0198e00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:34.427 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.425+0000 7f0ff883a700 1 --2- 192.168.123.102:0/4123285520 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0fdc0778c0 0x7f0fdc079d80 secure :-1 s=CLOSED pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f0fe8005fd0 tx=0x7f0fe8005dc0 comp rx=0 tx=0).stop 2026-03-10T10:27:34.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.425+0000 7f0ff883a700 1 --2- 192.168.123.102:0/4123285520 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0ff0100b50 0x7f0ff0199340 secure :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f0fe4009fd0 tx=0x7f0fe4004970 comp rx=0 tx=0).stop 2026-03-10T10:27:34.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.425+0000 7f0ff883a700 1 -- 192.168.123.102:0/4123285520 >> 192.168.123.102:0/4123285520 conn(0x7f0ff0075960 msgr2=0x7f0ff01042b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:34.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.425+0000 7f0ff883a700 1 -- 192.168.123.102:0/4123285520 shutdown_connections 2026-03-10T10:27:34.427 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.426+0000 7f0ff883a700 1 -- 192.168.123.102:0/4123285520 wait complete. 2026-03-10T10:27:34.428 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 29 2026-03-10T10:27:34.484 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 29 2026-03-10T10:27:34.484 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 30 2026-03-10T10:27:34.643 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:34.931 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:34 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/4123285520' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 29, "format": "json"}]: dispatch 2026-03-10T10:27:34.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.928+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2391605723 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fef38103cf0 msgr2=0x7fef38107d40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:34.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.928+0000 7fef3fb0d700 1 --2- 192.168.123.102:0/2391605723 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fef38103cf0 0x7fef38107d40 secure :-1 s=READY pgs=172 cs=0 l=1 rev1=1 crypto rx=0x7fef2c009b00 tx=0x7fef2c009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:34.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.930+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2391605723 shutdown_connections 2026-03-10T10:27:34.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.930+0000 7fef3fb0d700 1 --2- 192.168.123.102:0/2391605723 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fef38103cf0 0x7fef38107d40 unknown :-1 s=CLOSED pgs=172 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:34.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.930+0000 7fef3fb0d700 1 --2- 192.168.123.102:0/2391605723 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fef38103340 0x7fef38103720 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:34.931 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.930+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2391605723 >> 192.168.123.102:0/2391605723 conn(0x7fef380feb90 msgr2=0x7fef38100fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:34.932 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.930+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2391605723 shutdown_connections 
2026-03-10T10:27:34.932 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.930+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2391605723 wait complete. 2026-03-10T10:27:34.932 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.931+0000 7fef3fb0d700 1 Processor -- start 2026-03-10T10:27:34.932 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.931+0000 7fef3fb0d700 1 -- start start 2026-03-10T10:27:34.932 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.931+0000 7fef3fb0d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fef38103340 0x7fef38198f70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:34.933 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.931+0000 7fef3fb0d700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fef38103cf0 0x7fef381994b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:34.933 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.931+0000 7fef3fb0d700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fef38199b90 con 0x7fef38103cf0 2026-03-10T10:27:34.933 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.931+0000 7fef3fb0d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fef3819d920 con 0x7fef38103340 2026-03-10T10:27:34.933 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.931+0000 7fef3d0a8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fef38103cf0 0x7fef381994b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:34.933 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.931+0000 7fef3d0a8700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fef38103cf0 
0x7fef381994b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:56530/0 (socket says 192.168.123.102:56530) 2026-03-10T10:27:34.933 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.931+0000 7fef3d0a8700 1 -- 192.168.123.102:0/2494096118 learned_addr learned my addr 192.168.123.102:0/2494096118 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:34.933 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.932+0000 7fef3d8a9700 1 --2- 192.168.123.102:0/2494096118 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fef38103340 0x7fef38198f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:34.933 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.932+0000 7fef3d0a8700 1 -- 192.168.123.102:0/2494096118 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fef38103340 msgr2=0x7fef38198f70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:34.933 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.932+0000 7fef3d0a8700 1 --2- 192.168.123.102:0/2494096118 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fef38103340 0x7fef38198f70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:34.933 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.932+0000 7fef3d0a8700 1 -- 192.168.123.102:0/2494096118 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fef2c0097e0 con 0x7fef38103cf0 2026-03-10T10:27:34.934 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.932+0000 7fef3d0a8700 1 --2- 192.168.123.102:0/2494096118 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fef38103cf0 0x7fef381994b0 secure :-1 s=READY 
pgs=173 cs=0 l=1 rev1=1 crypto rx=0x7fef2c009fd0 tx=0x7fef2c004970 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:34.934 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.933+0000 7fef2affd700 1 -- 192.168.123.102:0/2494096118 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fef2c01d070 con 0x7fef38103cf0 2026-03-10T10:27:34.935 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.933+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2494096118 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fef3819dba0 con 0x7fef38103cf0 2026-03-10T10:27:34.935 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.933+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2494096118 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fef3819e090 con 0x7fef38103cf0 2026-03-10T10:27:34.935 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.933+0000 7fef2affd700 1 -- 192.168.123.102:0/2494096118 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fef2c00bc50 con 0x7fef38103cf0 2026-03-10T10:27:34.935 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.933+0000 7fef2affd700 1 -- 192.168.123.102:0/2494096118 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fef2c00f740 con 0x7fef38103cf0 2026-03-10T10:27:34.936 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.935+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2494096118 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fef1c005320 con 0x7fef38103cf0 2026-03-10T10:27:34.941 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.936+0000 7fef2affd700 1 -- 192.168.123.102:0/2494096118 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 
100115+0+0 (secure 0 0 0) 0x7fef2c00f8a0 con 0x7fef38103cf0 2026-03-10T10:27:34.941 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.936+0000 7fef2affd700 1 --2- 192.168.123.102:0/2494096118 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fef240778c0 0x7fef24079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:34.941 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.936+0000 7fef3d8a9700 1 --2- 192.168.123.102:0/2494096118 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fef240778c0 0x7fef24079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:34.941 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.937+0000 7fef3d8a9700 1 --2- 192.168.123.102:0/2494096118 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fef240778c0 0x7fef24079d80 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7fef34006fd0 tx=0x7fef34008040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:34.941 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.937+0000 7fef2affd700 1 -- 192.168.123.102:0/2494096118 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7fef2c09b260 con 0x7fef38103cf0 2026-03-10T10:27:34.941 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:34.940+0000 7fef2affd700 1 -- 192.168.123.102:0/2494096118 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fef2c063960 con 0x7fef38103cf0 2026-03-10T10:27:35.089 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.088+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2494096118 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 30, "format": "json"} v 0) v1 -- 0x7fef1c005190 con 0x7fef38103cf0 2026-03-10T10:27:35.090 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.089+0000 7fef2affd700 1 -- 192.168.123.102:0/2494096118 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 30, "format": "json"}]=0 dumped fsmap epoch 30 v40) v1 ==== 107+0+4270 (secure 0 0 0) 0x7fef2c0630b0 con 0x7fef38103cf0 2026-03-10T10:27:35.090 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:35.090 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":30,"btime":"2026-03-10T10:25:03:698384+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":22},{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":26}],"filesystems":[{"mdsmap":{"epoch":30,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:25:03.335536+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":82,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{"mds_0":34328},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34328":{"gid":34328,"name":"cephfs.vm05.sudjys","rank":0,"incarnation":29,"state":"up:reconnect","state_seq":8,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:35.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.094+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2494096118 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fef240778c0 msgr2=0x7fef24079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:35.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.095+0000 7fef3fb0d700 1 --2- 192.168.123.102:0/2494096118 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fef240778c0 0x7fef24079d80 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7fef34006fd0 tx=0x7fef34008040 comp rx=0 tx=0).stop 2026-03-10T10:27:35.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.095+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2494096118 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fef38103cf0 msgr2=0x7fef381994b0 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:35.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.095+0000 7fef3fb0d700 1 --2- 192.168.123.102:0/2494096118 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fef38103cf0 0x7fef381994b0 secure :-1 s=READY pgs=173 cs=0 l=1 rev1=1 crypto rx=0x7fef2c009fd0 tx=0x7fef2c004970 comp rx=0 tx=0).stop 2026-03-10T10:27:35.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.095+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2494096118 shutdown_connections 2026-03-10T10:27:35.096 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.095+0000 7fef3fb0d700 1 --2- 192.168.123.102:0/2494096118 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fef240778c0 0x7fef24079d80 unknown :-1 s=CLOSED pgs=128 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:35.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.095+0000 7fef3fb0d700 1 --2- 192.168.123.102:0/2494096118 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fef38103340 0x7fef38198f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:35.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.095+0000 7fef3fb0d700 1 --2- 192.168.123.102:0/2494096118 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fef38103cf0 0x7fef381994b0 unknown :-1 s=CLOSED pgs=173 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:35.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.096+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2494096118 >> 192.168.123.102:0/2494096118 conn(0x7fef380feb90 msgr2=0x7fef38100240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:35.097 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.096+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2494096118 shutdown_connections 2026-03-10T10:27:35.097 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.096+0000 7fef3fb0d700 1 -- 192.168.123.102:0/2494096118 wait complete. 2026-03-10T10:27:35.098 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 30 2026-03-10T10:27:35.168 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 30 2026-03-10T10:27:35.168 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 31 2026-03-10T10:27:35.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:34 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/4123285520' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 29, "format": "json"}]: dispatch 2026-03-10T10:27:35.341 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:35.625 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.623+0000 7f2119309700 1 -- 192.168.123.102:0/1385179704 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f21140ff260 msgr2=0x7f21140ff640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:35.625 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.623+0000 7f2119309700 1 --2- 192.168.123.102:0/1385179704 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f21140ff260 0x7f21140ff640 secure :-1 s=READY pgs=174 cs=0 l=1 rev1=1 crypto rx=0x7f20fc009b50 tx=0x7f20fc009e60 comp rx=0 tx=0).stop 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.623+0000 7f2119309700 1 -- 192.168.123.102:0/1385179704 shutdown_connections 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.623+0000 7f2119309700 1 --2- 192.168.123.102:0/1385179704 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21140ffc10 0x7f211410d260 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.623+0000 7f2119309700 1 --2- 192.168.123.102:0/1385179704 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f21140ff260 0x7f21140ff640 unknown :-1 s=CLOSED pgs=174 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.623+0000 7f2119309700 1 -- 192.168.123.102:0/1385179704 >> 192.168.123.102:0/1385179704 conn(0x7f2114074bd0 msgr2=0x7f2114074fe0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.624+0000 7f2119309700 1 -- 192.168.123.102:0/1385179704 shutdown_connections 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.624+0000 7f2119309700 1 -- 192.168.123.102:0/1385179704 wait complete. 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.625+0000 7f2119309700 1 Processor -- start 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.625+0000 7f2119309700 1 -- start start 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.625+0000 7f2119309700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21140ff260 0x7f2114198da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.625+0000 7f2119309700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f21140ffc10 0x7f21141992e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.625+0000 7f2119309700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f21141999c0 con 0x7f21140ffc10 
2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.625+0000 7f2119309700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f211419d750 con 0x7f21140ff260 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.625+0000 7f21127fc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f21140ffc10 0x7f21141992e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:35.626 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.625+0000 7f21127fc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f21140ffc10 0x7f21141992e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:56554/0 (socket says 192.168.123.102:56554) 2026-03-10T10:27:35.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.625+0000 7f21127fc700 1 -- 192.168.123.102:0/4258218556 learned_addr learned my addr 192.168.123.102:0/4258218556 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:35.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.626+0000 7f21127fc700 1 -- 192.168.123.102:0/4258218556 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21140ff260 msgr2=0x7f2114198da0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:27:35.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.626+0000 7f21127fc700 1 --2- 192.168.123.102:0/4258218556 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21140ff260 0x7f2114198da0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:35.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.626+0000 7f21127fc700 1 -- 192.168.123.102:0/4258218556 --> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f20fc0097e0 con 0x7f21140ffc10 2026-03-10T10:27:35.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.626+0000 7f21127fc700 1 --2- 192.168.123.102:0/4258218556 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f21140ffc10 0x7f21141992e0 secure :-1 s=READY pgs=175 cs=0 l=1 rev1=1 crypto rx=0x7f210400dc40 tx=0x7f210400be10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:35.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.626+0000 7f210bfff700 1 -- 192.168.123.102:0/4258218556 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f21040099a0 con 0x7f21140ffc10 2026-03-10T10:27:35.627 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.626+0000 7f2119309700 1 -- 192.168.123.102:0/4258218556 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f211419da30 con 0x7f21140ffc10 2026-03-10T10:27:35.628 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.626+0000 7f2119309700 1 -- 192.168.123.102:0/4258218556 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f211419df80 con 0x7f21140ffc10 2026-03-10T10:27:35.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.626+0000 7f210bfff700 1 -- 192.168.123.102:0/4258218556 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f2104010460 con 0x7f21140ffc10 2026-03-10T10:27:35.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.626+0000 7f210bfff700 1 -- 192.168.123.102:0/4258218556 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f210400f660 con 0x7f21140ffc10 2026-03-10T10:27:35.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.628+0000 7f210bfff700 1 -- 
192.168.123.102:0/4258218556 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f210400f7c0 con 0x7f21140ffc10 2026-03-10T10:27:35.629 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.628+0000 7f210bfff700 1 --2- 192.168.123.102:0/4258218556 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f21000778c0 0x7f2100079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:35.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.629+0000 7f2112ffd700 1 --2- 192.168.123.102:0/4258218556 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f21000778c0 0x7f2100079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:35.630 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.629+0000 7f2119309700 1 -- 192.168.123.102:0/4258218556 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f20f4005320 con 0x7f21140ffc10 2026-03-10T10:27:35.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.629+0000 7f2112ffd700 1 --2- 192.168.123.102:0/4258218556 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f21000778c0 0x7f2100079d80 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f20fc009b20 tx=0x7f20fc005c20 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:35.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.629+0000 7f210bfff700 1 -- 192.168.123.102:0/4258218556 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f2104099e70 con 0x7f21140ffc10 2026-03-10T10:27:35.633 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.632+0000 7f210bfff700 
1 -- 192.168.123.102:0/4258218556 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f21040624f0 con 0x7f21140ffc10 2026-03-10T10:27:35.786 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.784+0000 7f2119309700 1 -- 192.168.123.102:0/4258218556 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 31, "format": "json"} v 0) v1 -- 0x7f20f4005190 con 0x7f21140ffc10 2026-03-10T10:27:35.787 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.785+0000 7f210bfff700 1 -- 192.168.123.102:0/4258218556 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 31, "format": "json"}]=0 dumped fsmap epoch 31 v40) v1 ==== 107+0+4267 (secure 0 0 0) 0x7f2104061c40 con 0x7f21140ffc10 2026-03-10T10:27:35.787 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:35.787 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":31,"btime":"2026-03-10T10:25:05:477437+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable 
ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":22},{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":26}],"filesystems":[{"mdsmap":{"epoch":31,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:25:04.464732+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":82,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir 
inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":34328},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34328":{"gid":34328,"name":"cephfs.vm05.sudjys","rank":0,"incarnation":29,"state":"up:rejoin","state_seq":9,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:35.790 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.789+0000 7f2119309700 1 -- 192.168.123.102:0/4258218556 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f21000778c0 msgr2=0x7f2100079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:35.790 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.789+0000 7f2119309700 1 --2- 192.168.123.102:0/4258218556 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f21000778c0 0x7f2100079d80 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7f20fc009b20 tx=0x7f20fc005c20 comp rx=0 tx=0).stop 2026-03-10T10:27:35.790 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.789+0000 7f2119309700 1 -- 192.168.123.102:0/4258218556 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f21140ffc10 msgr2=0x7f21141992e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:35.790 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.789+0000 7f2119309700 1 --2- 192.168.123.102:0/4258218556 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f21140ffc10 0x7f21141992e0 secure :-1 s=READY pgs=175 cs=0 l=1 rev1=1 crypto rx=0x7f210400dc40 tx=0x7f210400be10 comp rx=0 tx=0).stop 2026-03-10T10:27:35.790 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.789+0000 7f2119309700 1 -- 192.168.123.102:0/4258218556 shutdown_connections 2026-03-10T10:27:35.790 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.789+0000 7f2119309700 1 --2- 192.168.123.102:0/4258218556 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f21000778c0 0x7f2100079d80 unknown :-1 s=CLOSED pgs=129 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:35.790 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.789+0000 7f2119309700 1 --2- 192.168.123.102:0/4258218556 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21140ff260 0x7f2114198da0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:35.790 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.789+0000 7f2119309700 1 --2- 192.168.123.102:0/4258218556 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f21140ffc10 0x7f21141992e0 unknown :-1 s=CLOSED pgs=175 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:35.790 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.789+0000 7f2119309700 1 -- 192.168.123.102:0/4258218556 >> 192.168.123.102:0/4258218556 conn(0x7f2114074bd0 msgr2=0x7f21140fcd70 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:27:35.791 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.789+0000 7f2119309700 1 -- 192.168.123.102:0/4258218556 shutdown_connections 2026-03-10T10:27:35.791 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:35.789+0000 7f2119309700 1 -- 192.168.123.102:0/4258218556 wait complete. 2026-03-10T10:27:35.791 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 31 2026-03-10T10:27:35.861 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 31 2026-03-10T10:27:35.861 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 32 2026-03-10T10:27:36.029 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:36.097 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:35 vm02.local ceph-mon[110129]: pgmap v251: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:36.097 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:35 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/2494096118' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 30, "format": "json"}]: dispatch 2026-03-10T10:27:36.097 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:35 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/4258218556' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 31, "format": "json"}]: dispatch 2026-03-10T10:27:36.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:35 vm05.local ceph-mon[103593]: pgmap v251: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:36.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:35 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/2494096118' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 30, "format": "json"}]: dispatch 2026-03-10T10:27:36.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:35 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/4258218556' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 31, "format": "json"}]: dispatch 2026-03-10T10:27:36.337 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.335+0000 7f2e0eb06700 1 -- 192.168.123.102:0/1755653175 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2e08069b20 msgr2=0x7f2e0810d640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:36.337 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.335+0000 7f2e0eb06700 1 --2- 192.168.123.102:0/1755653175 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2e08069b20 0x7f2e0810d640 secure :-1 s=READY pgs=176 cs=0 l=1 rev1=1 crypto rx=0x7f2df8009ab0 tx=0x7f2df8009dc0 comp rx=0 tx=0).stop 2026-03-10T10:27:36.337 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.336+0000 7f2e0eb06700 1 -- 192.168.123.102:0/1755653175 shutdown_connections 2026-03-10T10:27:36.337 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.336+0000 7f2e0eb06700 1 --2- 192.168.123.102:0/1755653175 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2e08069b20 0x7f2e0810d640 unknown :-1 s=CLOSED pgs=176 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:36.337 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.336+0000 7f2e0eb06700 1 --2- 192.168.123.102:0/1755653175 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2e08069200 0x7f2e080695e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:36.337 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.336+0000 7f2e0eb06700 1 -- 192.168.123.102:0/1755653175 >> 192.168.123.102:0/1755653175 
conn(0x7f2e08076b30 msgr2=0x7f2e08076f40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:36.337 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.336+0000 7f2e0eb06700 1 -- 192.168.123.102:0/1755653175 shutdown_connections 2026-03-10T10:27:36.337 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.336+0000 7f2e0eb06700 1 -- 192.168.123.102:0/1755653175 wait complete. 2026-03-10T10:27:36.338 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.336+0000 7f2e0eb06700 1 Processor -- start 2026-03-10T10:27:36.338 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.337+0000 7f2e0eb06700 1 -- start start 2026-03-10T10:27:36.338 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.337+0000 7f2e0eb06700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2e08069200 0x7f2e08198e60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:36.338 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.337+0000 7f2e0eb06700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2e08069b20 0x7f2e081993a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:36.338 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.337+0000 7f2e0eb06700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2e08199a80 con 0x7f2e08069200 2026-03-10T10:27:36.338 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.337+0000 7f2e0eb06700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2e0819d810 con 0x7f2e08069b20 2026-03-10T10:27:36.338 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.337+0000 7f2e0c8a2700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2e08069200 0x7f2e08198e60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:36.338 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.337+0000 7f2e0c8a2700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2e08069200 0x7f2e08198e60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:56570/0 (socket says 192.168.123.102:56570) 2026-03-10T10:27:36.338 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.337+0000 7f2e0c8a2700 1 -- 192.168.123.102:0/3633095853 learned_addr learned my addr 192.168.123.102:0/3633095853 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:36.339 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.337+0000 7f2e07fff700 1 --2- 192.168.123.102:0/3633095853 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2e08069b20 0x7f2e081993a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:36.339 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.337+0000 7f2e0c8a2700 1 -- 192.168.123.102:0/3633095853 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2e08069b20 msgr2=0x7f2e081993a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:36.339 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.337+0000 7f2e0c8a2700 1 --2- 192.168.123.102:0/3633095853 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2e08069b20 0x7f2e081993a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:36.339 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.337+0000 7f2e0c8a2700 1 -- 192.168.123.102:0/3633095853 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2dfc0097e0 con 0x7f2e08069200 
2026-03-10T10:27:36.339 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.338+0000 7f2e0c8a2700 1 --2- 192.168.123.102:0/3633095853 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2e08069200 0x7f2e08198e60 secure :-1 s=READY pgs=177 cs=0 l=1 rev1=1 crypto rx=0x7f2dfc00efd0 tx=0x7f2dfc00c5b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:36.339 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.338+0000 7f2e05ffb700 1 -- 192.168.123.102:0/3633095853 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2dfc004020 con 0x7f2e08069200 2026-03-10T10:27:36.342 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.338+0000 7f2e05ffb700 1 -- 192.168.123.102:0/3633095853 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f2dfc003680 con 0x7f2e08069200 2026-03-10T10:27:36.342 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.338+0000 7f2e0eb06700 1 -- 192.168.123.102:0/3633095853 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2df8009710 con 0x7f2e08069200 2026-03-10T10:27:36.342 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.338+0000 7f2e05ffb700 1 -- 192.168.123.102:0/3633095853 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2dfc010780 con 0x7f2e08069200 2026-03-10T10:27:36.342 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.338+0000 7f2e0eb06700 1 -- 192.168.123.102:0/3633095853 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2e0819deb0 con 0x7f2e08069200 2026-03-10T10:27:36.342 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.340+0000 7f2e05ffb700 1 -- 192.168.123.102:0/3633095853 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f2dfc010900 con 
0x7f2e08069200 2026-03-10T10:27:36.342 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.340+0000 7f2e05ffb700 1 --2- 192.168.123.102:0/3633095853 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2df00779e0 0x7f2df0079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:36.342 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.340+0000 7f2e05ffb700 1 -- 192.168.123.102:0/3633095853 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f2dfc014070 con 0x7f2e08069200 2026-03-10T10:27:36.342 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.341+0000 7f2e07fff700 1 --2- 192.168.123.102:0/3633095853 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2df00779e0 0x7f2df0079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:36.342 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.341+0000 7f2e0eb06700 1 -- 192.168.123.102:0/3633095853 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2e0810adb0 con 0x7f2e08069200 2026-03-10T10:27:36.346 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.344+0000 7f2e05ffb700 1 -- 192.168.123.102:0/3633095853 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f2dfc063a90 con 0x7f2e08069200 2026-03-10T10:27:36.346 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.344+0000 7f2e07fff700 1 --2- 192.168.123.102:0/3633095853 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2df00779e0 0x7f2df0079ea0 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f2e0819a480 tx=0x7f2df8005ce0 comp rx=0 
tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:36.500 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.498+0000 7f2e0eb06700 1 -- 192.168.123.102:0/3633095853 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 32, "format": "json"} v 0) v1 -- 0x7f2e08066e80 con 0x7f2e08069200 2026-03-10T10:27:36.500 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.498+0000 7f2e05ffb700 1 -- 192.168.123.102:0/3633095853 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 32, "format": "json"}]=0 dumped fsmap epoch 32 v40) v1 ==== 107+0+4277 (secure 0 0 0) 0x7f2dfc0631e0 con 0x7f2e08069200 2026-03-10T10:27:36.500 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:36.500 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":32,"btime":"2026-03-10T10:25:06:572167+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned 
encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":22},{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":26}],"filesystems":[{"mdsmap":{"epoch":32,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:25:06.572166+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":82,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds 
uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":34328},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34328":{"gid":34328,"name":"cephfs.vm05.sudjys","rank":0,"incarnation":29,"state":"up:active","state_seq":10,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34328,"qdb_cluster":[34328]},"id":1}]} 2026-03-10T10:27:36.503 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.502+0000 7f2e0eb06700 1 -- 192.168.123.102:0/3633095853 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2df00779e0 msgr2=0x7f2df0079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:36.503 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.502+0000 7f2e0eb06700 1 --2- 192.168.123.102:0/3633095853 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2df00779e0 0x7f2df0079ea0 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7f2e0819a480 tx=0x7f2df8005ce0 comp rx=0 tx=0).stop 2026-03-10T10:27:36.503 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.502+0000 7f2e0eb06700 1 -- 192.168.123.102:0/3633095853 >> 
[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2e08069200 msgr2=0x7f2e08198e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:36.503 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.502+0000 7f2e0eb06700 1 --2- 192.168.123.102:0/3633095853 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2e08069200 0x7f2e08198e60 secure :-1 s=READY pgs=177 cs=0 l=1 rev1=1 crypto rx=0x7f2dfc00efd0 tx=0x7f2dfc00c5b0 comp rx=0 tx=0).stop 2026-03-10T10:27:36.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.502+0000 7f2e0eb06700 1 -- 192.168.123.102:0/3633095853 shutdown_connections 2026-03-10T10:27:36.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.502+0000 7f2e0eb06700 1 --2- 192.168.123.102:0/3633095853 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f2e08069200 0x7f2e08198e60 unknown :-1 s=CLOSED pgs=177 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:36.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.502+0000 7f2e0eb06700 1 --2- 192.168.123.102:0/3633095853 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f2df00779e0 0x7f2df0079ea0 unknown :-1 s=CLOSED pgs=130 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:36.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.502+0000 7f2e0eb06700 1 --2- 192.168.123.102:0/3633095853 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2e08069b20 0x7f2e081993a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:36.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.502+0000 7f2e0eb06700 1 -- 192.168.123.102:0/3633095853 >> 192.168.123.102:0/3633095853 conn(0x7f2e08076b30 msgr2=0x7f2e080feb00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:36.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.503+0000 7f2e0eb06700 1 -- 
192.168.123.102:0/3633095853 shutdown_connections 2026-03-10T10:27:36.504 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:36.503+0000 7f2e0eb06700 1 -- 192.168.123.102:0/3633095853 wait complete. 2026-03-10T10:27:36.505 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 32 2026-03-10T10:27:36.576 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 32 2026-03-10T10:27:36.576 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 33 2026-03-10T10:27:36.747 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:36.865 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:36 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/3633095853' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 32, "format": "json"}]: dispatch 2026-03-10T10:27:36.865 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:27:37.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.021+0000 7f531daf7700 1 -- 192.168.123.102:0/3210644276 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f53180684d0 msgr2=0x7f53180688b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:37.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.021+0000 7f531daf7700 1 --2- 192.168.123.102:0/3210644276 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f53180684d0 0x7f53180688b0 secure :-1 s=READY pgs=178 cs=0 l=1 rev1=1 crypto rx=0x7f5300009b30 tx=0x7f5300009e40 comp rx=0 tx=0).stop 2026-03-10T10:27:37.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.021+0000 7f531daf7700 1 -- 
192.168.123.102:0/3210644276 shutdown_connections 2026-03-10T10:27:37.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.021+0000 7f531daf7700 1 --2- 192.168.123.102:0/3210644276 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5318068df0 0x7f531810d5b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:37.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.021+0000 7f531daf7700 1 --2- 192.168.123.102:0/3210644276 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f53180684d0 0x7f53180688b0 unknown :-1 s=CLOSED pgs=178 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:37.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.022+0000 7f531daf7700 1 -- 192.168.123.102:0/3210644276 >> 192.168.123.102:0/3210644276 conn(0x7f5318075960 msgr2=0x7f5318075d70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:37.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.022+0000 7f531daf7700 1 -- 192.168.123.102:0/3210644276 shutdown_connections 2026-03-10T10:27:37.023 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.022+0000 7f531daf7700 1 -- 192.168.123.102:0/3210644276 wait complete. 
2026-03-10T10:27:37.024 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.023+0000 7f531daf7700 1 Processor -- start 2026-03-10T10:27:37.024 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.023+0000 7f531daf7700 1 -- start start 2026-03-10T10:27:37.024 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.023+0000 7f531daf7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53180684d0 0x7f5318198df0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:37.025 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.023+0000 7f531daf7700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5318068df0 0x7f5318199330 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:37.025 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.023+0000 7f531daf7700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5318199a10 con 0x7f5318068df0 2026-03-10T10:27:37.025 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.023+0000 7f531daf7700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f531819d7a0 con 0x7f53180684d0 2026-03-10T10:27:37.025 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.023+0000 7f5316ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5318068df0 0x7f5318199330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:37.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.023+0000 7f5316ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5318068df0 0x7f5318199330 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:44654/0 (socket says 192.168.123.102:44654) 2026-03-10T10:27:37.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.023+0000 7f5316ffd700 1 -- 192.168.123.102:0/2762822229 learned_addr learned my addr 192.168.123.102:0/2762822229 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:37.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.024+0000 7f5316ffd700 1 -- 192.168.123.102:0/2762822229 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53180684d0 msgr2=0x7f5318198df0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:27:37.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.024+0000 7f5316ffd700 1 --2- 192.168.123.102:0/2762822229 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53180684d0 0x7f5318198df0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:37.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.024+0000 7f5316ffd700 1 -- 192.168.123.102:0/2762822229 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f53000097e0 con 0x7f5318068df0 2026-03-10T10:27:37.026 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.024+0000 7f5316ffd700 1 --2- 192.168.123.102:0/2762822229 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5318068df0 0x7f5318199330 secure :-1 s=READY pgs=179 cs=0 l=1 rev1=1 crypto rx=0x7f530800c8a0 tx=0x7f530800cc60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:37.027 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.024+0000 7f5314ff9700 1 -- 192.168.123.102:0/2762822229 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f530800cea0 con 0x7f5318068df0 2026-03-10T10:27:37.027 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.024+0000 7f5314ff9700 1 -- 
192.168.123.102:0/2762822229 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f5308004830 con 0x7f5318068df0 2026-03-10T10:27:37.027 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.024+0000 7f5314ff9700 1 -- 192.168.123.102:0/2762822229 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5308005650 con 0x7f5318068df0 2026-03-10T10:27:37.027 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.025+0000 7f531daf7700 1 -- 192.168.123.102:0/2762822229 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f531819da80 con 0x7f5318068df0 2026-03-10T10:27:37.027 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.025+0000 7f531daf7700 1 -- 192.168.123.102:0/2762822229 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f531819dea0 con 0x7f5318068df0 2026-03-10T10:27:37.028 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.026+0000 7f531daf7700 1 -- 192.168.123.102:0/2762822229 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f531810ad20 con 0x7f5318068df0 2026-03-10T10:27:37.031 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.030+0000 7f5314ff9700 1 -- 192.168.123.102:0/2762822229 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f53080049a0 con 0x7f5318068df0 2026-03-10T10:27:37.031 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.030+0000 7f5314ff9700 1 --2- 192.168.123.102:0/2762822229 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5304077990 0x7f5304079e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:37.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.031+0000 7f53177fe700 1 --2- 
192.168.123.102:0/2762822229 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5304077990 0x7f5304079e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:37.032 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.031+0000 7f5314ff9700 1 -- 192.168.123.102:0/2762822229 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f5308099fc0 con 0x7f5318068df0 2026-03-10T10:27:37.033 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.031+0000 7f53177fe700 1 --2- 192.168.123.102:0/2762822229 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5304077990 0x7f5304079e50 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7f530000b580 tx=0x7f5300005e20 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:37.033 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.031+0000 7f5314ff9700 1 -- 192.168.123.102:0/2762822229 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f530805de30 con 0x7f5318068df0 2026-03-10T10:27:37.181 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.179+0000 7f531daf7700 1 -- 192.168.123.102:0/2762822229 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 33, "format": "json"} v 0) v1 -- 0x7f531804ea90 con 0x7f5318068df0 2026-03-10T10:27:37.183 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.181+0000 7f5314ff9700 1 -- 192.168.123.102:0/2762822229 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 33, "format": "json"}]=0 dumped fsmap epoch 33 v40) v1 ==== 107+0+5125 (secure 0 0 0) 0x7f5308061e10 con 0x7f5318068df0 2026-03-10T10:27:37.183 
INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:37.183 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":33,"btime":"2026-03-10T10:25:08:375174+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":22},{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on 
dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":26},{"gid":34368,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/462039658","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":462039658},{"type":"v1","addr":"192.168.123.105:6825","nonce":462039658}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":33}],"filesystems":[{"mdsmap":{"epoch":32,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:25:06.572166+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":82,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses 
versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":34328},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34328":{"gid":34328,"name":"cephfs.vm05.sudjys","rank":0,"incarnation":29,"state":"up:active","state_seq":10,"addr":"192.168.123.105:6827/3693577687","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3693577687},{"type":"v1","addr":"192.168.123.105:6827","nonce":3693577687}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34328,"qdb_cluster":[34328]},"id":1}]} 2026-03-10T10:27:37.187 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.186+0000 7f531daf7700 1 -- 192.168.123.102:0/2762822229 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5304077990 msgr2=0x7f5304079e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:37.187 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.186+0000 7f531daf7700 1 --2- 192.168.123.102:0/2762822229 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5304077990 0x7f5304079e50 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7f530000b580 tx=0x7f5300005e20 comp rx=0 tx=0).stop 2026-03-10T10:27:37.188 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.186+0000 7f531daf7700 1 -- 192.168.123.102:0/2762822229 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5318068df0 msgr2=0x7f5318199330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:37.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.186+0000 7f531daf7700 1 --2- 192.168.123.102:0/2762822229 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5318068df0 0x7f5318199330 secure :-1 s=READY pgs=179 cs=0 l=1 rev1=1 crypto rx=0x7f530800c8a0 tx=0x7f530800cc60 comp rx=0 tx=0).stop 2026-03-10T10:27:37.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.186+0000 7f531daf7700 1 -- 192.168.123.102:0/2762822229 shutdown_connections 2026-03-10T10:27:37.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.186+0000 7f531daf7700 1 --2- 192.168.123.102:0/2762822229 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f5304077990 0x7f5304079e50 secure :-1 s=CLOSED pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7f530000b580 tx=0x7f5300005e20 comp rx=0 tx=0).stop 2026-03-10T10:27:37.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.186+0000 7f531daf7700 1 --2- 192.168.123.102:0/2762822229 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53180684d0 0x7f5318198df0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:37.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.186+0000 7f531daf7700 1 --2- 192.168.123.102:0/2762822229 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f5318068df0 0x7f5318199330 secure :-1 s=CLOSED pgs=179 cs=0 l=1 rev1=1 crypto rx=0x7f530800c8a0 tx=0x7f530800cc60 comp rx=0 tx=0).stop 2026-03-10T10:27:37.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.186+0000 7f531daf7700 1 -- 192.168.123.102:0/2762822229 >> 192.168.123.102:0/2762822229 conn(0x7f5318075960 
msgr2=0x7f53180fe950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:37.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.187+0000 7f531daf7700 1 -- 192.168.123.102:0/2762822229 shutdown_connections 2026-03-10T10:27:37.188 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.187+0000 7f531daf7700 1 -- 192.168.123.102:0/2762822229 wait complete. 2026-03-10T10:27:37.189 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 33 2026-03-10T10:27:37.256 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 33 2026-03-10T10:27:37.256 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 34 2026-03-10T10:27:37.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:36 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/3633095853' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 32, "format": "json"}]: dispatch 2026-03-10T10:27:37.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:27:37.422 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:37.699 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.697+0000 7ff5a2abb700 1 -- 192.168.123.102:0/3342178863 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff59c105650 msgr2=0x7ff59c105a30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:37.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.697+0000 7ff5a2abb700 1 --2- 192.168.123.102:0/3342178863 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff59c105650 0x7ff59c105a30 secure :-1 s=READY pgs=180 cs=0 l=1 rev1=1 crypto 
rx=0x7ff598009b00 tx=0x7ff598009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:37.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.699+0000 7ff5a2abb700 1 -- 192.168.123.102:0/3342178863 shutdown_connections 2026-03-10T10:27:37.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.699+0000 7ff5a2abb700 1 --2- 192.168.123.102:0/3342178863 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff59c0ffaf0 0x7ff59c0fff70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:37.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.699+0000 7ff5a2abb700 1 --2- 192.168.123.102:0/3342178863 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff59c105650 0x7ff59c105a30 secure :-1 s=CLOSED pgs=180 cs=0 l=1 rev1=1 crypto rx=0x7ff598009b00 tx=0x7ff598009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:37.700 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.699+0000 7ff5a2abb700 1 -- 192.168.123.102:0/3342178863 >> 192.168.123.102:0/3342178863 conn(0x7ff59c074bd0 msgr2=0x7ff59c074fe0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:37.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.699+0000 7ff5a2abb700 1 -- 192.168.123.102:0/3342178863 shutdown_connections 2026-03-10T10:27:37.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.700+0000 7ff5a2abb700 1 -- 192.168.123.102:0/3342178863 wait complete. 
2026-03-10T10:27:37.701 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.700+0000 7ff5a2abb700 1 Processor -- start 2026-03-10T10:27:37.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.700+0000 7ff5a2abb700 1 -- start start 2026-03-10T10:27:37.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.701+0000 7ff5a2abb700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff59c0ffaf0 0x7ff59c06ab60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:37.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.701+0000 7ff5a2abb700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff59c06b0a0 0x7ff59c1a5a60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:37.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.701+0000 7ff5a2abb700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff59c06b670 con 0x7ff59c0ffaf0 2026-03-10T10:27:37.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.701+0000 7ff5a2abb700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff59c1a5fa0 con 0x7ff59c06b0a0 2026-03-10T10:27:37.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.701+0000 7ff593fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff59c06b0a0 0x7ff59c1a5a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:37.702 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.701+0000 7ff593fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff59c06b0a0 0x7ff59c1a5a60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.102:41982/0 (socket says 192.168.123.102:41982) 2026-03-10T10:27:37.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.701+0000 7ff593fff700 1 -- 192.168.123.102:0/2741187686 learned_addr learned my addr 192.168.123.102:0/2741187686 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:37.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.701+0000 7ff593fff700 1 -- 192.168.123.102:0/2741187686 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff59c0ffaf0 msgr2=0x7ff59c06ab60 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:27:37.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.701+0000 7ff593fff700 1 --2- 192.168.123.102:0/2741187686 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff59c0ffaf0 0x7ff59c06ab60 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:37.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.701+0000 7ff593fff700 1 -- 192.168.123.102:0/2741187686 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff5980097e0 con 0x7ff59c06b0a0 2026-03-10T10:27:37.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.701+0000 7ff593fff700 1 --2- 192.168.123.102:0/2741187686 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff59c06b0a0 0x7ff59c1a5a60 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7ff58c00d8d0 tx=0x7ff58c00dc90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:37.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.702+0000 7ff591ffb700 1 -- 192.168.123.102:0/2741187686 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff58c009940 con 0x7ff59c06b0a0 2026-03-10T10:27:37.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.702+0000 7ff591ffb700 1 -- 
192.168.123.102:0/2741187686 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7ff58c010460 con 0x7ff59c06b0a0 2026-03-10T10:27:37.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.702+0000 7ff591ffb700 1 -- 192.168.123.102:0/2741187686 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff58c00f5d0 con 0x7ff59c06b0a0 2026-03-10T10:27:37.703 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.702+0000 7ff5a2abb700 1 -- 192.168.123.102:0/2741187686 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff59c1a6230 con 0x7ff59c06b0a0 2026-03-10T10:27:37.704 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.702+0000 7ff5a2abb700 1 -- 192.168.123.102:0/2741187686 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff59c1a6780 con 0x7ff59c06b0a0 2026-03-10T10:27:37.704 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.703+0000 7ff5a2abb700 1 -- 192.168.123.102:0/2741187686 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff59c04ea90 con 0x7ff59c06b0a0 2026-03-10T10:27:37.705 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.704+0000 7ff591ffb700 1 -- 192.168.123.102:0/2741187686 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ff58c009af0 con 0x7ff59c06b0a0 2026-03-10T10:27:37.705 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.704+0000 7ff591ffb700 1 --2- 192.168.123.102:0/2741187686 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff5840778c0 0x7ff584079d80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:37.705 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.704+0000 7ff591ffb700 1 -- 
192.168.123.102:0/2741187686 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7ff58c0999d0 con 0x7ff59c06b0a0 2026-03-10T10:27:37.706 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.705+0000 7ff5a0857700 1 --2- 192.168.123.102:0/2741187686 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff5840778c0 0x7ff584079d80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:37.706 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.705+0000 7ff5a0857700 1 --2- 192.168.123.102:0/2741187686 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff5840778c0 0x7ff584079d80 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7ff598000c00 tx=0x7ff598005de0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:37.708 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.707+0000 7ff591ffb700 1 -- 192.168.123.102:0/2741187686 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ff58c061980 con 0x7ff59c06b0a0 2026-03-10T10:27:37.853 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.851+0000 7ff5a2abb700 1 -- 192.168.123.102:0/2741187686 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 34, "format": "json"} v 0) v1 -- 0x7ff59c066e80 con 0x7ff59c06b0a0 2026-03-10T10:27:37.854 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.852+0000 7ff591ffb700 1 -- 192.168.123.102:0/2741187686 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 34, "format": "json"}]=0 dumped fsmap epoch 34 v40) v1 ==== 107+0+4321 (secure 0 0 0) 0x7ff58c0617a0 con 0x7ff59c06b0a0 2026-03-10T10:27:37.854 
INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:37.854 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":34,"btime":"2026-03-10T10:25:10:902392+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":22},{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on 
dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":26},{"gid":34368,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/462039658","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":462039658},{"type":"v1","addr":"192.168.123.105:6825","nonce":462039658}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":33}],"filesystems":[{"mdsmap":{"epoch":34,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:25:10.902389+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":83,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses 
versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{},"failed":[0],"damaged":[],"stopped":[],"info":{},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.855+0000 7ff5a2abb700 1 -- 192.168.123.102:0/2741187686 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff5840778c0 msgr2=0x7ff584079d80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.855+0000 7ff5a2abb700 1 --2- 192.168.123.102:0/2741187686 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff5840778c0 0x7ff584079d80 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7ff598000c00 tx=0x7ff598005de0 comp rx=0 tx=0).stop 2026-03-10T10:27:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.856+0000 7ff5a2abb700 1 -- 192.168.123.102:0/2741187686 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff59c06b0a0 msgr2=0x7ff59c1a5a60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.856+0000 7ff5a2abb700 1 --2- 192.168.123.102:0/2741187686 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff59c06b0a0 0x7ff59c1a5a60 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7ff58c00d8d0 tx=0x7ff58c00dc90 comp rx=0 tx=0).stop 2026-03-10T10:27:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.856+0000 7ff5a2abb700 1 -- 192.168.123.102:0/2741187686 shutdown_connections 2026-03-10T10:27:37.857 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.856+0000 7ff5a2abb700 1 --2- 192.168.123.102:0/2741187686 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff59c0ffaf0 0x7ff59c06ab60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.856+0000 7ff5a2abb700 1 --2- 192.168.123.102:0/2741187686 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff5840778c0 0x7ff584079d80 unknown :-1 s=CLOSED pgs=132 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:37.857 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.856+0000 7ff5a2abb700 1 --2- 192.168.123.102:0/2741187686 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff59c06b0a0 0x7ff59c1a5a60 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:37.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.856+0000 7ff5a2abb700 1 -- 192.168.123.102:0/2741187686 >> 192.168.123.102:0/2741187686 conn(0x7ff59c074bd0 msgr2=0x7ff59c103110 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:37.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.857+0000 7ff5a2abb700 1 -- 192.168.123.102:0/2741187686 shutdown_connections 2026-03-10T10:27:37.858 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:37.857+0000 7ff5a2abb700 1 -- 192.168.123.102:0/2741187686 wait complete. 
2026-03-10T10:27:37.859 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 34 2026-03-10T10:27:37.932 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 34 2026-03-10T10:27:37.932 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 35 2026-03-10T10:27:37.957 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:37 vm02.local ceph-mon[110129]: pgmap v252: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:37.957 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:37 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/2762822229' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 33, "format": "json"}]: dispatch 2026-03-10T10:27:38.095 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:37 vm05.local ceph-mon[103593]: pgmap v252: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:37 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/2762822229' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 33, "format": "json"}]: dispatch 2026-03-10T10:27:38.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.406+0000 7ff6575fc700 1 -- 192.168.123.102:0/638595014 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff650075be0 msgr2=0x7ff6501115d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:38.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.406+0000 7ff6575fc700 1 --2- 192.168.123.102:0/638595014 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff650075be0 0x7ff6501115d0 secure :-1 s=READY pgs=181 cs=0 l=1 rev1=1 crypto rx=0x7ff640009b50 tx=0x7ff640009e60 comp rx=0 tx=0).stop 2026-03-10T10:27:38.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.406+0000 7ff6575fc700 1 -- 192.168.123.102:0/638595014 shutdown_connections 2026-03-10T10:27:38.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.406+0000 7ff6575fc700 1 --2- 192.168.123.102:0/638595014 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff650075be0 0x7ff6501115d0 unknown :-1 s=CLOSED pgs=181 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:38.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.406+0000 7ff6575fc700 1 --2- 192.168.123.102:0/638595014 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6500752c0 0x7ff6500756a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:38.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.406+0000 7ff6575fc700 1 -- 192.168.123.102:0/638595014 >> 192.168.123.102:0/638595014 conn(0x7ff6500fe980 msgr2=0x7ff650100da0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:38.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.406+0000 7ff6575fc700 1 -- 192.168.123.102:0/638595014 shutdown_connections 2026-03-10T10:27:38.408 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.406+0000 7ff6575fc700 1 -- 192.168.123.102:0/638595014 wait complete. 2026-03-10T10:27:38.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.407+0000 7ff6575fc700 1 Processor -- start 2026-03-10T10:27:38.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.407+0000 7ff6575fc700 1 -- start start 2026-03-10T10:27:38.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.407+0000 7ff6575fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6500752c0 0x7ff650072900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:38.408 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.407+0000 7ff6575fc700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff65006d900 0x7ff65006dd80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:38.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.407+0000 7ff6575fc700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff65006e2c0 con 0x7ff65006d900 2026-03-10T10:27:38.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.407+0000 7ff6575fc700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff65006e400 con 0x7ff6500752c0 2026-03-10T10:27:38.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.407+0000 7ff654b97700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff65006d900 0x7ff65006dd80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:38.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.407+0000 7ff655398700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6500752c0 0x7ff650072900 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:38.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.408+0000 7ff655398700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6500752c0 0x7ff650072900 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:42002/0 (socket says 192.168.123.102:42002) 2026-03-10T10:27:38.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.408+0000 7ff655398700 1 -- 192.168.123.102:0/3708999002 learned_addr learned my addr 192.168.123.102:0/3708999002 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:38.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.408+0000 7ff654b97700 1 -- 192.168.123.102:0/3708999002 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6500752c0 msgr2=0x7ff650072900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:38.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.408+0000 7ff654b97700 1 --2- 192.168.123.102:0/3708999002 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6500752c0 0x7ff650072900 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:38.409 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.408+0000 7ff654b97700 1 -- 192.168.123.102:0/3708999002 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff6400097e0 con 0x7ff65006d900 2026-03-10T10:27:38.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.408+0000 7ff654b97700 1 --2- 192.168.123.102:0/3708999002 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff65006d900 0x7ff65006dd80 secure :-1 s=READY pgs=182 cs=0 l=1 rev1=1 crypto rx=0x7ff640005b40 
tx=0x7ff64000fce0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:38.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.409+0000 7ff6467fc700 1 -- 192.168.123.102:0/3708999002 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff64001c070 con 0x7ff65006d900 2026-03-10T10:27:38.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.409+0000 7ff6575fc700 1 -- 192.168.123.102:0/3708999002 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff65006e600 con 0x7ff65006d900 2026-03-10T10:27:38.410 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.409+0000 7ff6575fc700 1 -- 192.168.123.102:0/3708999002 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff650103000 con 0x7ff65006d900 2026-03-10T10:27:38.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.410+0000 7ff6467fc700 1 -- 192.168.123.102:0/3708999002 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7ff6400056e0 con 0x7ff65006d900 2026-03-10T10:27:38.411 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.410+0000 7ff6467fc700 1 -- 192.168.123.102:0/3708999002 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff6400177d0 con 0x7ff65006d900 2026-03-10T10:27:38.412 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.411+0000 7ff6467fc700 1 -- 192.168.123.102:0/3708999002 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ff640017930 con 0x7ff65006d900 2026-03-10T10:27:38.412 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.411+0000 7ff6575fc700 1 -- 192.168.123.102:0/3708999002 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff65010ed40 con 
0x7ff65006d900 2026-03-10T10:27:38.416 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.415+0000 7ff6467fc700 1 --2- 192.168.123.102:0/3708999002 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff63c0779e0 0x7ff63c079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:38.416 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.415+0000 7ff6467fc700 1 -- 192.168.123.102:0/3708999002 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7ff64009bc40 con 0x7ff65006d900 2026-03-10T10:27:38.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.416+0000 7ff655398700 1 --2- 192.168.123.102:0/3708999002 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff63c0779e0 0x7ff63c079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:38.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.416+0000 7ff655398700 1 --2- 192.168.123.102:0/3708999002 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff63c0779e0 0x7ff63c079ea0 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7ff64c0099d0 tx=0x7ff64c008040 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:38.417 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.416+0000 7ff6467fc700 1 -- 192.168.123.102:0/3708999002 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ff640064340 con 0x7ff65006d900 2026-03-10T10:27:38.569 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.567+0000 7ff6575fc700 1 -- 192.168.123.102:0/3708999002 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": 
"fs dump", "epoch": 35, "format": "json"} v 0) v1 -- 0x7ff65006ee30 con 0x7ff65006d900 2026-03-10T10:27:38.570 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.569+0000 7ff6467fc700 1 -- 192.168.123.102:0/3708999002 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 35, "format": "json"}]=0 dumped fsmap epoch 35 v40) v1 ==== 107+0+4400 (secure 0 0 0) 0x7ff640063a90 con 0x7ff65006d900 2026-03-10T10:27:38.570 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:38.570 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":35,"btime":"2026-03-10T10:25:10:909790+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":26},{"gid":34368,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/462039658","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":462039658},{"type":"v1","addr":"192.168.123.105:6825","nonce":462039658}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":33}],"filesystems":[{"mdsmap":{"epoch":35,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:25:10.909776+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":83,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34360},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34360":{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":0,"incarnation":35,"state":"up:replay","state_seq":1,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:38.575 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.574+0000 7ff633fff700 1 -- 192.168.123.102:0/3708999002 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff63c0779e0 msgr2=0x7ff63c079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:38.575 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.574+0000 7ff633fff700 1 --2- 192.168.123.102:0/3708999002 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff63c0779e0 0x7ff63c079ea0 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7ff64c0099d0 tx=0x7ff64c008040 comp rx=0 tx=0).stop 2026-03-10T10:27:38.575 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.574+0000 7ff633fff700 1 -- 192.168.123.102:0/3708999002 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] 
conn(0x7ff65006d900 msgr2=0x7ff65006dd80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:38.575 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.574+0000 7ff633fff700 1 --2- 192.168.123.102:0/3708999002 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff65006d900 0x7ff65006dd80 secure :-1 s=READY pgs=182 cs=0 l=1 rev1=1 crypto rx=0x7ff640005b40 tx=0x7ff64000fce0 comp rx=0 tx=0).stop 2026-03-10T10:27:38.575 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.574+0000 7ff633fff700 1 -- 192.168.123.102:0/3708999002 shutdown_connections 2026-03-10T10:27:38.575 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.574+0000 7ff633fff700 1 --2- 192.168.123.102:0/3708999002 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7ff63c0779e0 0x7ff63c079ea0 unknown :-1 s=CLOSED pgs=133 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:38.575 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.574+0000 7ff633fff700 1 --2- 192.168.123.102:0/3708999002 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff6500752c0 0x7ff650072900 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:38.575 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.574+0000 7ff633fff700 1 --2- 192.168.123.102:0/3708999002 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7ff65006d900 0x7ff65006dd80 unknown :-1 s=CLOSED pgs=182 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:38.575 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.574+0000 7ff633fff700 1 -- 192.168.123.102:0/3708999002 >> 192.168.123.102:0/3708999002 conn(0x7ff6500fe980 msgr2=0x7ff6500ff8d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:38.576 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.574+0000 7ff633fff700 1 -- 192.168.123.102:0/3708999002 shutdown_connections 
2026-03-10T10:27:38.576 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:38.574+0000 7ff633fff700 1 -- 192.168.123.102:0/3708999002 wait complete. 2026-03-10T10:27:38.576 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 35 2026-03-10T10:27:38.648 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 35 2026-03-10T10:27:38.649 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 36 2026-03-10T10:27:38.884 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:39.101 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:38 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/2741187686' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 34, "format": "json"}]: dispatch 2026-03-10T10:27:39.101 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:38 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/3708999002' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 35, "format": "json"}]: dispatch 2026-03-10T10:27:39.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.168+0000 7f3cec0c7700 1 -- 192.168.123.102:0/373694567 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ce4068df0 msgr2=0x7f3ce410d5b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:39.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.168+0000 7f3cec0c7700 1 --2- 192.168.123.102:0/373694567 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ce4068df0 0x7f3ce410d5b0 secure :-1 s=READY pgs=183 cs=0 l=1 rev1=1 crypto rx=0x7f3ce0009b30 tx=0x7f3ce0009e40 comp rx=0 tx=0).stop 2026-03-10T10:27:39.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.169+0000 7f3cec0c7700 1 -- 192.168.123.102:0/373694567 shutdown_connections 2026-03-10T10:27:39.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.169+0000 7f3cec0c7700 1 --2- 192.168.123.102:0/373694567 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ce4068df0 0x7f3ce410d5b0 unknown :-1 s=CLOSED pgs=183 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.169+0000 7f3cec0c7700 1 --2- 192.168.123.102:0/373694567 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ce40684d0 0x7f3ce40688b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.169+0000 7f3cec0c7700 1 -- 192.168.123.102:0/373694567 >> 192.168.123.102:0/373694567 conn(0x7f3ce4075960 msgr2=0x7f3ce4075d70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:39.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.169+0000 7f3cec0c7700 1 -- 192.168.123.102:0/373694567 shutdown_connections 2026-03-10T10:27:39.171 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.170+0000 7f3cec0c7700 1 -- 192.168.123.102:0/373694567 wait complete. 2026-03-10T10:27:39.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.170+0000 7f3cec0c7700 1 Processor -- start 2026-03-10T10:27:39.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.170+0000 7f3cec0c7700 1 -- start start 2026-03-10T10:27:39.172 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.170+0000 7f3cec0c7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ce40684d0 0x7f3ce4198d60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:39.172 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3cec0c7700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ce4068df0 0x7f3ce41992a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:39.172 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3cec0c7700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3ce41998f0 con 0x7f3ce4068df0 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3cec0c7700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3ce4199a30 con 0x7f3ce40684d0 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3ce9e63700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ce40684d0 0x7f3ce4198d60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3ce9662700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ce4068df0 0x7f3ce41992a0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3ce9662700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ce4068df0 0x7f3ce41992a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:44704/0 (socket says 192.168.123.102:44704) 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3ce9662700 1 -- 192.168.123.102:0/1884844782 learned_addr learned my addr 192.168.123.102:0/1884844782 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3ce9662700 1 -- 192.168.123.102:0/1884844782 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ce40684d0 msgr2=0x7f3ce4198d60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3ce9662700 1 --2- 192.168.123.102:0/1884844782 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ce40684d0 0x7f3ce4198d60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3ce9662700 1 -- 192.168.123.102:0/1884844782 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3ce00097e0 con 0x7f3ce4068df0 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3ce9662700 1 --2- 192.168.123.102:0/1884844782 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ce4068df0 0x7f3ce41992a0 secure :-1 s=READY pgs=184 cs=0 l=1 rev1=1 crypto rx=0x7f3ce0005350 
tx=0x7f3ce00049e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3cdaffd700 1 -- 192.168.123.102:0/1884844782 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3ce001d070 con 0x7f3ce4068df0 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.171+0000 7f3cdaffd700 1 -- 192.168.123.102:0/1884844782 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f3ce000bc10 con 0x7f3ce4068df0 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.172+0000 7f3cdaffd700 1 -- 192.168.123.102:0/1884844782 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3ce0022620 con 0x7f3ce4068df0 2026-03-10T10:27:39.173 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.172+0000 7f3cec0c7700 1 -- 192.168.123.102:0/1884844782 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3ce419d880 con 0x7f3ce4068df0 2026-03-10T10:27:39.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.172+0000 7f3cec0c7700 1 -- 192.168.123.102:0/1884844782 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3ce419dd50 con 0x7f3ce4068df0 2026-03-10T10:27:39.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.174+0000 7f3cdaffd700 1 -- 192.168.123.102:0/1884844782 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f3ce0022890 con 0x7f3ce4068df0 2026-03-10T10:27:39.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.174+0000 7f3cdaffd700 1 --2- 192.168.123.102:0/1884844782 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f3cd0077870 0x7f3cd0079d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 
rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:39.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.175+0000 7f3ce9e63700 1 --2- 192.168.123.102:0/1884844782 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f3cd0077870 0x7f3cd0079d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:39.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.175+0000 7f3ce9e63700 1 --2- 192.168.123.102:0/1884844782 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f3cd0077870 0x7f3cd0079d30 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f3cd4005ea0 tx=0x7f3cd4005e30 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:39.177 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.176+0000 7f3cdaffd700 1 -- 192.168.123.102:0/1884844782 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f3ce00679d0 con 0x7f3ce4068df0 2026-03-10T10:27:39.182 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.176+0000 7f3cec0c7700 1 -- 192.168.123.102:0/1884844782 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3ce410ad20 con 0x7f3ce4068df0 2026-03-10T10:27:39.182 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.181+0000 7f3cdaffd700 1 -- 192.168.123.102:0/1884844782 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3ce00a0050 con 0x7f3ce4068df0 2026-03-10T10:27:39.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:38 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/2741187686' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 34, "format": "json"}]: dispatch 2026-03-10T10:27:39.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:38 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/3708999002' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 35, "format": "json"}]: dispatch 2026-03-10T10:27:39.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.322+0000 7f3cec0c7700 1 -- 192.168.123.102:0/1884844782 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 36, "format": "json"} v 0) v1 -- 0x7f3ce404ea90 con 0x7f3ce4068df0 2026-03-10T10:27:39.324 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:39.324 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.323+0000 7f3cdaffd700 1 -- 192.168.123.102:0/1884844782 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 36, "format": "json"}]=0 dumped fsmap epoch 36 v40) v1 ==== 107+0+4403 (secure 0 0 0) 0x7f3ce0027020 con 0x7f3ce4068df0 2026-03-10T10:27:39.325 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":36,"btime":"2026-03-10T10:25:16:108531+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":26},{"gid":34368,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/462039658","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":462039658},{"type":"v1","addr":"192.168.123.105:6825","nonce":462039658}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":33}],"filesystems":[{"mdsmap":{"epoch":36,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:25:15.454193+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":83,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34360},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34360":{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":0,"incarnation":35,"state":"up:reconnect","state_seq":9,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log 
segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:39.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.326+0000 7f3cec0c7700 1 -- 192.168.123.102:0/1884844782 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f3cd0077870 msgr2=0x7f3cd0079d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:39.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.326+0000 7f3cec0c7700 1 --2- 192.168.123.102:0/1884844782 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f3cd0077870 0x7f3cd0079d30 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f3cd4005ea0 tx=0x7f3cd4005e30 comp rx=0 tx=0).stop 2026-03-10T10:27:39.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.326+0000 7f3cec0c7700 1 -- 192.168.123.102:0/1884844782 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ce4068df0 msgr2=0x7f3ce41992a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:39.327 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.326+0000 7f3cec0c7700 1 --2- 192.168.123.102:0/1884844782 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ce4068df0 0x7f3ce41992a0 secure :-1 s=READY pgs=184 cs=0 l=1 rev1=1 crypto rx=0x7f3ce0005350 tx=0x7f3ce00049e0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.326+0000 7f3cec0c7700 1 -- 192.168.123.102:0/1884844782 shutdown_connections 2026-03-10T10:27:39.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.326+0000 7f3cec0c7700 1 --2- 192.168.123.102:0/1884844782 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f3cd0077870 0x7f3cd0079d30 unknown :-1 s=CLOSED pgs=134 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.326+0000 7f3cec0c7700 1 --2- 192.168.123.102:0/1884844782 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ce40684d0 0x7f3ce4198d60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.326+0000 7f3cec0c7700 1 --2- 192.168.123.102:0/1884844782 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f3ce4068df0 0x7f3ce41992a0 unknown :-1 s=CLOSED pgs=184 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.326+0000 7f3cec0c7700 1 -- 192.168.123.102:0/1884844782 >> 192.168.123.102:0/1884844782 conn(0x7f3ce4075960 msgr2=0x7f3ce40fe960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:39.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.327+0000 7f3cec0c7700 1 -- 192.168.123.102:0/1884844782 shutdown_connections 2026-03-10T10:27:39.328 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.327+0000 7f3cec0c7700 1 -- 192.168.123.102:0/1884844782 wait complete. 
2026-03-10T10:27:39.329 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 36 2026-03-10T10:27:39.383 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 36 2026-03-10T10:27:39.383 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 37 2026-03-10T10:27:39.546 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:39.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.823+0000 7f0fcc17b700 1 -- 192.168.123.102:0/4021217376 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fc4103c90 msgr2=0x7f0fc4107ce0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:39.825 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.823+0000 7f0fcc17b700 1 --2- 192.168.123.102:0/4021217376 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fc4103c90 0x7f0fc4107ce0 secure :-1 s=READY pgs=185 cs=0 l=1 rev1=1 crypto rx=0x7f0fc0009b50 tx=0x7f0fc0009e60 comp rx=0 tx=0).stop 2026-03-10T10:27:39.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.824+0000 7f0fcc17b700 1 -- 192.168.123.102:0/4021217376 shutdown_connections 2026-03-10T10:27:39.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.824+0000 7f0fcc17b700 1 --2- 192.168.123.102:0/4021217376 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fc4103c90 0x7f0fc4107ce0 unknown :-1 s=CLOSED pgs=185 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.824+0000 7f0fcc17b700 1 --2- 192.168.123.102:0/4021217376 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0fc41032e0 0x7f0fc41036c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.826 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.824+0000 7f0fcc17b700 1 -- 192.168.123.102:0/4021217376 >> 192.168.123.102:0/4021217376 conn(0x7f0fc40feb50 msgr2=0x7f0fc4100f70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:39.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.825+0000 7f0fcc17b700 1 -- 192.168.123.102:0/4021217376 shutdown_connections 2026-03-10T10:27:39.826 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.825+0000 7f0fcc17b700 1 -- 192.168.123.102:0/4021217376 wait complete. 2026-03-10T10:27:39.827 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.826+0000 7f0fcc17b700 1 Processor -- start 2026-03-10T10:27:39.827 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.826+0000 7f0fcc17b700 1 -- start start 2026-03-10T10:27:39.827 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.826+0000 7f0fcc17b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0fc41032e0 0x7f0fc4198e20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:39.827 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.826+0000 7f0fcc17b700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fc4103c90 0x7f0fc4199360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:39.827 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.826+0000 7f0fcc17b700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0fc4199a40 con 0x7f0fc4103c90 2026-03-10T10:27:39.827 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.826+0000 7f0fcc17b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0fc419d7d0 con 0x7f0fc41032e0 2026-03-10T10:27:39.827 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.826+0000 7f0fc9f17700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0fc41032e0 0x7f0fc4198e20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:39.827 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.826+0000 7f0fc9716700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fc4103c90 0x7f0fc4199360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:39.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.826+0000 7f0fc9f17700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0fc41032e0 0x7f0fc4198e20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:42028/0 (socket says 192.168.123.102:42028) 2026-03-10T10:27:39.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.826+0000 7f0fc9f17700 1 -- 192.168.123.102:0/3546466546 learned_addr learned my addr 192.168.123.102:0/3546466546 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:39.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.827+0000 7f0fc9f17700 1 -- 192.168.123.102:0/3546466546 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fc4103c90 msgr2=0x7f0fc4199360 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:39.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.827+0000 7f0fc9f17700 1 --2- 192.168.123.102:0/3546466546 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fc4103c90 0x7f0fc4199360 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.827+0000 7f0fc9f17700 1 -- 192.168.123.102:0/3546466546 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0fc00097e0 con 0x7f0fc41032e0 2026-03-10T10:27:39.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.827+0000 7f0fc9716700 1 --2- 192.168.123.102:0/3546466546 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fc4103c90 0x7f0fc4199360 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-10T10:27:39.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.827+0000 7f0fc9f17700 1 --2- 192.168.123.102:0/3546466546 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0fc41032e0 0x7f0fc4198e20 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f0fb800eb10 tx=0x7f0fb800eed0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:39.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.827+0000 7f0fb6ffd700 1 -- 192.168.123.102:0/3546466546 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0fb800cca0 con 0x7f0fc41032e0 2026-03-10T10:27:39.828 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.827+0000 7f0fcc17b700 1 -- 192.168.123.102:0/3546466546 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0fc419da50 con 0x7f0fc41032e0 2026-03-10T10:27:39.829 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.827+0000 7f0fcc17b700 1 -- 192.168.123.102:0/3546466546 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0fc419dfa0 con 0x7f0fc41032e0 2026-03-10T10:27:39.830 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.828+0000 7f0fb6ffd700 1 -- 192.168.123.102:0/3546466546 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f0fb800ce00 con 0x7f0fc41032e0 2026-03-10T10:27:39.830 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.828+0000 7f0fb6ffd700 1 -- 192.168.123.102:0/3546466546 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0fb8018910 con 0x7f0fc41032e0 2026-03-10T10:27:39.831 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.829+0000 7f0fb6ffd700 1 -- 192.168.123.102:0/3546466546 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f0fb8018b50 con 0x7f0fc41032e0 2026-03-10T10:27:39.831 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.829+0000 7f0fcc17b700 1 -- 192.168.123.102:0/3546466546 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0fc404ea90 con 0x7f0fc41032e0 2026-03-10T10:27:39.831 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.829+0000 7f0fb6ffd700 1 --2- 192.168.123.102:0/3546466546 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0fb00778e0 0x7f0fb0079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:39.831 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.830+0000 7f0fb6ffd700 1 -- 192.168.123.102:0/3546466546 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f0fb8014070 con 0x7f0fc41032e0 2026-03-10T10:27:39.831 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.830+0000 7f0fc9716700 1 --2- 192.168.123.102:0/3546466546 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0fb00778e0 0x7f0fb0079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:39.832 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.831+0000 7f0fc9716700 1 --2- 192.168.123.102:0/3546466546 >> 
[v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0fb00778e0 0x7f0fb0079da0 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f0fc0005950 tx=0x7f0fc00058e0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:39.834 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.832+0000 7f0fb6ffd700 1 -- 192.168.123.102:0/3546466546 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f0fb8062c40 con 0x7f0fc41032e0 2026-03-10T10:27:39.982 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:39 vm02.local ceph-mon[110129]: pgmap v253: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:27:39.982 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:39 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1884844782' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 36, "format": "json"}]: dispatch 2026-03-10T10:27:39.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.980+0000 7f0fcc17b700 1 -- 192.168.123.102:0/3546466546 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 37, "format": "json"} v 0) v1 -- 0x7f0fc419e280 con 0x7f0fc41032e0 2026-03-10T10:27:39.982 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.981+0000 7f0fb6ffd700 1 -- 192.168.123.102:0/3546466546 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 37, "format": "json"}]=0 dumped fsmap epoch 37 v40) v1 ==== 107+0+5249 (secure 0 0 0) 0x7f0fb8062390 con 0x7f0fc41032e0 2026-03-10T10:27:39.984 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:39.984 
INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":37,"btime":"2026-03-10T10:25:17:228043+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":26},{"gid":34368,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/462039658","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":462039658},{"type":"v1","addr":"192.168.123.105:6825","nonce":462039658}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses 
versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":33},{"gid":44325,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/682293963","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":682293963},{"type":"v1","addr":"192.168.123.105:6827","nonce":682293963}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":37}],"filesystems":[{"mdsmap":{"epoch":37,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:25:16.233246+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":83,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in 
omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34360},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34360":{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":0,"incarnation":35,"state":"up:rejoin","state_seq":10,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-10T10:27:39.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.985+0000 7f0fcc17b700 1 -- 192.168.123.102:0/3546466546 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0fb00778e0 msgr2=0x7f0fb0079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:39.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.986+0000 7f0fcc17b700 1 --2- 192.168.123.102:0/3546466546 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0fb00778e0 0x7f0fb0079da0 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f0fc0005950 tx=0x7f0fc00058e0 comp rx=0 tx=0).stop 
2026-03-10T10:27:39.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.986+0000 7f0fcc17b700 1 -- 192.168.123.102:0/3546466546 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0fc41032e0 msgr2=0x7f0fc4198e20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:39.987 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.986+0000 7f0fcc17b700 1 --2- 192.168.123.102:0/3546466546 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0fc41032e0 0x7f0fc4198e20 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f0fb800eb10 tx=0x7f0fb800eed0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.988 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.986+0000 7f0fcc17b700 1 -- 192.168.123.102:0/3546466546 shutdown_connections 2026-03-10T10:27:39.988 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.986+0000 7f0fcc17b700 1 --2- 192.168.123.102:0/3546466546 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f0fb00778e0 0x7f0fb0079da0 unknown :-1 s=CLOSED pgs=135 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.988 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.987+0000 7f0fcc17b700 1 --2- 192.168.123.102:0/3546466546 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0fc41032e0 0x7f0fc4198e20 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.988 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.987+0000 7f0fcc17b700 1 --2- 192.168.123.102:0/3546466546 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f0fc4103c90 0x7f0fc4199360 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:39.988 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.987+0000 7f0fcc17b700 1 -- 192.168.123.102:0/3546466546 >> 192.168.123.102:0/3546466546 conn(0x7f0fc40feb50 msgr2=0x7f0fc4100180 unknown 
:-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:39.988 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.987+0000 7f0fcc17b700 1 -- 192.168.123.102:0/3546466546 shutdown_connections 2026-03-10T10:27:39.988 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:39.987+0000 7f0fcc17b700 1 -- 192.168.123.102:0/3546466546 wait complete. 2026-03-10T10:27:39.989 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 37 2026-03-10T10:27:40.057 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 37 2026-03-10T10:27:40.057 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 38 2026-03-10T10:27:40.188 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:39 vm05.local ceph-mon[103593]: pgmap v253: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:27:40.189 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:39 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/1884844782' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 36, "format": "json"}]: dispatch 2026-03-10T10:27:40.221 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:40.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.495+0000 7f7ff3d96700 1 -- 192.168.123.102:0/209741878 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7fec101280 msgr2=0x7f7fec101660 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:40.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.495+0000 7f7ff3d96700 1 --2- 192.168.123.102:0/209741878 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7fec101280 0x7f7fec101660 secure :-1 s=READY pgs=186 cs=0 l=1 rev1=1 crypto rx=0x7f7fdc009b50 tx=0x7f7fdc009e60 comp rx=0 tx=0).stop 2026-03-10T10:27:40.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.496+0000 7f7ff3d96700 1 -- 192.168.123.102:0/209741878 shutdown_connections 2026-03-10T10:27:40.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.496+0000 7f7ff3d96700 1 --2- 192.168.123.102:0/209741878 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7fec101c30 0x7f7fec105b70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:40.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.496+0000 7f7ff3d96700 1 --2- 192.168.123.102:0/209741878 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7fec101280 0x7f7fec101660 unknown :-1 s=CLOSED pgs=186 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:40.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.496+0000 7f7ff3d96700 1 -- 192.168.123.102:0/209741878 >> 192.168.123.102:0/209741878 conn(0x7f7fec078ea0 msgr2=0x7f7fec0792b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:40.498 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.496+0000 7f7ff3d96700 1 -- 192.168.123.102:0/209741878 shutdown_connections 2026-03-10T10:27:40.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.496+0000 7f7ff3d96700 1 -- 192.168.123.102:0/209741878 wait complete. 2026-03-10T10:27:40.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.497+0000 7f7ff3d96700 1 Processor -- start 2026-03-10T10:27:40.498 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.497+0000 7f7ff3d96700 1 -- start start 2026-03-10T10:27:40.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.497+0000 7f7ff3d96700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7fec101280 0x7f7fec073090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:40.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.497+0000 7f7ff3d96700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7fec101c30 0x7f7fec0735d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:40.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.497+0000 7f7ff3d96700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7fec073b10 con 0x7f7fec101c30 2026-03-10T10:27:40.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.497+0000 7f7ff3d96700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7fec073c50 con 0x7f7fec101280 2026-03-10T10:27:40.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.499+0000 7f7ff1b32700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7fec101280 0x7f7fec073090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:40.501 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.499+0000 7f7ff1b32700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7fec101280 0x7f7fec073090 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.102:42044/0 (socket says 192.168.123.102:42044) 2026-03-10T10:27:40.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.499+0000 7f7ff1b32700 1 -- 192.168.123.102:0/4264314434 learned_addr learned my addr 192.168.123.102:0/4264314434 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:40.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.499+0000 7f7ff1b32700 1 -- 192.168.123.102:0/4264314434 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7fec101c30 msgr2=0x7f7fec0735d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:27:40.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.500+0000 7f7ff1331700 1 --2- 192.168.123.102:0/4264314434 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7fec101c30 0x7f7fec0735d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:40.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.500+0000 7f7ff1b32700 1 --2- 192.168.123.102:0/4264314434 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7fec101c30 0x7f7fec0735d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:40.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.500+0000 7f7ff1b32700 1 -- 192.168.123.102:0/4264314434 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7fdc0097e0 con 0x7f7fec101280 2026-03-10T10:27:40.501 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.500+0000 
7f7ff1331700 1 --2- 192.168.123.102:0/4264314434 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7fec101c30 0x7f7fec0735d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:27:40.502 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.500+0000 7f7ff1b32700 1 --2- 192.168.123.102:0/4264314434 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7fec101280 0x7f7fec073090 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f7fdc006010 tx=0x7f7fdc004970 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:40.502 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.501+0000 7f7fe2ffd700 1 -- 192.168.123.102:0/4264314434 <== mon.1 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7fdc01d070 con 0x7f7fec101280 2026-03-10T10:27:40.502 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.501+0000 7f7ff3d96700 1 -- 192.168.123.102:0/4264314434 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7fec073ed0 con 0x7f7fec101280 2026-03-10T10:27:40.502 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.501+0000 7f7fe2ffd700 1 -- 192.168.123.102:0/4264314434 <== mon.1 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f7fdc004b80 con 0x7f7fec101280 2026-03-10T10:27:40.503 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.501+0000 7f7ff3d96700 1 -- 192.168.123.102:0/4264314434 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7fec1a2fe0 con 0x7f7fec101280 2026-03-10T10:27:40.503 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.502+0000 7f7fe2ffd700 1 -- 192.168.123.102:0/4264314434 <== mon.1 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7fdc01d070 con 0x7f7fec101280 
2026-03-10T10:27:40.503 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.502+0000 7f7ff3d96700 1 -- 192.168.123.102:0/4264314434 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7fec04ea90 con 0x7f7fec101280 2026-03-10T10:27:40.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.503+0000 7f7fe2ffd700 1 -- 192.168.123.102:0/4264314434 <== mon.1 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f7fdc00bc50 con 0x7f7fec101280 2026-03-10T10:27:40.505 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.504+0000 7f7fe2ffd700 1 --2- 192.168.123.102:0/4264314434 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7fd8077940 0x7f7fd8079e00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:40.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.504+0000 7f7ff1331700 1 --2- 192.168.123.102:0/4264314434 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7fd8077940 0x7f7fd8079e00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:40.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.504+0000 7f7ff1331700 1 --2- 192.168.123.102:0/4264314434 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7fd8077940 0x7f7fd8079e00 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7f7fec074880 tx=0x7f7fe8009450 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:40.506 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.504+0000 7f7fe2ffd700 1 -- 192.168.123.102:0/4264314434 <== mon.1 v2:192.168.123.105:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f7fdc031080 con 
0x7f7fec101280 2026-03-10T10:27:40.507 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.506+0000 7f7fe2ffd700 1 -- 192.168.123.102:0/4264314434 <== mon.1 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f7fdc05fdc0 con 0x7f7fec101280 2026-03-10T10:27:40.653 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.651+0000 7f7ff3d96700 1 -- 192.168.123.102:0/4264314434 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 38, "format": "json"} v 0) v1 -- 0x7f7fec066e80 con 0x7f7fec101280 2026-03-10T10:27:40.654 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.652+0000 7f7fe2ffd700 1 -- 192.168.123.102:0/4264314434 <== mon.1 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 38, "format": "json"}]=0 dumped fsmap epoch 38 v40) v1 ==== 107+0+5258 (secure 0 0 0) 0x7f7fdc05fdc0 con 0x7f7fec101280 2026-03-10T10:27:40.654 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:40.654 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":38,"btime":"2026-03-10T10:25:18:247519+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":26},{"gid":34368,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/462039658","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":462039658},{"type":"v1","addr":"192.168.123.105:6825","nonce":462039658}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":33},{"gid":44325,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/682293963","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":682293963},{"type":"v1","addr":"192.168.123.105:6827","nonce":682293963}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":37}],"filesystems":[{"mdsmap":{"epoch":38,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:25:18.247517+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":83,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34360},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34360":{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":0,"incarnation":35,"state":"up:active","state_seq":11,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34360,"qdb_cluster":[34360]},"id":1}]} 2026-03-10T10:27:40.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.655+0000 7f7ff3d96700 1 -- 192.168.123.102:0/4264314434 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7fd8077940 msgr2=0x7f7fd8079e00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:40.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.655+0000 7f7ff3d96700 1 --2- 192.168.123.102:0/4264314434 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7fd8077940 0x7f7fd8079e00 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7f7fec074880 tx=0x7f7fe8009450 comp rx=0 tx=0).stop 2026-03-10T10:27:40.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.655+0000 7f7ff3d96700 1 -- 192.168.123.102:0/4264314434 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7fec101280 msgr2=0x7f7fec073090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:40.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.655+0000 7f7ff3d96700 1 --2- 192.168.123.102:0/4264314434 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7fec101280 0x7f7fec073090 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f7fdc006010 tx=0x7f7fdc004970 comp rx=0 tx=0).stop 2026-03-10T10:27:40.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.656+0000 7f7ff3d96700 1 -- 192.168.123.102:0/4264314434 shutdown_connections 2026-03-10T10:27:40.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.656+0000 7f7ff3d96700 1 --2- 192.168.123.102:0/4264314434 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f7fd8077940 0x7f7fd8079e00 unknown :-1 s=CLOSED pgs=136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:40.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.656+0000 7f7ff3d96700 1 --2- 192.168.123.102:0/4264314434 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7fec101280 0x7f7fec073090 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:40.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.656+0000 7f7ff3d96700 1 --2- 192.168.123.102:0/4264314434 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f7fec101c30 0x7f7fec0735d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:40.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.656+0000 7f7ff3d96700 1 -- 192.168.123.102:0/4264314434 >> 192.168.123.102:0/4264314434 conn(0x7f7fec078ea0 msgr2=0x7f7fec0ffa30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:40.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.656+0000 7f7ff3d96700 1 -- 
192.168.123.102:0/4264314434 shutdown_connections 2026-03-10T10:27:40.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:40.656+0000 7f7ff3d96700 1 -- 192.168.123.102:0/4264314434 wait complete. 2026-03-10T10:27:40.658 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 38 2026-03-10T10:27:40.708 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 38 2026-03-10T10:27:40.708 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph fs dump --format=json 39 2026-03-10T10:27:40.877 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:41.134 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:40 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/3546466546' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 37, "format": "json"}]: dispatch 2026-03-10T10:27:41.134 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:40 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/4264314434' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 38, "format": "json"}]: dispatch 2026-03-10T10:27:41.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.159+0000 7fe0cae33700 1 -- 192.168.123.102:0/111141489 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe0c4103d70 msgr2=0x7fe0c4107dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:41.161 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.159+0000 7fe0cae33700 1 --2- 192.168.123.102:0/111141489 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe0c4103d70 0x7fe0c4107dc0 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7fe0b8009b00 tx=0x7fe0b8009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:41.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.160+0000 7fe0cae33700 1 -- 192.168.123.102:0/111141489 shutdown_connections 2026-03-10T10:27:41.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.160+0000 7fe0cae33700 1 --2- 192.168.123.102:0/111141489 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe0c4103d70 0x7fe0c4107dc0 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:41.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.160+0000 7fe0cae33700 1 --2- 192.168.123.102:0/111141489 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe0c41033c0 0x7fe0c41037a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:41.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.160+0000 7fe0cae33700 1 -- 192.168.123.102:0/111141489 >> 192.168.123.102:0/111141489 conn(0x7fe0c40fec30 msgr2=0x7fe0c4101050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:41.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.161+0000 7fe0cae33700 1 -- 192.168.123.102:0/111141489 shutdown_connections 2026-03-10T10:27:41.162 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.161+0000 7fe0cae33700 1 -- 192.168.123.102:0/111141489 wait complete. 2026-03-10T10:27:41.162 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.161+0000 7fe0cae33700 1 Processor -- start 2026-03-10T10:27:41.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.161+0000 7fe0cae33700 1 -- start start 2026-03-10T10:27:41.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.162+0000 7fe0cae33700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe0c41033c0 0x7fe0c4198ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:41.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.162+0000 7fe0cae33700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe0c4103d70 0x7fe0c4199400 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:41.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.162+0000 7fe0cae33700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe0c4199ae0 con 0x7fe0c4103d70 2026-03-10T10:27:41.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.162+0000 7fe0cae33700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe0c419d870 con 0x7fe0c41033c0 2026-03-10T10:27:41.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.162+0000 7fe0c3fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe0c4103d70 0x7fe0c4199400 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:41.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.162+0000 7fe0c3fff700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe0c4103d70 0x7fe0c4199400 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am v2:192.168.123.102:44768/0 (socket says 192.168.123.102:44768) 2026-03-10T10:27:41.163 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.162+0000 7fe0c3fff700 1 -- 192.168.123.102:0/3875419520 learned_addr learned my addr 192.168.123.102:0/3875419520 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:41.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.162+0000 7fe0c3fff700 1 -- 192.168.123.102:0/3875419520 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe0c41033c0 msgr2=0x7fe0c4198ec0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:27:41.165 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.162+0000 7fe0c3fff700 1 --2- 192.168.123.102:0/3875419520 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe0c41033c0 0x7fe0c4198ec0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:41.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.162+0000 7fe0c3fff700 1 -- 192.168.123.102:0/3875419520 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe0b4009710 con 0x7fe0c4103d70 2026-03-10T10:27:41.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.162+0000 7fe0c3fff700 1 --2- 192.168.123.102:0/3875419520 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe0c4103d70 0x7fe0c4199400 secure :-1 s=READY pgs=187 cs=0 l=1 rev1=1 crypto rx=0x7fe0b800bb30 tx=0x7fe0b800bc10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:41.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.163+0000 7fe0c1ffb700 1 -- 192.168.123.102:0/3875419520 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe0b801d070 con 0x7fe0c4103d70 
2026-03-10T10:27:41.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.163+0000 7fe0c1ffb700 1 -- 192.168.123.102:0/3875419520 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fe0b800f460 con 0x7fe0c4103d70 2026-03-10T10:27:41.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.163+0000 7fe0c1ffb700 1 -- 192.168.123.102:0/3875419520 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe0b8021620 con 0x7fe0c4103d70 2026-03-10T10:27:41.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.163+0000 7fe0cae33700 1 -- 192.168.123.102:0/3875419520 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe0b80097e0 con 0x7fe0c4103d70 2026-03-10T10:27:41.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.163+0000 7fe0cae33700 1 -- 192.168.123.102:0/3875419520 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe0c419dd70 con 0x7fe0c4103d70 2026-03-10T10:27:41.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.164+0000 7fe0c1ffb700 1 -- 192.168.123.102:0/3875419520 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fe0b8021780 con 0x7fe0c4103d70 2026-03-10T10:27:41.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.164+0000 7fe0c1ffb700 1 --2- 192.168.123.102:0/3875419520 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe0ac0800d0 0x7fe0ac082590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:41.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.164+0000 7fe0c1ffb700 1 -- 192.168.123.102:0/3875419520 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7fe0b809bf00 con 0x7fe0c4103d70 2026-03-10T10:27:41.166 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.165+0000 7fe0c8bcf700 1 --2- 192.168.123.102:0/3875419520 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe0ac0800d0 0x7fe0ac082590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:41.166 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.165+0000 7fe0cae33700 1 -- 192.168.123.102:0/3875419520 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe0c404ea90 con 0x7fe0c4103d70 2026-03-10T10:27:41.169 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.166+0000 7fe0c8bcf700 1 --2- 192.168.123.102:0/3875419520 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe0ac0800d0 0x7fe0ac082590 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7fe0b4009e90 tx=0x7fe0b4009450 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:41.170 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.168+0000 7fe0c1ffb700 1 -- 192.168.123.102:0/3875419520 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe0b8064630 con 0x7fe0c4103d70 2026-03-10T10:27:41.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:40 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/3546466546' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 37, "format": "json"}]: dispatch 2026-03-10T10:27:41.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:40 vm05.local ceph-mon[103593]: from='client.? 
192.168.123.102:0/4264314434' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 38, "format": "json"}]: dispatch 2026-03-10T10:27:41.310 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.309+0000 7fe0cae33700 1 -- 192.168.123.102:0/3875419520 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 39, "format": "json"} v 0) v1 -- 0x7fe0c419a220 con 0x7fe0c4103d70 2026-03-10T10:27:41.311 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.309+0000 7fe0c1ffb700 1 -- 192.168.123.102:0/3875419520 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 39, "format": "json"}]=0 dumped fsmap epoch 39 v40) v1 ==== 107+0+5257 (secure 0 0 0) 0x7fe0b8063d80 con 0x7fe0c4103d70 2026-03-10T10:27:41.311 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:27:41.311 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":39,"btime":"2026-03-10T10:25:20:363820+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34364,"name":"cephfs.vm02.stcvsz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.102:6829/3727526116","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":3727526116},{"type":"v1","addr":"192.168.123.102:6829","nonce":3727526116}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate 
object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":26},{"gid":34368,"name":"cephfs.vm05.liatdh","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6825/462039658","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":462039658},{"type":"v1","addr":"192.168.123.105:6825","nonce":462039658}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":33},{"gid":44325,"name":"cephfs.vm05.sudjys","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/682293963","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":682293963},{"type":"v1","addr":"192.168.123.105:6827","nonce":682293963}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":37}],"filesystems":[{"mdsmap":{"epoch":39,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-10T10:16:53.248683+0000","modified":"2026-03-10T10:25:19.366485+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":83,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34360},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34360":{"gid":34360,"name":"cephfs.vm02.zymcrs","rank":0,"incarnation":35,"state":"up:active","state_seq":11,"addr":"192.168.123.102:6827/965109167","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":965109167},{"type":"v1","addr":"192.168.123.102:6827","nonce":965109167}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log 
segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34360,"qdb_cluster":[34360]},"id":1}]} 2026-03-10T10:27:41.313 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.312+0000 7fe0cae33700 1 -- 192.168.123.102:0/3875419520 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe0ac0800d0 msgr2=0x7fe0ac082590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:41.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.312+0000 7fe0cae33700 1 --2- 192.168.123.102:0/3875419520 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe0ac0800d0 0x7fe0ac082590 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7fe0b4009e90 tx=0x7fe0b4009450 comp rx=0 tx=0).stop 2026-03-10T10:27:41.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.312+0000 7fe0cae33700 1 -- 192.168.123.102:0/3875419520 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe0c4103d70 msgr2=0x7fe0c4199400 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:41.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.312+0000 7fe0cae33700 1 --2- 192.168.123.102:0/3875419520 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe0c4103d70 0x7fe0c4199400 secure :-1 s=READY pgs=187 cs=0 l=1 rev1=1 crypto rx=0x7fe0b800bb30 tx=0x7fe0b800bc10 comp rx=0 tx=0).stop 2026-03-10T10:27:41.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.313+0000 7fe0cae33700 1 -- 192.168.123.102:0/3875419520 shutdown_connections 2026-03-10T10:27:41.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.313+0000 7fe0cae33700 1 --2- 192.168.123.102:0/3875419520 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7fe0ac0800d0 0x7fe0ac082590 unknown :-1 s=CLOSED pgs=137 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:41.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.313+0000 7fe0cae33700 1 --2- 192.168.123.102:0/3875419520 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe0c41033c0 0x7fe0c4198ec0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:41.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.313+0000 7fe0cae33700 1 --2- 192.168.123.102:0/3875419520 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7fe0c4103d70 0x7fe0c4199400 unknown :-1 s=CLOSED pgs=187 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:41.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.313+0000 7fe0cae33700 1 -- 192.168.123.102:0/3875419520 >> 192.168.123.102:0/3875419520 conn(0x7fe0c40fec30 msgr2=0x7fe0c4100220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:41.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.313+0000 7fe0cae33700 1 -- 192.168.123.102:0/3875419520 shutdown_connections 2026-03-10T10:27:41.314 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.313+0000 7fe0cae33700 1 -- 192.168.123.102:0/3875419520 wait complete. 2026-03-10T10:27:41.315 INFO:teuthology.orchestra.run.vm02.stderr:dumped fsmap epoch 39 2026-03-10T10:27:41.367 DEBUG:teuthology.run_tasks:Unwinding manager ceph-fuse 2026-03-10T10:27:41.370 INFO:tasks.ceph_fuse:Unmounting ceph-fuse clients... 
2026-03-10T10:27:41.370 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:27:41.370 DEBUG:teuthology.orchestra.run.vm02:> dd if=/proc/self/mounts of=/dev/stdout 2026-03-10T10:27:41.388 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:27:41.388 DEBUG:teuthology.orchestra.run.vm02:> dd if=/proc/self/mounts of=/dev/stdout 2026-03-10T10:27:41.445 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd blocklist ls 2026-03-10T10:27:41.650 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:41.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.940+0000 7f9523a18700 1 -- 192.168.123.102:0/2671869387 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f951c073a50 msgr2=0x7f951c111940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:41.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.940+0000 7f9523a18700 1 --2- 192.168.123.102:0/2671869387 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f951c073a50 0x7f951c111940 secure :-1 s=READY pgs=188 cs=0 l=1 rev1=1 crypto rx=0x7f9518009b00 tx=0x7f9518009e10 comp rx=0 tx=0).stop 2026-03-10T10:27:41.942 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.941+0000 7f9523a18700 1 -- 192.168.123.102:0/2671869387 shutdown_connections 2026-03-10T10:27:41.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.941+0000 7f9523a18700 1 --2- 192.168.123.102:0/2671869387 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f951c073a50 0x7f951c111940 unknown :-1 s=CLOSED pgs=188 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:41.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.941+0000 7f9523a18700 1 --2- 192.168.123.102:0/2671869387 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f951c073130 0x7f951c073510 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:41.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.941+0000 7f9523a18700 1 -- 192.168.123.102:0/2671869387 >> 192.168.123.102:0/2671869387 conn(0x7f951c0fc920 msgr2=0x7f951c0fed40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:41.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.941+0000 7f9523a18700 1 -- 192.168.123.102:0/2671869387 shutdown_connections 2026-03-10T10:27:41.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.941+0000 7f9523a18700 1 -- 192.168.123.102:0/2671869387 wait complete. 2026-03-10T10:27:41.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.942+0000 7f9523a18700 1 Processor -- start 2026-03-10T10:27:41.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.942+0000 7f9523a18700 1 -- start start 2026-03-10T10:27:41.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.942+0000 7f9523a18700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f951c073130 0x7f951c19d0e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:41.943 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.942+0000 7f95217b4700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f951c073130 0x7f951c19d0e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:41.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.942+0000 7f95217b4700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f951c073130 0x7f951c19d0e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:44780/0 (socket says 192.168.123.102:44780) 2026-03-10T10:27:41.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.942+0000 7f9523a18700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f951c073a50 0x7f951c19d620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:41.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.942+0000 7f9523a18700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f951c19dd00 con 0x7f951c073130 2026-03-10T10:27:41.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.942+0000 7f9523a18700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f951c1a1a90 con 0x7f951c073a50 2026-03-10T10:27:41.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.943+0000 7f95217b4700 1 -- 192.168.123.102:0/1011739048 learned_addr learned my addr 192.168.123.102:0/1011739048 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:41.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.943+0000 7f9520fb3700 1 --2- 192.168.123.102:0/1011739048 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f951c073a50 0x7f951c19d620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:41.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.943+0000 7f95217b4700 1 -- 192.168.123.102:0/1011739048 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f951c073a50 msgr2=0x7f951c19d620 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:41.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.943+0000 7f95217b4700 1 --2- 192.168.123.102:0/1011739048 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f951c073a50 0x7f951c19d620 unknown :-1 s=AUTH_CONNECTING 
pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:41.944 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.943+0000 7f95217b4700 1 -- 192.168.123.102:0/1011739048 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f95180097e0 con 0x7f951c073130 2026-03-10T10:27:41.945 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.943+0000 7f95217b4700 1 --2- 192.168.123.102:0/1011739048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f951c073130 0x7f951c19d0e0 secure :-1 s=READY pgs=189 cs=0 l=1 rev1=1 crypto rx=0x7f950c00dc40 tx=0x7f950c00df50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:41.946 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.944+0000 7f95127fc700 1 -- 192.168.123.102:0/1011739048 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f950c0098e0 con 0x7f951c073130 2026-03-10T10:27:41.946 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.944+0000 7f9523a18700 1 -- 192.168.123.102:0/1011739048 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f951c1a1d70 con 0x7f951c073130 2026-03-10T10:27:41.946 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.944+0000 7f9523a18700 1 -- 192.168.123.102:0/1011739048 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f951c1a22c0 con 0x7f951c073130 2026-03-10T10:27:41.946 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.944+0000 7f95127fc700 1 -- 192.168.123.102:0/1011739048 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f950c010460 con 0x7f951c073130 2026-03-10T10:27:41.946 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.944+0000 7f95127fc700 1 -- 192.168.123.102:0/1011739048 <== mon.0 v2:192.168.123.102:3300/0 
3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f950c00f5d0 con 0x7f951c073130 2026-03-10T10:27:41.947 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.945+0000 7f95127fc700 1 -- 192.168.123.102:0/1011739048 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f950c00f730 con 0x7f951c073130 2026-03-10T10:27:41.947 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.946+0000 7f95127fc700 1 --2- 192.168.123.102:0/1011739048 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9508077910 0x7f9508079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:41.947 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.946+0000 7f9520fb3700 1 --2- 192.168.123.102:0/1011739048 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9508077910 0x7f9508079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:41.948 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.946+0000 7f95127fc700 1 -- 192.168.123.102:0/1011739048 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f950c09aab0 con 0x7f951c073130 2026-03-10T10:27:41.948 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.947+0000 7f9520fb3700 1 --2- 192.168.123.102:0/1011739048 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9508077910 0x7f9508079dd0 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f951c19e700 tx=0x7f9518005fb0 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:41.948 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.947+0000 7f9523a18700 1 -- 192.168.123.102:0/1011739048 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- 
mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9500005320 con 0x7f951c073130 2026-03-10T10:27:41.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:41.950+0000 7f95127fc700 1 -- 192.168.123.102:0/1011739048 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f950c062a60 con 0x7f951c073130 2026-03-10T10:27:42.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:41 vm02.local ceph-mon[110129]: pgmap v254: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:42.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:41 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/3875419520' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 39, "format": "json"}]: dispatch 2026-03-10T10:27:42.078 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.076+0000 7f9523a18700 1 -- 192.168.123.102:0/1011739048 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd blocklist ls"} v 0) v1 -- 0x7f9500005f70 con 0x7f951c073130 2026-03-10T10:27:42.079 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.077+0000 7f95127fc700 1 -- 192.168.123.102:0/1011739048 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd blocklist ls"}]=0 listed 41 entries v83) v1 ==== 81+0+2524 (secure 0 0 0) 0x7f950c0209f0 con 0x7f951c073130 2026-03-10T10:27:42.079 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6825/3526415895 2026-03-11T10:24:59.119237+0000 2026-03-10T10:27:42.079 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6824/3526415895 2026-03-11T10:24:59.119237+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6829/2194475647 2026-03-11T10:24:50.477291+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6827/658252295 
2026-03-11T10:24:36.594949+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/336143856 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6800/2642809286 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:0/1945774564 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/3947397798 2026-03-11T10:19:40.235266+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6828/1021252581 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6801/2642809286 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:0/4113500240 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6828/2194475647 2026-03-11T10:24:50.477291+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1517189708 2026-03-11T10:14:37.093481+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/2365816117 2026-03-11T10:15:14.708009+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:0/22834603 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6826/658252295 2026-03-11T10:24:36.594949+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/3564367406 2026-03-11T10:19:40.235266+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1164545653 2026-03-11T10:15:14.708009+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6800/2 2026-03-11T10:14:24.165331+0000 2026-03-10T10:27:42.080 
INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1808407342 2026-03-11T10:19:40.235266+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1117450327 2026-03-11T10:14:37.093481+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/982532372 2026-03-11T10:14:37.093481+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1091112719 2026-03-11T10:14:24.165331+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/2843173556 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/2663851614 2026-03-11T10:19:40.235266+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6801/2 2026-03-11T10:14:24.165331+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6829/1021252581 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:0/3252030326 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/245502493 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/2700080577 2026-03-11T10:14:24.165331+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1508780527 2026-03-11T10:15:14.708009+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6824/2054341310 2026-03-11T10:16:59.221010+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1091306464 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6825/2054341310 2026-03-11T10:16:59.221010+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6826/3693577687 
2026-03-11T10:25:10.902179+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:0/3370052680 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:0/1396392175 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6827/3693577687 2026-03-11T10:25:10.902179+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/3931430898 2026-03-11T10:14:24.165331+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/2162611976 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.080 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/3744571840 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.081+0000 7f9523a18700 1 -- 192.168.123.102:0/1011739048 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9508077910 msgr2=0x7f9508079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:42.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.081+0000 7f9523a18700 1 --2- 192.168.123.102:0/1011739048 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9508077910 0x7f9508079dd0 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f951c19e700 tx=0x7f9518005fb0 comp rx=0 tx=0).stop 2026-03-10T10:27:42.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.081+0000 7f9523a18700 1 -- 192.168.123.102:0/1011739048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f951c073130 msgr2=0x7f951c19d0e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:42.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.081+0000 7f9523a18700 1 --2- 192.168.123.102:0/1011739048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f951c073130 
0x7f951c19d0e0 secure :-1 s=READY pgs=189 cs=0 l=1 rev1=1 crypto rx=0x7f950c00dc40 tx=0x7f950c00df50 comp rx=0 tx=0).stop 2026-03-10T10:27:42.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.081+0000 7f9523a18700 1 -- 192.168.123.102:0/1011739048 shutdown_connections 2026-03-10T10:27:42.082 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.081+0000 7f9523a18700 1 --2- 192.168.123.102:0/1011739048 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f951c073130 0x7f951c19d0e0 unknown :-1 s=CLOSED pgs=189 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:42.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.081+0000 7f9523a18700 1 --2- 192.168.123.102:0/1011739048 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f9508077910 0x7f9508079dd0 unknown :-1 s=CLOSED pgs=138 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:42.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.081+0000 7f9523a18700 1 --2- 192.168.123.102:0/1011739048 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f951c073a50 0x7f951c19d620 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:42.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.081+0000 7f9523a18700 1 -- 192.168.123.102:0/1011739048 >> 192.168.123.102:0/1011739048 conn(0x7f951c0fc920 msgr2=0x7f951c103450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:42.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.082+0000 7f9523a18700 1 -- 192.168.123.102:0/1011739048 shutdown_connections 2026-03-10T10:27:42.083 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.082+0000 7f9523a18700 1 -- 192.168.123.102:0/1011739048 wait complete. 
2026-03-10T10:27:42.083 INFO:teuthology.orchestra.run.vm02.stderr:listed 41 entries 2026-03-10T10:27:42.144 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T10:27:42.144 DEBUG:teuthology.orchestra.run.vm02:> dd if=/proc/self/mounts of=/dev/stdout 2026-03-10T10:27:42.161 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph osd blocklist ls 2026-03-10T10:27:42.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:41 vm05.local ceph-mon[103593]: pgmap v254: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:42.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:41 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/3875419520' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 39, "format": "json"}]: dispatch 2026-03-10T10:27:42.363 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config 2026-03-10T10:27:42.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.647+0000 7f74d99a4700 1 -- 192.168.123.102:0/3958515812 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f74d4068df0 msgr2=0x7f74d410d5b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:42.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.647+0000 7f74d99a4700 1 --2- 192.168.123.102:0/3958515812 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f74d4068df0 0x7f74d410d5b0 secure :-1 s=READY pgs=190 cs=0 l=1 rev1=1 crypto rx=0x7f74bc009b80 tx=0x7f74bc009e90 comp rx=0 tx=0).stop 2026-03-10T10:27:42.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.648+0000 7f74d99a4700 1 -- 192.168.123.102:0/3958515812 shutdown_connections 2026-03-10T10:27:42.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.648+0000 7f74d99a4700 1 --2- 
192.168.123.102:0/3958515812 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f74d4068df0 0x7f74d410d5b0 unknown :-1 s=CLOSED pgs=190 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:42.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.648+0000 7f74d99a4700 1 --2- 192.168.123.102:0/3958515812 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74d40684d0 0x7f74d40688b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:42.649 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.648+0000 7f74d99a4700 1 -- 192.168.123.102:0/3958515812 >> 192.168.123.102:0/3958515812 conn(0x7f74d4075960 msgr2=0x7f74d4075d70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:42.650 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.648+0000 7f74d99a4700 1 -- 192.168.123.102:0/3958515812 shutdown_connections 2026-03-10T10:27:42.650 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.649+0000 7f74d99a4700 1 -- 192.168.123.102:0/3958515812 wait complete. 
2026-03-10T10:27:42.650 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.649+0000 7f74d99a4700 1 Processor -- start 2026-03-10T10:27:42.650 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.649+0000 7f74d99a4700 1 -- start start 2026-03-10T10:27:42.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.650+0000 7f74d99a4700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f74d40684d0 0x7f74d4198d60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:42.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.650+0000 7f74d99a4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74d4068df0 0x7f74d41992a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:42.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.650+0000 7f74d99a4700 1 -- --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f74d4199980 con 0x7f74d40684d0 2026-03-10T10:27:42.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.650+0000 7f74d99a4700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f74d419d710 con 0x7f74d4068df0 2026-03-10T10:27:42.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.650+0000 7f74d2ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f74d40684d0 0x7f74d4198d60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:42.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.650+0000 7f74d2ffd700 1 --2- >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f74d40684d0 0x7f74d4198d60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.102:3300/0 says I am 
v2:192.168.123.102:44810/0 (socket says 192.168.123.102:44810) 2026-03-10T10:27:42.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.650+0000 7f74d2ffd700 1 -- 192.168.123.102:0/1064187679 learned_addr learned my addr 192.168.123.102:0/1064187679 (peer_addr_for_me v2:192.168.123.102:0/0) 2026-03-10T10:27:42.652 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.650+0000 7f74d27fc700 1 --2- 192.168.123.102:0/1064187679 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74d4068df0 0x7f74d41992a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:42.652 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.651+0000 7f74d2ffd700 1 -- 192.168.123.102:0/1064187679 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74d4068df0 msgr2=0x7f74d41992a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:42.652 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.651+0000 7f74d2ffd700 1 --2- 192.168.123.102:0/1064187679 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74d4068df0 0x7f74d41992a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:42.652 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.651+0000 7f74d2ffd700 1 -- 192.168.123.102:0/1064187679 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f74bc0097e0 con 0x7f74d40684d0 2026-03-10T10:27:42.652 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.651+0000 7f74d27fc700 1 --2- 192.168.123.102:0/1064187679 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74d4068df0 0x7f74d41992a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:27:42.652 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.651+0000 7f74d2ffd700 1 --2- 192.168.123.102:0/1064187679 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f74d40684d0 0x7f74d4198d60 secure :-1 s=READY pgs=191 cs=0 l=1 rev1=1 crypto rx=0x7f74c400c610 tx=0x7f74c400c9d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:42.653 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.651+0000 7f74d89a2700 1 -- 192.168.123.102:0/1064187679 <== mon.0 v2:192.168.123.102:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f74c401e430 con 0x7f74d40684d0 2026-03-10T10:27:42.653 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.651+0000 7f74d89a2700 1 -- 192.168.123.102:0/1064187679 <== mon.0 v2:192.168.123.102:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f74c401ea70 con 0x7f74d40684d0 2026-03-10T10:27:42.653 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.652+0000 7f74d89a2700 1 -- 192.168.123.102:0/1064187679 <== mon.0 v2:192.168.123.102:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f74c400bce0 con 0x7f74d40684d0 2026-03-10T10:27:42.653 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.652+0000 7f74d99a4700 1 -- 192.168.123.102:0/1064187679 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f74d419d990 con 0x7f74d40684d0 2026-03-10T10:27:42.653 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.652+0000 7f74d99a4700 1 -- 192.168.123.102:0/1064187679 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f74d419de60 con 0x7f74d40684d0 2026-03-10T10:27:42.656 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.654+0000 7f74d89a2700 1 -- 192.168.123.102:0/1064187679 <== mon.0 v2:192.168.123.102:3300/0 4 ==== mgrmap(e 37) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f74c400b3e0 con 
0x7f74d40684d0 2026-03-10T10:27:42.656 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.654+0000 7f74d99a4700 1 -- 192.168.123.102:0/1064187679 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f74d410ad20 con 0x7f74d40684d0 2026-03-10T10:27:42.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.655+0000 7f74d89a2700 1 --2- 192.168.123.102:0/1064187679 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f74c0077710 0x7f74c0079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:27:42.657 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.655+0000 7f74d89a2700 1 -- 192.168.123.102:0/1064187679 <== mon.0 v2:192.168.123.102:3300/0 5 ==== osd_map(83..83 src has 1..83) v4 ==== 6566+0+0 (secure 0 0 0) 0x7f74c40a1f90 con 0x7f74d40684d0 2026-03-10T10:27:42.658 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.657+0000 7f74d27fc700 1 --2- 192.168.123.102:0/1064187679 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f74c0077710 0x7f74c0079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:27:42.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.658+0000 7f74d27fc700 1 --2- 192.168.123.102:0/1064187679 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f74c0077710 0x7f74c0079bd0 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7f74bc005380 tx=0x7f74bc005a90 comp rx=0 tx=0).ready entity=mgr.24549 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:27:42.659 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.658+0000 7f74d89a2700 1 -- 192.168.123.102:0/1064187679 <== mon.0 v2:192.168.123.102:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 
v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f74c406a690 con 0x7f74d40684d0 2026-03-10T10:27:42.793 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.791+0000 7f74d99a4700 1 -- 192.168.123.102:0/1064187679 --> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] -- mon_command({"prefix": "osd blocklist ls"} v 0) v1 -- 0x7f74d404ea90 con 0x7f74d40684d0 2026-03-10T10:27:42.793 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.792+0000 7f74d89a2700 1 -- 192.168.123.102:0/1064187679 <== mon.0 v2:192.168.123.102:3300/0 7 ==== mon_command_ack([{"prefix": "osd blocklist ls"}]=0 listed 41 entries v83) v1 ==== 81+0+2524 (secure 0 0 0) 0x7f74c4069de0 con 0x7f74d40684d0 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6825/3526415895 2026-03-11T10:24:59.119237+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6824/3526415895 2026-03-11T10:24:59.119237+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6829/2194475647 2026-03-11T10:24:50.477291+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6827/658252295 2026-03-11T10:24:36.594949+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/336143856 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6800/2642809286 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:0/1945774564 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/3947397798 2026-03-11T10:19:40.235266+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6828/1021252581 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6801/2642809286 2026-03-11T10:20:36.492703+0000 
2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:0/4113500240 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6828/2194475647 2026-03-11T10:24:50.477291+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1517189708 2026-03-11T10:14:37.093481+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/2365816117 2026-03-11T10:15:14.708009+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:0/22834603 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6826/658252295 2026-03-11T10:24:36.594949+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/3564367406 2026-03-11T10:19:40.235266+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1164545653 2026-03-11T10:15:14.708009+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6800/2 2026-03-11T10:14:24.165331+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1808407342 2026-03-11T10:19:40.235266+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1117450327 2026-03-11T10:14:37.093481+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/982532372 2026-03-11T10:14:37.093481+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1091112719 2026-03-11T10:14:24.165331+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/2843173556 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/2663851614 2026-03-11T10:19:40.235266+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:6801/2 
2026-03-11T10:14:24.165331+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6829/1021252581 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:0/3252030326 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/245502493 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/2700080577 2026-03-11T10:14:24.165331+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1508780527 2026-03-11T10:15:14.708009+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6824/2054341310 2026-03-11T10:16:59.221010+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/1091306464 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6825/2054341310 2026-03-11T10:16:59.221010+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6826/3693577687 2026-03-11T10:25:10.902179+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:0/3370052680 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:0/1396392175 2026-03-11T10:20:03.873339+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.105:6827/3693577687 2026-03-11T10:25:10.902179+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/3931430898 2026-03-11T10:14:24.165331+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/2162611976 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.795 INFO:teuthology.orchestra.run.vm02.stdout:192.168.123.102:0/3744571840 2026-03-11T10:20:36.492703+0000 2026-03-10T10:27:42.796 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.795+0000 7f74d99a4700 1 -- 192.168.123.102:0/1064187679 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f74c0077710 msgr2=0x7f74c0079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:42.796 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.795+0000 7f74d99a4700 1 --2- 192.168.123.102:0/1064187679 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] conn(0x7f74c0077710 0x7f74c0079bd0 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7f74bc005380 tx=0x7f74bc005a90 comp rx=0 tx=0).stop 2026-03-10T10:27:42.797 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.795+0000 7f74d99a4700 1 -- 192.168.123.102:0/1064187679 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f74d40684d0 msgr2=0x7f74d4198d60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:27:42.797 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.796+0000 7f74d99a4700 1 --2- 192.168.123.102:0/1064187679 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f74d40684d0 0x7f74d4198d60 secure :-1 s=READY pgs=191 cs=0 l=1 rev1=1 crypto rx=0x7f74c400c610 tx=0x7f74c400c9d0 comp rx=0 tx=0).stop 2026-03-10T10:27:42.797 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.796+0000 7f74d99a4700 1 -- 192.168.123.102:0/1064187679 shutdown_connections 2026-03-10T10:27:42.797 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.796+0000 7f74d99a4700 1 --2- 192.168.123.102:0/1064187679 >> [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] conn(0x7f74d40684d0 0x7f74d4198d60 unknown :-1 s=CLOSED pgs=191 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:42.797 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.796+0000 7f74d99a4700 1 --2- 192.168.123.102:0/1064187679 >> [v2:192.168.123.102:6800/4215644163,v1:192.168.123.102:6801/4215644163] 
conn(0x7f74c0077710 0x7f74c0079bd0 unknown :-1 s=CLOSED pgs=139 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:42.797 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.796+0000 7f74d99a4700 1 --2- 192.168.123.102:0/1064187679 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74d4068df0 0x7f74d41992a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:27:42.797 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.796+0000 7f74d99a4700 1 -- 192.168.123.102:0/1064187679 >> 192.168.123.102:0/1064187679 conn(0x7f74d4075960 msgr2=0x7f74d40fe960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:27:42.798 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.796+0000 7f74d99a4700 1 -- 192.168.123.102:0/1064187679 shutdown_connections 2026-03-10T10:27:42.798 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-10T10:27:42.797+0000 7f74d99a4700 1 -- 192.168.123.102:0/1064187679 wait complete. 2026-03-10T10:27:42.799 INFO:teuthology.orchestra.run.vm02.stderr:listed 41 entries 2026-03-10T10:27:42.868 INFO:tasks.cephfs.fuse_mount:Running fusermount -u on ubuntu@vm02.local... 2026-03-10T10:27:42.868 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-10T10:27:42.868 DEBUG:teuthology.orchestra.run.vm02:> sudo fusermount -u /home/ubuntu/cephtest/mnt.0 2026-03-10T10:27:42.899 INFO:teuthology.orchestra.run:waiting for 300 2026-03-10T10:27:43.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:42 vm02.local ceph-mon[110129]: from='client.? 192.168.123.102:0/1011739048' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T10:27:43.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:42 vm02.local ceph-mon[110129]: from='client.? 
192.168.123.102:0/1064187679' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T10:27:43.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:42 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1011739048' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T10:27:43.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:42 vm05.local ceph-mon[103593]: from='client.? 192.168.123.102:0/1064187679' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T10:27:44.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:43 vm02.local ceph-mon[110129]: pgmap v255: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:44.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:43 vm05.local ceph-mon[103593]: pgmap v255: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:46.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:45 vm02.local ceph-mon[110129]: pgmap v256: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:45 vm05.local ceph-mon[103593]: pgmap v256: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:48.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:47 vm02.local ceph-mon[110129]: pgmap v257: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:47 vm05.local ceph-mon[103593]: pgmap v257: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:50.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:49 vm02.local ceph-mon[110129]: pgmap v258: 
65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:27:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:49 vm05.local ceph-mon[103593]: pgmap v258: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:27:52.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:51 vm02.local ceph-mon[110129]: pgmap v259: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:52.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:27:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:51 vm05.local ceph-mon[103593]: pgmap v259: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:27:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:54 vm02.local ceph-mon[110129]: pgmap v260: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:54.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:54 vm05.local ceph-mon[103593]: pgmap v260: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:56.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:56 vm02.local ceph-mon[110129]: pgmap v261: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:56 
vm05.local ceph-mon[103593]: pgmap v261: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:27:58.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:27:58 vm02.local ceph-mon[110129]: pgmap v262: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:27:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:27:58 vm05.local ceph-mon[103593]: pgmap v262: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:00.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:00 vm02.local ceph-mon[110129]: pgmap v263: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:28:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:00 vm05.local ceph-mon[103593]: pgmap v263: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:28:01.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:28:01.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:28:02.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:02 vm02.local ceph-mon[110129]: pgmap v264: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:02.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:28:02.529 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:28:02.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:28:02.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:02 vm05.local ceph-mon[103593]: pgmap v264: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:02.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:28:02.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:28:02.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:28:04.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:04 vm02.local ceph-mon[110129]: pgmap v265: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:04.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:04 vm05.local ceph-mon[103593]: pgmap v265: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:06.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:06 vm02.local ceph-mon[110129]: pgmap v266: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:06.537 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:06 vm05.local ceph-mon[103593]: pgmap v266: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:07.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:08.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:08 vm02.local ceph-mon[110129]: pgmap v267: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:08 vm05.local ceph-mon[103593]: pgmap v267: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:10.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:10 vm02.local ceph-mon[110129]: pgmap v268: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:28:10.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:10 vm05.local ceph-mon[103593]: pgmap v268: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:28:12.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:12 vm02.local ceph-mon[110129]: pgmap v269: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:12.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:12 vm05.local ceph-mon[103593]: pgmap v269: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 
B/s rd, 1 op/s 2026-03-10T10:28:14.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:14 vm02.local ceph-mon[110129]: pgmap v270: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:14.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:14 vm05.local ceph-mon[103593]: pgmap v270: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:16.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:16 vm02.local ceph-mon[110129]: pgmap v271: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:16.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:16 vm05.local ceph-mon[103593]: pgmap v271: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:17.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:17 vm02.local ceph-mon[110129]: pgmap v272: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:17.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:17 vm05.local ceph-mon[103593]: pgmap v272: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:20.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:19 vm02.local ceph-mon[110129]: pgmap v273: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:28:20.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:19 vm05.local ceph-mon[103593]: pgmap v273: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:28:22.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:21 vm02.local ceph-mon[110129]: pgmap v274: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 
op/s 2026-03-10T10:28:22.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:22.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:21 vm05.local ceph-mon[103593]: pgmap v274: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:22.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:24.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:23 vm05.local ceph-mon[103593]: pgmap v275: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:24.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:23 vm02.local ceph-mon[110129]: pgmap v275: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:26.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:25 vm02.local ceph-mon[110129]: pgmap v276: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:26.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:25 vm05.local ceph-mon[103593]: pgmap v276: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:28.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:27 vm02.local ceph-mon[110129]: pgmap v277: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:28.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:27 vm05.local ceph-mon[103593]: pgmap v277: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 
119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:30.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:29 vm02.local ceph-mon[110129]: pgmap v278: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:28:30.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:29 vm05.local ceph-mon[103593]: pgmap v278: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:28:32.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:32 vm02.local ceph-mon[110129]: pgmap v279: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:32 vm05.local ceph-mon[103593]: pgmap v279: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:34.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:34 vm02.local ceph-mon[110129]: pgmap v280: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:34 vm05.local ceph-mon[103593]: pgmap v280: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:36.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:36 vm02.local ceph-mon[110129]: pgmap v281: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:36.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:36 vm05.local ceph-mon[103593]: pgmap v281: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:37.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:37.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:38.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:38 vm02.local ceph-mon[110129]: pgmap v282: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:38.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:38 vm05.local ceph-mon[103593]: pgmap v282: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:39.658 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:39 vm02.local ceph-mon[110129]: pgmap v283: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:28:39.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:39 vm05.local ceph-mon[103593]: pgmap v283: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:28:42.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:41 vm02.local ceph-mon[110129]: pgmap v284: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:42.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:41 vm05.local ceph-mon[103593]: pgmap v284: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:44.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:43 vm02.local ceph-mon[110129]: pgmap v285: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:44.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:43 vm05.local ceph-mon[103593]: pgmap v285: 65 pgs: 65 
active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:46.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:45 vm02.local ceph-mon[110129]: pgmap v286: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:46.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:45 vm05.local ceph-mon[103593]: pgmap v286: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:48.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:47 vm02.local ceph-mon[110129]: pgmap v287: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:48.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:47 vm05.local ceph-mon[103593]: pgmap v287: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:50.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:49 vm02.local ceph-mon[110129]: pgmap v288: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:28:50.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:49 vm05.local ceph-mon[103593]: pgmap v288: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:28:52.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:51 vm02.local ceph-mon[110129]: pgmap v289: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:52.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:52.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:51 vm05.local ceph-mon[103593]: pgmap 
v289: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:52.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:54.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:53 vm02.local ceph-mon[110129]: pgmap v290: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:54.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:53 vm05.local ceph-mon[103593]: pgmap v290: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:55.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:55 vm05.local ceph-mon[103593]: pgmap v291: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:56.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:55 vm02.local ceph-mon[110129]: pgmap v291: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:28:58.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:57 vm02.local ceph-mon[110129]: pgmap v292: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:28:58.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:57 vm05.local ceph-mon[103593]: pgmap v292: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:00.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:28:59 vm02.local ceph-mon[110129]: pgmap v293: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:29:00.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:28:59 vm05.local 
ceph-mon[103593]: pgmap v293: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:29:02.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:01 vm05.local ceph-mon[103593]: pgmap v294: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:02.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:29:02.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:29:02.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:29:02.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:01 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:29:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:01 vm02.local ceph-mon[110129]: pgmap v294: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:29:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:29:02.279 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:29:02.280 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:01 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:29:04.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:03 vm02.local ceph-mon[110129]: pgmap v295: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:04.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:03 vm05.local ceph-mon[103593]: pgmap v295: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:06.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:05 vm05.local ceph-mon[103593]: pgmap v296: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:06.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:05 vm02.local ceph-mon[110129]: pgmap v296: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:07.180 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:07.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:08.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:07 vm02.local ceph-mon[110129]: pgmap v297: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 
op/s 2026-03-10T10:29:08.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:07 vm05.local ceph-mon[103593]: pgmap v297: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:10.089 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:09 vm05.local ceph-mon[103593]: pgmap v298: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:29:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:09 vm02.local ceph-mon[110129]: pgmap v298: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:29:12.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:11 vm02.local ceph-mon[110129]: pgmap v299: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:11 vm05.local ceph-mon[103593]: pgmap v299: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:13 vm02.local ceph-mon[110129]: pgmap v300: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:13 vm05.local ceph-mon[103593]: pgmap v300: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:16.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:15 vm05.local ceph-mon[103593]: pgmap v301: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:16.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:15 vm02.local ceph-mon[110129]: pgmap v301: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 
2026-03-10T10:29:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:17 vm02.local ceph-mon[110129]: pgmap v302: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:17 vm05.local ceph-mon[103593]: pgmap v302: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:20.110 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:19 vm02.local ceph-mon[110129]: pgmap v303: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:29:20.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:19 vm05.local ceph-mon[103593]: pgmap v303: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:29:22.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:21 vm02.local ceph-mon[110129]: pgmap v304: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:22.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:21 vm05.local ceph-mon[103593]: pgmap v304: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:24.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:23 vm02.local ceph-mon[110129]: pgmap v305: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB 
/ 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:24.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:23 vm05.local ceph-mon[103593]: pgmap v305: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:26.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:25 vm02.local ceph-mon[110129]: pgmap v306: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:26.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:25 vm05.local ceph-mon[103593]: pgmap v306: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:28.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:27 vm02.local ceph-mon[110129]: pgmap v307: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:28.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:27 vm05.local ceph-mon[103593]: pgmap v307: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:30.222 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:29 vm02.local ceph-mon[110129]: pgmap v308: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:29:30.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:29 vm05.local ceph-mon[103593]: pgmap v308: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:29:32.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:31 vm02.local ceph-mon[110129]: pgmap v309: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:31 vm05.local ceph-mon[103593]: pgmap v309: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB 
avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:34.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:33 vm02.local ceph-mon[110129]: pgmap v310: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:33 vm05.local ceph-mon[103593]: pgmap v310: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:36.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:35 vm02.local ceph-mon[110129]: pgmap v311: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:36.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:35 vm05.local ceph-mon[103593]: pgmap v311: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:37.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:37.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:38.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:38 vm02.local ceph-mon[110129]: pgmap v312: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:38.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:38 vm05.local ceph-mon[103593]: pgmap v312: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:39.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:39 vm02.local ceph-mon[110129]: pgmap v313: 65 pgs: 65 active+clean; 209 
MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:29:39.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:39 vm05.local ceph-mon[103593]: pgmap v313: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:29:42.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:41 vm02.local ceph-mon[110129]: pgmap v314: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:42.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:41 vm05.local ceph-mon[103593]: pgmap v314: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:44.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:43 vm02.local ceph-mon[110129]: pgmap v315: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:44.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:43 vm05.local ceph-mon[103593]: pgmap v315: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:46.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:45 vm02.local ceph-mon[110129]: pgmap v316: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:45 vm05.local ceph-mon[103593]: pgmap v316: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:48.197 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:47 vm02.local ceph-mon[110129]: pgmap v317: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:47 vm05.local ceph-mon[103593]: pgmap v317: 65 pgs: 65 active+clean; 209 MiB data, 922 
MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:50.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:49 vm02.local ceph-mon[110129]: pgmap v318: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:29:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:49 vm05.local ceph-mon[103593]: pgmap v318: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:29:52.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:51 vm02.local ceph-mon[110129]: pgmap v319: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:52.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:51 vm05.local ceph-mon[103593]: pgmap v319: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:53 vm02.local ceph-mon[110129]: pgmap v320: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:54.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:53 vm05.local ceph-mon[103593]: pgmap v320: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:29:56.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:55 vm02.local ceph-mon[110129]: pgmap v321: 65 
pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:55 vm05.local ceph-mon[103593]: pgmap v321: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:29:58.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:57 vm02.local ceph-mon[110129]: pgmap v322: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:57 vm05.local ceph-mon[103593]: pgmap v322: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:00.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:29:59 vm02.local ceph-mon[110129]: pgmap v323: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:29:59 vm05.local ceph-mon[103593]: pgmap v323: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:01.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:00 vm02.local ceph-mon[110129]: overall HEALTH_OK 2026-03-10T10:30:01.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:00 vm05.local ceph-mon[103593]: overall HEALTH_OK 2026-03-10T10:30:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:02 vm02.local ceph-mon[110129]: pgmap v324: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:30:02.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:02 
vm05.local ceph-mon[103593]: pgmap v324: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:02.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:30:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:03 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:30:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:03 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:30:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:03 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:30:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:03 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:30:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:03 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:30:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:03 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:30:04.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:04 vm02.local ceph-mon[110129]: pgmap v325: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:04.537 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:04 vm05.local ceph-mon[103593]: pgmap v325: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:06.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:06 vm02.local ceph-mon[110129]: pgmap v326: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:06.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:06 vm05.local ceph-mon[103593]: pgmap v326: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:07.431 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:07 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:07.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:08.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:08 vm02.local ceph-mon[110129]: pgmap v327: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:08.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:08 vm05.local ceph-mon[103593]: pgmap v327: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:10.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:10 vm02.local ceph-mon[110129]: pgmap v328: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:30:10.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:10 vm05.local ceph-mon[103593]: pgmap v328: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 
B/s rd, 2 op/s 2026-03-10T10:30:12.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:12 vm02.local ceph-mon[110129]: pgmap v329: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:12.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:12 vm05.local ceph-mon[103593]: pgmap v329: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:14.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:14 vm02.local ceph-mon[110129]: pgmap v330: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:14.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:14 vm05.local ceph-mon[103593]: pgmap v330: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:16.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:16 vm02.local ceph-mon[110129]: pgmap v331: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:16.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:16 vm05.local ceph-mon[103593]: pgmap v331: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:18.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:18 vm02.local ceph-mon[110129]: pgmap v332: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:18.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:18 vm05.local ceph-mon[103593]: pgmap v332: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:20 vm02.local ceph-mon[110129]: pgmap v333: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 
op/s 2026-03-10T10:30:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:20 vm05.local ceph-mon[103593]: pgmap v333: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:30:22.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:22 vm02.local ceph-mon[110129]: pgmap v334: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:22.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:22 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:22.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:22 vm05.local ceph-mon[103593]: pgmap v334: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:22.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:22 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:24.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:24 vm02.local ceph-mon[110129]: pgmap v335: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:24.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:24 vm05.local ceph-mon[103593]: pgmap v335: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:26.209 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:26 vm02.local ceph-mon[110129]: pgmap v336: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:26.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:26 vm05.local ceph-mon[103593]: pgmap v336: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 
119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:28.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:28 vm02.local ceph-mon[110129]: pgmap v337: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:28 vm05.local ceph-mon[103593]: pgmap v337: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:30 vm02.local ceph-mon[110129]: pgmap v338: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:30:30.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:30 vm05.local ceph-mon[103593]: pgmap v338: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:30:32.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:32 vm02.local ceph-mon[110129]: pgmap v339: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:32.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:32 vm05.local ceph-mon[103593]: pgmap v339: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:34.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:34 vm02.local ceph-mon[110129]: pgmap v340: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:34 vm05.local ceph-mon[103593]: pgmap v340: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:36.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:36 vm02.local ceph-mon[110129]: pgmap v341: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 
120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:36.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:36 vm05.local ceph-mon[103593]: pgmap v341: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:37.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:37.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:38.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:38 vm02.local ceph-mon[110129]: pgmap v342: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:38.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:38 vm05.local ceph-mon[103593]: pgmap v342: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:40.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:40 vm02.local ceph-mon[110129]: pgmap v343: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:30:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:40 vm05.local ceph-mon[103593]: pgmap v343: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:30:42.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:42 vm02.local ceph-mon[110129]: pgmap v344: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:42.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:42 vm05.local ceph-mon[103593]: pgmap v344: 65 pgs: 65 active+clean; 
209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:44.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:44 vm02.local ceph-mon[110129]: pgmap v345: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:44.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:44 vm05.local ceph-mon[103593]: pgmap v345: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:46.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:46 vm02.local ceph-mon[110129]: pgmap v346: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:46.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:46 vm05.local ceph-mon[103593]: pgmap v346: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:48.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:48 vm05.local ceph-mon[103593]: pgmap v347: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:48.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:48 vm02.local ceph-mon[110129]: pgmap v347: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:50.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:50 vm02.local ceph-mon[110129]: pgmap v348: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:30:50.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:50 vm05.local ceph-mon[103593]: pgmap v348: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:30:52.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:52 vm02.local ceph-mon[110129]: pgmap v349: 65 pgs: 65 active+clean; 209 MiB 
data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:52.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:52 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:52.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:52 vm05.local ceph-mon[103593]: pgmap v349: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:52.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:52 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:53.779 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:53 vm02.local ceph-mon[110129]: pgmap v350: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:53.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:53 vm05.local ceph-mon[103593]: pgmap v350: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:56.162 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:55 vm02.local ceph-mon[110129]: pgmap v351: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:55 vm05.local ceph-mon[103593]: pgmap v351: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:30:58.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:57 vm02.local ceph-mon[110129]: pgmap v352: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:30:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:57 vm05.local ceph-mon[103593]: pgmap 
v352: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:00.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:30:59 vm02.local ceph-mon[110129]: pgmap v353: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:31:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:30:59 vm05.local ceph-mon[103593]: pgmap v353: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:31:02.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:01 vm05.local ceph-mon[103593]: pgmap v354: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:01 vm02.local ceph-mon[110129]: pgmap v354: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:31:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:31:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:31:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T10:31:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:31:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:31:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:31:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:31:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:31:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:31:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:31:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:31:04.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:03 
vm02.local ceph-mon[110129]: pgmap v355: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:04.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:03 vm05.local ceph-mon[103593]: pgmap v355: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:06.263 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:05 vm02.local ceph-mon[110129]: pgmap v356: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:06.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:05 vm05.local ceph-mon[103593]: pgmap v356: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:07.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:06 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:07.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:06 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:08.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:07 vm02.local ceph-mon[110129]: pgmap v357: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:08.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:07 vm05.local ceph-mon[103593]: pgmap v357: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:09 vm02.local ceph-mon[110129]: pgmap v358: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:31:10.287 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:09 vm05.local ceph-mon[103593]: pgmap v358: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:31:12.037 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:11 vm05.local ceph-mon[103593]: pgmap v359: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:12.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:11 vm02.local ceph-mon[110129]: pgmap v359: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:13 vm02.local ceph-mon[110129]: pgmap v360: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:13 vm05.local ceph-mon[103593]: pgmap v360: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:16.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:15 vm02.local ceph-mon[110129]: pgmap v361: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:16.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:15 vm05.local ceph-mon[103593]: pgmap v361: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:18.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:17 vm02.local ceph-mon[110129]: pgmap v362: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:18.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:17 vm05.local ceph-mon[103593]: pgmap v362: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:20.279 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:19 vm02.local ceph-mon[110129]: pgmap v363: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:31:20.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:19 vm05.local ceph-mon[103593]: pgmap v363: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:31:22.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:21 vm02.local ceph-mon[110129]: pgmap v364: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:22.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:21 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:21 vm05.local ceph-mon[103593]: pgmap v364: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:22.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:21 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:24.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:23 vm02.local ceph-mon[110129]: pgmap v365: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:24.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:23 vm05.local ceph-mon[103593]: pgmap v365: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:26.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:25 vm02.local ceph-mon[110129]: pgmap v366: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 
B/s rd, 1 op/s 2026-03-10T10:31:26.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:25 vm05.local ceph-mon[103593]: pgmap v366: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:28.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:27 vm02.local ceph-mon[110129]: pgmap v367: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:28.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:27 vm05.local ceph-mon[103593]: pgmap v367: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:30.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:29 vm02.local ceph-mon[110129]: pgmap v368: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:31:30.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:29 vm05.local ceph-mon[103593]: pgmap v368: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:31:32.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:31 vm02.local ceph-mon[110129]: pgmap v369: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:32.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:31 vm05.local ceph-mon[103593]: pgmap v369: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:34.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:33 vm02.local ceph-mon[110129]: pgmap v370: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:34.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:33 vm05.local ceph-mon[103593]: pgmap v370: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 
op/s 2026-03-10T10:31:36.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:35 vm02.local ceph-mon[110129]: pgmap v371: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:36.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:35 vm05.local ceph-mon[103593]: pgmap v371: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:37.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:36 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:37.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:36 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:38.180 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:37 vm02.local ceph-mon[110129]: pgmap v372: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:38.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:37 vm05.local ceph-mon[103593]: pgmap v372: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:40.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:39 vm02.local ceph-mon[110129]: pgmap v373: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:31:40.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:39 vm05.local ceph-mon[103593]: pgmap v373: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:31:42.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:41 vm02.local ceph-mon[110129]: pgmap v374: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 
119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:42.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:41 vm05.local ceph-mon[103593]: pgmap v374: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:44.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:43 vm02.local ceph-mon[110129]: pgmap v375: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:44.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:43 vm05.local ceph-mon[103593]: pgmap v375: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:46.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:45 vm02.local ceph-mon[110129]: pgmap v376: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:46.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:45 vm05.local ceph-mon[103593]: pgmap v376: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:48.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:47 vm02.local ceph-mon[110129]: pgmap v377: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:48.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:47 vm05.local ceph-mon[103593]: pgmap v377: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:50.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:49 vm02.local ceph-mon[110129]: pgmap v378: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:31:50.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:49 vm05.local ceph-mon[103593]: pgmap v378: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 
120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:31:52.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:51 vm02.local ceph-mon[110129]: pgmap v379: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:52.029 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:51 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:51 vm05.local ceph-mon[103593]: pgmap v379: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:52.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:51 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:54.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:53 vm02.local ceph-mon[110129]: pgmap v380: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:54.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:53 vm05.local ceph-mon[103593]: pgmap v380: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:56.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:55 vm02.local ceph-mon[110129]: pgmap v381: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:56.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:55 vm05.local ceph-mon[103593]: pgmap v381: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:31:58.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:57 vm02.local ceph-mon[110129]: pgmap v382: 65 pgs: 65 active+clean; 
209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:31:58.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:57 vm05.local ceph-mon[103593]: pgmap v382: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:00.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:31:59 vm02.local ceph-mon[110129]: pgmap v383: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:32:00.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:31:59 vm05.local ceph-mon[103593]: pgmap v383: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:32:02.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:01 vm02.local ceph-mon[110129]: pgmap v384: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:32:02.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:01 vm05.local ceph-mon[103593]: pgmap v384: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:32:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:32:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:32:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:02 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:32:03.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:02 vm02.local 
ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:32:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:32:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:32:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:32:03.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:02 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' 2026-03-10T10:32:04.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:03 vm02.local ceph-mon[110129]: pgmap v385: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:04.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:04 vm05.local ceph-mon[103593]: pgmap v385: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:06.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:06 vm02.local ceph-mon[110129]: pgmap v386: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:32:06.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:06 vm05.local ceph-mon[103593]: pgmap v386: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:32:07.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:07 vm02.local ceph-mon[110129]: 
from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:32:07.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:07 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:32:08.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:08 vm02.local ceph-mon[110129]: pgmap v387: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:08.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:08 vm05.local ceph-mon[103593]: pgmap v387: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:10.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:10 vm02.local ceph-mon[110129]: pgmap v388: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:32:10.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:10 vm05.local ceph-mon[103593]: pgmap v388: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:32:12.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:12 vm02.local ceph-mon[110129]: pgmap v389: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:32:12.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:12 vm05.local ceph-mon[103593]: pgmap v389: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:32:14.279 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:14 vm02.local ceph-mon[110129]: pgmap v390: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:14.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 
10:32:14 vm05.local ceph-mon[103593]: pgmap v390: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:16.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:16 vm05.local ceph-mon[103593]: pgmap v391: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:32:16.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:16 vm02.local ceph-mon[110129]: pgmap v391: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:32:18.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:18 vm02.local ceph-mon[110129]: pgmap v392: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:18.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:18 vm05.local ceph-mon[103593]: pgmap v392: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:20.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:20 vm02.local ceph-mon[110129]: pgmap v393: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:32:20.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:20 vm05.local ceph-mon[103593]: pgmap v393: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-10T10:32:22.357 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:22 vm05.local ceph-mon[103593]: pgmap v394: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:32:22.358 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:22 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:32:22.529 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:22 vm02.local ceph-mon[110129]: pgmap v394: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:32:22.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:22 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:32:24.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:24 vm02.local ceph-mon[110129]: pgmap v395: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:24.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:24 vm05.local ceph-mon[103593]: pgmap v395: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:26.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:26 vm02.local ceph-mon[110129]: pgmap v396: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:32:26.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:26 vm05.local ceph-mon[103593]: pgmap v396: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-10T10:32:28.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:28 vm02.local ceph-mon[110129]: pgmap v397: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:28.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:28 vm05.local ceph-mon[103593]: pgmap v397: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-10T10:32:30.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:30 vm02.local ceph-mon[110129]: pgmap v398: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 
2026-03-10T10:32:30.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:30 vm05.local ceph-mon[103593]: pgmap v398: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-10T10:32:32.442 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:32 vm05.local ceph-mon[103593]: pgmap v399: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:32:32.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:32 vm02.local ceph-mon[110129]: pgmap v399: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:32:34.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:34 vm02.local ceph-mon[110129]: pgmap v400: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:32:34.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:34 vm05.local ceph-mon[103593]: pgmap v400: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:32:36.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:36 vm02.local ceph-mon[110129]: pgmap v401: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:32:36.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:36 vm05.local ceph-mon[103593]: pgmap v401: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:32:37.483 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:37 vm05.local ceph-mon[103593]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:32:37.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:37 vm02.local ceph-mon[110129]: from='mgr.24549 192.168.123.102:0/471876368' entity='mgr.vm02.zmavgl' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:32:38.431 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:38 vm02.local ceph-mon[110129]: pgmap v402: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:32:38.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:38 vm05.local ceph-mon[103593]: pgmap v402: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:32:40.529 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:40 vm02.local ceph-mon[110129]: pgmap v403: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-10T10:32:40.537 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:40 vm05.local ceph-mon[103593]: pgmap v403: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-10T10:32:41.952 ERROR:tasks.cephfs.fuse_mount:process failed to terminate after unmount. This probably indicates a bug within ceph-fuse.
2026-03-10T10:32:41.952 ERROR:teuthology.run_tasks:Manager failed: ceph-fuse
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/ceph_fuse.py", line 248, in task
    mount.umount_wait()
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephfs/fuse_mount.py", line 403, in umount_wait
    run.wait([self.fuse_daemon], timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 479, in wait
    check_time()
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2026-03-10T10:32:41.953 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-10T10:32:41.955 INFO:tasks.cephadm:Teardown begin
2026-03-10T10:32:41.955 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 2252, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/ceph_fuse.py", line 248, in task
    mount.umount_wait()
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephfs/fuse_mount.py", line 403, in umount_wait
    run.wait([self.fuse_daemon], timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 479, in wait
    check_time()
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2026-03-10T10:32:41.956 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T10:32:41.980 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T10:32:42.009 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-10T10:32:42.009 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d -- ceph mgr module disable cephadm
2026-03-10T10:32:42.168 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/mon.vm02/config
2026-03-10T10:32:42.228 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:42 vm02.local ceph-mon[110129]: pgmap v404: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:32:42.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:42 vm05.local ceph-mon[103593]: pgmap v404: 65 pgs: 65 active+clean; 209 MiB data, 922 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-10T10:32:42.325 INFO:teuthology.orchestra.run.vm02.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory
2026-03-10T10:32:42.340 DEBUG:teuthology.orchestra.run:got remote process result: 125
2026-03-10T10:32:42.341 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-10T10:32:42.341 DEBUG:teuthology.orchestra.run.vm02:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T10:32:42.357 DEBUG:teuthology.orchestra.run.vm05:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T10:32:42.377 INFO:tasks.cephadm:Stopping all daemons...
2026-03-10T10:32:42.377 INFO:tasks.cephadm.mon.vm02:Stopping mon.vm02...
2026-03-10T10:32:42.377 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm02
2026-03-10T10:32:42.510 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 10 10:32:42 vm02.local systemd[1]: Stopping Ceph mon.vm02 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:32:42.682 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm02.service'
2026-03-10T10:32:42.714 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T10:32:42.714 INFO:tasks.cephadm.mon.vm02:Stopped mon.vm02
2026-03-10T10:32:42.714 INFO:tasks.cephadm.mon.vm05:Stopping mon.vm05...
2026-03-10T10:32:42.714 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm05
2026-03-10T10:32:42.835 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 10 10:32:42 vm05.local systemd[1]: Stopping Ceph mon.vm05 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:32:43.010 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@mon.vm05.service'
2026-03-10T10:32:43.047 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T10:32:43.047 INFO:tasks.cephadm.mon.vm05:Stopped mon.vm05
2026-03-10T10:32:43.047 INFO:tasks.cephadm.osd.0:Stopping osd.0...
2026-03-10T10:32:43.047 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.0
2026-03-10T10:32:43.529 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:32:43 vm02.local systemd[1]: Stopping Ceph osd.0 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:32:43.529 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:32:43 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[117845]: 2026-03-10T10:32:43.147+0000 7fdc2d4ac640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T10:32:43.529 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:32:43 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[117845]: 2026-03-10T10:32:43.147+0000 7fdc2d4ac640 -1 osd.0 83 *** Got signal Terminated ***
2026-03-10T10:32:43.529 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:32:43 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0[117845]: 2026-03-10T10:32:43.147+0000 7fdc2d4ac640 -1 osd.0 83 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T10:32:48.464 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:32:48 vm02.local podman[165528]: 2026-03-10 10:32:48.185231788 +0000 UTC m=+5.050309111 container died 319155aac7184bb690d71b68b867764b10891e041fd1b21825b0f1bab5557a1d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
2026-03-10T10:32:48.464 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:32:48 vm02.local podman[165528]: 2026-03-10 10:32:48.213718929 +0000 UTC m=+5.078796252 container remove 319155aac7184bb690d71b68b867764b10891e041fd1b21825b0f1bab5557a1d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T10:32:48.464 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:32:48 vm02.local bash[165528]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0
2026-03-10T10:32:48.464 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:32:48 vm02.local podman[165596]: 2026-03-10 10:32:48.372617285 +0000 UTC m=+0.016655506 container create 52cc1ed923b91b106e71ca00051394c81574d5be836bc4ba7b3b38214064cbd6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-deactivate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
2026-03-10T10:32:48.464 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:32:48 vm02.local podman[165596]: 2026-03-10 10:32:48.41611439 +0000 UTC m=+0.060152611 container init 52cc1ed923b91b106e71ca00051394c81574d5be836bc4ba7b3b38214064cbd6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T10:32:48.464 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:32:48 vm02.local podman[165596]: 2026-03-10 10:32:48.419584524 +0000 UTC m=+0.063622745 container start 52cc1ed923b91b106e71ca00051394c81574d5be836bc4ba7b3b38214064cbd6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-deactivate, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T10:32:48.464 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:32:48 vm02.local podman[165596]: 2026-03-10 10:32:48.420706855 +0000 UTC m=+0.064745066 container attach 52cc1ed923b91b106e71ca00051394c81574d5be836bc4ba7b3b38214064cbd6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-0-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
2026-03-10T10:32:48.464 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 10:32:48 vm02.local podman[165596]: 2026-03-10 10:32:48.365425874 +0000 UTC m=+0.009464095 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T10:32:48.576 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.0.service'
2026-03-10T10:32:48.607 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T10:32:48.607 INFO:tasks.cephadm.osd.0:Stopped osd.0
2026-03-10T10:32:48.607 INFO:tasks.cephadm.osd.1:Stopping osd.1...
2026-03-10T10:32:48.607 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.1
2026-03-10T10:32:48.748 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:32:48 vm02.local systemd[1]: Stopping Ceph osd.1 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:32:49.029 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:32:48 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[124217]: 2026-03-10T10:32:48.747+0000 7fa79ce7e640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T10:32:49.029 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:32:48 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[124217]: 2026-03-10T10:32:48.747+0000 7fa79ce7e640 -1 osd.1 83 *** Got signal Terminated ***
2026-03-10T10:32:49.029 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:32:48 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1[124217]: 2026-03-10T10:32:48.747+0000 7fa79ce7e640 -1 osd.1 83 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T10:32:54.039 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:32:53 vm02.local podman[165691]: 2026-03-10 10:32:53.781597775 +0000 UTC m=+5.046808692 container died 6b6be7f62bd3c4b52f1b9176d517329b91b750cea5194c64b9d7b03023dde879 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True)
2026-03-10T10:32:54.039 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:32:53 vm02.local podman[165691]: 2026-03-10 10:32:53.810783472 +0000 UTC m=+5.075994389 container remove 6b6be7f62bd3c4b52f1b9176d517329b91b750cea5194c64b9d7b03023dde879 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T10:32:54.039 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:32:53 vm02.local bash[165691]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1
2026-03-10T10:32:54.039 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:32:53 vm02.local podman[165768]: 2026-03-10 10:32:53.946771967 +0000 UTC m=+0.016444000 container create dc08b0e364decff2af9f6092a29d9962ccd0c539bc3d0576a966c8facf936c47 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T10:32:54.039 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:32:53 vm02.local podman[165768]: 2026-03-10 10:32:53.986577443 +0000 UTC m=+0.056249485 container init dc08b0e364decff2af9f6092a29d9962ccd0c539bc3d0576a966c8facf936c47 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-deactivate, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T10:32:54.039 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:32:53 vm02.local podman[165768]: 2026-03-10 10:32:53.989148103 +0000 UTC m=+0.058820156 container start dc08b0e364decff2af9f6092a29d9962ccd0c539bc3d0576a966c8facf936c47 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-deactivate, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
2026-03-10T10:32:54.039 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 10:32:53 vm02.local podman[165768]: 2026-03-10 10:32:53.995629365 +0000 UTC m=+0.065301398 container attach dc08b0e364decff2af9f6092a29d9962ccd0c539bc3d0576a966c8facf936c47 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-1-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_REF=squid)
2026-03-10T10:32:54.138 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.1.service'
2026-03-10T10:32:54.169 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T10:32:54.169 INFO:tasks.cephadm.osd.1:Stopped osd.1
2026-03-10T10:32:54.169 INFO:tasks.cephadm.osd.2:Stopping osd.2...
2026-03-10T10:32:54.169 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.2
2026-03-10T10:32:54.301 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:32:54 vm02.local systemd[1]: Stopping Ceph osd.2 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:32:54.779 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:32:54 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[129476]: 2026-03-10T10:32:54.300+0000 7fe1a38b5640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T10:32:54.779 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:32:54 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[129476]: 2026-03-10T10:32:54.300+0000 7fe1a38b5640 -1 osd.2 83 *** Got signal Terminated ***
2026-03-10T10:32:54.779 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:32:54 vm02.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2[129476]: 2026-03-10T10:32:54.300+0000 7fe1a38b5640 -1 osd.2 83 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T10:32:59.587 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:32:59 vm02.local podman[165865]: 2026-03-10 10:32:59.333081211 +0000 UTC m=+5.047090045 container died 745b9931485f3ac4dcf7a8b986a17d05af607019958af3969fd02bfe351b3fc4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T10:32:59.587 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:32:59 vm02.local podman[165865]: 2026-03-10 10:32:59.356393145 +0000 UTC m=+5.070401968 container remove 745b9931485f3ac4dcf7a8b986a17d05af607019958af3969fd02bfe351b3fc4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T10:32:59.587 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:32:59 vm02.local bash[165865]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2
2026-03-10T10:32:59.587 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:32:59 vm02.local podman[165931]: 2026-03-10 10:32:59.493976224 +0000 UTC m=+0.015926601 container create 3758350cfb79c10ce682030e05e7e7eff74aea20b1c49f79b3d9f29b8235787f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T10:32:59.587 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:32:59 vm02.local podman[165931]: 2026-03-10 10:32:59.534451774 +0000 UTC m=+0.056402151 container init 3758350cfb79c10ce682030e05e7e7eff74aea20b1c49f79b3d9f29b8235787f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-deactivate, CEPH_REF=squid, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T10:32:59.587 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:32:59 vm02.local podman[165931]: 2026-03-10 10:32:59.537204126 +0000 UTC m=+0.059154503 container start 3758350cfb79c10ce682030e05e7e7eff74aea20b1c49f79b3d9f29b8235787f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-deactivate, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T10:32:59.587 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:32:59 vm02.local podman[165931]: 2026-03-10 10:32:59.538701227 +0000 UTC m=+0.060651595 container attach 3758350cfb79c10ce682030e05e7e7eff74aea20b1c49f79b3d9f29b8235787f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-2-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
2026-03-10T10:32:59.587 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 10:32:59 vm02.local podman[165931]: 2026-03-10 10:32:59.487653811 +0000 UTC m=+0.009604197 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T10:32:59.711 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.2.service'
2026-03-10T10:32:59.745 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T10:32:59.745 INFO:tasks.cephadm.osd.2:Stopped osd.2
2026-03-10T10:32:59.745 INFO:tasks.cephadm.osd.3:Stopping osd.3...
2026-03-10T10:32:59.745 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.3
2026-03-10T10:33:00.037 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:32:59 vm05.local systemd[1]: Stopping Ceph osd.3 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:33:00.038 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:32:59 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[113012]: 2026-03-10T10:32:59.840+0000 7fd14151a640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T10:33:00.038 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:32:59 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[113012]: 2026-03-10T10:32:59.840+0000 7fd14151a640 -1 osd.3 83 *** Got signal Terminated ***
2026-03-10T10:33:00.038 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:32:59 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3[113012]: 2026-03-10T10:32:59.840+0000 7fd14151a640 -1 osd.3 83 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T10:33:05.126 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:33:04 vm05.local podman[142520]: 2026-03-10 10:33:04.867399643 +0000 UTC m=+5.040043595 container died fe29904ecf52192debc50149842b405666ee59a003daedcb382192e10ec2f386 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
2026-03-10T10:33:05.126 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:33:04 vm05.local podman[142520]: 2026-03-10 10:33:04.898906851 +0000 UTC m=+5.071550794 container remove fe29904ecf52192debc50149842b405666ee59a003daedcb382192e10ec2f386 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T10:33:05.126 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:33:04 vm05.local bash[142520]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3
2026-03-10T10:33:05.126 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:33:05 vm05.local podman[142587]: 2026-03-10 10:33:05.03534632 +0000 UTC m=+0.019505392 container create 8ff9ebc19ba3280d8077be3be7a306db7b594905a945679fd5f73d76595bdf5c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-deactivate, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-10T10:33:05.126 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:33:05 vm05.local podman[142587]: 2026-03-10 10:33:05.079249331 +0000 UTC m=+0.063408403 container init 8ff9ebc19ba3280d8077be3be7a306db7b594905a945679fd5f73d76595bdf5c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-deactivate, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team )
2026-03-10T10:33:05.126 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:33:05 vm05.local podman[142587]: 2026-03-10 10:33:05.081771572 +0000 UTC m=+0.065930644 container start 8ff9ebc19ba3280d8077be3be7a306db7b594905a945679fd5f73d76595bdf5c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-deactivate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, 
org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.build-date=20260223) 2026-03-10T10:33:05.126 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:33:05 vm05.local podman[142587]: 2026-03-10 10:33:05.082585015 +0000 UTC m=+0.066744077 container attach 8ff9ebc19ba3280d8077be3be7a306db7b594905a945679fd5f73d76595bdf5c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-3-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2) 2026-03-10T10:33:05.126 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 10:33:05 vm05.local podman[142587]: 2026-03-10 10:33:05.026869102 +0000 UTC m=+0.011028184 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:33:05.245 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.3.service' 2026-03-10T10:33:05.275 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:33:05.276 INFO:tasks.cephadm.osd.3:Stopped osd.3 
2026-03-10T10:33:05.276 INFO:tasks.cephadm.osd.4:Stopping osd.4...
2026-03-10T10:33:05.276 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.4
2026-03-10T10:33:05.416 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:33:05 vm05.local systemd[1]: Stopping Ceph osd.4 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:33:05.787 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:33:05 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[118348]: 2026-03-10T10:33:05.414+0000 7f9310b6b640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T10:33:05.787 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:33:05 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[118348]: 2026-03-10T10:33:05.414+0000 7f9310b6b640 -1 osd.4 83 *** Got signal Terminated ***
2026-03-10T10:33:05.787 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:33:05 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4[118348]: 2026-03-10T10:33:05.414+0000 7f9310b6b640 -1 osd.4 83 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T10:33:10.454 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:10 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:33:10.042+0000 7f528b9d3640 -1 osd.5 83 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T10:32:47.149525+0000 front 2026-03-10T10:32:47.149333+0000 (oldest deadline 2026-03-10T10:33:09.449211+0000)
2026-03-10T10:33:10.712 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:33:10 vm05.local podman[142683]: 2026-03-10 10:33:10.454721989 +0000 UTC m=+5.053339772 container died fe0b3f802cec8cd51fa95c03893dc79ef1887c437250693623f042848aa7a479 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
2026-03-10T10:33:10.712 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:33:10 vm05.local podman[142683]: 2026-03-10 10:33:10.483530085 +0000 UTC m=+5.082147868 container remove fe0b3f802cec8cd51fa95c03893dc79ef1887c437250693623f042848aa7a479 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, ceph=True)
2026-03-10T10:33:10.712 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:33:10 vm05.local bash[142683]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4
2026-03-10T10:33:10.712 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:33:10 vm05.local podman[142762]: 2026-03-10 10:33:10.619362748 +0000 UTC m=+0.017691087 container create 7db20bdcbfdf8fae23c145a8be4fb263d2d54a067e61a2f1ecfd21b1218afaac (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team )
2026-03-10T10:33:10.712 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:33:10 vm05.local podman[142762]: 2026-03-10 10:33:10.667763729 +0000 UTC m=+0.066092058 container init 7db20bdcbfdf8fae23c145a8be4fb263d2d54a067e61a2f1ecfd21b1218afaac (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-deactivate, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team )
2026-03-10T10:33:10.712 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:33:10 vm05.local podman[142762]: 2026-03-10 10:33:10.670593446 +0000 UTC m=+0.068921785 container start 7db20bdcbfdf8fae23c145a8be4fb263d2d54a067e61a2f1ecfd21b1218afaac (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-deactivate, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T10:33:10.712 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 10:33:10 vm05.local podman[142762]: 2026-03-10 10:33:10.672635509 +0000 UTC m=+0.070963848 container attach 7db20bdcbfdf8fae23c145a8be4fb263d2d54a067e61a2f1ecfd21b1218afaac (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-4-deactivate, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
2026-03-10T10:33:10.824 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.4.service'
2026-03-10T10:33:10.855 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T10:33:10.855 INFO:tasks.cephadm.osd.4:Stopped osd.4
2026-03-10T10:33:10.855 INFO:tasks.cephadm.osd.5:Stopping osd.5...
2026-03-10T10:33:10.855 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.5
2026-03-10T10:33:11.002 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:10 vm05.local systemd[1]: Stopping Ceph osd.5 for d0ab5dc6-1c69-11f1-8798-3b5e87c3385d...
2026-03-10T10:33:11.287 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:11 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:33:11.001+0000 7f528fbcc640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T10:33:11.287 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:11 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:33:11.001+0000 7f528fbcc640 -1 osd.5 83 *** Got signal Terminated ***
2026-03-10T10:33:11.287 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:11 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:33:11.001+0000 7f528fbcc640 -1 osd.5 83 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T10:33:11.287 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:11 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:33:11.076+0000 7f528b9d3640 -1 osd.5 83 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T10:32:47.149525+0000 front 2026-03-10T10:32:47.149333+0000 (oldest deadline 2026-03-10T10:33:09.449211+0000)
2026-03-10T10:33:12.537 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:12 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:33:12.094+0000 7f528b9d3640 -1 osd.5 83 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T10:32:47.149525+0000 front 2026-03-10T10:32:47.149333+0000 (oldest deadline 2026-03-10T10:33:09.449211+0000)
2026-03-10T10:33:13.537 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:13 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:33:13.120+0000 7f528b9d3640 -1 osd.5 83 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T10:32:47.149525+0000 front 2026-03-10T10:32:47.149333+0000 (oldest deadline 2026-03-10T10:33:09.449211+0000)
2026-03-10T10:33:14.537 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:14 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:33:14.166+0000 7f528b9d3640 -1 osd.5 83 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T10:32:47.149525+0000 front 2026-03-10T10:32:47.149333+0000 (oldest deadline 2026-03-10T10:33:09.449211+0000)
2026-03-10T10:33:15.537 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:15 vm05.local ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5[123553]: 2026-03-10T10:33:15.142+0000 7f528b9d3640 -1 osd.5 83 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T10:32:47.149525+0000 front 2026-03-10T10:32:47.149333+0000 (oldest deadline 2026-03-10T10:33:09.449211+0000)
2026-03-10T10:33:16.292 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:16 vm05.local podman[142858]: 2026-03-10 10:33:16.037935143 +0000 UTC m=+5.049733381 container died c60f7383494fc3060f444b458f96ceaed9016f9dd747044bc574ab6497b83ba1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T10:33:16.292 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:16 vm05.local podman[142858]: 2026-03-10 10:33:16.067656799 +0000 UTC m=+5.079455027 container remove c60f7383494fc3060f444b458f96ceaed9016f9dd747044bc574ab6497b83ba1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T10:33:16.292 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:16 vm05.local bash[142858]: ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5
2026-03-10T10:33:16.292 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:16 vm05.local podman[142924]: 2026-03-10 10:33:16.198665633 +0000 UTC m=+0.015991515 container create 973f7d4ae91a050b41383bba96038f1519e3e426508bfcbfbf044c3e0abf4963 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-deactivate, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T10:33:16.292 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:16 vm05.local podman[142924]: 2026-03-10 10:33:16.237639668 +0000 UTC m=+0.054965570 container init 973f7d4ae91a050b41383bba96038f1519e3e426508bfcbfbf044c3e0abf4963 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-deactivate, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
2026-03-10T10:33:16.292 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:16 vm05.local podman[142924]: 2026-03-10 10:33:16.241191096 +0000 UTC m=+0.058516978 container start 973f7d4ae91a050b41383bba96038f1519e3e426508bfcbfbf044c3e0abf4963 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-deactivate, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
2026-03-10T10:33:16.292 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 10:33:16 vm05.local podman[142924]: 2026-03-10 10:33:16.242663432 +0000 UTC m=+0.059989325 container attach 973f7d4ae91a050b41383bba96038f1519e3e426508bfcbfbf044c3e0abf4963 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d-osd-5-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_REF=squid)
2026-03-10T10:33:16.398 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d0ab5dc6-1c69-11f1-8798-3b5e87c3385d@osd.5.service'
2026-03-10T10:33:16.431 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T10:33:16.431 INFO:tasks.cephadm.osd.5:Stopped osd.5
2026-03-10T10:33:16.431 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d --force --keep-logs
2026-03-10T10:33:16.535 INFO:teuthology.orchestra.run.vm02.stdout:Deleting cluster with fsid: d0ab5dc6-1c69-11f1-8798-3b5e87c3385d
2026-03-10T10:33:17.824 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vm02.stderr:ceph-fuse[94923]: fuse finished with error 0 and tester_r 0
2026-03-10T10:33:25.649 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d --force --keep-logs
2026-03-10T10:33:25.747 INFO:teuthology.orchestra.run.vm05.stdout:Deleting cluster with fsid: d0ab5dc6-1c69-11f1-8798-3b5e87c3385d
2026-03-10T10:33:30.836 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T10:33:30.866 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T10:33:30.890 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-10T10:33:30.891 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/997/remote/vm02/crash
2026-03-10T10:33:30.891 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/crash -- .
2026-03-10T10:33:30.929 INFO:teuthology.orchestra.run.vm02.stderr:tar: /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/crash: Cannot open: No such file or directory
2026-03-10T10:33:30.929 INFO:teuthology.orchestra.run.vm02.stderr:tar: Error is not recoverable: exiting now
2026-03-10T10:33:30.930 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/997/remote/vm05/crash
2026-03-10T10:33:30.931 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/crash -- .
2026-03-10T10:33:30.954 INFO:teuthology.orchestra.run.vm05.stderr:tar: /var/lib/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/crash: Cannot open: No such file or directory
2026-03-10T10:33:30.954 INFO:teuthology.orchestra.run.vm05.stderr:tar: Error is not recoverable: exiting now
2026-03-10T10:33:30.955 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-10T10:33:30.955 DEBUG:teuthology.orchestra.run.vm02:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v FS_DEGRADED | egrep -v 'filesystem is degraded' | egrep -v FS_INLINE_DATA_DEPRECATED | egrep -v FS_WITH_FAILED_MDS | egrep -v MDS_ALL_DOWN | egrep -v 'filesystem is offline' | egrep -v 'is offline because no MDS' | egrep -v MDS_DAMAGE | egrep -v MDS_DEGRADED | egrep -v MDS_FAILED | egrep -v MDS_INSUFFICIENT_STANDBY | egrep -v MDS_UP_LESS_THAN_MAX | egrep -v 'online, but wants' | egrep -v 'filesystem is online with fewer MDS than max_mds' | egrep -v POOL_APP_NOT_ENABLED | egrep -v 'do not have an application enabled' | egrep -v 'overall HEALTH_' | egrep -v 'Replacing daemon' | egrep -v 'deprecated feature inline_data' | egrep -v MGR_MODULE_ERROR | egrep -v OSD_DOWN | egrep -v 'osds down' | egrep -v 'overall HEALTH_' | egrep -v '\(OSD_DOWN\)' | egrep -v '\(OSD_' | egrep -v 'but it is still running' | egrep -v 'is not responding' | egrep -v MON_DOWN | egrep -v PG_AVAILABILITY | egrep -v PG_DEGRADED | egrep -v 'Reduced data availability' | egrep -v 'Degraded data redundancy' | egrep -v 'pg .* is stuck inactive' | egrep -v 'pg .* is .*degraded' | egrep -v 'pg .* is stuck peering' | head -n 1
2026-03-10T10:33:31.022 INFO:tasks.cephadm:Compressing logs...
2026-03-10T10:33:31.022 DEBUG:teuthology.orchestra.run.vm02:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T10:33:31.023 DEBUG:teuthology.orchestra.run.vm05:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T10:33:31.045 INFO:teuthology.orchestra.run.vm02.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T10:33:31.045 INFO:teuthology.orchestra.run.vm02.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T10:33:31.045 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mon.vm02.log
2026-03-10T10:33:31.046 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.log
2026-03-10T10:33:31.046 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mon.vm02.log: gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.audit.log
2026-03-10T10:33:31.047 INFO:teuthology.orchestra.run.vm05.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T10:33:31.047 INFO:teuthology.orchestra.run.vm05.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T10:33:31.048 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-volume.log
2026-03-10T10:33:31.048 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-client.ceph-exporter.vm05.log
2026-03-10T10:33:31.049 INFO:teuthology.orchestra.run.vm05.stderr: 92.7% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T10:33:31.050 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mgr.vm05.coparq.log
2026-03-10T10:33:31.050 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mon.vm05.log
2026-03-10T10:33:31.054 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.log: 87.6% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.log.gz
2026-03-10T10:33:31.057 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mgr.vm02.zmavgl.log
2026-03-10T10:33:31.058 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mgr.vm05.coparq.log: /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-client.ceph-exporter.vm05.log: 94.0% 93.2% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-volume.log.gz
2026-03-10T10:33:31.058 INFO:teuthology.orchestra.run.vm05.stderr: -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-client.ceph-exporter.vm05.log.gz
2026-03-10T10:33:31.058 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.audit.log
2026-03-10T10:33:31.058 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.audit.log: 90.9% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T10:33:31.058 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mon.vm05.log: gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.log
2026-03-10T10:33:31.059 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.cephadm.log
2026-03-10T10:33:31.062 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mgr.vm02.zmavgl.log: 91.3% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.audit.log.gz
2026-03-10T10:33:31.065 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-volume.log
2026-03-10T10:33:31.066 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.cephadm.log: 85.1% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.cephadm.log.gz
2026-03-10T10:33:31.069 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.audit.log: 91.4% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.audit.log.gz
2026-03-10T10:33:31.069 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.cephadm.log
2026-03-10T10:33:31.069 INFO:teuthology.orchestra.run.vm05.stderr: 89.2% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mgr.vm05.coparq.log.gz
2026-03-10T10:33:31.070 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.log: gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.3.log
2026-03-10T10:33:31.070 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-client.ceph-exporter.vm02.log
2026-03-10T10:33:31.070 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.cephadm.log: gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.4.log
2026-03-10T10:33:31.071 INFO:teuthology.orchestra.run.vm05.stderr: 85.1% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.cephadm.log.gz
2026-03-10T10:33:31.073 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.3.log: 87.6% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph.log.gz
2026-03-10T10:33:31.076 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.5.log
2026-03-10T10:33:31.079 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.0.log
2026-03-10T10:33:31.081 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.4.log: gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mds.cephfs.vm05.liatdh.log
2026-03-10T10:33:31.084 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-client.ceph-exporter.vm02.log: 93.9% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-client.ceph-exporter.vm02.log.gz
2026-03-10T10:33:31.086 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mds.cephfs.vm05.sudjys.log
2026-03-10T10:33:31.086 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.1.log
2026-03-10T10:33:31.087 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.0.log: 93.5% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-volume.log.gz
2026-03-10T10:33:31.091 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.2.log
2026-03-10T10:33:31.097 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.1.log: gzip -5 --verbose --
/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mds.cephfs.vm02.zymcrs.log 2026-03-10T10:33:31.097 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mds.cephfs.vm05.liatdh.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.1.log 2026-03-10T10:33:31.102 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mds.cephfs.vm02.stcvsz.log 2026-03-10T10:33:31.115 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mds.cephfs.vm02.zymcrs.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.0.log 2026-03-10T10:33:31.557 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mds.cephfs.vm05.sudjys.log: /var/log/ceph/ceph-client.1.log: 92.2% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mon.vm05.log.gz 2026-03-10T10:33:31.623 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mds.cephfs.vm02.stcvsz.log: /var/log/ceph/ceph-client.0.log: 89.5% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mgr.vm02.zmavgl.log.gz 2026-03-10T10:33:32.597 INFO:teuthology.orchestra.run.vm02.stderr: 90.5% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mon.vm02.log.gz 2026-03-10T10:33:39.777 INFO:teuthology.orchestra.run.vm05.stderr: 93.6% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.4.log.gz 2026-03-10T10:33:40.715 INFO:teuthology.orchestra.run.vm02.stderr: 93.8% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.2.log.gz 2026-03-10T10:33:41.484 INFO:teuthology.orchestra.run.vm05.stderr: 94.8% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mds.cephfs.vm05.liatdh.log.gz 2026-03-10T10:33:41.672 
INFO:teuthology.orchestra.run.vm02.stderr: 93.8% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.0.log.gz 2026-03-10T10:33:41.787 INFO:teuthology.orchestra.run.vm05.stderr: 93.8% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.5.log.gz 2026-03-10T10:33:42.433 INFO:teuthology.orchestra.run.vm02.stderr: 93.8% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.1.log.gz 2026-03-10T10:33:42.846 INFO:teuthology.orchestra.run.vm05.stderr: 93.8% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-osd.3.log.gz 2026-03-10T10:33:46.657 INFO:teuthology.orchestra.run.vm05.stderr: 94.9% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mds.cephfs.vm05.sudjys.log.gz 2026-03-10T10:33:46.759 INFO:teuthology.orchestra.run.vm05.stderr:gzip: /var/log/ceph/ceph-client.1.log: file size changed while zipping 2026-03-10T10:33:46.759 INFO:teuthology.orchestra.run.vm05.stderr: 93.4% -- replaced with /var/log/ceph/ceph-client.1.log.gz 2026-03-10T10:33:46.761 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-10T10:33:46.761 INFO:teuthology.orchestra.run.vm05.stderr:real 0m15.723s 2026-03-10T10:33:46.761 INFO:teuthology.orchestra.run.vm05.stderr:user 0m29.810s 2026-03-10T10:33:46.761 INFO:teuthology.orchestra.run.vm05.stderr:sys 0m1.439s 2026-03-10T10:33:47.982 INFO:teuthology.orchestra.run.vm02.stderr: 94.8% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mds.cephfs.vm02.stcvsz.log.gz 2026-03-10T10:33:49.273 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.0.log: file size changed while zipping 2026-03-10T10:33:49.394 INFO:teuthology.orchestra.run.vm02.stderr: 93.5% -- replaced with /var/log/ceph/ceph-client.0.log.gz 2026-03-10T10:34:44.357 INFO:teuthology.orchestra.run.vm02.stderr: 92.9% -- replaced with /var/log/ceph/d0ab5dc6-1c69-11f1-8798-3b5e87c3385d/ceph-mds.cephfs.vm02.zymcrs.log.gz 2026-03-10T10:34:44.360 
INFO:teuthology.orchestra.run.vm02.stderr: 2026-03-10T10:34:44.360 INFO:teuthology.orchestra.run.vm02.stderr:real 1m13.324s 2026-03-10T10:34:44.360 INFO:teuthology.orchestra.run.vm02.stderr:user 1m23.958s 2026-03-10T10:34:44.360 INFO:teuthology.orchestra.run.vm02.stderr:sys 0m5.415s 2026-03-10T10:34:44.360 INFO:tasks.cephadm:Archiving logs... 2026-03-10T10:34:44.360 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/997/remote/vm02/log 2026-03-10T10:34:44.360 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T10:34:48.712 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/997/remote/vm05/log 2026-03-10T10:34:48.713 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T10:34:50.129 INFO:tasks.cephadm:Removing cluster... 2026-03-10T10:34:50.129 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d --force 2026-03-10T10:34:50.312 INFO:teuthology.orchestra.run.vm02.stdout:Deleting cluster with fsid: d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:34:51.120 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid d0ab5dc6-1c69-11f1-8798-3b5e87c3385d --force 2026-03-10T10:34:51.210 INFO:teuthology.orchestra.run.vm05.stdout:Deleting cluster with fsid: d0ab5dc6-1c69-11f1-8798-3b5e87c3385d 2026-03-10T10:34:51.437 INFO:tasks.cephadm:Removing cephadm ... 
2026-03-10T10:34:51.437 DEBUG:teuthology.orchestra.run.vm02:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T10:34:51.452 DEBUG:teuthology.orchestra.run.vm05:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T10:34:51.466 INFO:tasks.cephadm:Teardown complete
2026-03-10T10:34:51.466 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-10T10:34:51.469 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 644, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/ceph_fuse.py", line 248, in task
    mount.umount_wait()
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephfs/fuse_mount.py", line 403, in umount_wait
    run.wait([self.fuse_daemon], timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 479, in wait
    check_time()
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2026-03-10T10:34:51.469 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-10T10:34:51.469 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T10:34:51.494 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T10:34:51.537 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-10T10:34:51.537 DEBUG:teuthology.orchestra.run.vm02:>
2026-03-10T10:34:51.537 DEBUG:teuthology.orchestra.run.vm02:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-10T10:34:51.537 DEBUG:teuthology.orchestra.run.vm02:> sudo yum -y remove $d || true
2026-03-10T10:34:51.537 DEBUG:teuthology.orchestra.run.vm02:> done
2026-03-10T10:34:51.542 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-10T10:34:51.542 DEBUG:teuthology.orchestra.run.vm05:>
2026-03-10T10:34:51.542 DEBUG:teuthology.orchestra.run.vm05:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-10T10:34:51.542 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y remove $d || true
2026-03-10T10:34:51.542 DEBUG:teuthology.orchestra.run.vm05:> done
2026-03-10T10:34:51.793 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:34:51.793 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout: Package Architecture Version Repository Size
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw x86_64 2:18.2.1-0.el9 @ceph 31 M
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout:Remove 2 Packages
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 31 M
2026-03-10T10:34:51.794 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-10T10:34:51.798 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-10T10:34:51.798 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-10T10:34:51.814 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-10T10:34:51.815 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-10T10:34:51.845 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout: Package Architecture Version Repository Size
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout:Removing:
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw x86_64 2:18.2.1-0.el9 @ceph 31 M
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies:
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout:Remove 2 Packages
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 31 M
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-10T10:34:51.846 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T10:34:51.850 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T10:34:51.850 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T10:34:51.864 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T10:34:51.864 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T10:34:51.868 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:18.2.1-0.el9.x86_64 1/2
2026-03-10T10:34:51.868 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T10:34:51.868 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T10:34:51.868 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-10T10:34:51.868 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-10T10:34:51.868 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:34:51.870 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-radosgw-2:18.2.1-0.el9.x86_64 1/2
2026-03-10T10:34:51.878 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:18.2.1-0.el9.x86_64 1/2
2026-03-10T10:34:51.892 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T10:34:51.895 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T10:34:51.918 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-radosgw-2:18.2.1-0.el9.x86_64 1/2
2026-03-10T10:34:51.918 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T10:34:51.918 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T10:34:51.918 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-10T10:34:51.918 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-10T10:34:51.918 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:34:51.920 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-radosgw-2:18.2.1-0.el9.x86_64 1/2
2026-03-10T10:34:51.930 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-radosgw-2:18.2.1-0.el9.x86_64 1/2
2026-03-10T10:34:51.944 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T10:34:51.963 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T10:34:51.963 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-radosgw-2:18.2.1-0.el9.x86_64 1/2
2026-03-10T10:34:52.008 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T10:34:52.008 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-radosgw-2:18.2.1-0.el9.x86_64 1/2
2026-03-10T10:34:52.013 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T10:34:52.013 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:34:52.013 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-10T10:34:52.013 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw-2:18.2.1-0.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T10:34:52.013 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:34:52.013 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T10:34:52.057 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T10:34:52.057 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:34:52.057 INFO:teuthology.orchestra.run.vm02.stdout:Removed:
2026-03-10T10:34:52.057 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw-2:18.2.1-0.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T10:34:52.057 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:34:52.057 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T10:34:52.221 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout: Package Architecture Version Repository Size
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test x86_64 2:18.2.1-0.el9 @ceph 164 M
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout:Remove 4 Packages
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 166 M
2026-03-10T10:34:52.222 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-10T10:34:52.225 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-10T10:34:52.225 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-10T10:34:52.249 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-10T10:34:52.249 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-10T10:34:52.253 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout: Package Architecture Version Repository Size
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout:Removing:
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test x86_64 2:18.2.1-0.el9 @ceph 164 M
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies:
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout:Remove 4 Packages
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 166 M
2026-03-10T10:34:52.254 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T10:34:52.257 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T10:34:52.257 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T10:34:52.282 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T10:34:52.282 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T10:34:52.301 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-10T10:34:52.306 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-test-2:18.2.1-0.el9.x86_64 1/4
2026-03-10T10:34:52.308 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T10:34:52.312 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T10:34:52.329 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T10:34:52.333 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T10:34:52.339 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-test-2:18.2.1-0.el9.x86_64 1/4
2026-03-10T10:34:52.341 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T10:34:52.344 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T10:34:52.360 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T10:34:52.398 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T10:34:52.398 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-test-2:18.2.1-0.el9.x86_64 1/4
2026-03-10T10:34:52.398 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T10:34:52.398 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T10:34:52.421 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T10:34:52.421 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-test-2:18.2.1-0.el9.x86_64 1/4
2026-03-10T10:34:52.421 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T10:34:52.421 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T10:34:52.451 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T10:34:52.451 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:34:52.451 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-10T10:34:52.451 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test-2:18.2.1-0.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T10:34:52.451 INFO:teuthology.orchestra.run.vm05.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T10:34:52.451 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:34:52.451 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T10:34:52.473 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T10:34:52.473 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:34:52.473 INFO:teuthology.orchestra.run.vm02.stdout:Removed:
2026-03-10T10:34:52.473 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test-2:18.2.1-0.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T10:34:52.473 INFO:teuthology.orchestra.run.vm02.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T10:34:52.473 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:34:52.473 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T10:34:52.674 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout: ceph x86_64 2:18.2.1-0.el9 @ceph 0
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds x86_64 2:18.2.1-0.el9 @ceph 6.5 M
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon x86_64 2:18.2.1-0.el9 @ceph 20 M
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd x86_64 2:18.2.1-0.el9 @ceph 61 M
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout:Remove 8 Packages
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 89 M
2026-03-10T10:34:52.675 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-10T10:34:52.678 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:34:52.678 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-10T10:34:52.678 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout:Removing:
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout: ceph x86_64 2:18.2.1-0.el9 @ceph 0
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies:
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds x86_64 2:18.2.1-0.el9 @ceph 6.5 M
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon x86_64 2:18.2.1-0.el9 @ceph 20 M
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd x86_64 2:18.2.1-0.el9 @ceph 61 M
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout:Remove 8 Packages
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 89 M
2026-03-10T10:34:52.679 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T10:34:52.682 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T10:34:52.682 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T10:34:52.701 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-10T10:34:52.701 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-10T10:34:52.705 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T10:34:52.706 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T10:34:52.739 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-10T10:34:52.741 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-2:18.2.1-0.el9.x86_64 1/8
2026-03-10T10:34:52.743 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T10:34:52.745 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-2:18.2.1-0.el9.x86_64 1/8
2026-03-10T10:34:52.762 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:18.2.1-0.el9.x86_64 2/8
2026-03-10T10:34:52.762 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T10:34:52.762 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T10:34:52.762 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-10T10:34:52.762 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-10T10:34:52.762 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:52.764 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-osd-2:18.2.1-0.el9.x86_64 2/8 2026-03-10T10:34:52.765 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-osd-2:18.2.1-0.el9.x86_64 2/8 2026-03-10T10:34:52.765 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:34:52.765 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T10:34:52.765 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-10T10:34:52.765 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-10T10:34:52.765 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:34:52.768 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-osd-2:18.2.1-0.el9.x86_64 2/8 2026-03-10T10:34:52.774 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:18.2.1-0.el9.x86_64 2/8 2026-03-10T10:34:52.776 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-osd-2:18.2.1-0.el9.x86_64 2/8 2026-03-10T10:34:52.789 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 3/8 2026-03-10T10:34:52.789 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 2026-03-10T10:34:52.789 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:52.790 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 3/8 2026-03-10T10:34:52.790 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 
2026-03-10T10:34:52.790 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:34:52.790 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 3/8 2026-03-10T10:34:52.791 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 3/8 2026-03-10T10:34:52.810 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 3/8 2026-03-10T10:34:52.814 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 3/8 2026-03-10T10:34:52.814 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 4/8 2026-03-10T10:34:52.816 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 5/8 2026-03-10T10:34:52.817 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 4/8 2026-03-10T10:34:52.818 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 6/8 2026-03-10T10:34:52.819 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 5/8 2026-03-10T10:34:52.821 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 6/8 2026-03-10T10:34:52.839 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:18.2.1-0.el9.x86_64 7/8 2026-03-10T10:34:52.839 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:34:52.839 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-10T10:34:52.839 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target". 2026-03-10T10:34:52.839 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target". 
2026-03-10T10:34:52.839 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:52.840 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mds-2:18.2.1-0.el9.x86_64 7/8 2026-03-10T10:34:52.844 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mds-2:18.2.1-0.el9.x86_64 7/8 2026-03-10T10:34:52.844 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:34:52.844 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-10T10:34:52.844 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target". 2026-03-10T10:34:52.844 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target". 2026-03-10T10:34:52.844 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:34:52.844 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mds-2:18.2.1-0.el9.x86_64 7/8 2026-03-10T10:34:52.847 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:18.2.1-0.el9.x86_64 7/8 2026-03-10T10:34:52.852 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mds-2:18.2.1-0.el9.x86_64 7/8 2026-03-10T10:34:52.868 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:18.2.1-0.el9.x86_64 8/8 2026-03-10T10:34:52.868 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:34:52.868 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-10T10:34:52.868 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target". 2026-03-10T10:34:52.868 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target". 
2026-03-10T10:34:52.868 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:52.869 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mon-2:18.2.1-0.el9.x86_64 8/8 2026-03-10T10:34:52.872 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mon-2:18.2.1-0.el9.x86_64 8/8 2026-03-10T10:34:52.872 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:34:52.872 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-10T10:34:52.872 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target". 2026-03-10T10:34:52.872 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target". 2026-03-10T10:34:52.872 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:34:52.873 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mon-2:18.2.1-0.el9.x86_64 8/8 2026-03-10T10:34:52.953 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:18.2.1-0.el9.x86_64 8/8 2026-03-10T10:34:52.953 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-2:18.2.1-0.el9.x86_64 1/8 2026-03-10T10:34:52.953 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mds-2:18.2.1-0.el9.x86_64 2/8 2026-03-10T10:34:52.953 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mon-2:18.2.1-0.el9.x86_64 3/8 2026-03-10T10:34:52.954 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-osd-2:18.2.1-0.el9.x86_64 4/8 2026-03-10T10:34:52.954 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 5/8 2026-03-10T10:34:52.954 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 6/8 2026-03-10T10:34:52.954 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 7/8 2026-03-10T10:34:52.964 
INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mon-2:18.2.1-0.el9.x86_64 8/8 2026-03-10T10:34:52.964 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-2:18.2.1-0.el9.x86_64 1/8 2026-03-10T10:34:52.964 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mds-2:18.2.1-0.el9.x86_64 2/8 2026-03-10T10:34:52.964 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mon-2:18.2.1-0.el9.x86_64 3/8 2026-03-10T10:34:52.964 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-osd-2:18.2.1-0.el9.x86_64 4/8 2026-03-10T10:34:52.964 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 5/8 2026-03-10T10:34:52.964 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 6/8 2026-03-10T10:34:52.964 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 7/8 2026-03-10T10:34:53.013 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 8/8 2026-03-10T10:34:53.013 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:53.013 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-10T10:34:53.014 INFO:teuthology.orchestra.run.vm05.stdout: ceph-2:18.2.1-0.el9.x86_64 ceph-mds-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:53.014 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon-2:18.2.1-0.el9.x86_64 ceph-osd-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:53.014 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs-1.1.0-3.el9.x86_64 libconfig-1.7.2-9.el9.x86_64 2026-03-10T10:34:53.014 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T10:34:53.014 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:53.014 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 
2026-03-10T10:34:53.023 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 8/8 2026-03-10T10:34:53.023 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:34:53.023 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T10:34:53.023 INFO:teuthology.orchestra.run.vm02.stdout: ceph-2:18.2.1-0.el9.x86_64 ceph-mds-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:53.023 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon-2:18.2.1-0.el9.x86_64 ceph-osd-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:53.023 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs-1.1.0-3.el9.x86_64 libconfig-1.7.2-9.el9.x86_64 2026-03-10T10:34:53.023 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T10:34:53.023 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:34:53.023 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T10:34:53.224 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base x86_64 2:18.2.1-0.el9 @ceph 22 M
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages:
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache x86_64 2:18.2.1-0.el9 @ceph 395 k
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr x86_64 2:18.2.1-0.el9 @ceph 4.5 M
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm noarch 2:18.2.1-0.el9 @ceph-noarch 678 k
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard noarch 2:18.2.1-0.el9 @ceph-noarch 7.6 M
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local noarch 2:18.2.1-0.el9 @ceph-noarch 66 M
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook noarch 2:18.2.1-0.el9 @ceph-noarch 574 k
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror x86_64 2:18.2.1-0.el9 @ceph 12 M
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common x86_64 2:18.2.1-0.el9 @ceph 70 M
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards noarch 2:18.2.1-0.el9 @ceph-noarch 319 k
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core noarch 2:18.2.1-0.el9 @ceph-noarch 1.4 M
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts noarch 2:18.2.1-0.el9 @ceph-noarch 40 k
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux x86_64 2:18.2.1-0.el9 @ceph 138 k
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: fmt x86_64 8.1.1-5.el9 @epel 337 k
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite x86_64 2:18.2.1-0.el9 @ceph 434 k
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 x86_64 2:18.2.1-0.el9 @ceph 1.5 M
2026-03-10T10:34:53.229 INFO:teuthology.orchestra.run.vm05.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common x86_64 2:18.2.1-0.el9 @ceph 610 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt noarch 2.4.0-1.el9 @epel 103 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt+crypto noarch 2.4.0-1.el9 @epel 5.4 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-10T10:34:53.230 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: 
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout:Remove 84 Packages
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout: 
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 434 M
2026-03-10T10:34:53.231 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-10T10:34:53.237 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout:Removing:
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base x86_64 2:18.2.1-0.el9 @ceph 22 M
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout:Removing dependent packages:
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache x86_64 2:18.2.1-0.el9 @ceph 395 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr x86_64 2:18.2.1-0.el9 @ceph 4.5 M
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm noarch 2:18.2.1-0.el9 @ceph-noarch 678 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard noarch 2:18.2.1-0.el9 @ceph-noarch 7.6 M
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local noarch 2:18.2.1-0.el9 @ceph-noarch 66 M
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook noarch 2:18.2.1-0.el9 @ceph-noarch 574 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror x86_64 2:18.2.1-0.el9 @ceph 12 M
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies:
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common x86_64 2:18.2.1-0.el9 @ceph 70 M
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards noarch 2:18.2.1-0.el9 @ceph-noarch 319 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core noarch 2:18.2.1-0.el9 @ceph-noarch 1.4 M
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts noarch 2:18.2.1-0.el9 @ceph-noarch 40 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux x86_64 2:18.2.1-0.el9 @ceph 138 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: fmt x86_64 8.1.1-5.el9 @epel 337 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite x86_64 2:18.2.1-0.el9 @ceph 434 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 x86_64 2:18.2.1-0.el9 @ceph 1.5 M
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T10:34:53.242 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common x86_64 2:18.2.1-0.el9 @ceph 610 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-jwt noarch 2.4.0-1.el9 @epel 103 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-jwt+crypto noarch 2.4.0-1.el9 @epel 5.4 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout: 
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:34:53.243 INFO:teuthology.orchestra.run.vm02.stdout:Remove 84 Packages
2026-03-10T10:34:53.244 INFO:teuthology.orchestra.run.vm02.stdout: 
2026-03-10T10:34:53.244 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 434 M
2026-03-10T10:34:53.244 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T10:34:53.253 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-10T10:34:53.253 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-10T10:34:53.266 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T10:34:53.266 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T10:34:53.360 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-10T10:34:53.360 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-10T10:34:53.373 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T10:34:53.373 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T10:34:53.493 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-10T10:34:53.493 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-rook-2:18.2.1-0.el9.noarch 1/84 2026-03-10T10:34:53.499 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-rook-2:18.2.1-0.el9.noarch 1/84 2026-03-10T10:34:53.506 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T10:34:53.506 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-rook-2:18.2.1-0.el9.noarch 1/84 2026-03-10T10:34:53.513 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-rook-2:18.2.1-0.el9.noarch 1/84 2026-03-10T10:34:53.519 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:18.2.1-0.el9.x86_64 2/84 2026-03-10T10:34:53.519 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:34:53.519 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T10:34:53.519 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T10:34:53.519 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 2026-03-10T10:34:53.519 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:53.520 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-2:18.2.1-0.el9.x86_64 2/84 2026-03-10T10:34:53.529 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-2:18.2.1-0.el9.x86_64 2/84 2026-03-10T10:34:53.529 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:34:53.529 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 
2026-03-10T10:34:53.529 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T10:34:53.529 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 2026-03-10T10:34:53.529 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:34:53.530 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-2:18.2.1-0.el9.x86_64 2/84 2026-03-10T10:34:53.532 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:18.2.1-0.el9.x86_64 2/84 2026-03-10T10:34:53.540 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-modules-core-2:18.2.1-0.el9.noarch 3/84 2026-03-10T10:34:53.540 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 4/84 2026-03-10T10:34:53.544 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-2:18.2.1-0.el9.x86_64 2/84 2026-03-10T10:34:53.553 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-modules-core-2:18.2.1-0.el9.noarch 3/84 2026-03-10T10:34:53.553 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 4/84 2026-03-10T10:34:53.598 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 4/84 2026-03-10T10:34:53.607 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/84 2026-03-10T10:34:53.611 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/84 2026-03-10T10:34:53.611 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 7/84 2026-03-10T10:34:53.613 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 4/84 2026-03-10T10:34:53.622 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/84 2026-03-10T10:34:53.623 
INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 7/84 2026-03-10T10:34:53.627 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/84 2026-03-10T10:34:53.627 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 7/84 2026-03-10T10:34:53.629 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/84 2026-03-10T10:34:53.632 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/84 2026-03-10T10:34:53.635 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 10/84 2026-03-10T10:34:53.638 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 7/84 2026-03-10T10:34:53.640 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 11/84 2026-03-10T10:34:53.644 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 12/84 2026-03-10T10:34:53.648 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/84 2026-03-10T10:34:53.652 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/84 2026-03-10T10:34:53.654 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 10/84 2026-03-10T10:34:53.655 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 13/84 2026-03-10T10:34:53.660 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 11/84 2026-03-10T10:34:53.666 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 12/84 2026-03-10T10:34:53.667 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 14/84 2026-03-10T10:34:53.674 
INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 15/84 2026-03-10T10:34:53.675 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 13/84 2026-03-10T10:34:53.684 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 16/84 2026-03-10T10:34:53.690 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 17/84 2026-03-10T10:34:53.694 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 14/84 2026-03-10T10:34:53.700 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 15/84 2026-03-10T10:34:53.711 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 16/84 2026-03-10T10:34:53.718 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 17/84 2026-03-10T10:34:53.720 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 18/84 2026-03-10T10:34:53.726 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 19/84 2026-03-10T10:34:53.729 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 20/84 2026-03-10T10:34:53.739 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 21/84 2026-03-10T10:34:53.746 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 22/84 2026-03-10T10:34:53.746 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noarc 23/84 2026-03-10T10:34:53.750 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 18/84 2026-03-10T10:34:53.753 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noarc 23/84 2026-03-10T10:34:53.757 INFO:teuthology.orchestra.run.vm02.stdout: Erasing 
: python3-babel-2.9.1-2.el9.noarch 19/84 2026-03-10T10:34:53.760 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 20/84 2026-03-10T10:34:53.770 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 21/84 2026-03-10T10:34:53.777 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 22/84 2026-03-10T10:34:53.777 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noarc 23/84 2026-03-10T10:34:53.784 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noarc 23/84 2026-03-10T10:34:53.846 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 24/84 2026-03-10T10:34:53.885 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 24/84 2026-03-10T10:34:53.925 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 25/84 2026-03-10T10:34:53.960 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 26/84 2026-03-10T10:34:53.966 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 27/84 2026-03-10T10:34:53.970 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 28/84 2026-03-10T10:34:53.971 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 25/84 2026-03-10T10:34:53.973 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 29/84 2026-03-10T10:34:53.976 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 30/84 2026-03-10T10:34:53.976 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 26/84 2026-03-10T10:34:53.979 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 31/84 
2026-03-10T10:34:53.982 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 32/84 2026-03-10T10:34:53.983 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 27/84 2026-03-10T10:34:53.984 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jwt-2.4.0-1.el9.noarch 33/84 2026-03-10T10:34:53.987 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 28/84 2026-03-10T10:34:53.987 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jwt+crypto-2.4.0-1.el9.noarch 34/84 2026-03-10T10:34:53.989 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 29/84 2026-03-10T10:34:53.992 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 30/84 2026-03-10T10:34:53.994 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 31/84 2026-03-10T10:34:53.997 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 32/84 2026-03-10T10:34:54.000 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jwt-2.4.0-1.el9.noarch 33/84 2026-03-10T10:34:54.000 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 35/84 2026-03-10T10:34:54.003 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jwt+crypto-2.4.0-1.el9.noarch 34/84 2026-03-10T10:34:54.007 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 36/84 2026-03-10T10:34:54.011 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 37/84 2026-03-10T10:34:54.017 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 35/84 2026-03-10T10:34:54.023 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 36/84 2026-03-10T10:34:54.027 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : 
python3-cffi-1.14.5-5.el9.x86_64 37/84 2026-03-10T10:34:54.058 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 38/84 2026-03-10T10:34:54.070 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 39/84 2026-03-10T10:34:54.073 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 40/84 2026-03-10T10:34:54.075 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 41/84 2026-03-10T10:34:54.077 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 38/84 2026-03-10T10:34:54.077 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 42/84 2026-03-10T10:34:54.079 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 43/84 2026-03-10T10:34:54.087 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 39/84 2026-03-10T10:34:54.089 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 40/84 2026-03-10T10:34:54.092 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 41/84 2026-03-10T10:34:54.094 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 42/84 2026-03-10T10:34:54.096 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 43/84 2026-03-10T10:34:54.101 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:18.2.1-0.el9.x86_64 44/84 2026-03-10T10:34:54.101 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:34:54.101 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 
2026-03-10T10:34:54.101 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-10T10:34:54.101 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-03-10T10:34:54.101 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:54.101 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-mirror-2:18.2.1-0.el9.x86_64 44/84 2026-03-10T10:34:54.109 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:18.2.1-0.el9.x86_64 44/84 2026-03-10T10:34:54.117 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: rbd-mirror-2:18.2.1-0.el9.x86_64 44/84 2026-03-10T10:34:54.118 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:34:54.118 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-10T10:34:54.118 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-10T10:34:54.118 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-03-10T10:34:54.118 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:34:54.118 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : rbd-mirror-2:18.2.1-0.el9.x86_64 44/84 2026-03-10T10:34:54.126 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: rbd-mirror-2:18.2.1-0.el9.x86_64 44/84 2026-03-10T10:34:54.130 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.1-0.el9.x86_64 45/84 2026-03-10T10:34:54.130 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-10T10:34:54.130 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-10T10:34:54.130 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:54.130 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-immutable-object-cache-2:18.2.1-0.el9.x86_64 45/84 2026-03-10T10:34:54.138 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.1-0.el9.x86_64 45/84 2026-03-10T10:34:54.140 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 46/84 2026-03-10T10:34:54.142 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 47/84 2026-03-10T10:34:54.145 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ply-3.11-14.el9.noarch 48/84 2026-03-10T10:34:54.147 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 49/84 2026-03-10T10:34:54.149 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.1-0.el9.x86_64 45/84 2026-03-10T10:34:54.149 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T10:34:54.149 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-03-10T10:34:54.149 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:34:54.149 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-immutable-object-cache-2:18.2.1-0.el9.x86_64 45/84 2026-03-10T10:34:54.150 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 50/84 2026-03-10T10:34:54.153 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 51/84 2026-03-10T10:34:54.156 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 52/84 2026-03-10T10:34:54.157 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.1-0.el9.x86_64 45/84 2026-03-10T10:34:54.159 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 46/84 2026-03-10T10:34:54.159 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 53/84 2026-03-10T10:34:54.161 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 47/84 2026-03-10T10:34:54.164 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-ply-3.11-14.el9.noarch 48/84 2026-03-10T10:34:54.166 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 49/84 2026-03-10T10:34:54.167 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 54/84 2026-03-10T10:34:54.168 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 50/84 2026-03-10T10:34:54.171 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 51/84 2026-03-10T10:34:54.172 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 55/84 2026-03-10T10:34:54.174 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 52/84 2026-03-10T10:34:54.175 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 56/84 
2026-03-10T10:34:54.177 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 57/84 2026-03-10T10:34:54.178 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 53/84 2026-03-10T10:34:54.180 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 58/84 2026-03-10T10:34:54.185 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 59/84 2026-03-10T10:34:54.186 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 54/84 2026-03-10T10:34:54.189 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 60/84 2026-03-10T10:34:54.191 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 55/84 2026-03-10T10:34:54.193 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 61/84 2026-03-10T10:34:54.194 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 56/84 2026-03-10T10:34:54.196 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 57/84 2026-03-10T10:34:54.197 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 62/84 2026-03-10T10:34:54.198 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 58/84 2026-03-10T10:34:54.203 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 63/84 2026-03-10T10:34:54.204 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 59/84 2026-03-10T10:34:54.206 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 64/84 2026-03-10T10:34:54.208 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 60/84 2026-03-10T10:34:54.209 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : 
python3-autocommand-2.2.2-8.el9.noarch 65/84 2026-03-10T10:34:54.212 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 66/84 2026-03-10T10:34:54.213 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 61/84 2026-03-10T10:34:54.217 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 62/84 2026-03-10T10:34:54.220 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 67/84 2026-03-10T10:34:54.222 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 63/84 2026-03-10T10:34:54.225 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 68/84 2026-03-10T10:34:54.225 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 64/84 2026-03-10T10:34:54.228 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 69/84 2026-03-10T10:34:54.228 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 65/84 2026-03-10T10:34:54.230 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 70/84 2026-03-10T10:34:54.231 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 66/84 2026-03-10T10:34:54.232 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-grafana-dashboards-2:18.2.1-0.el9.noarch 71/84 2026-03-10T10:34:54.237 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-prometheus-alerts-2:18.2.1-0.el9.noarch 72/84 2026-03-10T10:34:54.239 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 67/84 2026-03-10T10:34:54.240 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 73/84 2026-03-10T10:34:54.244 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 68/84 
2026-03-10T10:34:54.247 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 69/84 2026-03-10T10:34:54.250 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 70/84 2026-03-10T10:34:54.251 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-grafana-dashboards-2:18.2.1-0.el9.noarch 71/84 2026-03-10T10:34:54.257 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-prometheus-alerts-2:18.2.1-0.el9.noarch 72/84 2026-03-10T10:34:54.259 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:18.2.1-0.el9.x86_64 74/84 2026-03-10T10:34:54.259 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 2026-03-10T10:34:54.259 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:54.260 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 73/84 2026-03-10T10:34:54.265 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-base-2:18.2.1-0.el9.x86_64 74/84 2026-03-10T10:34:54.278 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-base-2:18.2.1-0.el9.x86_64 74/84 2026-03-10T10:34:54.279 INFO:teuthology.orchestra.run.vm02.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 
2026-03-10T10:34:54.279 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:34:54.283 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:18.2.1-0.el9.x86_64 74/84 2026-03-10T10:34:54.283 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-selinux-2:18.2.1-0.el9.x86_64 75/84 2026-03-10T10:34:54.285 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-base-2:18.2.1-0.el9.x86_64 74/84 2026-03-10T10:34:54.305 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-base-2:18.2.1-0.el9.x86_64 74/84 2026-03-10T10:34:54.305 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-selinux-2:18.2.1-0.el9.x86_64 75/84 2026-03-10T10:34:59.669 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-selinux-2:18.2.1-0.el9.x86_64 75/84 2026-03-10T10:34:59.669 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /sys 2026-03-10T10:34:59.669 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /proc 2026-03-10T10:34:59.669 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /mnt 2026-03-10T10:34:59.669 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /var/tmp 2026-03-10T10:34:59.669 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /home 2026-03-10T10:34:59.669 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /root 2026-03-10T10:34:59.669 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /tmp 2026-03-10T10:34:59.669 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:59.678 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-common-2:18.2.1-0.el9.x86_64 76/84 2026-03-10T10:34:59.702 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:18.2.1-0.el9.x86_64 76/84 2026-03-10T10:34:59.705 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ceph-common-2:18.2.1-0.el9.x86_64 77/84 2026-03-10T10:34:59.707 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : 
python3-prettytable-0.7.2-27.el9.noarch 78/84 2026-03-10T10:34:59.709 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 79/84 2026-03-10T10:34:59.709 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libradosstriper1-2:18.2.1-0.el9.x86_64 80/84 2026-03-10T10:34:59.724 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libradosstriper1-2:18.2.1-0.el9.x86_64 80/84 2026-03-10T10:34:59.726 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : fmt-8.1.1-5.el9.x86_64 81/84 2026-03-10T10:34:59.729 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 82/84 2026-03-10T10:34:59.732 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 83/84 2026-03-10T10:34:59.732 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libcephsqlite-2:18.2.1-0.el9.x86_64 84/84 2026-03-10T10:34:59.816 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-selinux-2:18.2.1-0.el9.x86_64 75/84 2026-03-10T10:34:59.816 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /sys 2026-03-10T10:34:59.817 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /proc 2026-03-10T10:34:59.817 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /mnt 2026-03-10T10:34:59.817 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /var/tmp 2026-03-10T10:34:59.817 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /home 2026-03-10T10:34:59.817 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /root 2026-03-10T10:34:59.817 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /tmp 2026-03-10T10:34:59.817 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephsqlite-2:18.2.1-0.el9.x86_64 84/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-base-2:18.2.1-0.el9.x86_64 1/84 2026-03-10T10:34:59.824 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-common-2:18.2.1-0.el9.x86_64 2/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-grafana-dashboards-2:18.2.1-0.el9.noarch 3/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-immutable-object-cache-2:18.2.1-0.el9.x86_64 4/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-2:18.2.1-0.el9.x86_64 5/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 6/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 7/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noarc 8/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-modules-core-2:18.2.1-0.el9.noarch 9/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-rook-2:18.2.1-0.el9.noarch 10/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-prometheus-alerts-2:18.2.1-0.el9.noarch 11/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-selinux-2:18.2.1-0.el9.x86_64 12/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 13/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 14/84 2026-03-10T10:34:59.824 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 15/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : fmt-8.1.1-5.el9.x86_64 16/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephsqlite-2:18.2.1-0.el9.x86_64 17/84 2026-03-10T10:34:59.826 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 18/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 19/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 20/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libradosstriper1-2:18.2.1-0.el9.x86_64 21/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 22/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 23/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 24/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 25/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 26/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 27/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 28/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 29/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-common-2:18.2.1-0.el9.x86_64 30/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 31/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 32/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 33/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-cheroot-10.0.1-4.el9.noarch 34/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 35/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 36/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 37/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 38/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 39/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 40/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 41/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 42/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 43/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 44/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 45/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 46/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 47/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 48/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jwt-2.4.0-1.el9.noarch 49/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-jwt+crypto-2.4.0-1.el9.noarch 50/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 51/84 2026-03-10T10:34:59.826 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 52/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 53/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 54/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 55/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 56/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 57/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 58/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 59/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 60/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ply-3.11-14.el9.noarch 61/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 62/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 63/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 64/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 65/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 66/84 
2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 67/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 68/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 69/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 70/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 71/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 72/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 73/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 74/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 75/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 76/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 77/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 78/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 79/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 80/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 81/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 82/84 2026-03-10T10:34:59.827 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 83/84 2026-03-10T10:34:59.827 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-common-2:18.2.1-0.el9.x86_64 76/84 2026-03-10T10:34:59.852 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-common-2:18.2.1-0.el9.x86_64 76/84 2026-03-10T10:34:59.855 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-ceph-common-2:18.2.1-0.el9.x86_64 77/84 2026-03-10T10:34:59.858 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 78/84 2026-03-10T10:34:59.860 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 79/84 2026-03-10T10:34:59.860 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libradosstriper1-2:18.2.1-0.el9.x86_64 80/84 2026-03-10T10:34:59.875 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libradosstriper1-2:18.2.1-0.el9.x86_64 80/84 2026-03-10T10:34:59.877 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : fmt-8.1.1-5.el9.x86_64 81/84 2026-03-10T10:34:59.880 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 82/84 2026-03-10T10:34:59.883 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 83/84 2026-03-10T10:34:59.883 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libcephsqlite-2:18.2.1-0.el9.x86_64 84/84 2026-03-10T10:34:59.901 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-mirror-2:18.2.1-0.el9.x86_64 84/84 2026-03-10T10:34:59.901 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:59.901 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-10T10:34:59.901 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:59.901 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:59.901 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards-2:18.2.1-0.el9.noarch 
2026-03-10T10:34:59.901 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:59.901 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:59.901 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core-2:18.2.1-0.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook-2:18.2.1-0.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts-2:18.2.1-0.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: fmt-8.1.1-5.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: openblas-0.3.29-1.el9.x86_64 
2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 
2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt-2.4.0-1.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt+crypto-2.4.0-1.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan-1.4.2-3.el9.noarch 
2026-03-10T10:34:59.902 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T10:34:59.903 
INFO:teuthology.orchestra.run.vm05.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror-2:18.2.1-0.el9.x86_64 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:34:59.903 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephsqlite-2:18.2.1-0.el9.x86_64 84/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-base-2:18.2.1-0.el9.x86_64 1/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-common-2:18.2.1-0.el9.x86_64 2/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-grafana-dashboards-2:18.2.1-0.el9.noarch 3/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-immutable-object-cache-2:18.2.1-0.el9.x86_64 4/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-2:18.2.1-0.el9.x86_64 5/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 6/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 7/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noarc 8/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-modules-core-2:18.2.1-0.el9.noarch 9/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : 
ceph-mgr-rook-2:18.2.1-0.el9.noarch 10/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-prometheus-alerts-2:18.2.1-0.el9.noarch 11/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-selinux-2:18.2.1-0.el9.x86_64 12/84 2026-03-10T10:34:59.979 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 13/84 2026-03-10T10:34:59.980 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 14/84 2026-03-10T10:34:59.980 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 15/84 2026-03-10T10:34:59.980 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : fmt-8.1.1-5.el9.x86_64 16/84 2026-03-10T10:34:59.980 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephsqlite-2:18.2.1-0.el9.x86_64 17/84 2026-03-10T10:34:59.980 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 18/84 2026-03-10T10:34:59.980 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 19/84 2026-03-10T10:34:59.980 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 20/84 2026-03-10T10:34:59.980 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libradosstriper1-2:18.2.1-0.el9.x86_64 21/84 2026-03-10T10:34:59.980 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 22/84 2026-03-10T10:34:59.980 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 23/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 24/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 25/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 26/84 2026-03-10T10:34:59.981 
INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 27/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 28/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 29/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ceph-common-2:18.2.1-0.el9.x86_64 30/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 31/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 32/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 33/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 34/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 35/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 36/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 37/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 38/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 39/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 40/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 41/84 2026-03-10T10:34:59.981 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 42/84 2026-03-10T10:34:59.981 
INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 43/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 44/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 45/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 46/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 47/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 48/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jwt-2.4.0-1.el9.noarch 49/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jwt+crypto-2.4.0-1.el9.noarch 50/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 51/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 52/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 53/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 54/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 55/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 56/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 57/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 58/84 2026-03-10T10:34:59.982 
INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 59/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 60/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ply-3.11-14.el9.noarch 61/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 62/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 63/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 64/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 65/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 66/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 67/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 68/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 69/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 70/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 71/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 72/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 73/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 74/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : 
python3-scipy-1.9.3-2.el9.x86_64 75/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 76/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 77/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 78/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 79/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 80/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 81/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 82/84 2026-03-10T10:34:59.982 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 83/84 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-mirror-2:18.2.1-0.el9.x86_64 84/84 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards-2:18.2.1-0.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm-2:18.2.1-0.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: 
ceph-mgr-dashboard-2:18.2.1-0.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local-2:18.2.1-0.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core-2:18.2.1-0.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook-2:18.2.1-0.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts-2:18.2.1-0.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: fmt-8.1.1-5.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: 
python3-babel-2.9.1-2.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T10:35:00.067 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T10:35:00.067 
INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-jwt-2.4.0-1.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-jwt+crypto-2.4.0-1.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T10:35:00.068 
INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T10:35:00.068 
INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:00.068 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T10:35:00.111 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T10:35:00.111 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:35:00.111 INFO:teuthology.orchestra.run.vm05.stdout: Package Architecture Version Repository Size 2026-03-10T10:35:00.111 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:35:00.111 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-10T10:35:00.111 INFO:teuthology.orchestra.run.vm05.stdout: cephadm noarch 2:18.2.1-0.el9 @ceph-noarch 213 k 2026-03-10T10:35:00.111 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:00.111 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-10T10:35:00.112 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:35:00.112 INFO:teuthology.orchestra.run.vm05.stdout:Remove 1 Package 2026-03-10T10:35:00.112 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:00.112 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 213 k 2026-03-10T10:35:00.112 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-10T10:35:00.113 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-10T10:35:00.113 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-10T10:35:00.115 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 
2026-03-10T10:35:00.115 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-10T10:35:00.130 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-10T10:35:00.130 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : cephadm-2:18.2.1-0.el9.noarch 1/1 2026-03-10T10:35:00.243 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: cephadm-2:18.2.1-0.el9.noarch 1/1 2026-03-10T10:35:00.279 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T10:35:00.279 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T10:35:00.279 INFO:teuthology.orchestra.run.vm02.stdout: Package Architecture Version Repository Size 2026-03-10T10:35:00.279 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T10:35:00.279 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T10:35:00.279 INFO:teuthology.orchestra.run.vm02.stdout: cephadm noarch 2:18.2.1-0.el9 @ceph-noarch 213 k 2026-03-10T10:35:00.279 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:00.279 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T10:35:00.279 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T10:35:00.279 INFO:teuthology.orchestra.run.vm02.stdout:Remove 1 Package 2026-03-10T10:35:00.280 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:00.280 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 213 k 2026-03-10T10:35:00.280 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T10:35:00.281 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T10:35:00.281 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T10:35:00.282 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 
2026-03-10T10:35:00.282 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T10:35:00.289 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cephadm-2:18.2.1-0.el9.noarch 1/1 2026-03-10T10:35:00.289 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:00.289 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-10T10:35:00.289 INFO:teuthology.orchestra.run.vm05.stdout: cephadm-2:18.2.1-0.el9.noarch 2026-03-10T10:35:00.289 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:00.289 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-10T10:35:00.298 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T10:35:00.298 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : cephadm-2:18.2.1-0.el9.noarch 1/1 2026-03-10T10:35:00.415 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: cephadm-2:18.2.1-0.el9.noarch 1/1 2026-03-10T10:35:00.457 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : cephadm-2:18.2.1-0.el9.noarch 1/1 2026-03-10T10:35:00.457 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:00.457 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T10:35:00.457 INFO:teuthology.orchestra.run.vm02.stdout: cephadm-2:18.2.1-0.el9.noarch 2026-03-10T10:35:00.458 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:00.458 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T10:35:00.476 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-immutable-object-cache 2026-03-10T10:35:00.476 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-10T10:35:00.479 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T10:35:00.479 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-10T10:35:00.479 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 
2026-03-10T10:35:00.636 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-immutable-object-cache 2026-03-10T10:35:00.636 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T10:35:00.639 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T10:35:00.640 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T10:35:00.640 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T10:35:00.645 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr 2026-03-10T10:35:00.645 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-10T10:35:00.648 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T10:35:00.649 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-10T10:35:00.649 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-10T10:35:00.816 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr 2026-03-10T10:35:00.816 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T10:35:00.819 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T10:35:00.820 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T10:35:00.820 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T10:35:00.821 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-dashboard 2026-03-10T10:35:00.821 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-10T10:35:00.824 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T10:35:00.825 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-10T10:35:00.825 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 
2026-03-10T10:35:00.995 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr-dashboard 2026-03-10T10:35:00.995 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T10:35:00.995 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-10T10:35:00.995 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-10T10:35:00.999 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T10:35:00.999 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T10:35:00.999 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-10T10:35:00.999 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-10T10:35:00.999 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T10:35:00.999 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T10:35:01.171 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-10T10:35:01.171 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T10:35:01.171 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-rook 2026-03-10T10:35:01.171 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-10T10:35:01.174 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T10:35:01.174 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T10:35:01.174 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T10:35:01.174 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T10:35:01.175 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-10T10:35:01.175 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 
2026-03-10T10:35:01.347 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-cephadm 2026-03-10T10:35:01.347 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-10T10:35:01.349 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr-rook 2026-03-10T10:35:01.349 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T10:35:01.350 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T10:35:01.351 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-10T10:35:01.351 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-10T10:35:01.352 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T10:35:01.352 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T10:35:01.352 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T10:35:01.523 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: ceph-mgr-cephadm 2026-03-10T10:35:01.523 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal. 2026-03-10T10:35:01.526 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T10:35:01.527 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do. 2026-03-10T10:35:01.527 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T10:35:01.536 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 
2026-03-10T10:35:01.537 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:35:01.537 INFO:teuthology.orchestra.run.vm05.stdout: Package Architecture Version Repository Size 2026-03-10T10:35:01.537 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:35:01.537 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-10T10:35:01.537 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse x86_64 2:18.2.1-0.el9 @ceph 2.5 M 2026-03-10T10:35:01.537 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:01.537 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-10T10:35:01.537 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:35:01.537 INFO:teuthology.orchestra.run.vm05.stdout:Remove 1 Package 2026-03-10T10:35:01.537 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:01.537 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 2.5 M 2026-03-10T10:35:01.537 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-10T10:35:01.539 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-10T10:35:01.539 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-10T10:35:01.548 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-10T10:35:01.548 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-10T10:35:01.572 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-10T10:35:01.586 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-fuse-2:18.2.1-0.el9.x86_64 1/1 2026-03-10T10:35:01.672 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-fuse-2:18.2.1-0.el9.x86_64 1/1 2026-03-10T10:35:01.708 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 
2026-03-10T10:35:01.709 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T10:35:01.709 INFO:teuthology.orchestra.run.vm02.stdout: Package Architecture Version Repository Size 2026-03-10T10:35:01.709 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T10:35:01.709 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T10:35:01.709 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse x86_64 2:18.2.1-0.el9 @ceph 2.5 M 2026-03-10T10:35:01.709 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:01.709 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T10:35:01.709 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T10:35:01.709 INFO:teuthology.orchestra.run.vm02.stdout:Remove 1 Package 2026-03-10T10:35:01.709 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:01.709 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 2.5 M 2026-03-10T10:35:01.709 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T10:35:01.711 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T10:35:01.711 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T10:35:01.721 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 2026-03-10T10:35:01.721 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-fuse-2:18.2.1-0.el9.x86_64 1/1 2026-03-10T10:35:01.721 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:01.721 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-10T10:35:01.721 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:01.721 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:01.721 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 
2026-03-10T10:35:01.721 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T10:35:01.746 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T10:35:01.761 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : ceph-fuse-2:18.2.1-0.el9.x86_64 1/1 2026-03-10T10:35:01.829 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-fuse-2:18.2.1-0.el9.x86_64 1/1 2026-03-10T10:35:01.878 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-fuse-2:18.2.1-0.el9.x86_64 1/1 2026-03-10T10:35:01.878 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:01.878 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T10:35:01.878 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:01.878 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:01.878 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T10:35:01.924 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T10:35:01.924 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:35:01.924 INFO:teuthology.orchestra.run.vm05.stdout: Package Architecture Version Repository Size 2026-03-10T10:35:01.924 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:35:01.924 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-10T10:35:01.924 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel x86_64 2:18.2.1-0.el9 @ceph 456 k 2026-03-10T10:35:01.924 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages: 2026-03-10T10:35:01.924 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel x86_64 2:18.2.1-0.el9 @ceph 139 k 2026-03-10T10:35:01.924 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:01.924 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-10T10:35:01.924 
INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:35:01.925 INFO:teuthology.orchestra.run.vm05.stdout:Remove 2 Packages 2026-03-10T10:35:01.925 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:01.925 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 595 k 2026-03-10T10:35:01.925 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-10T10:35:01.926 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-10T10:35:01.926 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-10T10:35:01.937 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-10T10:35:01.937 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-10T10:35:01.962 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-10T10:35:01.964 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libcephfs-devel-2:18.2.1-0.el9.x86_64 1/2 2026-03-10T10:35:01.977 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librados-devel-2:18.2.1-0.el9.x86_64 2/2 2026-03-10T10:35:02.032 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados-devel-2:18.2.1-0.el9.x86_64 2/2 2026-03-10T10:35:02.033 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs-devel-2:18.2.1-0.el9.x86_64 1/2 2026-03-10T10:35:02.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados-devel-2:18.2.1-0.el9.x86_64 2/2 2026-03-10T10:35:02.075 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:02.075 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-10T10:35:02.075 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel-2:18.2.1-0.el9.x86_64 librados-devel-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:02.075 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:02.075 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 
2026-03-10T10:35:02.078 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout: Package Architecture Version Repository Size 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel x86_64 2:18.2.1-0.el9 @ceph 456 k 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout:Removing dependent packages: 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel x86_64 2:18.2.1-0.el9 @ceph 139 k 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout:Remove 2 Packages 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 595 k 2026-03-10T10:35:02.079 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T10:35:02.081 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T10:35:02.081 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T10:35:02.091 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 
2026-03-10T10:35:02.091 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T10:35:02.116 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T10:35:02.118 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libcephfs-devel-2:18.2.1-0.el9.x86_64 1/2 2026-03-10T10:35:02.131 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librados-devel-2:18.2.1-0.el9.x86_64 2/2 2026-03-10T10:35:02.190 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados-devel-2:18.2.1-0.el9.x86_64 2/2 2026-03-10T10:35:02.190 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs-devel-2:18.2.1-0.el9.x86_64 1/2 2026-03-10T10:35:02.234 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados-devel-2:18.2.1-0.el9.x86_64 2/2 2026-03-10T10:35:02.234 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:02.235 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T10:35:02.235 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel-2:18.2.1-0.el9.x86_64 librados-devel-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:02.235 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:02.235 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T10:35:02.278 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 
2026-03-10T10:35:02.278 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:35:02.278 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2 x86_64 2:18.2.1-0.el9 @ceph 1.9 M 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages: 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs x86_64 2:18.2.1-0.el9 @ceph 505 k 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies: 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse x86_64 2:18.2.1-0.el9 @ceph 186 k 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout:Remove 3 Packages 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 2.5 M 2026-03-10T10:35:02.279 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-10T10:35:02.281 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-10T10:35:02.281 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-10T10:35:02.292 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 
2026-03-10T10:35:02.292 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-10T10:35:02.318 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-10T10:35:02.320 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cephfs-2:18.2.1-0.el9.x86_64 1/3 2026-03-10T10:35:02.321 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ceph-argparse-2:18.2.1-0.el9.x86_64 2/3 2026-03-10T10:35:02.321 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libcephfs2-2:18.2.1-0.el9.x86_64 3/3 2026-03-10T10:35:02.392 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephfs2-2:18.2.1-0.el9.x86_64 3/3 2026-03-10T10:35:02.392 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs2-2:18.2.1-0.el9.x86_64 1/3 2026-03-10T10:35:02.392 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-argparse-2:18.2.1-0.el9.x86_64 2/3 2026-03-10T10:35:02.428 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout:Removing: 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2 x86_64 2:18.2.1-0.el9 @ceph 1.9 M 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout:Removing dependent packages: 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs x86_64 2:18.2.1-0.el9 @ceph 505 k 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies: 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse x86_64 2:18.2.1-0.el9 @ceph 186 k 
2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================ 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout:Remove 3 Packages 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 2.5 M 2026-03-10T10:35:02.429 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-10T10:35:02.430 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cephfs-2:18.2.1-0.el9.x86_64 3/3 2026-03-10T10:35:02.430 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:02.430 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-10T10:35:02.430 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:02.430 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:02.430 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:02.430 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T10:35:02.430 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-10T10:35:02.431 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-10T10:35:02.431 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-10T10:35:02.442 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 
2026-03-10T10:35:02.443 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-10T10:35:02.469 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-10T10:35:02.471 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-cephfs-2:18.2.1-0.el9.x86_64 1/3 2026-03-10T10:35:02.472 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-ceph-argparse-2:18.2.1-0.el9.x86_64 2/3 2026-03-10T10:35:02.472 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libcephfs2-2:18.2.1-0.el9.x86_64 3/3 2026-03-10T10:35:02.536 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephfs2-2:18.2.1-0.el9.x86_64 3/3 2026-03-10T10:35:02.536 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs2-2:18.2.1-0.el9.x86_64 1/3 2026-03-10T10:35:02.536 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ceph-argparse-2:18.2.1-0.el9.x86_64 2/3 2026-03-10T10:35:02.576 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cephfs-2:18.2.1-0.el9.x86_64 3/3 2026-03-10T10:35:02.576 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:02.576 INFO:teuthology.orchestra.run.vm02.stdout:Removed: 2026-03-10T10:35:02.576 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:02.576 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:02.576 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs-2:18.2.1-0.el9.x86_64 2026-03-10T10:35:02.576 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T10:35:02.576 INFO:teuthology.orchestra.run.vm02.stdout:Complete! 2026-03-10T10:35:02.608 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: libcephfs-devel 2026-03-10T10:35:02.608 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-10T10:35:02.611 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T10:35:02.612 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 
2026-03-10T10:35:02.612 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T10:35:02.752 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: libcephfs-devel
2026-03-10T10:35:02.752 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T10:35:02.755 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:35:02.756 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T10:35:02.756 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T10:35:02.801 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout: librados2 x86_64 2:18.2.1-0.el9 @ceph 12 M
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages:
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados x86_64 2:18.2.1-0.el9 @ceph 1.1 M
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd x86_64 2:18.2.1-0.el9 @ceph 1.1 M
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw x86_64 2:18.2.1-0.el9 @ceph 269 k
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse x86_64 2:18.2.1-0.el9 @ceph 226 k
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd x86_64 2:18.2.1-0.el9 @ceph 494 k
2026-03-10T10:35:02.802 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: librbd1 x86_64 2:18.2.1-0.el9 @ceph 12 M
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: librgw2 x86_64 2:18.2.1-0.el9 @ceph 15 M
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout:Remove 21 Packages
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 74 M
2026-03-10T10:35:02.803 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-10T10:35:02.807 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-10T10:35:02.807 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-10T10:35:02.829 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-10T10:35:02.829 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-10T10:35:02.869 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-10T10:35:02.872 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-nbd-2:18.2.1-0.el9.x86_64 1/21
2026-03-10T10:35:02.874 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-fuse-2:18.2.1-0.el9.x86_64 2/21
2026-03-10T10:35:02.877 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rgw-2:18.2.1-0.el9.x86_64 3/21
2026-03-10T10:35:02.877 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librgw2-2:18.2.1-0.el9.x86_64 4/21
2026-03-10T10:35:02.890 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librgw2-2:18.2.1-0.el9.x86_64 4/21
2026-03-10T10:35:02.893 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/21
2026-03-10T10:35:02.895 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rbd-2:18.2.1-0.el9.x86_64 6/21
2026-03-10T10:35:02.896 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rados-2:18.2.1-0.el9.x86_64 7/21
2026-03-10T10:35:02.898 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/21
2026-03-10T10:35:02.898 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librbd1-2:18.2.1-0.el9.x86_64 9/21
2026-03-10T10:35:02.911 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:18.2.1-0.el9.x86_64 9/21
2026-03-10T10:35:02.912 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librados2-2:18.2.1-0.el9.x86_64 10/21
2026-03-10T10:35:02.912 INFO:teuthology.orchestra.run.vm05.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T10:35:02.912 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:35:02.924 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:18.2.1-0.el9.x86_64 10/21
2026-03-10T10:35:02.927 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 11/21
2026-03-10T10:35:02.930 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 12/21
2026-03-10T10:35:02.932 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 13/21
2026-03-10T10:35:02.932 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout:Removing:
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: librados2 x86_64 2:18.2.1-0.el9 @ceph 12 M
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout:Removing dependent packages:
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados x86_64 2:18.2.1-0.el9 @ceph 1.1 M
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd x86_64 2:18.2.1-0.el9 @ceph 1.1 M
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw x86_64 2:18.2.1-0.el9 @ceph 269 k
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse x86_64 2:18.2.1-0.el9 @ceph 226 k
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd x86_64 2:18.2.1-0.el9 @ceph 494 k
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout:Removing unused dependencies:
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: librbd1 x86_64 2:18.2.1-0.el9 @ceph 12 M
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: librgw2 x86_64 2:18.2.1-0.el9 @ceph 15 M
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout:================================================================================
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout:Remove 21 Packages
2026-03-10T10:35:02.934 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:35:02.935 INFO:teuthology.orchestra.run.vm02.stdout:Freed space: 74 M
2026-03-10T10:35:02.935 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check
2026-03-10T10:35:02.935 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 14/21
2026-03-10T10:35:02.938 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded.
2026-03-10T10:35:02.938 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test
2026-03-10T10:35:02.938 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : re2-1:20211101-20.el9.x86_64 15/21
2026-03-10T10:35:02.941 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 16/21
2026-03-10T10:35:02.944 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 17/21
2026-03-10T10:35:02.946 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 18/21
2026-03-10T10:35:02.948 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 19/21
2026-03-10T10:35:02.951 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 20/21
2026-03-10T10:35:02.959 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded.
2026-03-10T10:35:02.959 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction
2026-03-10T10:35:02.964 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 21/21
2026-03-10T10:35:03.000 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1
2026-03-10T10:35:03.003 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : rbd-nbd-2:18.2.1-0.el9.x86_64 1/21
2026-03-10T10:35:03.005 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : rbd-fuse-2:18.2.1-0.el9.x86_64 2/21
2026-03-10T10:35:03.007 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-rgw-2:18.2.1-0.el9.x86_64 3/21
2026-03-10T10:35:03.008 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librgw2-2:18.2.1-0.el9.x86_64 4/21
2026-03-10T10:35:03.021 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librgw2-2:18.2.1-0.el9.x86_64 4/21
2026-03-10T10:35:03.024 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/21
2026-03-10T10:35:03.025 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-rbd-2:18.2.1-0.el9.x86_64 6/21
2026-03-10T10:35:03.027 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : python3-rados-2:18.2.1-0.el9.x86_64 7/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 21/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 2/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 3/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 4/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:18.2.1-0.el9.x86_64 7/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:18.2.1-0.el9.x86_64 8/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librgw2-2:18.2.1-0.el9.x86_64 10/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 11/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 12/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 13/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rados-2:18.2.1-0.el9.x86_64 14/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rbd-2:18.2.1-0.el9.x86_64 15/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rgw-2:18.2.1-0.el9.x86_64 16/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 17/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-fuse-2:18.2.1-0.el9.x86_64 18/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-nbd-2:18.2.1-0.el9.x86_64 19/21
2026-03-10T10:35:03.028 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : re2-1:20211101-20.el9.x86_64 20/21
2026-03-10T10:35:03.029 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/21
2026-03-10T10:35:03.030 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librbd1-2:18.2.1-0.el9.x86_64 9/21
2026-03-10T10:35:03.043 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librbd1-2:18.2.1-0.el9.x86_64 9/21
2026-03-10T10:35:03.043 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librados2-2:18.2.1-0.el9.x86_64 10/21
2026-03-10T10:35:03.043 INFO:teuthology.orchestra.run.vm02.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T10:35:03.044 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:35:03.057 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados2-2:18.2.1-0.el9.x86_64 10/21
2026-03-10T10:35:03.059 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 11/21
2026-03-10T10:35:03.062 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 12/21
2026-03-10T10:35:03.064 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 13/21
2026-03-10T10:35:03.067 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 14/21
2026-03-10T10:35:03.070 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : re2-1:20211101-20.el9.x86_64 15/21
2026-03-10T10:35:03.074 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 16/21
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 21/21
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: librados2-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: librbd1-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: librgw2-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T10:35:03.075 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T10:35:03.076 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 17/21
2026-03-10T10:35:03.078 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 18/21
2026-03-10T10:35:03.080 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 19/21
2026-03-10T10:35:03.082 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 20/21
2026-03-10T10:35:03.096 INFO:teuthology.orchestra.run.vm02.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 21/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 21/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 2/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 3/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 4/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados2-2:18.2.1-0.el9.x86_64 7/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librbd1-2:18.2.1-0.el9.x86_64 8/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librgw2-2:18.2.1-0.el9.x86_64 10/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 11/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 12/21
2026-03-10T10:35:03.158 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 13/21
2026-03-10T10:35:03.159 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rados-2:18.2.1-0.el9.x86_64 14/21
2026-03-10T10:35:03.159 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rbd-2:18.2.1-0.el9.x86_64 15/21
2026-03-10T10:35:03.159 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rgw-2:18.2.1-0.el9.x86_64 16/21
2026-03-10T10:35:03.159 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 17/21
2026-03-10T10:35:03.159 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-fuse-2:18.2.1-0.el9.x86_64 18/21
2026-03-10T10:35:03.159 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-nbd-2:18.2.1-0.el9.x86_64 19/21
2026-03-10T10:35:03.159 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : re2-1:20211101-20.el9.x86_64 20/21
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 21/21
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout:Removed:
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: librados2-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: librbd1-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: librgw2-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd-2:18.2.1-0.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T10:35:03.203 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T10:35:03.297 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: librbd1
2026-03-10T10:35:03.297 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T10:35:03.298 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:35:03.299 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T10:35:03.299 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T10:35:03.401 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: librbd1
2026-03-10T10:35:03.401 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T10:35:03.405 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:35:03.405 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T10:35:03.405 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T10:35:03.480 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-rados
2026-03-10T10:35:03.480 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T10:35:03.483 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:35:03.483 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T10:35:03.483 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T10:35:03.568 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: python3-rados
2026-03-10T10:35:03.568 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T10:35:03.571 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:35:03.572 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T10:35:03.572 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T10:35:03.642 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-rgw
2026-03-10T10:35:03.643 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T10:35:03.646 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:35:03.646 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T10:35:03.646 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T10:35:03.735 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: python3-rgw
2026-03-10T10:35:03.735 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T10:35:03.738 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:35:03.739 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T10:35:03.739 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T10:35:03.808 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-cephfs
2026-03-10T10:35:03.808 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T10:35:03.811 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:35:03.811 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T10:35:03.811 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T10:35:03.903 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: python3-cephfs
2026-03-10T10:35:03.903 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T10:35:03.906 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:35:03.907 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T10:35:03.907 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T10:35:03.975 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-rbd
2026-03-10T10:35:03.975 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T10:35:03.978 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:35:03.979 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T10:35:03.979 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T10:35:04.069 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: python3-rbd
2026-03-10T10:35:04.069 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T10:35:04.072 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:35:04.073 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T10:35:04.073 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T10:35:04.143 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: rbd-fuse
2026-03-10T10:35:04.143 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T10:35:04.146 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:35:04.147 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T10:35:04.147 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T10:35:04.366 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: rbd-fuse
2026-03-10T10:35:04.367 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T10:35:04.370 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:35:04.370 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T10:35:04.370 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T10:35:04.381 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: rbd-mirror
2026-03-10T10:35:04.381 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T10:35:04.384 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:35:04.384 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T10:35:04.385 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T10:35:04.526 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: rbd-mirror
2026-03-10T10:35:04.527 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T10:35:04.530 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:35:04.531 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T10:35:04.531 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T10:35:04.542 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: rbd-nbd
2026-03-10T10:35:04.542 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T10:35:04.545 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T10:35:04.546 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T10:35:04.546 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T10:35:04.566 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean all
2026-03-10T10:35:04.692 INFO:teuthology.orchestra.run.vm05.stdout:56 files removed
2026-03-10T10:35:04.705 INFO:teuthology.orchestra.run.vm02.stdout:No match for argument: rbd-nbd
2026-03-10T10:35:04.705 INFO:teuthology.orchestra.run.vm02.stderr:No packages marked for removal.
2026-03-10T10:35:04.708 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved.
2026-03-10T10:35:04.709 INFO:teuthology.orchestra.run.vm02.stdout:Nothing to do.
2026-03-10T10:35:04.709 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-10T10:35:04.715 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T10:35:04.731 DEBUG:teuthology.orchestra.run.vm02:> sudo yum clean all
2026-03-10T10:35:04.738 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean expire-cache
2026-03-10T10:35:04.847 INFO:teuthology.orchestra.run.vm02.stdout:56 files removed
2026-03-10T10:35:04.867 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T10:35:04.889 INFO:teuthology.orchestra.run.vm05.stdout:Cache was expired
2026-03-10T10:35:04.889 INFO:teuthology.orchestra.run.vm05.stdout:0 files removed
2026-03-10T10:35:04.890 DEBUG:teuthology.orchestra.run.vm02:> sudo yum clean expire-cache
2026-03-10T10:35:04.909 DEBUG:teuthology.parallel:result is None
2026-03-10T10:35:05.039 INFO:teuthology.orchestra.run.vm02.stdout:Cache was expired
2026-03-10T10:35:05.039 INFO:teuthology.orchestra.run.vm02.stdout:0 files removed
2026-03-10T10:35:05.056 DEBUG:teuthology.parallel:result is None
2026-03-10T10:35:05.056 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm02.local
2026-03-10T10:35:05.056 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm05.local
2026-03-10T10:35:05.056 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T10:35:05.056 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T10:35:05.079 DEBUG:teuthology.orchestra.run.vm02:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T10:35:05.081 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T10:35:05.144 DEBUG:teuthology.parallel:result is None
2026-03-10T10:35:05.144 DEBUG:teuthology.parallel:result is None
2026-03-10T10:35:05.144 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T10:35:05.148 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T10:35:05.148 DEBUG:teuthology.orchestra.run.vm02:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T10:35:05.186 DEBUG:teuthology.orchestra.run.vm05:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T10:35:05.199 INFO:teuthology.orchestra.run.vm02.stderr:bash: line 1: ntpq: command not found
2026-03-10T10:35:05.200 INFO:teuthology.orchestra.run.vm05.stderr:bash: line 1: ntpq: command not found
2026-03-10T10:35:05.266 INFO:teuthology.orchestra.run.vm02.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T10:35:05.266 INFO:teuthology.orchestra.run.vm02.stdout:===============================================================================
2026-03-10T10:35:05.266 INFO:teuthology.orchestra.run.vm02.stdout:^+ funky.f5s.de 2 7 377 58 -117us[ -117us] +/- 38ms
2026-03-10T10:35:05.266 INFO:teuthology.orchestra.run.vm02.stdout:^* adenin.s2p.de 2 6 377 60 -25us[ -54us] +/- 17ms
2026-03-10T10:35:05.266 INFO:teuthology.orchestra.run.vm02.stdout:^+ ntp3.adminforge.de 2 7 377 123 -164us[ -193us] +/- 18ms
2026-03-10T10:35:05.266 INFO:teuthology.orchestra.run.vm02.stdout:^+ stratum2-3.NTP.TechFak.U> 2 7 377 253 +873us[ +838us] +/- 16ms
2026-03-10T10:35:05.266 INFO:teuthology.orchestra.run.vm05.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T10:35:05.267 INFO:teuthology.orchestra.run.vm05.stdout:===============================================================================
2026-03-10T10:35:05.267 INFO:teuthology.orchestra.run.vm05.stdout:^+ funky.f5s.de 2 7 377 1 -26us[ -26us] +/- 39ms
2026-03-10T10:35:05.267 INFO:teuthology.orchestra.run.vm05.stdout:^+ ntp3.adminforge.de 2 7 377 125 -30us[ -23us] +/- 18ms
2026-03-10T10:35:05.267 INFO:teuthology.orchestra.run.vm05.stdout:^+ stratum2-3.NTP.TechFak.U> 2 7 377 0 +851us[ +851us] +/- 17ms
2026-03-10T10:35:05.267 INFO:teuthology.orchestra.run.vm05.stdout:^* adenin.s2p.de 2 6 377 61 +99us[ +105us] +/- 17ms
2026-03-10T10:35:05.268 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T10:35:05.271 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T10:35:05.271 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T10:35:05.274 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T10:35:05.277 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T10:35:05.280 INFO:teuthology.task.internal:Duration was 1470.177841 seconds
2026-03-10T10:35:05.280 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T10:35:05.283 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T10:35:05.283 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T10:35:05.310 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T10:35:05.346 INFO:teuthology.orchestra.run.vm02.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T10:35:05.350 INFO:teuthology.orchestra.run.vm05.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T10:35:05.816 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T10:35:05.816 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm02.local
2026-03-10T10:35:05.816 DEBUG:teuthology.orchestra.run.vm02:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T10:35:05.841 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm05.local
2026-03-10T10:35:05.841 DEBUG:teuthology.orchestra.run.vm05:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T10:35:05.880 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T10:35:05.880 DEBUG:teuthology.orchestra.run.vm02:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T10:35:05.883 DEBUG:teuthology.orchestra.run.vm05:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T10:35:06.617 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T10:35:06.617 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T10:35:06.619 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T10:35:06.639 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T10:35:06.639 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T10:35:06.640 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0%gzip -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T10:35:06.640 INFO:teuthology.orchestra.run.vm02.stderr: -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T10:35:06.640 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T10:35:06.647 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T10:35:06.647 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T10:35:06.648 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T10:35:06.648 INFO:teuthology.orchestra.run.vm05.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T10:35:06.648 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T10:35:06.796 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.9% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T10:35:06.819 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T10:35:06.821 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T10:35:06.824 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T10:35:06.824 DEBUG:teuthology.orchestra.run.vm02:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T10:35:06.885 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T10:35:06.911 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T10:35:06.915 DEBUG:teuthology.orchestra.run.vm02:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T10:35:06.927 DEBUG:teuthology.orchestra.run.vm05:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T10:35:06.955 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = core
2026-03-10T10:35:06.980 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = core
2026-03-10T10:35:06.992 DEBUG:teuthology.orchestra.run.vm02:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T10:35:07.024 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:35:07.024 DEBUG:teuthology.orchestra.run.vm05:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T10:35:07.048 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:35:07.048 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T10:35:07.051 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T10:35:07.051 DEBUG:teuthology.misc:Transferring archived files from vm02:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/997/remote/vm02
2026-03-10T10:35:07.052 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T10:35:07.097 DEBUG:teuthology.misc:Transferring archived files from vm05:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/997/remote/vm05
2026-03-10T10:35:07.097 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T10:35:07.125 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T10:35:07.125 DEBUG:teuthology.orchestra.run.vm02:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T10:35:07.137 DEBUG:teuthology.orchestra.run.vm05:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T10:35:07.184 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T10:35:07.187 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T10:35:07.187 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T10:35:07.191 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T10:35:07.191 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T10:35:07.192 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T10:35:07.207 INFO:teuthology.orchestra.run.vm02.stdout: 8532145 0 drwxr-xr-x 3 ubuntu ubuntu 19 Mar 10 10:35 /home/ubuntu/cephtest
2026-03-10T10:35:07.207 INFO:teuthology.orchestra.run.vm02.stdout: 21229443 0 d--------- 2 ubuntu ubuntu 6 Mar 10 10:17 /home/ubuntu/cephtest/mnt.0
2026-03-10T10:35:07.207 INFO:teuthology.orchestra.run.vm02.stderr:find: ‘/home/ubuntu/cephtest/mnt.0’: Permission denied
2026-03-10T10:35:07.208 INFO:teuthology.orchestra.run.vm02.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-10T10:35:07.226 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:35:07.226 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 48, in base
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/ceph_fuse.py", line 248, in task
    mount.umount_wait()
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephfs/fuse_mount.py", line 403, in umount_wait
    run.wait([self.fuse_daemon], timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 479, in wait
    check_time()
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm02 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-10T10:35:07.226 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T10:35:07.229 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2026-03-10T10:35:07.230 INFO:teuthology.run:Summary data:
description: orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}}
duration: 1470.1778409481049
failure_reason: reached maximum tries (50) after waiting for 300 seconds
flavor: default
owner: kyr
status: fail
success: false
2026-03-10T10:35:07.230 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T10:35:07.250 INFO:teuthology.run:FAIL